[jira] [Updated] (HADOOP-14060) KMS /logs servlet should require authentication and authorization

2017-08-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14060:

Summary: KMS /logs servlet should require authentication and authorization  
(was: KMS /logs servlet should have access control)

> KMS /logs servlet should require authentication and authorization
> -
>
> Key: HADOOP-14060
> URL: https://issues.apache.org/jira/browse/HADOOP-14060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14060-tmp.001.patch
>
>
> HADOOP-14047 makes KMS call {{HttpServer2#setACL}}. Access control works fine 
> for /conf, /jmx, /logLevel, and /stacks, but not for /logs.
> The code in {{AdminAuthorizedServlet#doGet}} for /logs is quite similar to
> the code in {{ConfServlet#doGet}} for /conf. This makes me believe that
> /logs should be subject to the same access control, as the original
> developer intended.
> IMHO this is either a misconfiguration on my part or a bug somewhere in
> {{HttpServer2}}.
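> For reference, the authorization gate both servlets are expected to share
> looks roughly like this (a simplified sketch, not the exact source):
> {code:java|title=Simplified admin authorization gate}
> @Override
> public void doGet(HttpServletRequest request, HttpServletResponse response)
>     throws ServletException, IOException {
>   // hasAdministratorAccess() writes the error response itself and returns
>   // false when the remote user is not authorized.
>   if (HttpServer2.hasAdministratorAccess(getServletContext(), request,
>       response)) {
>     super.doGet(request, response);
>   }
> }
> {code}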






[jira] [Comment Edited] (HADOOP-14786) HTTP default servlets do not require authentication when kerberos is enabled

2017-08-17 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131610#comment-16131610
 ] 

John Zhuge edited comment on HADOOP-14786 at 8/18/17 5:03 AM:
--

These HTTP servers built on top of HttpServer2 are affected:
* NameNodeHttpServer
* SecondaryNameNode InfoServer
* JournalNodeHttpServer
* DatanodeHttpServer
* Nfs3HttpServer
* ResourceManager
* NodeTimelineCollectorManager
* TimelineReaderServer

The exceptions are KMSWebServer and HttpFSServerWebServer. Even though they are 
also built on top of HttpServer2, they provide their own authFilter in web.xml.


was (Author: jzhuge):
This issue applies to all HTTP servers built on top of HttpServer2:
* NameNodeHttpServer
* SecondaryNameNode InfoServer
* JournalNodeHttpServer
* DatanodeHttpServer
* Nfs3HttpServer
* ResourceManager
* NodeTimelineCollectorManager
* TimelineReaderServer

> HTTP default servlets do not require authentication when kerberos is enabled
> 
>
> Key: HADOOP-14786
> URL: https://issues.apache.org/jira/browse/HADOOP-14786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The default HttpServer2 servlets /jmx, /conf, /logLevel, and /stacks do not
> require authentication when Kerberos is enabled.
> {code:java|title=HttpServer2#addDefaultServlets}
>   // set up default servlets
>   addServlet("stacks", "/stacks", StackServlet.class);
>   addServlet("logLevel", "/logLevel", LogLevel.Servlet.class);
>   addServlet("jmx", "/jmx", JMXJsonServlet.class);
>   addServlet("conf", "/conf", ConfServlet.class);
> {code}
> {code:java|title=HttpServer2#addServlet}
> public void addServlet(String name, String pathSpec,
>     Class<? extends HttpServlet> clazz) {
>   addInternalServlet(name, pathSpec, clazz, false);
>   addFilterPathMapping(pathSpec, webAppContext);
> {code}
> {code:java|title=HttpServer2#addInternalServlet}
> addInternalServlet(…, boolean requireAuth)
> …
> if (requireAuth && UserGroupInformation.isSecurityEnabled()) {
>   LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
> {code}
> {{requireAuth}} is {{false}} for the default servlets inside 
> {{addInternalServlet}}.
> The issue can be verified by running the following curl command against the
> NameNode web address when Kerberos is enabled:
> {noformat}
> kdestroy
> curl --negotiate -u: -k -sS 'https://<NameNode host>:9871/jmx'
> {noformat}
> curl is expected to fail, but it returns the JMX output anyway.
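> One possible direction (a sketch only, not necessarily the committed fix) is
> to register the default servlets with {{requireAuth=true}} so that
> {{addInternalServlet}} adds the SPNEGO filter whenever security is enabled:
> {code:java|title=Sketch: require auth for the default servlets}
> // Hypothetical change inside HttpServer2#addDefaultServlets.
> addInternalServlet("stacks", "/stacks", StackServlet.class, true);
> addInternalServlet("logLevel", "/logLevel", LogLevel.Servlet.class, true);
> addInternalServlet("jmx", "/jmx", JMXJsonServlet.class, true);
> addInternalServlet("conf", "/conf", ConfServlet.class, true);
> {code}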






[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131716#comment-16131716
 ] 

Xiao Chen commented on HADOOP-14705:


Test failures are not related to the changes here. KDiag failure is tracked at 
HADOOP-14030.

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, 
> HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with
> the KMS accounts for the majority of the time. So this jira proposes to add
> a batched interface to re-encrypt multiple EDEKs in one call.
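> A hypothetical shape for such a batched call (an illustrative sketch; the
> interface name and signature are assumptions, not the committed API):
> {code:java|title=Sketch: batched EDEK re-encryption}
> import java.io.IOException;
> import java.security.GeneralSecurityException;
> import java.util.List;
>
> import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;
>
> public interface BatchedReencryption {
>   /**
>    * Re-encrypts every EDEK in the batch with the latest key version,
>    * paying the KMS communication overhead once per batch instead of
>    * once per EDEK.
>    */
>   void reencryptEncryptedKeys(List<EncryptedKeyVersion> ekvs)
>       throws IOException, GeneralSecurityException;
> }
> {code}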






[jira] [Assigned] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-08-17 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu reassigned HADOOP-14194:
--

Assignee: Genmao Yu  (was: Xiaobing Zhou)

> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Genmao Yu
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint,
> using the empty string as the default value.
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> {code}
> The value is passed to OSSClient without validation. If the endPoint is not
> provided (empty string) or is not valid, users will get an exception from
> the Aliyun OSS SDK with a raw exception message like:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected 
> authority at index 8: https://
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359)
>   at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313)
>   at com.aliyun.oss.OSSClient.<init>(OSSClient.java:297)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63)
>   at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47)
>   at junit.framework.TestCase.runBare(TestCase.java:139)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.net.URISyntaxException: Expected authority at index 8: 
> https://
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.failExpecting(URI.java:2854)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3102)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:588)
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357)
> {code}
> Let's check that endPoint is neither null nor empty, catch the
> IllegalArgumentException and log it, and wrap the exception with a clearer
> message stating the misconfiguration in the endpoint or credentials.
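> A sketch of that validation (illustrative only; it mirrors the quoted
> snippet rather than a committed patch):
> {code:java|title=Sketch: endpoint validation in initialize()}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> if (endPoint.isEmpty()) {
>   // Fail fast with an actionable message instead of letting OSSClient
>   // throw a raw URISyntaxException for "https://".
>   throw new IllegalArgumentException("Aliyun OSS endpoint should not be "
>       + "null or empty. Please set a valid endpoint with '"
>       + ENDPOINT_KEY + "'.");
> }
> {code}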






[jira] [Commented] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator

2017-08-17 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131682#comment-16131682
 ] 

Genmao Yu commented on HADOOP-14787:


pending unit test
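
A minimal sketch of one way an object store could implement it (assumptions:
a parent-directory check plus delegation to {{create()}}; illustrative, not
necessarily the final patch):
{code:java|title=Sketch: createNonRecursive in AliyunOSSFileSystem}
@Override
public FSDataOutputStream createNonRecursive(Path path,
    FsPermission permission, EnumSet<CreateFlag> flags, int bufferSize,
    short replication, long blockSize, Progressable progress)
    throws IOException {
  Path parent = path.getParent();
  if (parent != null) {
    // getFileStatus() throws FileNotFoundException when the parent is
    // absent, which is exactly the contract createNonRecursive needs.
    if (!getFileStatus(parent).isDirectory()) {
      throw new ParentNotDirectoryException("Not a directory: " + parent);
    }
  }
  return create(path, permission, flags.contains(CreateFlag.OVERWRITE),
      bufferSize, replication, blockSize, progress);
}
{code}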

> AliyunOSS: Implement the `createNonRecursive` operator
> --
>
> Key: HADOOP-14787
> URL: https://issues.apache.org/jira/browse/HADOOP-14787
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>
> {code}
> testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 1.146 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 0.145 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 0.147 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> 

[jira] [Updated] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator

2017-08-17 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14787:
---
Summary: AliyunOSS: Implement the `createNonRecursive` operator  (was: 
AliyunOSS: Some unit test failures at beta1 branch)

> AliyunOSS: Implement the `createNonRecursive` operator
> --
>
> Key: HADOOP-14787
> URL: https://issues.apache.org/jira/browse/HADOOP-14787
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>
> {code}
> testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 1.146 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 0.145 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 0.147 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   

[jira] [Created] (HADOOP-14787) AliyunOSS: Some unit test failures at beta1 branch

2017-08-17 Thread Genmao Yu (JIRA)
Genmao Yu created HADOOP-14787:
--

 Summary: AliyunOSS: Some unit test failures at beta1 branch
 Key: HADOOP-14787
 URL: https://issues.apache.org/jira/browse/HADOOP-14787
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.0.0-beta1
Reporter: Genmao Yu
Assignee: Genmao Yu


{code}
testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
  Time elapsed: 1.146 sec  <<< ERROR!
java.io.IOException: createNonRecursive unsupported for this filesystem class 
org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
at 
org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
at 
org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
at 
org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178)
at 
org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
  Time elapsed: 0.145 sec  <<< ERROR!
java.io.IOException: createNonRecursive unsupported for this filesystem class 
org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
at 
org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
at 
org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
at 
org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133)
at 
org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
  Time elapsed: 0.147 sec  <<< ERROR!
java.io.IOException: createNonRecursive unsupported for this filesystem class 
org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
at 
org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
at 
org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
at 
org.apache.hadoop.fs.contract.AbstractContractCreateTest.testCreateFileOverExistingFileNoOverwrite(AbstractContractCreateTest.java:79)
at 

[jira] [Commented] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131660#comment-16131660
 ] 

Hadoop QA commented on HADOOP-14784:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
0s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14784 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882490/HADOOP-14784.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 85f7fa6080e6 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 99e558b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13067/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13067/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [KMS] Improve KeyAuthorizationKeyProvider#toString()
> 
>
> Key: HADOOP-14784
> URL: https://issues.apache.org/jira/browse/HADOOP-14784
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-14784.001.patch
>
>
> When the KMS server starts, it loads KeyProviderCryptoExtension and prints
> the following message:
> {noformat}
> 

[jira] [Commented] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2017-08-17 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131627#comment-16131627
 ] 

Yeliang Cang commented on HADOOP-14784:
---

Hi [~jojochuang], I have submitted a patch as you suggested. Please check it!


> [KMS] Improve KeyAuthorizationKeyProvider#toString()
> 
>
> Key: HADOOP-14784
> URL: https://issues.apache.org/jira/browse/HADOOP-14784
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-14784.001.patch
>
>
> When the KMS server starts, it loads KeyProviderCryptoExtension and prints
> the following message:
> {noformat}
> 2017-08-17 04:57:13,348 INFO 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp: Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/var/lib/kms/kms.keystore
> {noformat}
> However, this is confusing, as KeyAuthorizationKeyProvider is loaded but not
> shown in this message. KeyAuthorizationKeyProvider#toString should be
> improved so that, in addition to its internal provider, it also prints its
> own class name when loaded.
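> A minimal sketch of the improvement (the {{provider}} field name is an
> assumption):
> {code:java|title=Sketch: KeyAuthorizationKeyProvider#toString}
> @Override
> public String toString() {
>   // Prepend this wrapper's own class name so the startup log shows the
>   // full provider chain, e.g. "KeyAuthorizationKeyProvider: ...".
>   return getClass().getSimpleName() + ": " + provider.toString();
> }
> {code}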






[jira] [Work started] (HADOOP-14786) HTTP default servlets do not require authentication when kerberos is enabled

2017-08-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14786 started by John Zhuge.
---
> HTTP default servlets do not require authentication when kerberos is enabled
> 
>
> Key: HADOOP-14786
> URL: https://issues.apache.org/jira/browse/HADOOP-14786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The default HttpServer2 servlets /jmx, /conf, /logLevel, and /stacks do not
> require authentication when Kerberos is enabled.
> {code:java|title=HttpServer2#addDefaultServlets}
>   // set up default servlets
>   addServlet("stacks", "/stacks", StackServlet.class);
>   addServlet("logLevel", "/logLevel", LogLevel.Servlet.class);
>   addServlet("jmx", "/jmx", JMXJsonServlet.class);
>   addServlet("conf", "/conf", ConfServlet.class);
> {code}
> {code:java|title=HttpServer2#addServlet}
> public void addServlet(String name, String pathSpec,
>     Class<? extends HttpServlet> clazz) {
>   addInternalServlet(name, pathSpec, clazz, false);
>   addFilterPathMapping(pathSpec, webAppContext);
> {code}
> {code:java|title=HttpServer2#addInternalServlet}
> addInternalServlet(…, boolean requireAuth)
> …
> if (requireAuth && UserGroupInformation.isSecurityEnabled()) {
>   LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
> {code}
> {{requireAuth}} is {{false}} for the default servlets inside 
> {{addInternalServlet}}.
> The issue can be verified by running the following curl command against the
> NameNode web address when Kerberos is enabled:
> {noformat}
> kdestroy
> curl --negotiate -u: -k -sS 'https://<NameNode host>:9871/jmx'
> {noformat}
> curl is expected to fail, but it returns the JMX output anyway.






[jira] [Commented] (HADOOP-14786) HTTP default servlets do not require authentication when kerberos is enabled

2017-08-17 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131610#comment-16131610
 ] 

John Zhuge commented on HADOOP-14786:
-

This issue applies to all HTTP servers built on top of HttpServer2:
* NameNodeHttpServer
* SecondaryNameNode InfoServer
* JournalNodeHttpServer
* DatanodeHttpServer
* Nfs3HttpServer
* ResourceManager
* NodeTimelineCollectorManager
* TimelineReaderServer

> HTTP default servlets do not require authentication when kerberos is enabled
> 
>
> Key: HADOOP-14786
> URL: https://issues.apache.org/jira/browse/HADOOP-14786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The default HttpServer2 servlets /jmx, /conf, /logLevel, and /stacks do not
> require authentication when Kerberos is enabled.
> {code:java|title=HttpServer2#addDefaultServlets}
>   // set up default servlets
>   addServlet("stacks", "/stacks", StackServlet.class);
>   addServlet("logLevel", "/logLevel", LogLevel.Servlet.class);
>   addServlet("jmx", "/jmx", JMXJsonServlet.class);
>   addServlet("conf", "/conf", ConfServlet.class);
> {code}
> {code:java|title=HttpServer2#addServlet}
> public void addServlet(String name, String pathSpec,
>     Class<? extends HttpServlet> clazz) {
>   addInternalServlet(name, pathSpec, clazz, false);
>   addFilterPathMapping(pathSpec, webAppContext);
> {code}
> {code:java|title=HttpServer2#addInternalServlet}
> addInternalServlet(…, boolean requireAuth)
> …
> if (requireAuth && UserGroupInformation.isSecurityEnabled()) {
>   LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
> {code}
> {{requireAuth}} is {{false}} for the default servlets inside 
> {{addInternalServlet}}.
> The issue can be verified by running the following curl command against the
> NameNode web address when Kerberos is enabled:
> {noformat}
> kdestroy
> curl --negotiate -u: -k -sS 'https://<NameNode host>:9871/jmx'
> {noformat}
> curl is expected to fail, but it returns the JMX output anyway.






[jira] [Updated] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2017-08-17 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-14784:
--
Attachment: HADOOP-14784.001.patch

> [KMS] Improve KeyAuthorizationKeyProvider#toString()
> 
>
> Key: HADOOP-14784
> URL: https://issues.apache.org/jira/browse/HADOOP-14784
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-14784.001.patch
>
>
> When the KMS server starts, it loads KeyProviderCryptoExtension and prints
> the following message:
> {noformat}
> 2017-08-17 04:57:13,348 INFO 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp: Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/var/lib/kms/kms.keystore
> {noformat}
> However, this is confusing, as KeyAuthorizationKeyProvider is loaded but not
> shown in this message. KeyAuthorizationKeyProvider#toString should be
> improved so that, in addition to its internal provider, it also prints its
> own class name when loaded.






[jira] [Updated] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2017-08-17 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-14784:
--
Status: Patch Available  (was: Open)

> [KMS] Improve KeyAuthorizationKeyProvider#toString()
> 
>
> Key: HADOOP-14784
> URL: https://issues.apache.org/jira/browse/HADOOP-14784
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-14784.001.patch
>
>
> When the KMS server starts, it loads KeyProviderCryptoExtension and prints
> the following message:
> {noformat}
> 2017-08-17 04:57:13,348 INFO 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp: Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/var/lib/kms/kms.keystore
> {noformat}
> However, this is confusing, as KeyAuthorizationKeyProvider is loaded but not
> shown in this message. KeyAuthorizationKeyProvider#toString should be
> improved so that, in addition to its internal provider, it also prints its
> own class name when loaded.






[jira] [Created] (HADOOP-14786) HTTP default servlets do not require authentication when kerberos is enabled

2017-08-17 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14786:
---

 Summary: HTTP default servlets do not require authentication when 
kerberos is enabled
 Key: HADOOP-14786
 URL: https://issues.apache.org/jira/browse/HADOOP-14786
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge


The default HttpServer2 servlets /jmx, /conf, /logLevel, and /stacks do not
require authentication when Kerberos is enabled.


{code:java|title=HttpServer2#addDefaultServlets}
  // set up default servlets
  addServlet("stacks", "/stacks", StackServlet.class);
  addServlet("logLevel", "/logLevel", LogLevel.Servlet.class);
  addServlet("jmx", "/jmx", JMXJsonServlet.class);
  addServlet("conf", "/conf", ConfServlet.class);
{code}

{code:java|title=HttpServer2#addServlet}
public void addServlet(String name, String pathSpec,
    Class<? extends HttpServlet> clazz) {
  addInternalServlet(name, pathSpec, clazz, false);
  addFilterPathMapping(pathSpec, webAppContext);
{code}
{code:java|title=HttpServer2#addInternalServlet}
addInternalServlet(…, boolean requireAuth)
…
if (requireAuth && UserGroupInformation.isSecurityEnabled()) {
  LOG.info("Adding Kerberos (SPNEGO) filter to " + name);
{code}

{{requireAuth}} is {{false}} for the default servlets inside 
{{addInternalServlet}}.

The issue can be verified by running the following curl command against the
NameNode web address when Kerberos is enabled:
{noformat}
kdestroy
curl --negotiate -u: -k -sS 'https://<NameNode host>:9871/jmx'
{noformat}
curl is expected to fail, but it returns the JMX output anyway.






[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.0

2017-08-17 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131584#comment-16131584
 ] 

Genmao Yu commented on HADOOP-14649:


[~drankye] [~rchiang] I will run a test offline. Generally LGTM.

> Update aliyun-sdk-oss version to 2.8.0
> --
>
> Key: HADOOP-14649
> URL: https://issues.apache.org/jira/browse/HADOOP-14649
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>
> Update the dependency
> com.aliyun.oss:aliyun-sdk-oss:2.4.1
> to the latest (2.8.0).






[jira] [Commented] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131581#comment-16131581
 ] 

Hudson commented on HADOOP-14398:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12209 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12209/])
HADOOP-14398. Modify documents for the FileSystem Builder API. (Lei (lei: rev 
99e558b13ba4d5832aea97374e1d07b4e78e5e39)
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/index.md
* (add) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStreamBuilder.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md


> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: documentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch, HADOOP-14398.03.patch
>
>
> After the API is finished, we should update the documentation to describe
> the interface, capabilities, and contract that the APIs hold.






[jira] [Commented] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-17 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131560#comment-16131560
 ] 

Lei (Eddy) Xu commented on HADOOP-14398:


There is no actual code change; the test failures are unrelated.

Thanks for the reviews, [~fabbri] and [~andrew.wang]. Committed to trunk.

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: documentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch, HADOOP-14398.03.patch
>
>
> After the API is finished, we should update the documentation to describe
> the interface, capabilities, and contract that the APIs hold.






[jira] [Created] (HADOOP-14785) Specify the behavior of handling conflicts between must and opt parameters

2017-08-17 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-14785:
--

 Summary: Specify the behavior of handling conflicts between must 
and opt parameters 
 Key: HADOOP-14785
 URL: https://issues.apache.org/jira/browse/HADOOP-14785
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.0.0-alpha3
Reporter: Lei (Eddy) Xu


It is flexible to allow users to use strings as key/value pairs to specify
the behavior of {{FSOutputStream}}, but this flexibility opens up potential
conflicts between parameters.

We should specify a general rule for how different file system
implementations handle such conflicts.
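
For illustration, the distinction such a rule needs to cover (a usage sketch;
the option keys are hypothetical):
{code:java|title=Sketch: must vs. opt on the output stream builder}
// opt() is a best-effort hint that a file system may ignore; must() is a
// mandatory requirement whose violation should fail the build() call.
FSDataOutputStream out = fs.createFile(path)
    .opt("fs.example.buffer.size", 4096)
    .must("fs.example.replication", 3)
    .build();
{code}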






[jira] [Commented] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131539#comment-16131539
 ] 

Hadoop QA commented on HADOOP-14583:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-tools/hadoop-azure: The patch generated 0 new 
+ 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
0s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14583 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882469/HADOOP-14583-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 68157c66f8ea 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4230872 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13066/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13066/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-14583-001.patch, HADOOP-14583-002.patch, 
> 

[jira] [Commented] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131538#comment-16131538
 ] 

Hadoop QA commented on HADOOP-14398:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 43s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14398 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882465/HADOOP-14398.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 89679885688c 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4230872 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13065/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13065/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13065/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs

[jira] [Commented] (HADOOP-13952) tools dependency hooks are throwing errors

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131521#comment-16131521
 ] 

Andrew Wang commented on HADOOP-13952:
--

Hey [~aw], do you have time to review Sean's patch?

> tools dependency hooks are throwing errors
> --
>
> Key: HADOOP-13952
> URL: https://issues.apache.org/jira/browse/HADOOP-13952
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13952.000.patch, HADOOP-13952.001.patch, 
> HADOOP-13952.preview.patch
>
>
> During build, we are throwing these errors:
> {code}
> ERROR: hadoop-aliyun has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
> ERROR: hadoop-archive-logs has missing dependencies: 
> jasper-compiler-5.5.23.jar
> ERROR: hadoop-archives has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aws has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-azure has missing dependencies: 
> jetty-util-ajax-9.3.11.v20160721.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
> ERROR: hadoop-extras has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-gridmix has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-kafka has missing dependencies: lz4-1.2.0.jar
> ERROR: hadoop-kafka has missing dependencies: kafka-clients-0.8.2.1.jar
> ERROR: hadoop-openstack has missing dependencies: commons-httpclient-3.1.jar
> ERROR: hadoop-rumen has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: metrics-core-3.0.1.jar
> ERROR: hadoop-streaming has missing dependencies: jasper-compiler-5.5.23.jar
> {code}
> There are likely a variety of reasons for the failures. Kafka is
> HADOOP-12556, but the others need to be investigated. We probably just need
> to look at more than just common/lib in dist-tools-hooks-maker now that
> shading has gone in.






[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131520#comment-16131520
 ] 

Andrew Wang commented on HADOOP-14498:
--

Hey [~aw], do you have time to review Sean's patch?

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-14498.001.patch, HADOOP-14498.002.patch, 
> HADOOP-14498.003.patch
>
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we 
> assume that hadoop tool module names contain a single "-", so 
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there other 
> assumptions about {{${project.artifactId\}}}?
> Ping [~aw].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14364) refresh changelog/release notes with newer Apache Yetus build

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131517#comment-16131517
 ] 

Andrew Wang commented on HADOOP-14364:
--

Another ping: should we just get this in? I can lend the obligatory +1 if you 
post an updated patch.

> refresh changelog/release notes with newer Apache Yetus build
> -
>
> Key: HADOOP-14364
> URL: https://issues.apache.org/jira/browse/HADOOP-14364
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha4
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-14364.00.patch
>
>
> A lot of fixes went into Apache Yetus 0.4.0 wrt releasedocs and how its 
> output gets rendered with mvn site.  We should re-run releasedocs for all 
> releases and refresh the content to use the new formatting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-17 Thread Thomas Marquardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-14583:
--
Attachment: HADOOP-14583-003.patch

Attaching HADOOP-14583-003.patch.

This updates {{retrieveMetadata}} to treat {{blob.exists() == false}} and a 
file not found error from {{blob.downloadAttributes()}} in the same manner.  It 
fixes a race condition in the {{retrieveMetadata}} implementation where 
{{blob.exists()}} returns true, then the file is deleted by an external agent, 
and then {{downloadAttributes}} is called and fails with a file not found error.

I also updated the tests to use {{Callable}} instead of {{Runnable}} so we 
can more easily validate the contract of the {{FileSystem.create}} and 
{{FileSystem.delete}} APIs and check for exceptions.  The new test 
{{testConcurrentCreateDeleteFile}} fails intermittently without this fix.  I 
have run the new test and the updated existing test in a loop hundreds of times 
to ensure that they both pass 100% of the time.

All tests are passing against my US West storage account.

Tests run: 775, Failures: 0, Errors: 0, Skipped: 155
Total time: 17:13 minutes
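
For illustration, the shape of the fix is roughly the sketch below. This is a 
minimal sketch assuming the Azure storage SDK blob API, not the actual 
AzureNativeFileSystemStore code; the wrapper class and the use of 
{{BlobProperties}} as a stand-in for building {{FileMetadata}} are assumptions.

{code:title=Sketch: treating exists() == false and 404 the same way}
import java.io.IOException;
import java.net.HttpURLConnection;

import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.blob.BlobProperties;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

class RetrieveMetadataSketch {
  static BlobProperties retrieveMetadata(CloudBlockBlob blob)
      throws IOException {
    try {
      if (!blob.exists()) {
        return null; // not found
      }
      // The blob can be deleted by an external agent between exists()
      // and downloadAttributes(); that race surfaces as a 404 below.
      blob.downloadAttributes();
      return blob.getProperties(); // stand-in for building FileMetadata
    } catch (StorageException e) {
      if (e.getHttpStatusCode() == HttpURLConnection.HTTP_NOT_FOUND) {
        return null; // same answer as exists() == false
      }
      throw new IOException(e);
    }
  }
}
{code}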

> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-14583-001.patch, HADOOP-14583-002.patch, 
> HADOOP-14583-003.patch
>
>
> It's a known defect of the Hadoop FS API (and one we don't explicitly test 
> for enough), but you can create a file on a path which doesn't exist. In that 
> situation, the create() logic is expected to create the entries.
> Wasb appears to raise an exception if you try to call {{create(filepath)}} 
> without calling {{mkdirs(filepath.getParent())}} first. That's the semantics 
> expected of {{createNonRecursive()}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2017-08-17 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-14784:


 Summary: [KMS] Improve KeyAuthorizationKeyProvider#toString()
 Key: HADOOP-14784
 URL: https://issues.apache.org/jira/browse/HADOOP-14784
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang
Priority: Trivial


When the KMS server starts, it loads the KeyProviderCryptoExtension and prints 
the following message:

{noformat}
2017-08-17 04:57:13,348 INFO org.apache.hadoop.crypto.key.kms.server.KMSWebApp: 
Initialized KeyProviderCryptoExtension 
EagerKeyGeneratorKeyProviderCryptoExtension: KeyProviderCryptoExtension: 
CachingKeyProvider: jceks://file@/var/lib/kms/kms.keystore
{noformat}

However, this is confusing, as KeyAuthorizationKeyProvider is loaded but not 
shown in this message. KeyAuthorizationKeyProvider#toString should be improved 
so that, in addition to its internal provider, it also prints its own class 
name when loaded.
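
A minimal sketch of the suggested change, assuming the wrapped extension is 
held in a field named {{provider}} (the actual field name in 
KeyAuthorizationKeyProvider may differ):

{code:title=Sketch: KeyAuthorizationKeyProvider#toString()}
@Override
public String toString() {
  // Prepend this wrapper's own class name so the startup log shows that
  // KeyAuthorizationKeyProvider is part of the provider chain.
  return getClass().getSimpleName() + ": " + provider.toString();
}
{code}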



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14783) [KMS] Add missing configuration properties into kms-default.xml

2017-08-17 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14783:
-
Component/s: kms

> [KMS] Add missing configuration properties into kms-default.xml
> ---
>
> Key: HADOOP-14783
> URL: https://issues.apache.org/jira/browse/HADOOP-14783
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>  Labels: newbie++
>
> A few KMS configs are missing from kms-default.xml
> hadoop.kms.key.authorization.enable
> hadoop.security.kms.encrypted.key.cache.{size,low.watermark,expiry,num.fill.threads}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14783) [KMS] Add missing configuration properties into kms-default.xml

2017-08-17 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-14783:


 Summary: [KMS] Add missing configuration properties into 
kms-default.xml
 Key: HADOOP-14783
 URL: https://issues.apache.org/jira/browse/HADOOP-14783
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang
Priority: Minor


A few KMS configs are missing from kms-default.xml

hadoop.kms.key.authorization.enable
hadoop.security.kms.encrypted.key.cache.{size,low.watermark,expiry,num.fill.threads}
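
For illustration only, the new entries could look like the sketch below; the 
default values are assumptions and must be verified against KMSConfiguration 
and EagerKeyGeneratorKeyProviderCryptoExtension before documenting.

{code:title=kms-default.xml sketch (values are assumed, not verified)}
<property>
  <name>hadoop.kms.key.authorization.enable</name>
  <value>true</value>
  <description>Whether key-level authorization checks are enabled.</description>
</property>
<property>
  <name>hadoop.security.kms.encrypted.key.cache.size</name>
  <value>100</value>
  <description>Number of pre-generated EDEKs cached per key; the remaining
  cache.* properties would get entries of the same shape.</description>
</property>
{code}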



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131508#comment-16131508
 ] 

Andrew Wang commented on HADOOP-14398:
--

Thanks Eddy, +1 on 03 pending Jenkins.

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: documentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch, HADOOP-14398.03.patch
>
>
> After the API is finished, we should update the documentation to describe the 
> interface, capabilities, and contract which the APIs hold.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131504#comment-16131504
 ] 

Hadoop QA commented on HADOOP-14705:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 
150 unchanged - 2 fixed = 150 total (was 152) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882456/HADOOP-14705.08.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e4892f3e708e 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b298948 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13064/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13064/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13064/console |
| Powered by | 

[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-17 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
Attachment: HADOOP-14398.03.patch

Thanks for the reviews, [~andrew.wang]

Will file the follow on JIRAs for further discussions.

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: documentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch, 
> HADOOP-14398.02.patch, HADOOP-14398.03.patch
>
>
> After the API is finished, we should update the documentation to describe the 
> interface, capabilities, and contract which the APIs hold.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Attachment: HADOOP-14705.08.patch

Attaching patch 8 to address all comments from Wei-Chiu. Thanks for reviewing.

bq. After adding this interface, does it deprecate the old reencrypt interface 
added in HADOOP-13827?
I don't feel strongly either way, but if we do, let's do it in a separate jira 
within beta1 timeframe.

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, 
> HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14731) Update gitignore to exclude output of site build

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131408#comment-16131408
 ] 

Andrew Wang commented on HADOOP-14731:
--

Hey Allen, any opposition to this as a stop-gap fix? Would appreciate a review.

> Update gitignore to exclude output of site build
> 
>
> Key: HADOOP-14731
> URL: https://issues.apache.org/jira/browse/HADOOP-14731
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, site
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-14731.001.patch
>
>
> The site build generates a bunch of files that aren't caught by gitignore; 
> let's update it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131387#comment-16131387
 ] 

Hadoop QA commented on HADOOP-14705:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-common-project: The patch generated 3 new 
+ 152 unchanged - 2 fixed = 155 total (was 154) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
7s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882438/HADOOP-14705.07.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3f04da0de762 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ab1a8ae |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13063/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13063/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13063/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically 

[jira] [Comment Edited] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131330#comment-16131330
 ] 

Wei-Chiu Chuang edited comment on HADOOP-14705 at 8/17/17 10:19 PM:


Thanks for the new patch, Xiao.

Looking at rev 007, I feel like adding an artificial, hard-coded limit on the 
size of the payload is not the best approach.
{code:title=EagerKeyGeneratorKeyProviderCryptoExtension#reencryptEncryptedKeys}
Preconditions.checkArgument(jsonPayload.size() <= MAX_NUM_PER_BATCH,
  "jsonPayload too many objects");
{code}
I would actually prefer to log a warning if the size exceeds a certain limit, 
rather than rejecting it right away.
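
For illustration, something along these lines; this is a sketch only, where 
MAX_NUM_PER_BATCH and the LOG field are assumed to come from the patch under 
review:

{code:title=Sketch: warn instead of reject}
// Illustrative alternative to the Preconditions check above.
if (jsonPayload.size() > MAX_NUM_PER_BATCH) {
  LOG.warn("reencryptEncryptedKeys received {} entries, above the suggested"
      + " maximum of {}.", jsonPayload.size(), MAX_NUM_PER_BATCH);
}
{code}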

After adding this interface, does it deprecate the old reencrypt interface 
added in HADOOP-13827?

Regarding doc: it might be useful to mention that the batch reencryption 
interface only supports EEKs in the same encryption zone (or with the same EK).


was (Author: jojochuang):
Thanks for the new patch, Xiao.

Looking at rev 007, I feel like adding an artificial, hard-coded limit on the 
size of the payload is not the best approach.
{code:title=EagerKeyGeneratorKeyProviderCryptoExtension#reencryptEncryptedKeys}
Preconditions.checkArgument(jsonPayload.size() <= MAX_NUM_PER_BATCH,
  "jsonPayload too many objects");
{code}
I would actually prefer to log a warning if the size exceeds a certain limit, 
rather than rejecting it right away.

After adding this interface, does it deprecate the old reencrypt interface 
added in HADOOP-13827?

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, 
> HADOOP-14705.06.patch, HADOOP-14705.07.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-17 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131342#comment-16131342
 ] 

Daniel Templeton commented on HADOOP-14284:
---

bq. We are still in discussion for a decent solution. From my understanding, 
this shouldn't be a real blocker for beta.

[~djp], classpath isolation has been one of the top level features for Hadoop 
3.0 for as long as I can remember.  If we do actually want to ship it in Hadoop 
3.0, then we want to get the work in for beta1 if possible.  Please help us by 
sharing what you see as open issues with the current proposal(s).

Sounds like we have two options on the table:

# Shade just the client artifacts
# Shade everything

(With all caveats and explanations as have already been covered above.)  The 
idea of shading just the clients limits the impact, but it leaves open the door 
for unexpected downstream compatibility issues.  It also gives us an 
inconsistent source base if we go the route of changing the imports to the 
relocated packages.  Shading everything sounds safer, but is a larger change 
and may have a larger impact on the build process.  It sounds to me like the 
3rd party JAR should make the build process impact small.  For these reasons, 
I'd would say to continue on the original path.  Either way, though, let's make 
a call.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be using the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131330#comment-16131330
 ] 

Wei-Chiu Chuang commented on HADOOP-14705:
--

Thanks for the new patch, Xiao.

Looking at rev 007, I feel like adding an artificial, hard-coded limit on the 
size of the payload is not the best approach.
{code:title=EagerKeyGeneratorKeyProviderCryptoExtension#reencryptEncryptedKeys}
Preconditions.checkArgument(jsonPayload.size() <= MAX_NUM_PER_BATCH,
  "jsonPayload too many objects");
{code}
I would actually prefer to log a warning if the size exceeds a certain limit, 
rather than rejecting it right away.

After adding this interface, does it deprecate the old reencrypt interface 
added in HADOOP-13827?

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, 
> HADOOP-14705.06.patch, HADOOP-14705.07.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13948) Create automated scripts to update LICENSE/NOTICE

2017-08-17 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131310#comment-16131310
 ] 

Xiao Chen commented on HADOOP-13948:


Andrew, sorry I wasn't able to work on this. Will try after next Wednesday...
In the meantime, if you or someone working on beta is interested, feel free to 
give it a try. The scripts are on HADOOP-13780 and have very intuitive names 
(step1.sh -> step5.sh)...

> Create automated scripts to update LICENSE/NOTICE
> -
>
> Key: HADOOP-13948
> URL: https://issues.apache.org/jira/browse/HADOOP-13948
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131281#comment-16131281
 ] 

Xiao Chen commented on HADOOP-14705:


Thanks a lot for the prompt review [~shahrs87], and apologies for the 
difficulty in applying the patch.

All comments addressed in patch 7, with exceptions and explanations below.

bq. Do we need to increment AdminCallsMeter ?
I think it's the other way. These meters are exposed via JMX, where you 
would see things like:
{noformat}
"name" : "metrics:name=hadoop.kms.unauthorized.calls.meter",
"name" : "metrics:name=hadoop.kms.invalid.calls.meter",
"name" : "metrics:name=hadoop.kms.decrypt_eek.calls.meter",
"name" : "metrics:name=hadoop.kms.admin.calls.meter",
"name" : "metrics:name=hadoop.kms.generate_eek.calls.meter",
"name" : "metrics:name=hadoop.kms.key.calls.meter",
"name" : "metrics:name=hadoop.kms.unauthenticated.calls.meter",
{noformat}
From my understanding, the goal for these is purely for maintenance and 
statistics. Since key level operations are rare, they're aggregated to the 
same meters - either admin.calls or key.calls.
For eek calls (generate/decrypt), they both have their own meters. reencrypt 
fits into this category.

bq. We can remove the try catch block around user.doAs context and let the 
outer try catch block handle the exception propagating from the doAs call.
The problem with that is that the precondition checks also become IOEs. 
I think the current way is more consistent with other methods in KMS and 
creates the least surprise.
The outer wrapper only exists to make it possible to log a debug message in the 
KMS if things go wrong; the inner catch seems to consider provider-thrown 
exceptions more serious and logs an error. I don't fully get the history behind 
this, so I didn't change it.

bq. ... in the test ... I think we are comparing apples to oranges.
Agreed. Looking at {{testGenerateEncryptedKey}} this case is also there, and I 
think it doesn't hurt to make sure they're different fruits. :) Also added your 
suggestion #5 which does the more important comparison of apples to apples.

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, 
> HADOOP-14705.06.patch, HADOOP-14705.07.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Attachment: HADOOP-14705.07.patch

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, 
> HADOOP-14705.06.patch, HADOOP-14705.07.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-17 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131213#comment-16131213
 ] 

Thomas Marquardt edited comment on HADOOP-14553 at 8/17/17 8:32 PM:


It is great to see the work that has been done here!  We can now run the tests 
in half the time or less!  This is a *huge* improvement to the engineering 
process!  Thanks for enabling the tests to be run in parallel.

Here are my comments on the change:

1) *mvn test* must run all the tests.  This behavior should not change.  I 
think this should be the standard command to run tests for a project.  Also, 
people will not be aware of the change we are making, so please add new 
arguments to mvn but do not change existing behavior.

2) The Results summary is no longer consolidated.  For example, when you run 
*mvn -T 1C -Dparallel-tests clean verify*, at the end it looks like only a few 
tests were run, but if you scroll up in the console output you see that there 
were several runs and reports.  Let's summarize the results in a single line at 
the end.

3) The tests currently marked scale are not *all* scale tests.  Several are 
functional tests, like those in ITestBlockBlobInputStream.java.  It is 
important for these tests to be run prior to each check-in.  Unless you have 
added new scale tests (I did not look at the history of every scale test), all 
of the tests need to be run before check-in when we run *mvn test*.  

4) It took me 12 minutes to run *mvn -T 1C clean verify -Dscale*.  Two things: 
i) the Results summary looks almost identical to the output from *mvn -T 1C 
clean verify* except the latter had 156 skipped tests in the 2nd set of 
results.  ii) it looks like it is running the same tests that are run when you 
don't include -Dscale:

 *mvn -T 1C clean verify -Dscale*
Tests run: 214, Failures: 0, Errors: 0, Skipped: 35
Tests run: 519, Failures: 0, Errors: 0, Skipped: 120

 *mvn -T 1C clean verify*
Tests run: 214, Failures: 0, Errors: 0, Skipped: 35
Tests run: 519, Failures: 0, Errors: 0, Skipped: 156

5) Before this patch, the test results summary read *Tests run 775, Failures: 
0, Errors: 0, Skipped: 155*.  It appears that some tests were removed, as the 
total Tests run is no longer 775.  Here are the results I had for different 
commands:


 *mvn test*
Tests run: 214, Failures: 0, Errors: 0, Skipped: 35
Total time: 01:23 min

 *mvn -T 1C clean verify*
Tests run: 214, Failures: 0, Errors: 0, Skipped: 35
Tests run: 519, Failures: 0, Errors: 0, Skipped: 156
Total time: 21:36 min

 *mvn -T 1C -Dparallel-tests clean verify*
Tests run: 213, Failures: 0, Errors: 0, Skipped: 35
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
Tests run: 323, Failures: 0, Errors: 0, Skipped: 98
Tests run: 150, Failures: 0, Errors: 0, Skipped: 58
Total time: 07:12 min

 *mvn -T 1C clean verify -Dscale*
Tests run: 214, Failures: 0, Errors: 0, Skipped: 35
Tests run: 519, Failures: 0, Errors: 0, Skipped: 120
Total time: 12:14 min

I realize this is a big patch and we want to commit it, as it is expensive to 
refresh.  As long as we can fix 1) and determine why in 5) the total tests run 
has been reduced from the former 775 tests, we can commit this and continue 
working on it.

Thanks!




was (Author: tmarquardt):
It is great to see the work that has been done here!  We can now run the tests 
in half the time or less!  This is a *huge* improvement to the engineering 
process!  Thanks for enabling the tests to be run in parallel.

Here are my comments on the change:

1) *mvn test* must run all the tests.  This behavior should not change.  I 
think this should be the standard command to run tests for a project.  Also, 
people will not be aware of the change we are making, so please add new 
arguments to mvn but do not change existing behavior.

2) The Results summary is no longer consolidated.  For example, when you run 
*mvn -T 1C -Dparallel-tests clean verify*, at the end it looks like only a few 
tests were run, but if you scroll up in the console output you see that there 
were several runs and reports.  Let's summarize the results in a single line at 
the end.

3) The tests currently marked scale are not *all* scale tests.  Several are 
functional tests, like those in ITestBlockBlobInputStream.java.  It is 
important for these tests to be run prior to each check-in.  Unless you have 
added new scale tests (I did not look at the history of every scale test), all 
of the tests need to be run before check-in when we run *mvn test*.  

4) It took me 12 minutes to run *mvn -T 1C clean verify -Dscale*.  Two things: 
i) the Results summary looks almost identical to the output from *mvn -T 1C 
clean verify* except the latter had 156 skipped tests in the 2nd set of 
results.  ii) it looks like it is running the same tests that are run when you 
don't include 

[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-17 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131213#comment-16131213
 ] 

Thomas Marquardt commented on HADOOP-14553:
---

It is great to see the work that has been done here!  We can now run the tests 
in half the time or less!  This is a *huge* improvement to the engineering 
process!  Thanks for enabling the tests to be run in parallel.

Here are my comments on the change:

1) *mvn test* must run all the tests.  This behavior should not change.  I 
think this should be the standard command to run tests for a project.  Also, 
people will not be aware of the change we are making, so please add new 
arguments to mvn but do not change existing behavior.

2) The Results summary is no longer consolidated.  For example, when you run 
*mvn -T 1C -Dparallel-tests clean verify*, at the end it looks like only a few 
tests were run, but if you scroll up in the console output you see that there 
were several runs and reports.  Let's summarize the results in a single line at 
the end.

3) The tests currently marked scale are not *all* scale tests.  Several are 
functional tests, like those in ITestBlockBlobInputStream.java.  It is 
important for these tests to be run prior to each check-in.  Unless you have 
added new scale tests (I did not look at the history of every scale test), all 
of the tests need to be run before check-in when we run *mvn test*.  

4) It took me 12 minutes to run *mvn -T 1C clean verify -Dscale*.  Two things: 
i) the Results summary looks almost identical to the output from *mvn -T 1C 
clean verify* except the latter had 156 skipped tests in the 2nd set of 
results.  ii) it looks like it is running the same tests that are run when you 
don't include -Dscale:

 *mvn -T 1C clean verify -Dscale*
Tests run: 214, Failures: 0, Errors: 0, Skipped: 35
Tests run: 519, Failures: 0, Errors: 0, Skipped: 120

 *mvn -T 1C clean verify*
Tests run: 214, Failures: 0, Errors: 0, Skipped: 35
Tests run: 519, Failures: 0, Errors: 0, Skipped: 156

5) Before this patch, the test results summary read *Tests run 775, Failures: 
0, Errors: 0, Skipped: 155*.  It appears that some tests were removed, as the 
total Tests run is no longer 775.  Here are the results I had for different 
commands:


 *mvn test*
Tests run: 214, Failures: 0, Errors: 0, Skipped: 35
Total time: 01:23 min

 *mvn -T 1C clean verify*
Tests run: 214, Failures: 0, Errors: 0, Skipped: 35
Tests run: 519, Failures: 0, Errors: 0, Skipped: 156
Total time: 21:36 min

 *mvn -T 1C -Dparallel-tests clean verify*
Tests run: 213, Failures: 0, Errors: 0, Skipped: 35
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
Tests run: 323, Failures: 0, Errors: 0, Skipped: 98
Tests run: 150, Failures: 0, Errors: 0, Skipped: 58
Total time: 07:12 min

 *mvn -T 1C clean verify -Dscale*
Tests run: 214, Failures: 0, Errors: 0, Skipped: 35
Tests run: 519, Failures: 0, Errors: 0, Skipped: 120
Total time: 12:14 min

I realize this is a big patch and we want to commit it, as it is expensive to 
refresh.  As long as we can fix 1) and determine why the total tests run has 
been reduced from the former 775 tests, we can commit this and continue working 
on it.

Thanks!



> Add (parallelized) integration tests to hadoop-azure
> 
>
> Key: HADOOP-14553
> URL: https://issues.apache.org/jira/browse/HADOOP-14553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, 
> HADOOP-14553-003.patch, HADOOP-14553-004.patch, HADOOP-14553-005.patch, 
> HADOOP-14553-006.patch, HADOOP-14553-007.patch, HADOOP-14553-008.patch, 
> HADOOP-14553-009.patch, HADOOP-14553-010.patch, HADOOP-14553-011.patch, 
> HADOOP-14553-012.patch, HADOOP-14553-014.patch
>
>
> The Azure tests are slow to run as they are serialized, and since they are all 
> called Test* there's no clear differentiation between unit tests, which Jenkins 
> can run, and integration tests, which it can't.
> Move the azure tests {{Test*}} to integration tests {{ITest*}} and parallelize 
> them (which includes having separate paths for every test suite). The code in 
> hadoop-aws's POM shows what to do.
> *UPDATE August 4, 2017*:  Adding a list of requirements to clarify the 
> acceptance criteria for this JIRA:
> # Parallelize test execution
> # Define test groups: i) UnitTests - self-contained, executed by Jenkins, ii) 
> IntegrationTests - requires Azure Storage account, executed by engineers 
> prior to check-in, and if needed, iii) ScaleTests – long running performance 
> and scalability tests.
> # Define configuration profiles to run tests with different settings.  

[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131023#comment-16131023
 ] 

Hadoop QA commented on HADOOP-14705:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 
150 unchanged - 2 fixed = 150 total (was 152) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 43s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882403/HADOOP-14705.06.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d42b00258d5a 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dd7916d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13062/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13062/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13062/console |
| Powered by | Apache 

[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131015#comment-16131015
 ] 

Rushabh S Shah commented on HADOOP-14705:
-

Reviewed patch#4 since patch#5 didn't apply cleanly when I started reviewing.

+KMS#reencryptEncryptedKeys()+
# Do we need to increment AdminCallsMeter ? 
It's not clear from the class which calls are supposed to be admin and which 
ones are non-admin.
Since we are planning to create a new thread in the namenode for batch 
re-encryption, I think it would be an admin call.
Please comment otherwise.
# We can remove the try catch block around {{user.doAs}} context and let the 
outer try catch block handle the exception propagating from the {{doAs}} call.

+KeyAuthorizationKeyProvider#reencryptEncryptedKeys(List<EncryptedKeyVersion> ekvs)+
# The following chunk of code is redundant.
{noformat}
if (keyName == null) {
  keyName = ekv.getEncryptionKeyName();
} else {
  if (!keyName.equals(ekv.getEncryptionKeyName())) {
    throw new IllegalArgumentException(String.format(
        "multiple keyname '%s' '%s' found for reencryptEncryptedKeys",
        keyName, ekv.getEncryptionKeyName()));
  }
}
{noformat}
We already do this check in {{KMS#reencryptEncryptedKeys}}

+KMSUtil.java+
* parseJSONEncKeyVersions
** Why are we using LinkedList ?
We are just traversing the list using listIterator() and calling Iterator#set().
The set() operation is O(1) in both cases.
LinkedList also has a larger memory footprint than ArrayList.

* {{toJSON(KeyProvider.KeyVersion keyVersion)}} and {{toJSON(EncryptedKeyVersion 
encryptedKeyVersion)}} are still using LinkedHashMap.
We can make them HashMap.



+TestKeyProviderCryptoExtension#testReencryptEncryptedKeys()+
# Minor nit: 
{noformat}
 assertEquals("Version name of EEK should be EEK",
  KeyProviderCryptoExtension.EEK,
  ekv.getEncryptedKeyVersion().getVersionName());
{noformat}
The error message should be "Version name should be EEK"
# Another minor nit.
 assertEquals("Name of EEK should be encryption key name",
Something doesn't seem right with this error message.
To me, it should say that the EEK's key name should be fookey.
# Following code.
{noformat}
if (Arrays.equals(ekv.getEncryptedKeyVersion().getMaterial(),
    encryptionKey.getMaterial())) {
  fail("Encrypted key material should not equal decrypted key material");
}
{noformat}
  Instead you can use 
{noformat}
Assert.assertNotEquals("Encrypted key material should not equal decrypted key material",
    new String(ekv.getEncryptedKeyVersion().getMaterial()),
    new String(encryptionKey.getMaterial()));
{noformat}
# Following code. 
{noformat}
encryptionKey = kp.createKey(ENCRYPTION_KEY_NAME, SecureRandom.getSeed(16),
    options);
final KeyVersion kv = kpExt.decryptEncryptedKey(ekv);
// Following lines are in patch.
if (Arrays.equals(kv.getMaterial(), encryptionKey.getMaterial())) {
  fail("Encrypted key material should not equal encryption key material");
}
{noformat}
This comparison didn't make sense to me.
It compares the secret that is stored in the backend with the material 
that is generated while calling decryptEncryptedKey().
The two are bound to be different.
I think we are comparing apples to oranges.
# I would like to see the following test case.
The material returned by decrypting the original ekv and by decrypting the new 
ekv should be the same.
{noformat}
for (int i = 0; i < ekvs.size(); ++i) {
  final EncryptedKeyVersion ekv = ekvs.get(i);
  final EncryptedKeyVersion orig = ekvsOrig.get(i);
  KeyVersion decryptedEkv = kpExt.decryptEncryptedKey(ekv);
  KeyVersion origEkv = kpExt.decryptEncryptedKey(orig);
  Assert.assertArrayEquals(decryptedEkv.getMaterial(), origEkv.getMaterial());
}
{noformat}

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, 
> HADOOP-14705.06.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130894#comment-16130894
 ] 

Haibo Chen commented on HADOOP-14284:
-

That's good to know. The TimelineServiceV2 backend is designed to be pluggable, 
so doing this will help other backend implementations, if any.  Regardless, if 
we choose to shade server modules in addition to client modules, we don't need 
to think about whether we are in a client or server module, and because of that 
consistency, implementing the dependency enforcer will probably be easier.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be using the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Status: Patch Available  (was: Open)

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, 
> HADOOP-14705.06.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14705:
---
Attachment: HADOOP-14705.06.patch

Attaching a patch that has both patch 5 and HADOOP-14779, for review.
Discussed with [~jojochuang], who prefers to have both changes incorporated in 
1 jira.

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, 
> HADOOP-14705.06.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14648) Bump commons-configuration2 to 2.1.1

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130880#comment-16130880
 ] 

Hadoop QA commented on HADOOP-14648:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
8s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14648 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878851/HADOOP-14648.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux e1624f49bc60 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dd7916d |
| Default Java | 1.8.0_144 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13061/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13061/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Bump commons-configuration2 to 2.1.1
> 
>
> Key: HADOOP-14648
> URL: https://issues.apache.org/jira/browse/HADOOP-14648
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14648.001.patch
>
>
> Update the dependency
> org.apache.commons: commons-configuration2: 2.1
> to the latest (2.1.1).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14479) Erasurecode testcase failures with native enabled

2017-08-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130855#comment-16130855
 ] 

Andrew Wang commented on HADOOP-14479:
--

[~drankye] did we ever file that JIRA to get ISA-L re-enabled? ISA-L is 
required for production use of EC, so having test coverage is really important.

> Erasurecode testcase failures with native enabled
> -
>
> Key: HADOOP-14479
> URL: https://issues.apache.org/jira/browse/HADOOP-14479
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha3
> Environment: x86_64 Ubuntu 16.04.02 LTS
>Reporter: Ayappan
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14479.001.patch
>
>
> I built hadoop with ISA-L support. I took the ISA-L code from 
> https://github.com/01org/isa-l  (tag v2.18.0) and built it. While running the 
> UTs , following three testcases are failing
> 1)TestHHXORErasureCoder
> Tests run: 7, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 1.106 sec <<< 
> FAILURE! - in org.apache.hadoop.io.erasurecode.coder.TestHHXORErasureCoder
> testCodingDirectBuffer_10x4_erasing_p1(org.apache.hadoop.io.erasurecode.coder.TestHHXORErasureCoder)
>   Time elapsed: 0.029 sec  <<< FAILURE!
> java.lang.AssertionError: Decoding and comparing failed.
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> org.apache.hadoop.io.erasurecode.TestCoderBase.compareAndVerify(TestCoderBase.java:170)
> at 
> org.apache.hadoop.io.erasurecode.coder.TestErasureCoderBase.compareAndVerify(TestErasureCoderBase.java:141)
> at 
> org.apache.hadoop.io.erasurecode.coder.TestErasureCoderBase.performTestCoding(TestErasureCoderBase.java:98)
> at 
> org.apache.hadoop.io.erasurecode.coder.TestErasureCoderBase.testCoding(TestErasureCoderBase.java:69)
> at 
> org.apache.hadoop.io.erasurecode.coder.TestHHXORErasureCoder.testCodingDirectBuffer_10x4_erasing_p1(TestHHXORErasureCoder.java:64)
> 2)TestRSErasureCoder
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.591 sec - 
> in org.apache.hadoop.io.erasurecode.coder.TestXORCoder
> Running org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7f486a28a6e4, pid=8970, tid=0x7f4850927700
> #
> # JRE version: OpenJDK Runtime Environment (8.0_121-b13) (build 
> 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
> # Java VM: OpenJDK 64-Bit Server VM (25.121-b13 mixed mode linux-amd64 
> compressed oops)
> # Problematic frame:
> # C  [libc.so.6+0x8e6e4]
> #
> # Failed to write core dump. Core dumps have been disabled. To enable core 
> dumping, try "ulimit -c unlimited" before starting Java again
> #
> # An error report file with more information is saved as:
> # /home/ayappan/hadoop/hadoop-common-project/hadoop-common/hs_err_pid8970.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> 3)TestCodecRawCoderMapping
> Running org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec <<< 
> FAILURE! - in org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping
> testRSDefaultRawCoder(org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping)
>   Time elapsed: 0.015 sec  <<< FAILURE!
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping.testRSDefaultRawCoder(TestCodecRawCoderMapping.java:58)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14648) Bump commons-configuration2 to 2.1.1

2017-08-17 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14648:

Status: Patch Available  (was: Open)

> Bump commons-configuration2 to 2.1.1
> 
>
> Key: HADOOP-14648
> URL: https://issues.apache.org/jira/browse/HADOOP-14648
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14648.001.patch
>
>
> Update the dependency
> org.apache.commons: commons-configuration2: 2.1
> to the latest (2.1.1).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14779) Refactor decryptEncryptedKey in KeyProviderCryptoExtension

2017-08-17 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130750#comment-16130750
 ] 

Rushabh S Shah commented on HADOOP-14779:
-

+1 non-binding.

> Refactor decryptEncryptedKey in KeyProviderCryptoExtension
> --
>
> Key: HADOOP-14779
> URL: https://issues.apache.org/jira/browse/HADOOP-14779
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-14779.01.patch
>
>
> We could separate out the actual decrypt logic from the 
> {{decryptEncryptedKey}}. This enables reencrypt calls to possibly reuse the 
> codec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130712#comment-16130712
 ] 

Xiao Chen commented on HADOOP-14705:


bq. It is based on HADOOP-14779, and optimizes reencryptEncryptedKeys to use 
the same codec (and en/de-cryptor), and only getCurrentKey once from the key 
name.
Thanks for looking. As 
[noted|https://issues.apache.org/jira/browse/HADOOP-14705?focusedCommentId=16129130=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16129130],
 could you please apply HADOOP-14779 first, then patch 5 here? The only reason 
to separate HADOOP-14779 out is cleanliness.
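
For anyone skimming the patches, the gain is amortizing the key lookup and 
cipher setup across the batch. A rough sketch of the idea in plain 
{{javax.crypto}} (the {{Edek}} holder and method names are illustrative, not 
the actual {{KeyProviderCryptoExtension}} internals; real re-encryption would 
also rotate the IV):
{code:java}
import java.util.List;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

class BatchReencryptSketch {
  static class Edek {
    byte[] iv;
    byte[] material;
  }

  // One key fetch and two Cipher instances for the whole batch; only the
  // per-entry IV changes inside the loop.
  static void reencryptBatch(byte[] oldEk, byte[] newEk, List<Edek> batch)
      throws Exception {
    SecretKeySpec oldKey = new SecretKeySpec(oldEk, "AES");
    SecretKeySpec newKey = new SecretKeySpec(newEk, "AES");
    Cipher decryptor = Cipher.getInstance("AES/CTR/NoPadding");
    Cipher encryptor = Cipher.getInstance("AES/CTR/NoPadding");
    for (Edek e : batch) {
      decryptor.init(Cipher.DECRYPT_MODE, oldKey, new IvParameterSpec(e.iv));
      byte[] dek = decryptor.doFinal(e.material);
      encryptor.init(Cipher.ENCRYPT_MODE, newKey, new IvParameterSpec(e.iv));
      e.material = encryptor.doFinal(dek);
    }
  }
}
{code}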


> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-08-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130437#comment-16130437
 ] 

Steve Loughran commented on HADOOP-13786:
-

thx. I'll look @ this. 

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: cloud-intergration-test-failure.log, 
> HADOOP-13786-HADOOP-13345-001.patch, HADOOP-13786-HADOOP-13345-002.patch, 
> HADOOP-13786-HADOOP-13345-003.patch, HADOOP-13786-HADOOP-13345-004.patch, 
> HADOOP-13786-HADOOP-13345-005.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-007.patch, 
> HADOOP-13786-HADOOP-13345-009.patch, HADOOP-13786-HADOOP-13345-010.patch, 
> HADOOP-13786-HADOOP-13345-011.patch, HADOOP-13786-HADOOP-13345-012.patch, 
> HADOOP-13786-HADOOP-13345-013.patch, HADOOP-13786-HADOOP-13345-015.patch, 
> HADOOP-13786-HADOOP-13345-016.patch, HADOOP-13786-HADOOP-13345-017.patch, 
> HADOOP-13786-HADOOP-13345-018.patch, HADOOP-13786-HADOOP-13345-019.patch, 
> HADOOP-13786-HADOOP-13345-020.patch, HADOOP-13786-HADOOP-13345-021.patch, 
> HADOOP-13786-HADOOP-13345-022.patch, HADOOP-13786-HADOOP-13345-023.patch, 
> HADOOP-13786-HADOOP-13345-024.patch, HADOOP-13786-HADOOP-13345-025.patch, 
> HADOOP-13786-HADOOP-13345-026.patch, HADOOP-13786-HADOOP-13345-027.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-029.patch, HADOOP-13786-HADOOP-13345-030.patch, 
> HADOOP-13786-HADOOP-13345-031.patch, HADOOP-13786-HADOOP-13345-032.patch, 
> HADOOP-13786-HADOOP-13345-033.patch, HADOOP-13786-HADOOP-13345-035.patch, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider us free to expose the blobstore-ness of the s3 output 
> streams (i.e. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-17 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130425#comment-16130425
 ] 

Rushabh S Shah commented on HADOOP-14705:
-

Patch 5 doesn't apply cleanly on trunk, specifically 
{{KeyProviderCryptoExtension.java}}.
[~xiaochen]: can you please rebase and then submit the patch?

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead 
> with the KMS occupies the majority of the time. So this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in 1 call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130399#comment-16130399
 ] 

Steve Loughran commented on HADOOP-14520:
-

Production code is looking pretty good, so I've just gone through the tests in 
detail too now. Sorry.


1. size of buffers/compaction blocks

I'm worried about what happens when large buffers have been flushed and a 
compaction then starts. The size of the buffer needed will be 
sum(size(blocks)), won't it? I don't see any checks on those limits, such as a 
decision to set a maximum size for a compacted block and to break up 
compactions when the total to compact exceeds it.
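
To make the concern concrete, this is the kind of guard I'd expect; the 
constant, the {{BlockEntry}} type and the method are all illustrative, not 
names from the patch:
{code:java}
import java.util.ArrayList;
import java.util.List;

class CompactionSplitSketch {
  // Illustrative cap on the size of any single compacted block.
  static final long MAX_COMPACTED_BLOCK_SIZE = 4L * 1024 * 1024;

  interface BlockEntry {
    long size();
  }

  // Split the candidate blocks into runs whose total size stays under the cap,
  // so no compaction buffer ever has to hold more than MAX_COMPACTED_BLOCK_SIZE.
  static List<List<BlockEntry>> splitForCompaction(List<BlockEntry> candidates) {
    List<List<BlockEntry>> runs = new ArrayList<>();
    List<BlockEntry> run = new ArrayList<>();
    long runBytes = 0;
    for (BlockEntry b : candidates) {
      if (!run.isEmpty() && runBytes + b.size() > MAX_COMPACTED_BLOCK_SIZE) {
        runs.add(run);
        run = new ArrayList<>();
        runBytes = 0;
      }
      run.add(b);
      runBytes += b.size();
    }
    if (!run.isEmpty()) {
      runs.add(run);
    }
    return runs;
  }
}
{code}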

2. Failure handling on the compaction process. Does a failure on a compaction 
download & upload in {{blockCompaction()}} need to fail the entire write 
process? If it's a transient error, that could be overkill. However, if it's a 
sign that {{flush()}} isn't working reliably, then the current behaviour is the 
one to run with.

3. One thing I'd like (but won't mandate) is for the stream to count the number 
of compaction events, the bytes compacted and the total duration, then provide 
some @VisibleForTesting @Unstable getters, and print them in the {{toString()}} 
call. That would line things up for moving to FS-level instrumentation, and 
could be used immediately.
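
A minimal sketch of what that instrumentation might look like (field and getter 
names are mine, not the patch's):
{code:java}
import java.util.concurrent.atomic.AtomicLong;

class CompactionStatsSketch {
  private final AtomicLong compactionEvents = new AtomicLong();
  private final AtomicLong bytesCompacted = new AtomicLong();
  private final AtomicLong compactionNanos = new AtomicLong();

  void recordCompaction(long bytes, long nanos) {
    compactionEvents.incrementAndGet();
    bytesCompacted.addAndGet(bytes);
    compactionNanos.addAndGet(nanos);
  }

  // Would be @VisibleForTesting @Unstable in the real stream class.
  long getCompactionEvents() {
    return compactionEvents.get();
  }

  @Override
  public String toString() {
    return String.format("compactions=%d, bytesCompacted=%d, totalMs=%d",
        compactionEvents.get(), bytesCompacted.get(),
        compactionNanos.get() / 1_000_000);
  }
}
{code}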

h3. {{BlockBlobAppendStream}}: 
* L349: use the constant in {{StorageErrorCodeStrings}}.
* Use {{org.apache.hadoop.util.DirectBufferPool}} to pool the buffers; it's 
stable code and uses weak refs to ensure GC can recover free buffers from the 
pool (see the usage sketch after this list).
* Make sure that {{blockCompaction}} uses a buffer from the pool too; I don't 
think it does right now.
* {{UploaderThreadFactory}}: idle thought: would it make sense to include the 
container ID, or container & key, in the thread name? I don't know of anything 
else that does this, but it would aid thread-dump diagnostics.
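
A usage sketch for the pool suggestion; {{DirectBufferPool}} is existing 
hadoop-common code, while the surrounding method is illustrative:
{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.util.DirectBufferPool;

class BufferPoolSketch {
  private static final DirectBufferPool POOL = new DirectBufferPool();

  static void compactWithPooledBuffer(int bufferSize) {
    // Reuses a previously returned buffer of this size if one is free.
    ByteBuffer buf = POOL.getBuffer(bufferSize);
    try {
      // ... download the candidate blocks into buf, upload the compacted block ...
    } finally {
      // Buffers in the pool are weakly referenced, so GC can still reclaim them.
      POOL.returnBuffer(buf);
    }
  }
}
{code}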

h3. {{SelfRenewingLease}}

L82: use the constants in {{StorageErrorCodeStrings}}

h2. Test code

* There's no concurrency test, which would be nice. Could one go into 
{{TestNativeAzureFileSystemConcurrency}}?
* Maybe also think about having {{TestBlockBlobInputStream}} use this stream as 
its upload mechanism; insert some flushes through the loop and see what 
actually happens on larger-scale files. The small tests, while nice and fast, 
don't check things like buffer sizing when you have large blocks to combine.


h3. {{TestNativeAzureFileSystemBlockCompaction}}


As background, I like to review tests from the following use case: "it's got a 
transient Jenkins failure and all you have is the stack trace to debug what 
failed". Which means I expect tests to preserve all stack traces and to put as 
much diagnostic information in asserts as possible, including a message for 
every simple assertTrue/assertFalse: enough to get an idea of what's wrong 
without pasting the stack into the IDE to find out which specific assert 
actually failed.
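
The kind of assert I mean, for illustration; {{getCompactionCount()}} stands in 
for whatever the patch ends up exposing:
{code:java}
import static org.junit.Assert.assertTrue;

class AssertSketch {
  // A failure now reads "expected at least one compaction after flush;
  // stream=..." straight in the Jenkins log, instead of a bare AssertionError.
  static void assertCompacted(BlockBlobAppendStream stream) {
    assertTrue("expected at least one compaction after flush; stream=" + stream,
        stream.getCompactionCount() > 0);
  }
}
{code}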

h4. {{verifyFileData}} & {{verifyAppend}}

I'm not actually sure these work properly if the created file is larger than 
the generated test data, and, by swallowing exceptions, they don't actually 
report the underlying failures; they merely trigger an assertion failure 
somewhere in the calling code. 

I'd replace these entirely with {{ContractTestUtils.verifyFileContents()}}, 
which does report failures and is widely enough used that it's considered 
stable.
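
For reference, a drop-in call under those assumptions ({{fs}}, {{path}} and 
{{expected}} come from the test fixture):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.contract.ContractTestUtils;

class VerifySketch {
  // Re-reads the file and fails with a descriptive message on any length or
  // byte mismatch, propagating IO errors instead of swallowing them.
  static void verify(FileSystem fs, Path path, byte[] expected) throws IOException {
    ContractTestUtils.verifyFileContents(fs, path, expected);
  }
}
{code}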


h4. {{testCompaction()}}

* once the verify calls rethrow all exceptions, some of the asserts here can be 
cut.
* there's a lot of copy-and-paste duplication of the 
write/write/write/flush/verify sequences; these should be factored out into 
shared methods.
* if the stream's {{toString()}} call logs the compaction history, then 
including the stream's toString in all asserts would help diagnose problems.

h4. other 

* {{verifyBlockList}}: don't bother catching & asserting on the exception, just 
throw it all the way up and let JUnit report it.
* {{testCompactionDisabled}}: use try-with-resources or 
{{IOUtils.cleanupWithLogger}} (sketch below).
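
Both cleanup idioms from the last bullet, side by side (the method bodies are 
illustrative):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CleanupSketch {
  private static final Logger LOG = LoggerFactory.getLogger(CleanupSketch.class);

  static void write(FileSystem fs, Path path, byte[] data) throws IOException {
    // Preferred: try-with-resources closes the stream even when the body throws.
    try (FSDataOutputStream out = fs.create(path)) {
      out.write(data);
    }
  }

  static void closeQuietly(FSDataOutputStream out) {
    // When the stream outlives a single block: logs close() failures instead
    // of letting them mask the original test failure.
    IOUtils.cleanupWithLogger(LOG, out);
  }
}
{code}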


h3. checkstyle


# Most of those "is a magic number" complaints are just about common values in 
the tests; if they were pulled out into some shared constants, that would shut 
checkstyle up.
# There is that "15 minutes" constant in production code. How about moving it 
up from an inline constant to a static constant, "CLOSE_UPLOAD_DELAY" or 
similar, in the class, so at least it's obvious what the number is for and 
where the delay is chosen. At some point in the future, if it's ever felt to be 
an issue, it could be made a config option, with all the trouble that ensues.
# Javadoc is still unhappy. I'm actually surprised that it's not complaining 
about all the missing "." chars at the end of each sentence; maybe the latest 
update to Java 8.x has javadoc complaining less. Lovely as that may be, we have 
to worry about Java 9 too, so please review the diff and add them to the new 
javadoc comments.

# Probably a good time 

[jira] [Commented] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130287#comment-16130287
 ] 

Hadoop QA commented on HADOOP-14583:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 
new + 0 unchanged - 1 fixed = 1 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
7s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14583 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882178/HADOOP-14583-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f898354fe5b3 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f9a0e23 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13060/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13060/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13060/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>  

[jira] [Commented] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130254#comment-16130254
 ] 

Steve Loughran commented on HADOOP-14583:
-

# remember to hit the "submit patch" button so that Yetus does its patch audit
# the change to {{AzureNativeFileSystemStore}} LGTM, just some test tuning to do
# {{testMultiThreadedCreateDeletes}}: if an unexpected exception is caught, it 
needs to be rethrown so that testers can work out what's gone wrong. Just 
assign it to a variable, and then afterwards do a {{if (unexpected != null) 
throw unexpected;}} (see the sketch after this list)
# {{HelperThreadBase}}: new fields should be final
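
A sketch of the rethrow pattern from point 3 (names are illustrative; 
{{HelperThreadBase}} is the real base class in the test):
{code:java}
class HelperThreadSketch extends Thread {
  // volatile so the test thread sees the write after join()
  volatile Exception unexpected;

  @Override
  public void run() {
    try {
      doWork();         // stand-in for the create/delete loop
    } catch (Exception e) {
      unexpected = e;   // remember it rather than swallowing it
    }
  }

  void doWork() throws Exception {
    // ... create and delete files ...
  }
}

// In the test, after joining the helper thread t:
//   if (t.unexpected != null) {
//     throw t.unexpected;  // surfaces the real failure with its stack trace
//   }
{code}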

> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-14583-001.patch, HADOOP-14583-002.patch
>
>
> It's a known defect of the Hadoop FS API (and one we don't explicitly test 
> for enough), but you can create a file on a path which doesn't exist. In that 
> situation, the create() logic is expected to create the parent entries.
> Wasb appears to raise an exception if you try to call {{create(filepath)}} 
> without calling {{mkdirs(filepath.getParent())}} first. That's the semantics 
> expected of {{createNonRecursive()}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-08-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14583:

Status: Patch Available  (was: Open)

> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-14583-001.patch, HADOOP-14583-002.patch
>
>
> It's a known defect of the Hadoop FS API (and one we don't explicitly test 
> for enough), but you can create a file on a path which doesn't exist. In that 
> situation, the create() logic is expected to create the parent entries.
> Wasb appears to raise an exception if you try to call {{create(filepath)}} 
> without calling {{mkdirs(filepath.getParent())}} first. That's the semantics 
> expected of {{createNonRecursive()}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14777) S3Guard premerge changes: java 7 build & test tuning

2017-08-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14777:

   Resolution: Fixed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

> S3Guard premerge changes: java 7 build & test tuning
> 
>
> Key: HADOOP-14777
> URL: https://issues.apache.org/jira/browse/HADOOP-14777
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14777-HADOOP-13345-001.patch
>
>
> Another set of changes for S3Guard in preparation for merging via HADOOP-13998
> * checkstyle issues
> * Made Java 7 friendly (indeed, tested applied to branch-2 with some POM 
> changes & tested there)
> * improve diagnostics on some test failure. This would address HADOOP-14750.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14777) S3Guard premerge changes: java 7 build & test tuning

2017-08-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130227#comment-16130227
 ] 

Steve Loughran commented on HADOOP-14777:
-

oops, forgot the policy change. we can work on that post merge anyway

> S3Guard premerge changes: java 7 build & test tuning
> 
>
> Key: HADOOP-14777
> URL: https://issues.apache.org/jira/browse/HADOOP-14777
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14777-HADOOP-13345-001.patch
>
>
> Another set of changes for S3Guard in preparation for merging via HADOOP-13998
> * checkstyle issues
> * Made Java 7 friendly (indeed, tested applied to branch-2 with some POM 
> changes & tested there)
> * improve diagnostics on some test failure. This would address HADOOP-14750.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14777) S3Guard premerge changes: java 7 build & test tuning

2017-08-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130225#comment-16130225
 ] 

Steve Loughran commented on HADOOP-14777:
-

OK, committing this with the extra test policy changes. At which point we 
should be good to go

> S3Guard premerge changes: java 7 build & test tuning
> 
>
> Key: HADOOP-14777
> URL: https://issues.apache.org/jira/browse/HADOOP-14777
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14777-HADOOP-13345-001.patch
>
>
> Another set of changes for S3Guard in preparation for merging via HADOOP-13998
> * checkstyle issues
> * Made Java 7 friendly (indeed, tested applied to branch-2 with some POM 
> changes & tested there)
> * improve diagnostics on some test failure. This would address HADOOP-14750.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130207#comment-16130207
 ] 

Hadoop QA commented on HADOOP-14163:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 37 line(s) with tabs. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882307/HADOOP-14163.006.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 09d67cac6243 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f9a0e23 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13059/artifact/patchprocess/whitespace-tabs.txt
 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13059/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: site
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, 
> HADOOP-14163-003.zip, HADOOP-14163.004.patch, HADOOP-14163.005.patch, 
> HADOOP-14163.006.patch, hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned 
> website. Forrest is pretty bad compared to the newer static site generators 
> out there (e.g. need to write XML instead of markdown, it's hard to review a 
> staging site because of all the absolute links, hard to customize, did I 
> mention XML?), and the look and feel of the site is from the 00s. We don't 
> actually have that much site content, so it should be possible to migrate to 
> a new system."
> This issue is to find a solution to migrate the old site to a new modern static 
> site generator using a more contemporary theme.
> Goals: 
>  * existing links should work (or at least redirected)
>  * It should be easy to add more content required by a release automatically 
> (most probably with creating separated markdown files)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-08-17 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130157#comment-16130157
 ] 

Akira Ajisaka commented on HADOOP-13835:


Thanks [~leftnoteasy]. I'm +1 for backporting this to branch-2. Two comments:
* We don't need to add pom.xml or CMakeLists.txt in 
hadoop-mapreduce-client-nativetask module.
* re2j is added to LICENSE.txt in the patch, but I don't think re2j is 
related.

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch, 
> HADOOP-13835.branch-2.007.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-08-17 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14163:
---
Status: Patch Available  (was: Open)

> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: site
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, 
> HADOOP-14163-003.zip, HADOOP-14163.004.patch, HADOOP-14163.005.patch, 
> HADOOP-14163.006.patch, hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned 
> website. Forrest is pretty bad compared to the newer static site generators 
> out there (e.g. need to write XML instead of markdown, it's hard to review a 
> staging site because of all the absolute links, hard to customize, did I 
> mention XML?), and the look and feel of the site is from the 00s. We don't 
> actually have that much site content, so it should be possible to migrate to 
> a new system."
> This issue is to find a solution to migrate the old site to a new modern static 
> site generator using a more contemporary theme.
> Goals: 
>  * existing links should work (or at least redirected)
>  * It should be easy to add more content required by a release automatically 
> (most probably with creating separated markdown files)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-08-17 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14163:
---
Attachment: HADOOP-14163.006.patch

006 patch:
* Removed trailing whitespace
* Moved the front matter above the license headers in markdown files, so that 
the web site is generated correctly.

> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: site
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, 
> HADOOP-14163-003.zip, HADOOP-14163.004.patch, HADOOP-14163.005.patch, 
> HADOOP-14163.006.patch, hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned 
> website. Forrest is pretty bad compared to the newer static site generators 
> out there (e.g. need to write XML instead of markdown, it's hard to review a 
> staging site because of all the absolute links, hard to customize, did I 
> mention XML?), and the look and feel of the site is from the 00s. We don't 
> actually have that much site content, so it should be possible to migrate to 
> a new system."
> This issue is to find a solution to migrate the old site to a new modern static 
> site generator using a more contemporary theme.
> Goals: 
>  * existing links should work (or at least redirected)
>  * It should be easy to add more content required by a release automatically 
> (most probably with creating separated markdown files)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-08-17 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14163:
---
Status: Open  (was: Patch Available)

Cancelling the latest patch. Hugo fails to generate the web site correctly when 
license headers are added above the front matter.

> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: site
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, 
> HADOOP-14163-003.zip, HADOOP-14163.004.patch, HADOOP-14163.005.patch, 
> hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned 
> website. Forrest is pretty bad compared to the newer static site generators 
> out there (e.g. need to write XML instead of markdown, it's hard to review a 
> staging site because of all the absolute links, hard to customize, did I 
> mention XML?), and the look and feel of the site is from the 00s. We don't 
> actually have that much site content, so it should be possible to migrate to 
> a new system."
> This issue is to find a solution to migrate the old site to a new modern static 
> site generator using a more contemporary theme.
> Goals: 
>  * existing links should work (or at least redirected)
>  * It should be easy to add more content required by a release automatically 
> (most probably with creating separated markdown files)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130125#comment-16130125
 ] 

Hadoop QA commented on HADOOP-14520:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 
64 new + 82 unchanged - 4 fixed = 146 total (was 86) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-azure in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882298/HADOOP-14520-006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7a4f3a8eac5b 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1f04cb4 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13058/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13058/artifact/patchprocess/patch-javadoc-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13058/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13058/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>

[jira] [Updated] (HADOOP-14520) WASB: Block compaction for Azure Block Blobs

2017-08-17 Thread Thomas Marquardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-14520:
--
Attachment: HADOOP-14520-006.patch

Attaching HADOOP-14520-006.patch.

Thanks for the feedback.  I've updated the patch to address your feedback and 
feedback from my own review.

All tests are passing against my tmarql3 account:

Tests run: 776, Failures: 0, Errors: 0, Skipped: 155

> WASB: Block compaction for Azure Block Blobs
> 
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-006.patch, HADOOP-14520-05.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP-14520-05.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 707, Failures: 0, Errors: 0, Skipped: 119
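
Read literally, the selection step in the quoted description amounts to finding 
the longest run of consecutive blocks whose total size stays under the 4M 
limit; a sliding-window sketch of that reading ({{BlockEntry}} and all names 
are mine, not the patch's, and "longest" is taken as "most blocks"):
{code:java}
import java.util.List;

class LongestRunSketch {
  static final long LIMIT = 4L * 1024 * 1024;

  interface BlockEntry {
    long size();
  }

  // Returns [start, end) of the longest run with total size < LIMIT.
  static int[] longestRunUnderLimit(List<BlockEntry> blocks) {
    int bestStart = 0, bestEnd = 0, start = 0;
    long sum = 0;
    for (int end = 0; end < blocks.size(); end++) {
      sum += blocks.get(end).size();
      while (sum >= LIMIT && start <= end) {
        sum -= blocks.get(start++).size();  // shrink until the run fits again
      }
      if (end + 1 - start > bestEnd - bestStart) {
        bestStart = start;
        bestEnd = end + 1;
      }
    }
    return new int[] {bestStart, bestEnd};
  }
}
{code}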



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130099#comment-16130099
 ] 

Hadoop QA commented on HADOOP-14163:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 37 line(s) with tabs. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882286/HADOOP-14163.005.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux badff8639bf5 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 96b3a6b |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13057/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13057/artifact/patchprocess/whitespace-tabs.txt
 |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13057/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13057/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: site
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, 
> HADOOP-14163-003.zip, HADOOP-14163.004.patch, HADOOP-14163.005.patch, 
> hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned 
> website. Forrest is pretty bad compared to the newer static site generators 
> out there (e.g. need to write XML instead of markdown, it's hard to review a 
> staging site because of all the absolute links, hard to customize, did I 
> mention XML?), and the look and feel of the site is from the 00s. We don't 
> actually have that much site content, so it should be possible to migrate to 
> a new system."
> This issue is to find a solution to migrate the old site to a new, modern static 
> site generator using a more contemporary theme.
> Goals: 
>  * existing links should work (or at least be redirected)
>  * It should be easy to add the content required by a release automatically 
> (most probably by creating separate markdown files)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14560) Make HttpServer2 backlog size configurable

2017-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130092#comment-16130092
 ] 

Hudson commented on HADOOP-14560:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12202 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12202/])
HADOOP-14560. Make HttpServer2 backlog size configurable. Contributed by 
Alexander Krasheninnikov. (jzhuge: rev 1f04cb45f70648678840cdafbec68d534b03fe95)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java


> Make HttpServer2 backlog size configurable
> --
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>Assignee: Alexander Krasheninnikov
>  Labels: webhdfs
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14560.002.patch
>
>
> While operating WebHDFS at Badoo, we've faced an issue: the hardcoded socket 
> backlog size (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive "Connection 
> refused" errors.
> We are proposing a patch to make this backlog size configurable.
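
For context, HttpServer2 runs on Jetty, where the TCP listen backlog surfaces as 
a connector's accept queue size; when that queue overflows under load (for 
example, ~600 concurrent requests against a backlog of 128), the kernel refuses 
new connections. A minimal wiring sketch, assuming a hypothetical helper around 
Jetty's {{ServerConnector#setAcceptQueueSize}} (the property name and the old 
hardcoded default of 128 come from this thread; the class and method are 
invented for illustration):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

// Hypothetical wiring sketch -- not the committed change itself.
public class BacklogWiring {
  static final String HTTP_SOCKET_BACKLOG_SIZE_KEY =
      "hadoop.http.socket.backlog.size";
  static final int HTTP_SOCKET_BACKLOG_SIZE_DEFAULT = 128;

  static ServerConnector newConnector(Server server, Configuration conf) {
    ServerConnector connector = new ServerConnector(server);
    // Jetty's accept queue size becomes the TCP listen(2) backlog.
    connector.setAcceptQueueSize(conf.getInt(
        HTTP_SOCKET_BACKLOG_SIZE_KEY, HTTP_SOCKET_BACKLOG_SIZE_DEFAULT));
    return connector;
  }
}
{code}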



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14560) Make HttpServer2 backlog size configurable

2017-08-17 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14560:

   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~krash] for the contribution!

> Make HttpServer2 backlog size configurable
> --
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>Assignee: Alexander Krasheninnikov
>  Labels: webhdfs
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14560.002.patch
>
>
> While operating WebHDFS at Badoo, we've faced an issue: the hardcoded socket 
> backlog size (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive "Connection 
> refused" errors.
> We are proposing a patch to make this backlog size configurable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14560) Make HttpServer2 backlog size configurable

2017-08-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130073#comment-16130073
 ] 

ASF GitHub Bot commented on HADOOP-14560:
-

Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/242


> Make HttpServer2 backlog size configurable
> --
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>Assignee: Alexander Krasheninnikov
>  Labels: webhdfs
> Attachments: HADOOP-14560.002.patch
>
>
> While operating WebHDFS at Badoo, we've faced an issue: the hardcoded socket 
> backlog size (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive "Connection 
> refused" errors.
> We are proposing a patch to make this backlog size configurable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-08-17 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14163:
---
Attachment: HADOOP-14163.005.patch

005 patch:
* Added license header
* Replaced tabs with spaces in ./asf-site/layouts/partials/footer.html

> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: site
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, 
> HADOOP-14163-003.zip, HADOOP-14163.004.patch, HADOOP-14163.005.patch, 
> hadoop-site.tar.gz, hadop-site-rendered.tar.gz
>
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned 
> website. Forrest is pretty bad compared to the newer static site generators 
> out there (e.g. need to write XML instead of markdown, it's hard to review a 
> staging site because of all the absolute links, hard to customize, did I 
> mention XML?), and the look and feel of the site is from the 00s. We don't 
> actually have that much site content, so it should be possible to migrate to 
> a new system."
> This issue is to find a solution to migrate the old site to a new, modern static 
> site generator using a more contemporary theme.
> Goals: 
>  * existing links should work (or at least be redirected)
>  * It should be easy to add the content required by a release automatically 
> (most probably by creating separate markdown files)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14560) Make HttpServer2 backlog size configurable

2017-08-17 Thread Alexander Krasheninnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130043#comment-16130043
 ] 

Alexander Krasheninnikov commented on HADOOP-14560:
---

[~jzhuge], yeah, name is ok.

> Make HttpServer2 backlog size configurable
> --
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>Assignee: Alexander Krasheninnikov
>  Labels: webhdfs
> Attachments: HADOOP-14560.002.patch
>
>
> While operating WebHDFS at Badoo, we've faced an issue: the hardcoded socket 
> backlog size (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive "Connection 
> refused" errors.
> We are proposing a patch to make this backlog size configurable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130038#comment-16130038
 ] 

Hadoop QA commented on HADOOP-14729:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 76 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
8s{color} | {color:green} root generated 0 new + 1315 unchanged - 2 fixed = 
1315 total (was 1317) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 18s{color} | {color:orange} root: The patch generated 42 new + 1136 
unchanged - 110 fixed = 1178 total (was 1246) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 14m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 53s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
13s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
42s{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
0s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
29s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}113m 
32s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
3s{color} | {color:green} hadoop-mapreduce-client-nativetask in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-mapreduce-examples in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
12s{color} | {color:green} hadoop-streaming in the patch passed. {color} |

[jira] [Commented] (HADOOP-14560) Make HttpServer2 backlog size configurable

2017-08-17 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1612#comment-1612
 ] 

John Zhuge commented on HADOOP-14560:
-

[~krash] Are you ok with the property name {{hadoop.http.socket.backlog.size}} 
in patch 002? If yes, I will commit tomorrow.
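
For reference, a hypothetical usage sketch with that property name (the builder 
and accessor calls follow HttpServer2's public API, but the server name, port, 
and chosen backlog value are made up):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.HttpServer2;

public class BacklogUsageExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Raise the listen backlog well above the old hardcoded 128.
    conf.setInt("hadoop.http.socket.backlog.size", 1024);
    HttpServer2 server = new HttpServer2.Builder()
        .setName("test")
        .addEndpoint(URI.create("http://localhost:0"))
        .setConf(conf)
        .setFindPort(true)
        .build();
    server.start();
    try {
      System.out.println("Listening on port "
          + server.getConnectorAddress(0).getPort());
    } finally {
      server.stop();
    }
  }
}
{code}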

> Make HttpServer2 backlog size configurable
> --
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>Assignee: Alexander Krasheninnikov
>  Labels: webhdfs
> Attachments: HADOOP-14560.002.patch
>
>
> While operating WebHDFS at Badoo, we've faced an issue: the hardcoded socket 
> backlog size (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive "Connection 
> refused" errors.
> We are proposing a patch to make this backlog size configurable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14560) Make HttpServer2 backlog size configurable

2017-08-17 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129995#comment-16129995
 ] 

John Zhuge commented on HADOOP-14560:
-

Passed test-patch locally, all green:
{noformat}
$ dev-support/bin/test-patch ~/patches/HADOOP-14560.002.patch

+1 overall

 __
< Success! >
 --
 \ /\  ___  /\
  \   // \/   \/ \\
 ((O O))
  \\ / \ //
   \/  | |  \/
|  | |  |
|  | |  |
|   o   |
| |   | |
|m|   |m|


| Vote |  Subsystem |  Runtime   | Comment

|   0  |  findbugs  |   0m  1s   | Findbugs executables are not available.
|  +1  |   @author  |   0m  0s   | The patch does not contain any @author
|  ||| tags.
|  +1  |test4tests  |   0m  0s   | The patch appears to include 1 new or
|  ||| modified test files.
|  +1  |mvninstall  |  11m 25s   | trunk passed
|  +1  |   compile  |  11m 33s   | trunk passed
|  +1  |checkstyle  |   0m 28s   | trunk passed
|  +1  |   mvnsite  |   1m  2s   | trunk passed
|  +1  |mvneclipse  |   0m 14s   | trunk passed
|  +1  |   javadoc  |   0m 41s   | trunk passed
|  +1  |mvninstall  |   0m 32s   | the patch passed
|  +1  |   compile  |   8m  3s   | the patch passed
|  +1  | javac  |   8m  3s   | the patch passed
|  +1  |checkstyle  |   0m 26s   | the patch passed
|  +1  |   mvnsite  |   1m  3s   | the patch passed
|  +1  |mvneclipse  |   0m 13s   | the patch passed
|  +1  |whitespace  |   0m  0s   | The patch has no whitespace issues.
|  +1  |   javadoc  |   0m 40s   | the patch passed
|  +1  |asflicense  |   0m 22s   | The patch does not generate ASF License
|  ||| warnings.
|  ||  37m 20s   |
{noformat}

> Make HttpServer2 backlog size configurable
> --
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>Assignee: Alexander Krasheninnikov
>  Labels: webhdfs
> Attachments: HADOOP-14560.002.patch
>
>
> While operating WebHDFS at Badoo, we've faced an issue: the hardcoded socket 
> backlog size (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive "Connection 
> refused" errors.
> We are proposing a patch to make this backlog size configurable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129979#comment-16129979
 ] 

Hadoop QA commented on HADOOP-14163:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 38 line(s) with tabs. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 111 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882268/HADOOP-14163.004.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux ff4c21a5bc2e 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 96b3a6b |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13056/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13056/artifact/patchprocess/whitespace-tabs.txt
 |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13056/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13056/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor existing hadoop site to use more usable static website generator
> -
>
> Key: HADOOP-14163
> URL: https://issues.apache.org/jira/browse/HADOOP-14163
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: site
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, 
> HADOOP-14163-003.zip, HADOOP-14163.004.patch, hadoop-site.tar.gz, 
> hadop-site-rendered.tar.gz
>
>
> From the dev mailing list:
> "Publishing can be attacked via a mix of scripting and revamping the darned 
> website. Forrest is pretty bad compared to the newer static site generators 
> out there (e.g. need to write XML instead of markdown, it's hard to review a 
> staging site because of all the absolute links, hard to customize, did I 
> mention XML?), and the look and feel of the site is from the 00s. We don't 
> actually have that much site content, so it should be possible to migrate to 
> a new system."
> This issue is to find a solution to migrate the old site to a new, modern static 
> site generator using a more contemporary theme.
> Goals: 
>  * existing links should work (or at least be redirected)
>  * It should be easy to add the content required by a release automatically 
> (most probably by creating separate markdown files)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org