[jira] [Updated] (HADOOP-14322) Incorrect host info may be reported in failover message

2017-04-18 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-14322:
---
Attachment: HADOOP-14322.001.patch

> Incorrect host info may be reported in failover message
> ---
>
> Key: HADOOP-14322
> URL: https://issues.apache.org/jira/browse/HADOOP-14322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-14322.001.patch
>
>
> This may apply to other components, but HDFS is used as the example here.
> When multiple threads use the same DFSClient to make RPC calls, they may
> report an incorrect NN host name in the failover message:
> {code}
> INFO [pool-3-thread-13] retry.RetryInvocationHandler 
> (RetryInvocationHandler.java:invoke(148)) - Exception while invoking delete 
> of class ClientNamenodeProtocolTranslatorPB over *a.b.c.d*:8020. Trying to 
> fail over immediately.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category WRITE is not supported in state standby. Visit 
> https://s.apache.org/sbnn-error
> {code}
> where *a.b.c.d* is the RPC proxy corresponding to the active NN, which
> confuses the user into thinking that failover is not behaving correctly,
> because *a.b.c.d* is expected to be the proxy corresponding to the standby NN
> here.
> The reason is that the ProxyDescriptor data field of RetryInvocationHandler
> may be shared by multiple threads making RPC calls, so a failover done by one
> thread (which changes the RPC proxy) may become visible to other threads by
> the time they report the above message.
> An example sequence: 
> # multiple threads start with the same SNN to do RPC calls,
> # all threads discover that a failover is needed,
> # thread X fails over first, and changes the ProxyDescriptor's proxyInfo to
> the ANN,
> # the other threads report the above message with the proxyInfo already
> changed by thread X, and thus report the ANN instead of the SNN.
> Some details:
> RetryInvocationHandler does the following when failing over:
> {code}
>   synchronized void failover(long expectedFailoverCount, Method method,
>       int callId) {
>     // Make sure that concurrent failed invocations only cause a single
>     // actual failover.
>     if (failoverCount == expectedFailoverCount) {
>       fpp.performFailover(proxyInfo.proxy);
>       failoverCount++;
>     } else {
>       LOG.warn("A failover has occurred since the start of call #" + callId
>           + " " + proxyInfo.getString(method.getName()));
>     }
>     proxyInfo = fpp.getProxy();
>   }
> {code}
> which changes the proxyInfo in the ProxyDescriptor.
> Meanwhile, the log method below reports the message with the ProxyDescriptor's
> proxyInfo:
> {code}
> private void log(final Method method, final boolean isFailover,
>     final int failovers, final long delay, final Exception ex) {
>   ..
>   final StringBuilder b = new StringBuilder()
>       .append(ex + ", while invoking ")
>       .append(proxyDescriptor.getProxyInfo().getString(method.getName()));
>   if (failovers > 0) {
>     b.append(" after ").append(failovers).append(" failover attempts");
>   }
>   b.append(isFailover? ". Trying to failover ": ". Retrying ");
>   b.append(delay > 0? "after sleeping for " + delay + "ms.": "immediately.");
> {code}
> as does the {{handleException}} method:
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Exception while invoking call #" + callId + " "
>   + proxyDescriptor.getProxyInfo().getString(method.getName())
>   + ". Not retrying because " + retryInfo.action.reason, e);
> }
> {code}
> and FailoverProxyProvider's ProxyInfo supplies the getString/toString used above:
> {code}
> public String getString(String methodName) {
>   return proxy.getClass().getSimpleName() + "." + methodName
>       + " over " + proxyInfo;
> }
>
> @Override
> public String toString() {
>   return proxy.getClass().getSimpleName() + " over " + proxyInfo;
> }
> {code}
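
A minimal, self-contained sketch of the race and of one possible way out, namely snapshotting the ProxyInfo each invocation actually used and logging only that snapshot (class and member names below are invented for illustration; this is not the attached patch):

{code}
import java.util.concurrent.atomic.AtomicReference;

public class SnapshottingInvoker {
  static final class ProxyInfo {
    final String host;
    ProxyInfo(String host) { this.host = host; }
    @Override public String toString() { return host; }
  }

  // Shared mutable field, analogous to ProxyDescriptor.proxyInfo.
  private final AtomicReference<ProxyInfo> current =
      new AtomicReference<>(new ProxyInfo("standby-nn:8020"));

  void invoke(String method) {
    // Snapshot BEFORE the call, and use only the snapshot in messages.
    final ProxyInfo used = current.get();
    try {
      throw new RuntimeException("StandbyException");  // simulated failure
    } catch (RuntimeException e) {
      // Even if another thread has already failed `current` over to the
      // active NN, this message still names the proxy this call used.
      System.out.println("Exception while invoking " + method + " over "
          + used + ". Trying to fail over immediately.");
      current.compareAndSet(used, new ProxyInfo("active-nn:8020"));
    }
  }

  public static void main(String[] args) {
    new SnapshottingInvoker().invoke("delete");
  }
}
{code}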






[jira] [Updated] (HADOOP-14322) Incorrect host info may be reported in failover message

2017-04-18 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-14322:
---
Status: Patch Available  (was: Open)







[jira] [Commented] (HADOOP-14266) S3Guard: S3AFileSystem::listFiles() to employ MetadataStore

2017-04-18 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973859#comment-15973859
 ] 

Aaron Fabbri commented on HADOOP-14266:
---

The isAuthoritative change looks good to me.

I also noticed a change in the v7 patch to add FileStatusAcceptor checking for
ProvidedFileStatusIterator. Looks like this was missing before? I'm curious
whether we should have a test case covering this; if so, feel free to file a
separate JIRA. You might see a javadoc warning on the missing {{@param}} for
the acceptor arg you added.

I'm doing more testing in us-west-2. So far I have run the non-parallel dynamo
test and got one failure (unrelated, I think):

{noformat}
mvn clean verify  -Dtest=none -Ddynamo -Ds3guard 
-Dit.test='ITestS3A*,ITestS3G*,ITestMetadata*'
...
testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)
  Time elapsed: 0.504 sec  <<< ERROR!
java.lang.IllegalArgumentException: No DynamoDB table name configured!
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:261)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:81)
{noformat}

I will also try the parallel dynamo test, and then LocalMetadataStore with
fs.s3a.metadatastore.authoritative=true.

> S3Guard: S3AFileSystem::listFiles() to employ MetadataStore
> ---
>
> Key: HADOOP-14266
> URL: https://issues.apache.org/jira/browse/HADOOP-14266
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14266-HADOOP-13345.000.patch, 
> HADOOP-14266-HADOOP-13345.001.patch, HADOOP-14266-HADOOP-13345.002.patch, 
> HADOOP-14266-HADOOP-13345.003.patch, HADOOP-14266-HADOOP-13345.003.patch, 
> HADOOP-14266-HADOOP-13345.004.patch, HADOOP-14266-HADOOP-13345-005.patch, 
> HADOOP-14266-HADOOP-13345.005.patch, HADOOP-14266-HADOOP-13345.006.patch, 
> HADOOP-14266-HADOOP-13345.007.patch
>
>
> Similar to [HADOOP-13926], this is to track the effort of employing 
> MetadataStore in {{S3AFileSystem::listFiles()}}.






[jira] [Commented] (HADOOP-14266) S3Guard: S3AFileSystem::listFiles() to employ MetadataStore

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973852#comment-15973852
 ] 

Hadoop QA commented on HADOOP-14266:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 39s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 41s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:612578f |
| JIRA Issue | HADOOP-14266 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863917/HADOOP-14266-HADOOP-13345.007.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 04dcfcb2c98b 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-13345 / d4fd991 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12121/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12121/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HADOOP-14266) S3Guard: S3AFileSystem::listFiles() to employ MetadataStore

2017-04-18 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14266:
---
Attachment: HADOOP-14266-HADOOP-13345.007.patch

V7 addresses the {{!recursive && isAuthoritative}} case, though that is not yet
supported by DDBMetadataStore. I tested it and it looks good. Additional tests
will be appreciated.

For using two running iterators in {{FileStatusListingIterator}} to replace
{{private final Set providedStatus}}, let's address that separately. We need an
ordering guarantee.







[jira] [Commented] (HADOOP-14321) explicitly exclude s3a root dir ITests from parallel runs

2017-04-18 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973706#comment-15973706
 ] 

Mingliang Liu commented on HADOOP-14321:


I got this error when running in parallel mode (though the root directory test
was in the sequential phase).

{code}
---
 T E S T S
---
Running org.apache.hadoop.fs.contract.s3.ITestS3ContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.296 sec - in 
org.apache.hadoop.fs.contract.s3.ITestS3ContractRootDir
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.499 sec <<< 
FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 1.406 sec  <<< ERROR!
org.apache.hadoop.fs.PathIOException: `mliu-s3guard': Cannot delete root path
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.rejectRootDirectoryDelete(S3AFileSystem.java:1372)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerDelete(S3AFileSystem.java:1298)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:1262)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:116)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

I'm not sure whether this has already been reported.

> explicitly exclude s3a root dir ITests from parallel runs
> -
>
> Key: HADOOP-14321
> URL: https://issues.apache.org/jira/browse/HADOOP-14321
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14321-branch-2-001.patch
>
>
> the s3 root dir tests are running, even though they are meant to be excluded 
> via the statement
> {code}
> **/ITest*Root*.java
> {code}
> Maybe the double * in the pattern is causing confusion. Fix: explicitly list 
> the relevant tests (s3, s3n, s3a) instead.






[jira] [Commented] (HADOOP-14321) explicitly exclude s3a root dir ITests from parallel runs

2017-04-18 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973676#comment-15973676
 ] 

Mingliang Liu commented on HADOOP-14321:


+1

FWIW, the surefire plugin supports regular expressions. Maybe we can consider
that later.
{code:xml}
diff --git a/hadoop-tools/hadoop-aws/pom.xml b/hadoop-tools/hadoop-aws/pom.xml
index c1880552b1..c1fb1fde3f 100644
--- a/hadoop-tools/hadoop-aws/pom.xml
+++ b/hadoop-tools/hadoop-aws/pom.xml
@@ -181,7 +181,7 @@
               <excludes>
                 <exclude>**/ITestJets3tNativeS3FileSystemContract.java</exclude>
-                <exclude>**/ITest*Root*.java</exclude>
+                <exclude>%regex[.*ITest.*Root.*]</exclude>
                 <exclude>**/ITestS3AFileContextStatistics.java</exclude>
                 <exclude>**/ITestS3AEncryptionSSE*.java</exclude>
                 <exclude>**/ITestS3AHuge*.java</exclude>
@@ -209,7 +209,7 @@
               <excludes>
                 <exclude>**/ITestJets3tNativeS3FileSystemContract.java</exclude>
-                <exclude>**/ITest*Root*.java</exclude>
+                <exclude>%regex[.*ITest.*Root.*]</exclude>
                 <exclude>**/ITestS3AFileContextStatistics.java</exclude>
                 <exclude>**/ITestS3AHuge*.java</exclude>
                 <exclude>**/ITestS3AEncryptionSSE*.java</exclude>
{code}







[jira] [Commented] (HADOOP-14316) Switch from FindBugs to Spotbugs

2017-04-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973671#comment-15973671
 ] 

Allen Wittenauer commented on HADOOP-14316:
---

bq. Presumably there will some followup "fix spotbug issues" JIRAs?

Almost certainly.  62 new findbugs errors are quite a few and will likely be 
shocking. I didn't look through all of them, but of the handful that I did, 
they definitely pointed to problems. It's gonna take the community to fix them.

I'll drop a note to common-dev just to warn folks on commit.

> Switch from FindBugs to Spotbugs 
> -
>
> Key: HADOOP-14316
> URL: https://issues.apache.org/jira/browse/HADOOP-14316
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14316.00.patch
>
>
> Findbugs hasn't gotten a decent update in a few years.  The community has 
> since forked it and created https://github.com/spotbugs/spotbugs .  Running 
> the RC1 on trunk has pointed out some definite problem areas.  I think it 
> would be to our benefit to switch trunk over sooner rather than later, even 
> though it's still in RC status.






[jira] [Commented] (HADOOP-14321) explicitly exclude s3a root dir ITests from parallel runs

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973645#comment-15973645
 ] 

Steve Loughran commented on HADOOP-14321:
-

{{-Dparallel-tests -DtestsThreadCount=8}}








[jira] [Commented] (HADOOP-11656) Classpath isolation for downstream clients

2017-04-18 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973641#comment-15973641
 ] 

Sangjin Lee commented on HADOOP-11656:
--

bq. I may be able to work around it by patching out the bundled deps for Fedora 
builds...

You can only do so much of that from your side. Maybe swapping 1.2.3 of
something for 1.2.4 would work, but the Hadoop community cannot guarantee that
things will work if the version jump is sufficiently large.

{quote}
 If I'm understanding the situation correctly, perhaps there's a better way to 
resolve dependency convergence issues? I've found that often, just following 
semantic versioning, and keeping dependencies reasonably up-to-date resolves 
most issues.
{quote}

As [~steve_l] noted, it's not as simple as adopting semantic versioning, etc.
I'd be the first to acknowledge that the current set of dependency versions is
hopelessly outdated, but we have been very conservative for fear that upgrading
would cause more issues for downstream users and frameworks. The 3.0 timeframe
gives us a window to make the changes that will insulate downstream from
Hadoop and vice versa.

bq. But right now we don't have any way to stop changes in Hadoop's 
dependencies from breaking things downstream.

I would disagree with this statement. Shading is only *one* mechanism of 
isolating classpaths. The other commonly used mechanism is to have an isolating 
classloader, like a servlet webapp classloader. Hadoop has had this for many 
years, and folks have been using it with success. And it doesn't involve 
rewriting classes at build time.

We've been working on making it stricter for 3.0 so that we can finally 
separate the Hadoop classpath from the user classpath, thereby freeing Hadoop 
to evolve its dependencies without worrying about users' dependencies. See 
HADOOP-13070 and HADOOP-13398. IMO, we should keep the isolating classloader 
feature for 3.0 and get that done for the container runtime at least.
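
For readers unfamiliar with the mechanism, the sketch below shows a toy
child-first (isolating) classloader. It illustrates the general technique only;
it is not Hadoop's ApplicationClassLoader, and the prefix list stands in for
the real system-classes configuration:

{code}
import java.net.URL;
import java.net.URLClassLoader;

public class IsolatingClassLoader extends URLClassLoader {
  private final String[] systemPrefixes;  // always delegated to the parent

  public IsolatingClassLoader(URL[] userJars, ClassLoader parent,
      String... systemPrefixes) {
    super(userJars, parent);
    this.systemPrefixes = systemPrefixes;
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve)
      throws ClassNotFoundException {
    synchronized (getClassLoadingLock(name)) {
      for (String p : systemPrefixes) {
        if (name.startsWith(p)) {              // e.g. "java.", "org.apache.hadoop."
          return super.loadClass(name, resolve);  // parent-first for system classes
        }
      }
      Class<?> c = findLoadedClass(name);
      if (c == null) {
        try {
          c = findClass(name);                 // child-first: user jars win
        } catch (ClassNotFoundException e) {
          c = super.loadClass(name, false);    // fall back to the parent
        }
      }
      if (resolve) {
        resolveClass(c);
      }
      return c;
    }
  }
}
{code}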

> Classpath isolation for downstream clients
> --
>
> Key: HADOOP-11656
> URL: https://issues.apache.org/jira/browse/HADOOP-11656
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: classloading, classpath, dependencies, scripts, shell
> Attachments: HADOOP-11656_proposal.md
>
>
> Currently, Hadoop exposes downstream clients to a variety of third party 
> libraries. As our code base grows and matures we increase the set of 
> libraries we rely on. At the same time, as our user base grows we increase 
> the likelihood that some downstream project will run into a conflict while 
> attempting to use a different version of some library we depend on. This has 
> already happened with, e.g., Guava several times for HBase, Accumulo, and Spark 
> (and I'm sure others).
> While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
> off and they don't do anything to help dependency conflicts on the driver 
> side or for folks talking to HDFS directly. This should serve as an umbrella 
> for changes needed to do things thoroughly on the next major version.
> We should ensure that downstream clients
> 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
> doesn't pull in any third party dependencies
> 2) only see our public API classes (or as close to this as feasible) when 
> executing user provided code, whether client side in a launcher/driver or on 
> the cluster in a container or within MR.
> This provides us with a double benefit: users get less grief when they want 
> to run substantially ahead or behind the versions we need and the project is 
> freer to change our own dependency versions because they'll no longer be in 
> our compatibility promises.
> Project specific task jiras to follow after I get some justifying use cases 
> written in the comments.






[jira] [Commented] (HADOOP-14321) explicitly exclude s3a root dir ITests from parallel runs

2017-04-18 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973636#comment-15973636
 ] 

Mingliang Liu commented on HADOOP-14321:


Steve which command line did you use for testing this?

> explicitly exclude s3a root dir ITests from parallel runs
> -
>
> Key: HADOOP-14321
> URL: https://issues.apache.org/jira/browse/HADOOP-14321
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14321-branch-2-001.patch
>
>
> the s3 root dir tests are running, even though they are meant to be excluded 
> via the statement
> {code}
> **/ITest*Root*.java
> {code}
> Maybe the double * in the pattern is causing confusion. Fix: explicitly list 
> the relevant tests (s3, s3n, s3a instead)






[jira] [Commented] (HADOOP-11656) Classpath isolation for downstream clients

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973613#comment-15973613
 ] 

Steve Loughran commented on HADOOP-11656:
-

[~ctubbsii]: this is about client side classpath dependencies, not server. If 
you want to know why, look at HADOOP-10101 to see coverage of just one JAR, 
then consider also Jackson 1.x, jackson 2.x, jersey, and other widely used 
things. The ones which cause the most problems are those for IPC: protobuf, 
avro, where the generated code has to be in perfect sync with the version 
of classes generated by the protoc compiler and compiled into the archives.

bq. Has the upstream Hadoop community considered other possible options, 

yes

bq. such as better semantic versioning

Requires fundamental change across the entire Java stack, and doesn't handle
the problem of a downstream app wanting to use a version of protobuf
incompatible with the version Hadoop's generated classes depend on, etc. Also,
the whole notion of "semantically compatible" is one we could discuss for a
long time. Suffice it to say, even though we like to maintain semantic
compatibility, the fact that protobuf 2.5 doesn't link against classes
generated by protoc 2.4 means that we are fighting a losing battle here.

bq. , modularity, 

what exactly do you mean here?

bq. updated dependencies

see HADOOP-9991

bq. marking dependencies "optional",

This is why we are splitting things out, such as having a separate
{{hadoop-hdfs-client}} apart from the server-side code.

bq. relying on user-defined classpath at runtime, etc., 

Requires fundamental changes to both build-time and runtime isolation in the
JVM and its toolchain. We're actually looking forward to Java 9 here; keep an
eye on HADOOP-11123.

bq. as an alternative to shading/bundling

We are not fans of shading; we recognise its fundamental wrongness, as well as
its adverse consequences, both in maintenance/admin ("does this
aggregate/shaded app include something insecure or license-incompatible?") and
in unwanted side effects (a recent example: HADOOP-14138). But right now we
don't have any way to stop changes in Hadoop's dependencies from breaking
things downstream: we pull in so many things server side, and the need to
avoid breaking things constrains what we can do. Minimising changes hampers
our ability to use the best tools from others; being aggressive about
dependencies would destroy the well-being of everything downstream.

As noted, this is about the client side. Server side: not bundled/shaded, same
as it ever was. You can even skip the shading by building with -DskipShade.
Downstream apps, such as HBase, will pick up the shaded artifacts: you will
have to build them and hand them over. That's the only way we can decouple
their dependencies within the constraints of Java's current isolation model.
Java 9 will change this, hopefully for the better.




[jira] [Commented] (HADOOP-14305) S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus

2017-04-18 Thread Steve Moist (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973589#comment-15973589
 ] 

Steve Moist commented on HADOOP-14305:
--

I've made the pom changes, but want to spend some time writing some more unit 
tests that deal with SSE-C.  I'll submit a patch in a few days.

> S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus
> ---
>
> Key: HADOOP-14305
> URL: https://issues.apache.org/jira/browse/HADOOP-14305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Moist
>Priority: Minor
> Attachments: HADOOP-14305-001.patch
>
>
> The S3a encryption tests all run in serial (they were interfering with each 
> other, apparently). This adds ~1 min to the test runs.
> They should run in parallel. That they fail when this is attempted, due to Bad 
> Auth problems, must be considered a serious problem, as it indicates issues 
> related to working with SSE encryption from Hadoop.






[jira] [Commented] (HADOOP-14313) Replace/improve Hadoop's byte[] comparator

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973581#comment-15973581
 ] 

Steve Loughran commented on HADOOP-14313:
-

Looking at the Hadoop code, that comparator was lifted from Guava many years
ago. We either maintain it or switch to the Guava one. Probably easiest just to
move to Guava, unless we really don't trust them for stability/perf.
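
For reference, a quick usage sketch of the Guava comparator in question; the
only assumption is Guava on the classpath:

{code}
import java.util.Comparator;
import com.google.common.primitives.UnsignedBytes;

public class ByteCompareDemo {
  public static void main(String[] args) {
    // Lexicographic comparison treating each byte as unsigned (0..255).
    Comparator<byte[]> cmp = UnsignedBytes.lexicographicalComparator();
    byte[] a = {0x01, (byte) 0xFF};  // 0xFF compares as unsigned 255
    byte[] b = {0x01, 0x7F};
    System.out.println(cmp.compare(a, b) > 0);  // true: a sorts after b
  }
}
{code}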

> Replace/improve Hadoop's byte[] comparator
> --
>
> Key: HADOOP-14313
> URL: https://issues.apache.org/jira/browse/HADOOP-14313
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Vikas Vishwakarma
> Attachments: HADOOP-14313.master.001.patch
>
>
> Hi,
> Recently we were looking at the lexicographic byte array comparison in HBase. 
> We did a microbenchmark of the byte array comparator of Hadoop ( 
> https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/io/FastByteComparisons.java#L161
>  ) and of HBase, vs the latest byte array comparator from Guava ( 
> https://github.com/google/guava/blob/master/guava/src/com/google/common/primitives/UnsignedBytes.java#L362
>  ), and observed that the Guava main-branch version is much faster. 
> Specifically, we see very good improvement when byteArraySize % 8 != 0, and 
> also for large byte arrays. I will update the benchmark results using JMH for 
> Hadoop vs Guava. For the JIRA on HBase, please refer to HBASE-17877. 






[jira] [Commented] (HADOOP-14303) Review retry logic on all S3 SDK calls, implement where needed

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973559#comment-15973559
 ] 

Steve Loughran commented on HADOOP-14303:
-

Retry logic should also include handling of any load-throttling problems
related to integrated services, such as SSE-KMS. S3Guard is already doing this
for DDB.

> Review retry logic on all S3 SDK calls, implement where needed
> --
>
> Key: HADOOP-14303
> URL: https://issues.apache.org/jira/browse/HADOOP-14303
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> AWS S3, IAM, KMS, DDB etc. all throttle callers: the S3A code needs to handle 
> this without failing; if it slows down its requests, it can recover.
> 1. Look at all the places where we are calling S3 via the AWS SDK and make 
> sure we are retrying with some backoff & jitter policy, ideally something 
> unified. This must be more systematic than the case-by-case, 
> problem-by-problem strategy we are implicitly using.
> 2. Many of the AWS S3 SDK calls do implement retry (e.g. PUT/multipart PUT), 
> but we need to check the other parts of the process: login, initiate/complete 
> MPU, ...
> Related:
> HADOOP-13811 Failed to sanitize XML document destined for handler class
> HADOOP-13664 S3AInputStream to use a retry policy on read failures
> This stuff is all hard to test. A key need is to be able to differentiate 
> recoverable throttle & network failures from unrecoverable problems like 
> auth and network config (e.g. bad endpoint).
> This may be the opportunity to add a faulting subclass of the Amazon S3 
> client which can be configured in IT tests to fail at specific points. Ryan 
> Blue's mock S3 client does this in HADOOP-13786, but it is for 100% mock. I'm 
> thinking of something with similar fault raising, but in front of the real 
> S3A client 
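
For illustration, a unified backoff-with-jitter wrapper of the kind described
might look like the sketch below; the names and policy are invented, and a real
implementation would classify AWS error codes rather than treating every
failure as recoverable:

{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

public class RetryWithJitter {
  public static <T> T call(Callable<T> op, int maxAttempts, long baseDelayMs)
      throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return op.call();
      } catch (Exception e) {
        if (attempt >= maxAttempts || !isRecoverable(e)) {
          throw e;  // auth/config problems should fail fast
        }
        // Exponential backoff with full jitter: sleep in [0, base * 2^attempt).
        long cap = Math.max(1, baseDelayMs) << Math.min(attempt, 10);
        Thread.sleep(ThreadLocalRandom.current().nextLong(cap));
      }
    }
  }

  // Placeholder: a real version would inspect the AWS exception (e.g.
  // 503 Slow Down) to separate throttling from unrecoverable failures.
  private static boolean isRecoverable(Exception e) {
    return true;
  }
}
{code}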






[jira] [Resolved] (HADOOP-14319) Under replicated blocks are not getting re-replicated

2017-04-18 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-14319.
---
Resolution: Invalid

Please send your queries to the hdfs-user mailing list:
https://hadoop.apache.org/mailing_lists.html
To answer your query, please look at dfs.namenode.replication.max-streams,
dfs.namenode.replication.max-streams-hard-limit, and
dfs.namenode.replication.work.multiplier.per.iteration.


> Under replicated blocks are not getting re-replicated
> -
>
> Key: HADOOP-14319
> URL: https://issues.apache.org/jira/browse/HADOOP-14319
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Anil
>
> Under replicated blocks are not getting re-replicated.
> In a production Hadoop cluster of 5 management + 5 data nodes, under 
> replicated blocks are not re-replicated even after 2 days. 
> Here is a quick view of the relevant configuration:
>  Default replication factor:  3
>  Average block replication:   3.0
>  Corrupt blocks:  0
>  Missing replicas:0 (0.0 %)
>  Number of data-nodes:5
>  Number of racks: 1
> After bringing one of the DataNodes down, the replication factor for the 
> blocks allocated on that Data Node became 2. It is observed that, even after 
> 2 days, the replication factor remains 2; under-replicated blocks are not 
> getting re-replicated to other DataNodes in the cluster. 
> If a Data Node goes down, HDFS will try to replicate the blocks from the dead 
> DN to other nodes, according to priority. Are there any configuration changes 
> to speed up the re-replication process for the under-replicated blocks? 
> When tested with blocks of replication factor 1, re-replication to factor 2 
> happened overnight, in around 10 hours. But blocks with replication factor 2 
> are not being re-replicated to the default replication factor of 3. 






[jira] [Closed] (HADOOP-14319) Under replicated blocks are not getting re-replicated

2017-04-18 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash closed HADOOP-14319.
-







[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-04-18 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973537#comment-15973537
 ] 

Ted Yu commented on HADOOP-13866:
-

See this thread: 
http://search-hadoop.com/m/HBase/YGbbZFYYbSDQ3l1?subj=Re+DISCUSS+More+Shading

No HBase JIRA yet.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch, 
> HADOOP-13866.v7.patch, HADOOP-13866.v8.patch, HADOOP-13866.v8.patch, 
> HADOOP-13866.v8.patch, HADOOP-13866.v9.patch
>
>
> netty-all 4.1.1.Final is a stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching a mapreduce job from hbase,
> /grid/0/hadoop/yarn/local/usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
> (from hdfs) is ahead of the 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}






[jira] [Updated] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2017-04-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13200:
-
Target Version/s: 3.0.0-beta1  (was: 3.0.0-alpha3)

Going to move this one out to beta1 for tracking purposes, in case it doesn't 
get done by alpha3. If things progress quickly, happy to have it in alpha3 too.

> Seeking a better approach allowing to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Tim Yao
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13200.02.patch, HADOOP-13200.03.patch
>
>
> This is a follow-on task for HADOOP-13010, as discussed over there. There may 
> be some better approach to customizing and configuring erasure coders than 
> the current raw coder factory, as [~cmccabe] suggested. Will copy the 
> relevant comments here to continue the discussion.






[jira] [Commented] (HADOOP-12990) lz4 incompatibility between OS and Hadoop

2017-04-18 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973452#comment-15973452
 ] 

Jason Lowe commented on HADOOP-12990:
-

My apologies for the delay; somehow I missed getting notified of your latest
comment.

If this is really going to work as end-users expect then it needs to use the 
'.lz4' extension.  That means going with the approach of a unified Lz4Codec 
that can read both the legacy Lz4Codec and write the CLI-compatible one.  There 
would need to be a release note explaining the incompatibility for data written 
by the newer codec for clusters still running the older codec.  There are some 
other caveats, since this could cause issues for a rolling downgrade (i.e.: 
data written by the new codec before the downgrade can't be decoded after the 
downgrade).  We can mitigate this by making the output format of the new codec 
configurable and setting the default to be the legacy format, but then of 
course it doesn't work "out of the box" with the lz4 CLI tool which will be 
surprising to some.

Anyway the first step is to see if the first proposal is even feasible -- can 
the codec reliably auto-detect which format is being used and properly decode 
both the legacy and CLI-compatible formats.  If that possibility exists then we 
can work through the logistics of whether the new codec emits the 
CLI-compatible format by default and how to handle the compatibility scenarios.
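
For what it's worth, here is a sketch of what the auto-detection step could
look like, assuming (as I understand the formats) that the lz4 CLI frame starts
with the little-endian magic number 0x184D2204 while the legacy Hadoop stream
starts with a big-endian block length; the class below is illustrative, not an
existing Hadoop API:

{code}
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

public class Lz4FormatSniffer {
  private static final int LZ4_FRAME_MAGIC = 0x184D2204;

  /** Returns true if the stream starts with the lz4 CLI frame magic. */
  public static boolean isLz4Frame(InputStream in) throws IOException {
    BufferedInputStream bin = in instanceof BufferedInputStream
        ? (BufferedInputStream) in : new BufferedInputStream(in);
    bin.mark(4);
    int b0 = bin.read(), b1 = bin.read(), b2 = bin.read(), b3 = bin.read();
    bin.reset();  // leave the stream at the start for the real decoder
    if (b3 < 0) {
      return false;  // fewer than 4 bytes: cannot be a frame header
    }
    // The frame magic is stored little-endian: bytes 04 22 4D 18.
    int magic = (b0 & 0xFF) | (b1 & 0xFF) << 8 | (b2 & 0xFF) << 16
        | (b3 & 0xFF) << 24;
    return magic == LZ4_FRAME_MAGIC;
  }
}
{code}

A legacy stream whose leading block length happened to equal the magic would be
misclassified, which is part of why the feasibility check above matters.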

> lz4 incompatibility between OS and Hadoop
> -
>
> Key: HADOOP-12990
> URL: https://issues.apache.org/jira/browse/HADOOP-12990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Priority: Minor
>
> {{hdfs dfs -text}} hit an exception when trying to view a compressed file 
> created by the Linux lz4 tool.
> The Hadoop version has HADOOP-11184 "update lz4 to r123", thus it is using 
> the LZ4 library from release r123.
> Linux lz4 version:
> {code}
> $ /tmp/lz4 -h 2>&1 | head -1
> *** LZ4 Compression CLI 64-bits r123, by Yann Collet (Apr  1 2016) ***
> {code}
> Test steps:
> {code}
> $ cat 10rows.txt
> 001|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 002|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 003|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 004|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 005|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 006|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 007|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 008|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 009|c1|c2|c3|c4|c5|c6|c7|c8|c9
> 010|c1|c2|c3|c4|c5|c6|c7|c8|c9
> $ /tmp/lz4 10rows.txt 10rows.txt.r123.lz4
> Compressed 310 bytes into 105 bytes ==> 33.87%
> $ hdfs dfs -put 10rows.txt.r123.lz4 /tmp
> $ hdfs dfs -text /tmp/10rows.txt.r123.lz4
> 16/04/01 08:19:07 INFO compress.CodecPool: Got brand-new decompressor [.lz4]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:123)
> at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
> at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
> at java.io.InputStream.read(InputStream.java:101)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:106)
> at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:101)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {code}






[jira] [Commented] (HADOOP-14316) Switch from FindBugs to Spotbugs

2017-04-18 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973372#comment-15973372
 ] 

Mingliang Liu commented on HADOOP-14316:


+1







[jira] [Assigned] (HADOOP-14322) Incorrect host info may be reported in failover message

2017-04-18 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang reassigned HADOOP-14322:
--

Assignee: Yongjun Zhang

> Incorrect host info may be reported in failover message
> ---
>
> Key: HADOOP-14322
> URL: https://issues.apache.org/jira/browse/HADOOP-14322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> This may apply to other components, but using HDFS as an example.
> When multiple threads use the same DFSClient to make RPC calls, they may 
> report incorrect NN host name in the failover message:
> {code}
> INFO [pool-3-thread-13] retry.RetryInvocationHandler 
> (RetryInvocationHandler.java:invoke(148)) - Exception while invoking delete 
> of class ClientNamenodeProtocolTranslatorPB over *a.b.c.d*:8020. Trying to 
> fail over immediately.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category WRITE is not supported in state standby. Visit 
> https://s.apache.org/sbnn-error
> {code}
> where *a.b.c.d* is the RPC proxy corresponding to the active NN. This 
> confuses users into thinking failover is not behaving correctly, because 
> *a.b.c.d* is expected to be the proxy corresponding to the standby NN here.
> The reason is that the ProxyDescriptor data field of RetryInvocationHandler 
> may be shared by multiple threads that do the RPC calls, so a failover done 
> by one thread (which changes the RPC proxy) may be visible to other threads 
> at the moment they report the above message. 
> An example sequence: 
> # multiple threads start with the same SNN to do RPC calls, 
> # all threads discover that a failover is needed, 
> # thread X fails over first, and changes the ProxyDescriptor's proxyInfo to 
> the ANN,
> # the other threads then report the above message with the proxyInfo already 
> changed by thread X, naming the ANN instead of the SNN in the message.
> Some details:
> RetryInvocationHandler does the following when failing over:
> {code}
>   synchronized void failover(long expectedFailoverCount, Method method,
>int callId) {
>   // Make sure that concurrent failed invocations only cause a single
>   // actual failover.
>   if (failoverCount == expectedFailoverCount) {
> fpp.performFailover(proxyInfo.proxy);
> failoverCount++;
>   } else {
> LOG.warn("A failover has occurred since the start of call #" + callId
> + " " + proxyInfo.getString(method.getName()));
>   }
>   proxyInfo = fpp.getProxy();
> }
> {code}
> which changes the proxyInfo in the ProxyDescriptor.
> Meanwhile, the log method below reports the message with the 
> ProxyDescriptor's proxyInfo:
> {code}
> private void log(final Method method, final boolean isFailover,
>   final int failovers, final long delay, final Exception ex) {
> ..
>final StringBuilder b = new StringBuilder()
> .append(ex + ", while invoking ")
> .append(proxyDescriptor.getProxyInfo().getString(method.getName()));
> if (failovers > 0) {
>   b.append(" after ").append(failovers).append(" failover attempts");
> }
> b.append(isFailover? ". Trying to failover ": ". Retrying ");
> b.append(delay > 0? "after sleeping for " + delay + "ms.": 
> "immediately.");
> {code}
> and so does the {{handleException}} method:
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Exception while invoking call #" + callId + " "
>   + proxyDescriptor.getProxyInfo().getString(method.getName())
>   + ". Not retrying because " + retryInfo.action.reason, e);
> }
> {code}
> FailoverProxyProvider
> {code}
>public String getString(String methodName) {
>   return proxy.getClass().getSimpleName() + "." + methodName
>   + " over " + proxyInfo;
> }
> @Override
> public String toString() {
>   return proxy.getClass().getSimpleName() + " over " + proxyInfo;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14322) Incorrect host info may be reported in failover message

2017-04-18 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HADOOP-14322:
--

 Summary: Incorrect host info may be reported in failover message
 Key: HADOOP-14322
 URL: https://issues.apache.org/jira/browse/HADOOP-14322
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Yongjun Zhang


This may apply to other components, but using HDFS as an example.

When multiple threads use the same DFSClient to make RPC calls, they may report 
an incorrect NN host name in the failover message:
{code}
INFO [pool-3-thread-13] retry.RetryInvocationHandler 
(RetryInvocationHandler.java:invoke(148)) - Exception while invoking delete of 
class ClientNamenodeProtocolTranslatorPB over *a.b.c.d*:8020. Trying to fail 
over immediately.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): 
Operation category WRITE is not supported in state standby. Visit 
https://s.apache.org/sbnn-error
{code}

where *a.b.c.d* is the RPC proxy corresponding to the active NN. This confuses 
users into thinking failover is not behaving correctly, because *a.b.c.d* is 
expected to be the proxy corresponding to the standby NN here.

The reason is that the ProxyDescriptor data field of RetryInvocationHandler 
may be shared by multiple threads that do the RPC calls, so a failover done by 
one thread (which changes the RPC proxy) may be visible to other threads at 
the moment they report the above message. 

An example sequence: 
# multiple threads start with the same SNN to do RPC calls, 
# all threads discover that a failover is needed, 
# thread X fails over first, and changes the ProxyDescriptor's proxyInfo to 
the ANN,
# the other threads then report the above message with the proxyInfo already 
changed by thread X, naming the ANN instead of the SNN in the message.

Some details:

RetryInvocationHandler does the following when failing over:
{code}
  synchronized void failover(long expectedFailoverCount, Method method,
   int callId) {
  // Make sure that concurrent failed invocations only cause a single
  // actual failover.
  if (failoverCount == expectedFailoverCount) {
fpp.performFailover(proxyInfo.proxy);
failoverCount++;
  } else {
LOG.warn("A failover has occurred since the start of call #" + callId
+ " " + proxyInfo.getString(method.getName()));
  }
  proxyInfo = fpp.getProxy();
}
{code}
which changes the proxyInfo in the ProxyDescriptor.

Meanwhile, the log method below reports the message with the ProxyDescriptor's 
proxyInfo:
{code}
private void log(final Method method, final boolean isFailover,
  final int failovers, final long delay, final Exception ex) {
..
   final StringBuilder b = new StringBuilder()
.append(ex + ", while invoking ")
.append(proxyDescriptor.getProxyInfo().getString(method.getName()));
if (failovers > 0) {
  b.append(" after ").append(failovers).append(" failover attempts");
}
b.append(isFailover? ". Trying to failover ": ". Retrying ");
b.append(delay > 0? "after sleeping for " + delay + "ms.": "immediately.");
{code}
and so does the {{handleException}} method:
{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("Exception while invoking call #" + callId + " "
  + proxyDescriptor.getProxyInfo().getString(method.getName())
  + ". Not retrying because " + retryInfo.action.reason, e);
}
{code}

FailoverProxyProvider
{code}
   public String getString(String methodName) {
  return proxy.getClass().getSimpleName() + "." + methodName
  + " over " + proxyInfo;
}

@Override
public String toString() {
  return proxy.getClass().getSimpleName() + " over " + proxyInfo;
}
{code}
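
For illustration, a minimal sketch of one way to avoid the misleading message 
(an assumption-laden example, not the attached patch): snapshot the proxyInfo 
the call actually uses, and log the snapshot rather than re-reading the shared 
ProxyDescriptor field that a concurrent failover may have changed:
{code}
// Sketch only -- not the HADOOP-14322 patch. invokeOnce() is a hypothetical
// helper inside RetryInvocationHandler; proxyDescriptor and LOG are the
// existing fields shown above.
private Object invokeOnce(final Method method, final Object[] args)
    throws Throwable {
  // Snapshot the proxy info this call will actually use.
  final ProxyInfo<?> proxyUsed = proxyDescriptor.getProxyInfo();
  try {
    return method.invoke(proxyUsed.proxy, args);
  } catch (Exception e) {
    // Report the snapshot: even if another thread fails over concurrently,
    // the message names the proxy this call really went to.
    LOG.info("Exception while invoking "
        + proxyUsed.getString(method.getName())
        + ". Trying to failover immediately.", e);
    throw e;
  }
}
{code}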



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14305) S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus

2017-04-18 Thread Steve Moist (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973319#comment-15973319
 ] 

Steve Moist commented on HADOOP-14305:
--

Sounds good.  I'll submit a few patches today.

> S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus
> ---
>
> Key: HADOOP-14305
> URL: https://issues.apache.org/jira/browse/HADOOP-14305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14305-001.patch
>
>
> The S3a encryption tests all run in serial (they were interfering with each 
> other when run in parallel, apparently). This adds ~1 min to the test runs.
> They should run in parallel. That they fail when this is attempted due to Bad 
> Auth problems must be considered a serious problem, as it indicates issues 
> related to working with SSE encryption from Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14305) S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus

2017-04-18 Thread Steve Moist (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Moist reassigned HADOOP-14305:


Assignee: Steve Moist

> S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus
> ---
>
> Key: HADOOP-14305
> URL: https://issues.apache.org/jira/browse/HADOOP-14305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Moist
>Priority: Minor
> Attachments: HADOOP-14305-001.patch
>
>
> The S3a encryption tests all run in serial (they were interfering with each 
> other when run in parallel, apparently). This adds ~1 min to the test runs.
> They should run in parallel. That they fail when this is attempted due to Bad 
> Auth problems must be considered a serious problem, as it indicates issues 
> related to working with SSE encryption from Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-14305) S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus

2017-04-18 Thread Steve Moist (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14305 started by Steve Moist.

> S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus
> ---
>
> Key: HADOOP-14305
> URL: https://issues.apache.org/jira/browse/HADOOP-14305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Moist
>Priority: Minor
> Attachments: HADOOP-14305-001.patch
>
>
> The S3a encryption tests all run in serial (they were interfering with each 
> other when run in parallel, apparently). This adds ~1 min to the test runs.
> They should run in parallel. That they fail when this is attempted due to Bad 
> Auth problems must be considered a serious problem, as it indicates issues 
> related to working with SSE encryption from Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11656) Classpath isolation for downstream clients

2017-04-18 Thread Christopher Tubbs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973277#comment-15973277
 ] 

Christopher Tubbs commented on HADOOP-11656:


Other suggestions I forgot to mention: the apilyzer-maven-plugin can be used 
with semantic versioning to ensure no non-public API leaks into the public 
API, and dependency injection using Java's ServiceLoader is pretty convenient 
for modularity-based dependency minimization.
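
For reference, a minimal, self-contained sketch of the ServiceLoader pattern 
mentioned above (the {{Codec}} interface and class names are hypothetical, 
purely for illustration):
{code}
import java.util.ServiceLoader;

// A provider JAR ships an implementation plus a text file
// META-INF/services/Codec containing the implementation's fully-qualified
// class name; consumers then discover implementations at runtime without a
// compile-time dependency on any of them.
interface Codec {
  String name();
}

public class CodecLookup {
  public static void main(String[] args) {
    for (Codec codec : ServiceLoader.load(Codec.class)) {
      System.out.println("found codec: " + codec.name());
    }
  }
}
{code}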

> Classpath isolation for downstream clients
> --
>
> Key: HADOOP-11656
> URL: https://issues.apache.org/jira/browse/HADOOP-11656
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: classloading, classpath, dependencies, scripts, shell
> Attachments: HADOOP-11656_proposal.md
>
>
> Currently, Hadoop exposes downstream clients to a variety of third party 
> libraries. As our code base grows and matures we increase the set of 
> libraries we rely on. At the same time, as our user base grows we increase 
> the likelihood that some downstream project will run into a conflict while 
> attempting to use a different version of some library we depend on. This has 
> already happened with i.e. Guava several times for HBase, Accumulo, and Spark 
> (and I'm sure others).
> While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
> off and they don't do anything to help dependency conflicts on the driver 
> side or for folks talking to HDFS directly. This should serve as an umbrella 
> for changes needed to do things thoroughly on the next major version.
> We should ensure that downstream clients
> 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
> doesn't pull in any third party dependencies
> 2) only see our public API classes (or as close to this as feasible) when 
> executing user provided code, whether client side in a launcher/driver or on 
> the cluster in a container or within MR.
> This provides us with a double benefit: users get less grief when they want 
> to run substantially ahead or behind the versions we need and the project is 
> freer to change our own dependency versions because they'll no longer be in 
> our compatibility promises.
> Project specific task jiras to follow after I get some justifying use cases 
> written in the comments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11656) Classpath isolation for downstream clients

2017-04-18 Thread Christopher Tubbs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973268#comment-15973268
 ] 

Christopher Tubbs commented on HADOOP-11656:


As the current maintainer of Hadoop in the Fedora Linux distribution, I'm a bit 
concerned that the goal here is basically to statically compile everything from 
the dependencies (bugs, security issues, and all) into the Hadoop distribution 
jars. This would make downstream packaging in Fedora (and other distros which 
rely on packagers to do dependency convergence) much more difficult, as static 
compilation / bundling is essentially disallowed (with some exceptions), because 
bundled packages don't benefit from system updates of their dependencies, 
creating security risks and a bad user experience.

Am I misunderstanding what is being pursued here? Or is this basically bundling?

If this is bundling, I may be able to work around it by patching out the 
bundled deps for Fedora builds, but this is somewhat inconvenient, and I'd 
rather upstream Hadoop do things that make it easier on downstream community 
and vendor packaging. If I'm understanding the situation correctly, perhaps 
there's a better way to resolve dependency convergence issues? I've found that 
often, just following semantic versioning, and keeping dependencies reasonably 
up-to-date resolves most issues. (For example, one of the biggest issues I've 
had depending on Hadoop is its dependency on the old mortbay jetty, which is so 
full of security issues and so out of date, I'm surprised it's not a constant 
source of CVEs for Hadoop itself.)

Has the upstream Hadoop community considered other possible options, such as 
better semantic versioning, modularity, updated dependencies, marking 
dependencies "optional", relying on user-defined classpath at runtime, etc., as 
an alternative to shading/bundling? What is the basis for selecting shading as 
the solution instead of some of these other options?

Perhaps I'm completely misunderstanding the situation. If so, please explain 
where I've misunderstood.

Thanks.

> Classpath isolation for downstream clients
> --
>
> Key: HADOOP-11656
> URL: https://issues.apache.org/jira/browse/HADOOP-11656
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
>  Labels: classloading, classpath, dependencies, scripts, shell
> Attachments: HADOOP-11656_proposal.md
>
>
> Currently, Hadoop exposes downstream clients to a variety of third party 
> libraries. As our code base grows and matures we increase the set of 
> libraries we rely on. At the same time, as our user base grows we increase 
> the likelihood that some downstream project will run into a conflict while 
> attempting to use a different version of some library we depend on. This has 
> already happened with i.e. Guava several times for HBase, Accumulo, and Spark 
> (and I'm sure others).
> While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
> off and they don't do anything to help dependency conflicts on the driver 
> side or for folks talking to HDFS directly. This should serve as an umbrella 
> for changes needed to do things thoroughly on the next major version.
> We should ensure that downstream clients
> 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
> doesn't pull in any third party dependencies
> 2) only see our public API classes (or as close to this as feasible) when 
> executing user provided code, whether client side in a launcher/driver or on 
> the cluster in a container or within MR.
> This provides us with a double benefit: users get less grief when they want 
> to run substantially ahead or behind the versions we need and the project is 
> freer to change our own dependency versions because they'll no longer be in 
> our compatibility promises.
> Project specific task jiras to follow after I get some justifying use cases 
> written in the comments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14316) Switch from FindBugs to Spotbugs

2017-04-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973228#comment-15973228
 ] 

Sean Busbey commented on HADOOP-14316:
--

+1 (non-binding)

{quote}
Presumably there will be some followup "fix spotbug issues" JIRAs?
{quote}

sgtm.

> Switch from FindBugs to Spotbugs 
> -
>
> Key: HADOOP-14316
> URL: https://issues.apache.org/jira/browse/HADOOP-14316
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14316.00.patch
>
>
> Findbugs hasn't gotten a decent update in a few years.  The community has 
> since forked it and created https://github.com/spotbugs/spotbugs .  Running 
> the RC1 on trunk has pointed out some definite problem areas.  I think it 
> would be to our benefit to switch trunk over sooner rather than later, even 
> though it's still in RC status.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14321) explicitly exclude s3a root dir ITests from parallel runs

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973226#comment-15973226
 ] 

Hadoop QA commented on HADOOP-14321:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
49s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
30s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:8515d35 |
| JIRA Issue | HADOOP-14321 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863857/HADOOP-14321-branch-2-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 9ad621e1a18e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 6cfceee |
| Default Java | 1.7.0_121 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_121 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12120/artifact/patchprocess/branch-mvninstall-root.txt
 |
| JDK v1.7.0_121  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12120/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 

[jira] [Updated] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-04-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13075:

Release Note: The new encryption options SSE-KMS and especially SSE-C must 
be considered experimental at present. The existing SSE-AWS mechanism is 
considered stable.
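
For anyone trying the new options, a hedged sketch of the client-side opt-in 
(the exact property names are defined by the patch and its documentation; the 
ones below are assumptions based on the existing fs.s3a encryption property):
{code}
import org.apache.hadoop.conf.Configuration;

public class S3ASseExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumed property names -- verify against the committed docs.
    conf.set("fs.s3a.server-side-encryption-algorithm", "SSE-KMS");
    conf.set("fs.s3a.server-side-encryption.key", "<your KMS key ARN>");
  }
}
{code}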

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Andrew Olson
>Assignee: Steve Moist
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13075-001.patch, HADOOP-13075-002.patch, 
> HADOOP-13075-003.patch, HADOOP-13075-branch2.002.patch
>
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14321) explicitly exclude s3a root dir ITests from parallel runs

2017-04-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14321:

Attachment: HADOOP-14321-branch-2-001.patch

Patch 001; for branch-2 & so includes s3 as well as s3a & s3n.

Testing: s3 ireland. I explicitly verified that I did not see them in the 
parallel stretch, and that they were in the serial set.

{code}
---
 T E S T S
---
Running org.apache.hadoop.fs.contract.s3.ITestS3ContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.229 sec - in 
org.apache.hadoop.fs.contract.s3.ITestS3ContractRootDir
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.243 sec - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.046 sec - in 
org.apache.hadoop.fs.contract.s3n.ITestS3NContractRootDir
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.699 sec -
{code}
Here I'm skipping the s3/s3n tests, but they are being skipped in the right 
place; the s3a one adds ~20s to a test run (not ideal, but clearly needed).

> explicitly exclude s3a root dir ITests from parallel runs
> -
>
> Key: HADOOP-14321
> URL: https://issues.apache.org/jira/browse/HADOOP-14321
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14321-branch-2-001.patch
>
>
> the s3 root dir tests are running, even though they are meant to be excluded 
> via the statement
> {code}
> **/ITest*Root*.java
> {code}
> Maybe the double * in the pattern is causing confusion. Fix: explicitly list 
> the relevant tests (s3, s3n, s3a instead)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14321) explicitly exclude s3a root dir ITests from parallel runs

2017-04-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14321:

Status: Patch Available  (was: In Progress)

> explicitly exclude s3a root dir ITests from parallel runs
> -
>
> Key: HADOOP-14321
> URL: https://issues.apache.org/jira/browse/HADOOP-14321
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14321-branch-2-001.patch
>
>
> the s3 root dir tests are running, even though they are meant to be excluded 
> via the statement
> {code}
> **/ITest*Root*.java
> {code}
> Maybe the double * in the pattern is causing confusion. Fix: explicitly list 
> the relevant tests (s3, s3n, s3a instead)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-14321) explicitly exclude s3a root dir ITests from parallel runs

2017-04-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14321 started by Steve Loughran.
---
> explicitly exclude s3a root dir ITests from parallel runs
> -
>
> Key: HADOOP-14321
> URL: https://issues.apache.org/jira/browse/HADOOP-14321
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> the s3 root dir tests are running, even though they are meant to be excluded 
> via the statement
> {code}
> **/ITest*Root*.java
> {code}
> Maybe the double * in the pattern is causing confusion. Fix: explicitly list 
> the relevant tests (s3, s3n, s3a instead)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14320) TestIPC.testIpcWithReaderQueuing fails intermittently

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973116#comment-15973116
 ] 

Hadoop QA commented on HADOOP-14320:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 14s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HADOOP-14320 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863842/HADOOP-14320.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d0a3db3b5d8a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 654372d |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12119/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12119/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12119/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestIPC.testIpcWithReaderQueuing fails intermittently
> -
>
> Key: HADOOP-14320
> URL: https://issues.apache.org/jira/browse/HADOOP-14320
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>

[jira] [Commented] (HADOOP-14317) KMSWebServer$deprecateEnv may leak secret

2017-04-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973089#comment-15973089
 ] 

Hudson commented on HADOOP-14317:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11600 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11600/])
HADOOP-14317. KMSWebServer$deprecateEnv may leak secret. Contributed by 
(jzhuge: rev a9f07e0d3ebb41d24d11e2bdb0ee872fa72072ca)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebServer.java


> KMSWebServer$deprecateEnv may leak secret
> -
>
> Key: HADOOP-14317
> URL: https://issues.apache.org/jira/browse/HADOOP-14317
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14317.001.patch, HADOOP-14317.002.patch
>
>
> May print warning message with secret in deprecated env var or matching 
> property:
> {code}
> LOG.warn("Environment variable {} = '{}' is deprecated and overriding"
> + " property {} = '{}', please set the property in {} instead.",
> varName, value, propName, propValue, confFile);
> {code}
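
For illustration, one way to close this kind of leak (a sketch under assumed 
names, not necessarily the committed change) is to warn about the deprecated 
variable without echoing either value:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical standalone illustration, not the HADOOP-14317 patch itself.
public class DeprecationWarning {
  private static final Logger LOG =
      LoggerFactory.getLogger(DeprecationWarning.class);

  static void deprecateEnv(String varName, String propName, String confFile) {
    // Name the variable and property, but never print their values.
    LOG.warn("Environment variable {} is deprecated and overriding property"
        + " {}; please set the property in {} instead.",
        varName, propName, confFile);
  }
}
{code}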



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14317) KMSWebServer$deprecateEnv may leak secret

2017-04-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14317:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~andrew.wang] for the review.

> KMSWebServer$deprecateEnv may leak secret
> -
>
> Key: HADOOP-14317
> URL: https://issues.apache.org/jira/browse/HADOOP-14317
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14317.001.patch, HADOOP-14317.002.patch
>
>
> May print warning message with secret in deprecated env var or matching 
> property:
> {code}
> LOG.warn("Environment variable {} = '{}' is deprecated and overriding"
> + " property {} = '{}', please set the property in {} instead.",
> varName, value, propName, propValue, confFile);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14266) S3Guard: S3AFileSystem::listFiles() to employ MetadataStore

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973045#comment-15973045
 ] 

Steve Loughran commented on HADOOP-14266:
-

filed HADOOP-14321 to isolate the root dir test. Shouldn't affect the huge ones 
though

> S3Guard: S3AFileSystem::listFiles() to employ MetadataStore
> ---
>
> Key: HADOOP-14266
> URL: https://issues.apache.org/jira/browse/HADOOP-14266
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14266-HADOOP-13345.000.patch, 
> HADOOP-14266-HADOOP-13345.001.patch, HADOOP-14266-HADOOP-13345.002.patch, 
> HADOOP-14266-HADOOP-13345.003.patch, HADOOP-14266-HADOOP-13345.003.patch, 
> HADOOP-14266-HADOOP-13345.004.patch, HADOOP-14266-HADOOP-13345-005.patch, 
> HADOOP-14266-HADOOP-13345.005.patch, HADOOP-14266-HADOOP-13345.006.patch
>
>
> Similar to [HADOOP-13926], this is to track the effort of employing 
> MetadataStore in {{S3AFileSystem::listFiles()}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14321) explicitly exclude s3a root dir ITests from parallel runs

2017-04-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14321:
---

 Summary: explicitly exclude s3a root dir ITests from parallel runs
 Key: HADOOP-14321
 URL: https://issues.apache.org/jira/browse/HADOOP-14321
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


the s3 root dir tests are running, even though they are meant to be excluded 
via the statement
{code}
**/ITest*Root*.java
{code}

Maybe the double * in the pattern is causing confusion. Fix: explicitly list 
the relevant tests (s3, s3n, s3a instead)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14266) S3Guard: S3AFileSystem::listFiles() to employ MetadataStore

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973029#comment-15973029
 ] 

Steve Loughran commented on HADOOP-14266:
-

the failure in {{testRecursiveRootListing()}} is happening because the results 
of {{fs.listFiles(root, true)}} differ from a treewalk when distcp gets in the 
way.

we should be excluding these tests, which is what the POM appears to do:
{code}
**/ITest*Root*.java
{code}



> S3Guard: S3AFileSystem::listFiles() to employ MetadataStore
> ---
>
> Key: HADOOP-14266
> URL: https://issues.apache.org/jira/browse/HADOOP-14266
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-14266-HADOOP-13345.000.patch, 
> HADOOP-14266-HADOOP-13345.001.patch, HADOOP-14266-HADOOP-13345.002.patch, 
> HADOOP-14266-HADOOP-13345.003.patch, HADOOP-14266-HADOOP-13345.003.patch, 
> HADOOP-14266-HADOOP-13345.004.patch, HADOOP-14266-HADOOP-13345-005.patch, 
> HADOOP-14266-HADOOP-13345.005.patch, HADOOP-14266-HADOOP-13345.006.patch
>
>
> Similar to [HADOOP-13926], this is to track the effort of employing 
> MetadataStore in {{S3AFileSystem::listFiles()}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14317) KMSWebServer$deprecateEnv may leak secret

2017-04-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973021#comment-15973021
 ] 

Andrew Wang commented on HADOOP-14317:
--

+1 thanks John!

> KMSWebServer$deprecateEnv may leak secret
> -
>
> Key: HADOOP-14317
> URL: https://issues.apache.org/jira/browse/HADOOP-14317
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14317.001.patch, HADOOP-14317.002.patch
>
>
> May print warning message with secret in deprecated env var or matching 
> property:
> {code}
> LOG.warn("Environment variable {} = '{}' is deprecated and overriding"
> + " property {} = '{}', please set the property in {} instead.",
> varName, value, propName, propValue, confFile);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14063) Hadoop CredentialProvider fails to load list of keystore files

2017-04-18 Thread Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973010#comment-15973010
 ] 

Yan commented on HADOOP-14063:
--

Breaking *any* existing library behavior is a risky practice, and should 
follow careful migration/compatibility/documentation paths and checks.

My point is that any behavioral change to keystoreExists(), if deemed 
necessary, should be under a separate jira, not this one, which just deals 
with the traversal of a list of keystore files and should be addressed using 
existing or enhanced methods rather than by changing existing ones.

> Hadoop CredentialProvider fails to load list of keystore files
> --
>
> Key: HADOOP-14063
> URL: https://issues.apache.org/jira/browse/HADOOP-14063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-14063-001.patch, HADOOP-14063-002.patch
>
>
> The {{hadoop.security.credential.provider.path}} property can be a list of 
> keystore files like this:
> _jceks://hdfs/file1.jceks,jceks://hdfs/file2.jceks,jceks://hdfs/file3.jceks 
> ..._
> Each file can have different permissions set to limit the users that have 
> access to the keys.  Some users may not have access to all the keystore files.
> Each keystore file in the list should be tried until one is found with the 
> key needed. 
> Currently it will throw an exception if one of the keystore files cannot be 
> loaded instead of continuing to try the next one in the list.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14063) Hadoop CredentialProvider fails to load list of keystore files

2017-04-18 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972991#comment-15972991
 ] 

Eric Yang commented on HADOOP-14063:


Agree with Yan that FileNotFoundException should not be captured, to preserve 
the existing semantics.  For AccessControlException, it would be right to 
handle the exception and return false.  This would be closer to the original 
implementation.
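
A minimal sketch of the semantics described above (hypothetical helper, not 
the actual HADOOP-14063 patch):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;

// Illustration only: let FileNotFound-style failures propagate unchanged,
// but treat "exists yet unreadable by this user" as a reason to move on to
// the next keystore in the provider list.
class KeystoreProbe {
  static boolean keystoreExists(FileSystem fs, Path keystore)
      throws IOException {
    try {
      return fs.exists(keystore);
    } catch (AccessControlException e) {
      // No permission to stat this keystore: skip it rather than abort
      // the whole provider list.
      return false;
    }
  }
}
{code}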

> Hadoop CredentialProvider fails to load list of keystore files
> --
>
> Key: HADOOP-14063
> URL: https://issues.apache.org/jira/browse/HADOOP-14063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-14063-001.patch, HADOOP-14063-002.patch
>
>
> The {{hadoop.security.credential.provider.path}} property can be a list of 
> keystore files like this:
> _jceks://hdfs/file1.jceks,jceks://hdfs/file2.jceks,jceks://hdfs/file3.jceks 
> ..._
> Each file can have different permissions set to limit the users that have 
> access to the keys.  Some users may not have access to all the keystore files.
> Each keystore file in the list should be tried until one is found with the 
> key needed. 
> Currently it will throw an exception if one of the keystore files cannot be 
> loaded instead of continuing to try the next one in the list.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14320) TestIPC.testIpcWithReaderQueuing fails intermittently

2017-04-18 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-14320:
-
Status: Patch Available  (was: Open)

> TestIPC.testIpcWithReaderQueuing fails intermittently
> -
>
> Key: HADOOP-14320
> URL: https://issues.apache.org/jira/browse/HADOOP-14320
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-14320.001.patch
>
>
> {noformat}
> org.mockito.exceptions.verification.TooLittleActualInvocations: 
> callQueueManager.put();
> Wanted 2 times:
> -> at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:810)
> But was 1 time:
> -> at org.apache.hadoop.ipc.Server.queueCall(Server.java:2466)
>   at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:810)
>   at 
> org.apache.hadoop.ipc.TestIPC.testIpcWithReaderQueuing(TestIPC.java:738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14320) TestIPC.testIpcWithReaderQueuing fails intermittently

2017-04-18 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-14320:
-
Attachment: HADOOP-14320.001.patch

> TestIPC.testIpcWithReaderQueuing fails intermittently
> -
>
> Key: HADOOP-14320
> URL: https://issues.apache.org/jira/browse/HADOOP-14320
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-14320.001.patch
>
>
> {noformat}
> org.mockito.exceptions.verification.TooLittleActualInvocations: 
> callQueueManager.put();
> Wanted 2 times:
> -> at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:810)
> But was 1 time:
> -> at org.apache.hadoop.ipc.Server.queueCall(Server.java:2466)
>   at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:810)
>   at 
> org.apache.hadoop.ipc.TestIPC.testIpcWithReaderQueuing(TestIPC.java:738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14317) KMSWebServer$deprecateEnv may leak secret

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972973#comment-15972973
 ] 

Hadoop QA commented on HADOOP-14317:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HADOOP-14317 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863829/HADOOP-14317.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a134f21fee65 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 654372d |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12118/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12118/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KMSWebServer$deprecateEnv may leak secret
> -
>
> Key: HADOOP-14317
> URL: https://issues.apache.org/jira/browse/HADOOP-14317
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14317.001.patch, HADOOP-14317.002.patch
>
>
> May print warning message 

[jira] [Commented] (HADOOP-13372) MR jobs can not access Swift filesystem if Kerberos is enabled

2017-04-18 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972930#comment-15972930
 ] 

John Zhuge commented on HADOOP-13372:
-

{{AdlFileSystem}} works without a similar patch because the authority in an 
{{adl:}} URI is a valid hostname, so {{FileSystem#getCanonicalServiceName}} 
works fine.
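
A small, self-contained illustration of the difference (the URIs are made-up 
examples):
{code}
import java.net.URI;

public class AuthorityCheck {
  public static void main(String[] args) {
    // The adl: authority is a resolvable hostname, so building a token
    // service name from it succeeds.
    URI adl = URI.create("adl://myaccount.azuredatalakestore.net/path");
    // The swift: authority is container.service, which is generally not a
    // resolvable host -- hence the UnknownHostException in the stack below.
    URI swift = URI.create("swift://container.service/path");
    System.out.println(adl.getAuthority());
    System.out.println(swift.getAuthority());
  }
}
{code}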

> MR jobs can not access Swift filesystem if Kerberos is enabled
> --
>
> Key: HADOOP-13372
> URL: https://issues.apache.org/jira/browse/HADOOP-13372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/swift, security
>Affects Versions: 2.7.2
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-13372.001.patch
>
>
> {code}
> java.lang.IllegalArgumentException: java.net.UnknownHostException:
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
> at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:262)
> at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:303)
> at 
> org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:524)
> at 
> org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:508)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at 
> org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:121)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:183)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
> Caused by: java.net.UnknownHostException:
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14320) TestIPC.testIpcWithReaderQueuing fails intermittently

2017-04-18 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972922#comment-15972922
 ] 

Eric Badger commented on HADOOP-14320:
--

*TestIPC.java:810*
{{verify(spy, timeout(100).times(i + 1)).put(Mockito.anyObject());}}
The 100ms timeout is too aggressive. If the threads don't start immediately or 
run more slowly than this main-line code (i.e. a call doesn't finish within 
100ms), the timeout will cause the test to fail. This can be reproduced by 
adding a 150ms sleep to the thread's run method.
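
A sketch of the fix direction (the 10s value is an assumption, not the actual 
patch): keep the invocation-count check but give the verification generous 
headroom. Mockito's {{timeout()}} only bounds the wait and returns as soon as 
the verification passes, so a large value does not slow down a passing run:
{code}
import org.mockito.Mockito;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;

// inside checkBlocking(), replacing the 100ms verification
// (spy and i come from the surrounding test):
verify(spy, timeout(10000).times(i + 1)).put(Mockito.anyObject());
{code}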

> TestIPC.testIpcWithReaderQueuing fails intermittently
> -
>
> Key: HADOOP-14320
> URL: https://issues.apache.org/jira/browse/HADOOP-14320
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>
> {noformat}
> org.mockito.exceptions.verification.TooLittleActualInvocations: 
> callQueueManager.put();
> Wanted 2 times:
> -> at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:810)
> But was 1 time:
> -> at org.apache.hadoop.ipc.Server.queueCall(Server.java:2466)
>   at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:810)
>   at 
> org.apache.hadoop.ipc.TestIPC.testIpcWithReaderQueuing(TestIPC.java:738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14320) TestIPC.testIpcWithReaderQueuing fails intermittently

2017-04-18 Thread Eric Badger (JIRA)
Eric Badger created HADOOP-14320:


 Summary: TestIPC.testIpcWithReaderQueuing fails intermittently
 Key: HADOOP-14320
 URL: https://issues.apache.org/jira/browse/HADOOP-14320
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


{noformat}
org.mockito.exceptions.verification.TooLittleActualInvocations: 
callQueueManager.put();
Wanted 2 times:
-> at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:810)
But was 1 time:
-> at org.apache.hadoop.ipc.Server.queueCall(Server.java:2466)

at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:810)
at 
org.apache.hadoop.ipc.TestIPC.testIpcWithReaderQueuing(TestIPC.java:738)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14313) Replace/improve Hadoop's byte[] comparator

2017-04-18 Thread Vikas Vishwakarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972904#comment-15972904
 ] 

Vikas Vishwakarma commented on HADOOP-14313:


I have updated the microbenchmark result table in the update above. Broadly, up 
to a 200-byte array size we see 10-20% higher throughput (ops/ms); for larger 
byte arrays it shows almost 100% higher throughput, and the trend indicates that 
the performance gain increases with larger byte array sizes.

I completely agree that it is up to the Hadoop community to choose the preferred 
implementation, either through Guava or a local copy in the project. HBase has 
its own copy in commons (Bytes.java & ByteBufferUtils.java), so that is 
independent of the Hadoop implementation.
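
For reference, a minimal JMH sketch of the kind of harness involved (an assumed 
shape, not the actual benchmark behind the table above):
{code}
import java.util.Random;
import org.openjdk.jmh.annotations.*;
import com.google.common.primitives.UnsignedBytes;

@State(Scope.Benchmark)
public class ByteCompareBench {
  @Param({"7", "200", "4096"})  // covers size % 8 != 0 and large arrays
  int size;
  byte[] a, b;

  @Setup
  public void setup() {
    a = new byte[size];
    b = new byte[size];
    new Random(42).nextBytes(a);
    System.arraycopy(a, 0, b, 0, size);  // equal arrays force a full scan
  }

  @Benchmark
  public int guava() {
    return UnsignedBytes.lexicographicalComparator().compare(a, b);
  }
}
{code}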


> Replace/improve Hadoop's byte[] comparator
> --
>
> Key: HADOOP-14313
> URL: https://issues.apache.org/jira/browse/HADOOP-14313
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Vikas Vishwakarma
> Attachments: HADOOP-14313.master.001.patch
>
>
> Hi,
> Recently we were looking at the Lexicographic byte array comparison in HBase. 
> We did microbenchmark for the byte array comparator of HADOOP ( 
> https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/io/FastByteComparisons.java#L161
>  ) , HBase Vs the latest byte array comparator from guava  ( 
> https://github.com/google/guava/blob/master/guava/src/com/google/common/primitives/UnsignedBytes.java#L362
>  ) and observed that the guava main branch version is much faster. 
> Specifically we see very good improvement when the byteArraySize%8 != 0 and 
> also for large byte arrays. I will update the benchmark results using JMH for 
> Hadoop vs Guava. For the jira on HBase, please refer HBASE-17877. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869816#comment-15869816
 ] 

Steve Loughran edited comment on HADOOP-13075 at 4/18/17 3:14 PM:
--

just changed the title to put the word "encryption" in. This patch was 
committed with the wrong JIRA number BTW: HADOOP-13204


For anyone looking for this patch in branch-2, the relevant commit is 
6d62d0ea87acffb47e97904fc78405a36e9fc9d3


was (Author: ste...@apache.org):
just changed the title to put the word "encryption" in. This patch was 
committed with the wrong JIRA number BTW: HADOOP-13204

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Andrew Olson
>Assignee: Steve Moist
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13075-001.patch, HADOOP-13075-002.patch, 
> HADOOP-13075-003.patch, HADOOP-13075-branch2.002.patch
>
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14317) KMSWebServer$deprecateEnv may leak secret

2017-04-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14317:

Description: 
May print warning message with secret in deprecated env var or matching 
property:
{code}
LOG.warn("Environment variable {} = '{}' is deprecated and overriding"
+ " property {} = '{}', please set the property in {} instead.",
varName, value, propName, propValue, confFile);
{code}

  was:
May print secret in warning message:
{code}
LOG.warn("Environment variable {} = '{}' is deprecated and overriding"
+ " property {} = '{}', please set the property in {} instead.",
varName, value, propName, propValue, confFile);
{code}
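
For illustration, one shape the fix could take (a sketch, not the attached 
patch): keep the deprecation warning but stop echoing the values, since either 
side of the override may be a secret.
{code}
// Sketch only: log which variable overrides which property,
// but never the values themselves.
LOG.warn("Environment variable {} is deprecated and overriding"
    + " property {}, please set the property in {} instead.",
    varName, propName, confFile);
{code}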


> KMSWebServer$deprecateEnv may leak secret
> -
>
> Key: HADOOP-14317
> URL: https://issues.apache.org/jira/browse/HADOOP-14317
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14317.001.patch, HADOOP-14317.002.patch
>
>
> May print warning message with secret in deprecated env var or matching 
> property:
> {code}
> LOG.warn("Environment variable {} = '{}' is deprecated and overriding"
> + " property {} = '{}', please set the property in {} instead.",
> varName, value, propName, propValue, confFile);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14317) KMSWebServer$deprecateEnv may leak secret

2017-04-18 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14317:

Attachment: HADOOP-14317.002.patch

Patch 002
* Fix findbugs

> KMSWebServer$deprecateEnv may leak secret
> -
>
> Key: HADOOP-14317
> URL: https://issues.apache.org/jira/browse/HADOOP-14317
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14317.001.patch, HADOOP-14317.002.patch
>
>
> May print secret in warning message:
> {code}
> LOG.warn("Environment variable {} = '{}' is deprecated and overriding"
> + " property {} = '{}', please set the property in {} instead.",
> varName, value, propName, propValue, confFile);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14305) S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus

2017-04-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14305:

Status: Open  (was: Patch Available)

> S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus
> ---
>
> Key: HADOOP-14305
> URL: https://issues.apache.org/jira/browse/HADOOP-14305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14305-001.patch
>
>
> The S3a encryption tests all run in serial (they were interfering with each 
> other, apparently). This adds ~1 min to the test runs.
> They should run in parallel. That they fail when this is attempted due to Bad 
> Auth problems must be considered a serious problem, as it indicates issues 
> related to working with SSE encryption from Hadoop



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14216) Improve Configuration XML Parsing Performance

2017-04-18 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972808#comment-15972808
 ] 

Jonathan Eagles commented on HADOOP-14216:
--

Exactly, I'll put a patch up with a test using a full URI to ensure this 
doesn't go missing.

> Improve Configuration XML Parsing Performance
> -
>
> Key: HADOOP-14216
> URL: https://issues.apache.org/jira/browse/HADOOP-14216
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14216.1.patch, HADOOP-14216.2-branch-2.patch, 
> HADOOP-14216.2.patch, HADOOP-14216.addendum.1.patch
>
>
> JIRA is to improve XML parsing performance through reuse and a change in XML 
> parser (STAX)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14316) Switch from FindBugs to Spotbugs

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972802#comment-15972802
 ] 

Steve Loughran commented on HADOOP-14316:
-

Presumably there will be some followup "fix spotbugs issues" JIRAs?

> Switch from FindBugs to Spotbugs 
> -
>
> Key: HADOOP-14316
> URL: https://issues.apache.org/jira/browse/HADOOP-14316
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14316.00.patch
>
>
> Findbugs hasn't gotten a decent update in a few years.  The community has 
> since forked it and created https://github.com/spotbugs/spotbugs .  Running 
> the RC1 on trunk has pointed out some definite problem areas.  I think it 
> would be to our benefit to switch trunk over sooner rather than later, even 
> though it's still in RC status.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14138) Remove S3A ref from META-INF service discovery, rely on existing core-default entry

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972796#comment-15972796
 ] 

Steve Loughran commented on HADOOP-14138:
-

The problem with the current mechanism is that it forces a transitive classload 
of all dependencies, which is O(filesystems)*O(dependencies), and it doesn't 
handle failure well.


What I'm trying to understand here is: "why do you want to ignore 
core-default.xml, given that it sacrifices the ability to allow installations to 
override it at the core-site layer?" It seems you've made that policy decision, 
and are now suffering the consequences. And while the service discovery of S3A 
was the first place to break, you are still left with the problem of "how to 
pick up credentials".

> Remove S3A ref from META-INF service discovery, rely on existing core-default 
> entry
> ---
>
> Key: HADOOP-14138
> URL: https://issues.apache.org/jira/browse/HADOOP-14138
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha3
>
> Attachments: HADOOP-14138.001.patch, HADOOP-14138-branch-2-001.patch
>
>
> As discussed in HADOOP-14132, the shaded AWS library is killing performance 
> starting all hadoop operations, due to classloading on FS service discovery.
> This is despite the fact that there is an entry for fs.s3a.impl in 
> core-default.xml, *we don't need service discovery here*
> Proposed:
> # cut the entry from 
> {{/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> # when HADOOP-14132 is in, move to that, including declaring an XML file 
> exclusively for s3a entries
> I want this one in first as it's a major performance regression, and one we 
> could actually backport to 2.7.x, just to improve load time slightly there too



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14313) Replace/improve Hadoop's byte[] comparator

2017-04-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972781#comment-15972781
 ] 

Sean Busbey commented on HADOOP-14313:
--

I meant how the microbenchmark did, but 10-15% on a macro benchmark downstream 
sounds great.

{quote}
One point of view was that this is something very core to HBase and hence there 
should not be a dependency on external library, but I guess approach might vary 
from community to community.
{quote}

This jira is filed against the Hadoop project, so this bit is kind of 
immaterial I think? If Hadoop decides to delegate this activity to Guava and 
HBase prefers the current implementation, they could just make their own copy 
of it, no?

> Replace/improve Hadoop's byte[] comparator
> --
>
> Key: HADOOP-14313
> URL: https://issues.apache.org/jira/browse/HADOOP-14313
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Vikas Vishwakarma
> Attachments: HADOOP-14313.master.001.patch
>
>
> Hi,
> Recently we were looking at the Lexicographic byte array comparison in HBase. 
> We did microbenchmark for the byte array comparator of HADOOP ( 
> https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/io/FastByteComparisons.java#L161
>  ) , HBase Vs the latest byte array comparator from guava  ( 
> https://github.com/google/guava/blob/master/guava/src/com/google/common/primitives/UnsignedBytes.java#L362
>  ) and observed that the guava main branch version is much faster. 
> Specifically we see very good improvement when the byteArraySize%8 != 0 and 
> also for large byte arrays. I will update the benchmark results using JMH for 
> Hadoop vs Guava. For the jira on HBase, please refer HBASE-17877. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14216) Improve Configuration XML Parsing Performance

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972732#comment-15972732
 ] 

Steve Loughran commented on HADOOP-14216:
-

The problem I'm seeing isn't fallback related; it's that the XInclude you've 
had to implement can't handle URIs in the reference. I think it should see if 
the ref for the include can be used in a {{new URI()}} call before trying to 
create a file:// URI from it; if it can become a URI, then it's good to go as is.
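
A minimal sketch of that check (the method and class names are assumptions, not 
the actual Configuration code):
{code}
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

class XIncludeRefResolver {
  static URI resolve(String href) {
    try {
      URI uri = new URI(href);
      if (uri.isAbsolute()) {
        return uri;                 // e.g. http://host/conf.xml, usable as is
      }
    } catch (URISyntaxException ignored) {
      // not parseable as a URI; treat as a local path below
    }
    return new File(href).toURI();  // relative ref or plain path -> file:// URI
  }
}
{code}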

> Improve Configuration XML Parsing Performance
> -
>
> Key: HADOOP-14216
> URL: https://issues.apache.org/jira/browse/HADOOP-14216
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14216.1.patch, HADOOP-14216.2-branch-2.patch, 
> HADOOP-14216.2.patch, HADOOP-14216.addendum.1.patch
>
>
> JIRA is to improve XML parsing performance through reuse and a change in XML 
> parser (STAX)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14306) TestLocalFileSystem tests have very low timeouts

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972708#comment-15972708
 ] 

Steve Loughran commented on HADOOP-14306:
-

The trouble with the maven timeout is that it just kills the whole JVM. As the 
junit test runner builds up the XML test report in memory, killing the JVM 
loses the entire log of the test run which failed. This is generally 
considered of limited value when trying to work out why a test failed.
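
One way to keep hang protection inside JUnit rather than relying on a 
maven-level kill (a sketch, assuming JUnit 4 rules are available; the 60s value 
is arbitrary):
{code}
import org.junit.Rule;
import org.junit.rules.Timeout;

public class TestLocalFileSystem {
  // One generous class-level rule instead of tight per-test values:
  // slow disks don't fail the run, but a real hang still dies inside
  // JUnit, so the XML report survives.
  @Rule
  public Timeout globalTimeout = Timeout.seconds(60);
}
{code}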

> TestLocalFileSystem tests have very low timeouts
> 
>
> Key: HADOOP-14306
> URL: https://issues.apache.org/jira/browse/HADOOP-14306
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-14306.001.patch, HADOOP-14306.002.patch
>
>
> Most tests have a timeout of 1 second, which is much too low, especially if 
> there is a spinning disk involved. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14291) S3a "Bad Request" message to include diagnostics

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972705#comment-15972705
 ] 

Steve Loughran commented on HADOOP-14291:
-

Wiki page created: https://wiki.apache.org/hadoop/S3ABadRequest

Now someone needs to add this to the translated error and, ideally, the 
diagnostics.
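
Something along these lines, perhaps (the class, method, and wording are 
assumptions, not existing S3A code):
{code}
import java.io.IOException;
import com.amazonaws.AmazonServiceException;

class BadRequestTranslator {
  // Wrap a 400 from S3 with the bucket name and a pointer to the wiki page.
  static IOException translate(String bucket, AmazonServiceException e) {
    return new IOException("Bad Request talking to S3 bucket " + bucket
        + ": " + e.getErrorMessage()
        + "; see https://wiki.apache.org/hadoop/S3ABadRequest", e);
  }
}
{code}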

> S3a "Bad Request" message to include diagnostics
> 
>
> Key: HADOOP-14291
> URL: https://issues.apache.org/jira/browse/HADOOP-14291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There's a whole section in s3a troubleshooting because requests can get auth 
> failures for many reasons, including
> * no credentials
> * wrong credentials
> * right credentials, wrong bucket
> * wrong endpoint for v4 auth
> * trying to use private S3 server without specifying endpoint, so AWS being 
> hit
> * clock out
> * joda time
> 
> We can aid with debugging this by including as much as we can in the 
> message, and a URL to a new S3A bad auth wiki page.
> Info we could include
> * bucket
> * fs.s3a.endpoint
> * nslookup of endpoint
> * Anything else relevant but not a security risk
> Goal: people stand a chance of working out what is failing within a bounded 
> time period.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14291) S3a "Bad Request" message to include diagnostics

2017-04-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14291:

Summary: S3a "Bad Request" message to include diagnostics  (was: S3a "No 
auth" message to include diagnostics)

> S3a "Bad Request" message to include diagnostics
> 
>
> Key: HADOOP-14291
> URL: https://issues.apache.org/jira/browse/HADOOP-14291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There's a whole section in s3a troubleshooting because requests can get auth 
> failures for many reasons, including
> * no credentials
> * wrong credentials
> * right credentials, wrong bucket
> * wrong endpoint for v4 auth
> * trying to use private S3 server without specifying endpoint, so AWS being 
> hit
> * clock out
> * joda time
> 
> We can aid with debugging this by including as much as we can in the 
> message, and a URL to a new S3A bad auth wiki page.
> Info we could include
> * bucket
> * fs.s3a.endpoint
> * nslookup of endpoint
> * Anything else relevant but not a security risk
> Goal: people stand a chance of working out what is failing within a bounded 
> time period.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14291) S3a "No auth" message to include diagnostics

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972692#comment-15972692
 ] 

Steve Loughran commented on HADOOP-14291:
-

Maybe we could actually print the MD5 hashes of the secrets. That way, you can 
verify that the secrets used for the bucket are the same as those you hold, 
without us disclosing the secrets in a way which can be considered a security 
leak.

Maybe here each provider should have the ability to provide a diagnostics 
string; the config-key one would list the hashed property and its provenance; 
the env one would declare that the values came from the env vars (and again, 
the hashed values).
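
A sketch of such a diagnostics string (the helper name is an assumption): hash 
the secret so users can compare values without the log ever containing them.
{code}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

class SecretDiagnostics {
  // Returns a hex MD5 fingerprint of the secret, safe to log.
  static String fingerprint(String secret) throws Exception {
    byte[] d = MessageDigest.getInstance("MD5")
        .digest(secret.getBytes(StandardCharsets.UTF_8));
    StringBuilder sb = new StringBuilder();
    for (byte b : d) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();  // e.g. appended to "fs.s3a.secret.key from core-site.xml"
  }
}
{code}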

> S3a "No auth" message to include diagnostics
> 
>
> Key: HADOOP-14291
> URL: https://issues.apache.org/jira/browse/HADOOP-14291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There's a whole section in s3a troubleshooting because requests can get auth 
> failures for many reasons, including
> * no credentials
> * wrong credentials
> * right credentials, wrong bucket
> * wrong endpoint for v4 auth
> * trying to use private S3 server without specifying endpoint, so AWS being 
> hit
> * clock out
> * joda time
> 
> We can aid with debugging this by including as much as we can in in the 
> message and a URL To a new S3A bad auth wiki page.
> Info we could include
> * bucket
> * fs.s3a.endpoint
> * nslookup of endpoint
> * Anything else relevant but not a security risk
> Goal; people stand a chance of working out what is failing within a bounded 
> time period



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14313) Replace/improve Hadoop's byte[] comparator

2017-04-18 Thread Vikas Vishwakarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972668#comment-15972668
 ] 

Vikas Vishwakarma commented on HADOOP-14313:


[~busbey] In tests (YCSB, local tests) we are seeing a 10-15% improvement on 
average on the HBase side. For the second question, either way is OK. One point 
of view was that this is something very core to HBase and hence there should 
not be a dependency on an external library, but I guess the approach might vary 
from community to community.

> Replace/improve Hadoop's byte[] comparator
> --
>
> Key: HADOOP-14313
> URL: https://issues.apache.org/jira/browse/HADOOP-14313
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Vikas Vishwakarma
> Attachments: HADOOP-14313.master.001.patch
>
>
> Hi,
> Recently we were looking at the Lexicographic byte array comparison in HBase. 
> We did microbenchmark for the byte array comparator of HADOOP ( 
> https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/io/FastByteComparisons.java#L161
>  ) , HBase Vs the latest byte array comparator from guava  ( 
> https://github.com/google/guava/blob/master/guava/src/com/google/common/primitives/UnsignedBytes.java#L362
>  ) and observed that the guava main branch version is much faster. 
> Specifically we see very good improvement when the byteArraySize%8 != 0 and 
> also for large byte arrays. I will update the benchmark results using JMH for 
> Hadoop vs Guava. For the jira on HBase, please refer HBASE-17877. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14305) S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus

2017-04-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972625#comment-15972625
 ] 

Steve Loughran commented on HADOOP-14305:
-

Thanks for helping track this down.

bq.  I don't really advise people to use SSE-C.

That's the kind of thing we need in the documentation and release notes. You 
can add this detail to the release notes of the original JIRA.

For this patch, how about restricting it to serializing that SSE-C test and 
updating the docs about how to safely use the feature? I'll review and we can 
have a fast turnaround here: no production code changes to worry about.

> S3A SSE tests won't run in parallel: Bad request in directory GetFileStatus
> ---
>
> Key: HADOOP-14305
> URL: https://issues.apache.org/jira/browse/HADOOP-14305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14305-001.patch
>
>
> The S3a encryption tests all run in serial (they were interfering with each 
> other, apparently). This adds ~1 min to the test runs.
> They should run in parallel. That they fail when this is attempted due to Bad 
> Auth problems must be considered a serious problem, as it indicates issues 
> related to working with SSE encryption from Hadoop



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972399#comment-15972399
 ] 

Hadoop QA commented on HADOOP-13200:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 29 new + 2 unchanged - 0 fixed = 31 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 15s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestShellBasedUnixGroupsMapping |
|   | hadoop.io.erasurecode.coder.TestHHXORErasureCoder |
|   | hadoop.io.erasurecode.coder.TestRSErasureCoder |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HADOOP-13200 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863772/HADOOP-13200.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 894eefe39809 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 654372d |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12117/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12117/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12117/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972370#comment-15972370
 ] 

Hadoop QA commented on HADOOP-13200:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 30 new + 2 unchanged - 0 fixed = 32 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  9s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.io.erasurecode.coder.TestRSErasureCoder |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.io.erasurecode.coder.TestHHXORErasureCoder |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HADOOP-13200 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863768/HADOOP-13200.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 18ad77128555 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 84a8848 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12116/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12116/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12116/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Commented] (HADOOP-14315) Python example in the rack awareness document doesn't work due to bad indentation

2017-04-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972332#comment-15972332
 ] 

Hudson commented on HADOOP-14315:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11599 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11599/])
HADOOP-14315. Python example in the rack awareness document doesn't work 
(aajisaka: rev 654372db859656a2201ae9f9f7c374c6564ea34d)
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md


> Python example in the rack awareness document doesn't work due to bad 
> indentation
> -
>
> Key: HADOOP-14315
> URL: https://issues.apache.org/jira/browse/HADOOP-14315
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
>
> Running that example fails with:
> {code}
>   File "example.py", line 28
> address = '{0}/{1}'.format(ip, netmask)  # format 
> address string so it looks like 'ip/netmask' to make netaddr work
>   ^
> IndentationError: expected an indented block
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2017-04-18 Thread Tim Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Yao updated HADOOP-13200:
-
Attachment: HADOOP-13200.03.patch

> Seeking a better approach allowing to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Tim Yao
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13200.02.patch, HADOOP-13200.03.patch
>
>
> This is a follow-on task for HADOOP-13010 as discussed over there. There may 
> be some better approach allowing to customize and configure erasure coders 
> than the current having raw coder factory, as [~cmccabe] suggested. Will copy 
> the relevant comments here to continue the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2017-04-18 Thread Tim Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Yao updated HADOOP-13200:
-
Attachment: (was: HADOOP-13200.03.patch)

> Seeking a better approach allowing to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Tim Yao
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13200.02.patch
>
>
> This is a follow-on task for HADOOP-13010 as discussed over there. There may 
> be some better approach allowing to customize and configure erasure coders 
> than the current having raw coder factory, as [~cmccabe] suggested. Will copy 
> the relevant comments here to continue the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14315) Python example in the rack awareness document doesn't work due to bad indentation

2017-04-18 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14315:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   2.8.1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, branch-2.8, and branch-2.8.1. Thanks 
[~sekikn] for the contribution!

> Python example in the rack awareness document doesn't work due to bad 
> indentation
> -
>
> Key: HADOOP-14315
> URL: https://issues.apache.org/jira/browse/HADOOP-14315
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
>
> Running that example fails with:
> {code}
>   File "example.py", line 28
> address = '{0}/{1}'.format(ip, netmask)  # format 
> address string so it looks like 'ip/netmask' to make netaddr work
>   ^
> IndentationError: expected an indented block
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14315) Python example in the rack awareness document doesn't work due to bad indentation

2017-04-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972306#comment-15972306
 ] 

ASF GitHub Bot commented on HADOOP-14315:
-

Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/214


> Python example in the rack awareness document doesn't work due to bad 
> indentation
> -
>
> Key: HADOOP-14315
> URL: https://issues.apache.org/jira/browse/HADOOP-14315
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>
> Running that example fails with:
> {code}
>   File "example.py", line 28
> address = '{0}/{1}'.format(ip, netmask)  # format 
> address string so it looks like 'ip/netmask' to make netaddr work
>   ^
> IndentationError: expected an indented block
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14315) Python example in the rack awareness document doesn't work due to bad indentation

2017-04-18 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972304#comment-15972304
 ] 

Akira Ajisaka commented on HADOOP-14315:


+1

> Python example in the rack awareness document doesn't work due to bad 
> indentation
> -
>
> Key: HADOOP-14315
> URL: https://issues.apache.org/jira/browse/HADOOP-14315
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>
> Running that example fails with:
> {code}
>   File "example.py", line 28
> address = '{0}/{1}'.format(ip, netmask)  # format 
> address string so it looks like 'ip/netmask' to make netaddr work
>   ^
> IndentationError: expected an indented block
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14318) Remove non-existent setfattr command option from FileSystemShell.md

2017-04-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972297#comment-15972297
 ] 

Hudson commented on HADOOP-14318:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11598 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11598/])
HADOOP-14318. Remove non-existent setfattr command option from (aajisaka: rev 
84a8848aaecd56b4cc85185924c2e33674165e01)
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md


> Remove non-existent setfattr command option from FileSystemShell.md
> ---
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14318.001.patch
>
>
> fix the documentation mistake: "*setfattr*" should not list the option "*-b*"



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2017-04-18 Thread Tim Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Yao updated HADOOP-13200:
-
Attachment: HADOOP-13200.03.patch

> Seeking a better approach allowing to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Tim Yao
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13200.02.patch, HADOOP-13200.03.patch
>
>
> This is a follow-on task for HADOOP-13010 as discussed over there. There may 
> be some better approach allowing to customize and configure erasure coders 
> than the current having raw coder factory, as [~cmccabe] suggested. Will copy 
> the relevant comments here to continue the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14315) Python example in the rack awareness document doesn't work due to bad indentation

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972295#comment-15972295
 ] 

Hadoop QA commented on HADOOP-14315:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HADOOP-14315 |
| GITHUB PR | https://github.com/apache/hadoop/pull/214 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 74e0fffeeeab 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 84a8848 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12115/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Python example in the rack awareness document doesn't work due to bad 
> indentation
> -
>
> Key: HADOOP-14315
> URL: https://issues.apache.org/jira/browse/HADOOP-14315
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
>
> Running that example fails with:
> {code}
>   File "example.py", line 28
> address = '{0}/{1}'.format(ip, netmask)  # format 
> address string so it looks like 'ip/netmask' to make netaddr work
>   ^
> IndentationError: expected an indented block
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14318) Remove non-existent setfattr command option from FileSystemShell.md

2017-04-18 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14318:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   2.8.1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, branch-2.8, and branch-2.8.1. Thanks 
[~doris] for the contribution!

> Remove non-existent setfattr command option from FileSystemShell.md
> ---
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14318.001.patch
>
>
> fix the documentation mistake: "*setfattr*" should not list the option "*-b*"



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14318) Remove non-existent setfattr command option from FileSystemShell.md

2017-04-18 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-14318:
--

Assignee: Doris Gu
Hadoop Flags: Reviewed
 Summary: Remove non-existent setfattr command option from 
FileSystemShell.md  (was: FileSystemShell markdown did not changed with Java 
source)

> Remove non-existent setfattr command option from FileSystemShell.md
> ---
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Assignee: Doris Gu
>Priority: Minor
> Attachments: HADOOP-14318.001.patch
>
>
> fix the documentation mistake: "*setfattr*" should not list the option "*-b*"



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2017-04-18 Thread Tim Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Yao updated HADOOP-13200:
-
Attachment: (was: HADOOP-13200.01.patch)

> Seeking a better approach allowing to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Tim Yao
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13200.02.patch
>
>
> This is a follow-on task for HADOOP-13010 as discussed over there. There may 
> be some better approach allowing to customize and configure erasure coders 
> than the current having raw coder factory, as [~cmccabe] suggested. Will copy 
> the relevant comments here to continue the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972257#comment-15972257
 ] 

Hadoop QA commented on HADOOP-13200:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 12s{color} 
| {color:red} HADOOP-13200 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13200 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863760/HADOOP-13200.02.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12114/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Seeking a better approach to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Tim Yao
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13200.01.patch, HADOOP-13200.02.patch
>
>
> This is a follow-on task for HADOOP-13010, as discussed over there. There may 
> be a better approach to customizing and configuring erasure coders than the 
> current raw coder factory, as [~cmccabe] suggested. The relevant comments 
> will be copied here to continue the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13200) Seeking a better approach to customize and configure erasure coders

2017-04-18 Thread Tim Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Yao updated HADOOP-13200:
-
Attachment: HADOOP-13200.02.patch

> Seeking a better approach to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Tim Yao
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13200.01.patch, HADOOP-13200.02.patch
>
>
> This is a follow-on task for HADOOP-13010, as discussed over there. There may 
> be a better approach to customizing and configuring erasure coders than the 
> current raw coder factory, as [~cmccabe] suggested. The relevant comments 
> will be copied here to continue the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14319) Under-replicated blocks are not getting re-replicated

2017-04-18 Thread Anil (JIRA)
Anil created HADOOP-14319:
-

 Summary: Under-replicated blocks are not getting re-replicated
 Key: HADOOP-14319
 URL: https://issues.apache.org/jira/browse/HADOOP-14319
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.2
Reporter: Anil


Under-replicated blocks are not getting re-replicated.

In a production Hadoop cluster of 5 management + 5 data nodes, under-replicated 
blocks are not re-replicated even after 2 days.

Here is a quick view of the relevant configuration:

 Default replication factor: 3
 Average block replication:  3.0
 Corrupt blocks:             0
 Missing replicas:           0 (0.0 %)
 Number of data-nodes:       5
 Number of racks:            1

After bringing one of the DataNodes down, the replication factor for the blocks 
hosted on that node dropped to 2. Even after 2 days the replication factor 
remains 2; the under-replicated blocks are not re-replicated to the other 
DataNodes in the cluster.

When a DataNode goes down, HDFS is expected to re-replicate the blocks from the 
dead DN to other nodes according to replication priority. Are there any 
configuration changes to speed up re-replication of the under-replicated 
blocks? (See the configuration sketch below.)

When tested with blocks of replication factor 1, re-replication to factor 2 
happened overnight, in around 10 hours. But blocks with replication factor 2 
are not being re-replicated up to the default factor of 3.
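
Not an answer to the underlying bug, but on the tuning question above: a sketch of the hdfs-site.xml properties commonly adjusted to speed up re-replication. The property names exist in 2.7.x; the values below are purely illustrative, not recommendations:

{code}
<!-- hdfs-site.xml: knobs commonly tuned to speed up block re-replication. -->
<property>
  <!-- Blocks scheduled for replication per DN, per heartbeat interval (default 2). -->
  <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
  <value>10</value>
</property>
<property>
  <!-- Maximum concurrent replication streams per DataNode (default 2). -->
  <name>dfs.namenode.replication.max-streams</name>
  <value>10</value>
</property>
{code}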



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11790) leveldb usage should be disabled by default or smarter about platforms

2017-04-18 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972244#comment-15972244
 ] 

Ayappan commented on HADOOP-11790:
--

I have been trying to get the leveldbjni community to make a new release, but no luck so far.

https://github.com/fusesource/leveldbjni/issues/85

> leveldb usage should be disabled by default or smarter about platforms
> --
>
> Key: HADOOP-11790
> URL: https://issues.apache.org/jira/browse/HADOOP-11790
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: * any non-x86
> * any OS that isn't Linux, OSX, Windows
>Reporter: Ayappan
>Priority: Critical
>
> The leveldbjni artifact in the Maven repository has been built only for the 
> x86 architecture, because of which some of the testcases fail on PowerPC. The 
> leveldbjni community has no plans to support other platforms [ 
> https://github.com/fusesource/leveldbjni/issues/54 ]. Right now the approach 
> is to build leveldbjni locally before running the Hadoop testcases. Pushing a 
> PowerPC-specific leveldbjni artifact to the central Maven repository and 
> making pom.xml pick it up when running on PowerPC is another option (sketched 
> below), but I don't know whether that is suitable. Is there any other 
> alternative/solution?
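
To make the pom.xml idea concrete, a hedged sketch of how an architecture-specific artifact could be selected with a Maven profile activated by OS arch. The ppc64le artifactId below is hypothetical, since no such artifact is currently published:

{code}
<!-- Hypothetical: select a PowerPC leveldbjni artifact when building on
     ppc64le. Assumes such an artifact were actually pushed to Maven Central. -->
<profiles>
  <profile>
    <id>leveldbjni-ppc64le</id>
    <activation>
      <os><arch>ppc64le</arch></os>
    </activation>
    <dependencies>
      <dependency>
        <groupId>org.fusesource.leveldbjni</groupId>
        <artifactId>leveldbjni-linux64-ppc64le</artifactId> <!-- hypothetical -->
        <version>1.8</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>
{code}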



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14318) FileSystemShell markdown did not change with Java source

2017-04-18 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972240#comment-15972240
 ] 

Doris Gu edited comment on HADOOP-14318 at 4/18/17 7:28 AM:


Thanks, [~ajisakaa]. Jenkins passed...


was (Author: doris):
Thanks, [~ajisakaa]. Jenkins passed, ^o^

> FileSystemShell markdown did not change with Java source
> -
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HADOOP-14318.001.patch
>
>
> Fix the mistake in the "*setfattr*" documentation, which should not list the option "*-b*".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14318) FileSystemShell markdown did not change with Java source

2017-04-18 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972240#comment-15972240
 ] 

Doris Gu edited comment on HADOOP-14318 at 4/18/17 7:28 AM:


Thanks, [~ajisakaa]. Jenkins passed, ^o^


was (Author: doris):
Thanks, [~ajisakaa]. Jenkins passed, ^.^

> FileSystemShell markdown did not change with Java source
> -
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HADOOP-14318.001.patch
>
>
> Fix the mistake in the "*setfattr*" documentation, which should not list the option "*-b*".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14318) FileSystemShell markdown did not change with Java source

2017-04-18 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972240#comment-15972240
 ] 

Doris Gu commented on HADOOP-14318:
---

Thanks, [~ajisakaa]. Jenkins passed, ^.^

> FileSystemShell markdown did not change with Java source
> -
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HADOOP-14318.001.patch
>
>
> Fix the mistake in the "*setfattr*" documentation, which should not list the option "*-b*".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14318) FileSystemShell markdown did not change with Java source

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972230#comment-15972230
 ] 

Hadoop QA commented on HADOOP-14318:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HADOOP-14318 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863753/HADOOP-14318.001.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux a3dd497b006e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8dfcd95 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12113/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FileSystemShell markdown did not change with Java source
> -
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HADOOP-14318.001.patch
>
>
> Fix the mistake in the "*setfattr*" documentation, which should not list the option "*-b*".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14318) FileSystemShell markdown did not change with Java source

2017-04-18 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14318:
---
Target Version/s: 2.9.0, 2.8.1, 3.0.0-alpha3
  Status: Patch Available  (was: Open)

Nice catch! +1 pending Jenkins.

> FileSystemShell markdown did not change with Java source
> -
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HADOOP-14318.001.patch
>
>
> Fix the mistake in the "*setfattr*" documentation, which should not list the option "*-b*".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14318) FileSystemShell markdown did not change with Java source

2017-04-18 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14318:
---
Component/s: documentation

> FileSystemShell markdown did not change with Java source
> -
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HADOOP-14318.001.patch
>
>
> Fix the mistake in the "*setfattr*" documentation, which should not list the option "*-b*".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14317) KMSWebServer$deprecateEnv may leak secret

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972213#comment-15972213
 ] 

Hadoop QA commented on HADOOP-14317:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-common-project/hadoop-kms generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
59s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-kms |
|  |  Dead store to propValue in 
org.apache.hadoop.crypto.key.kms.server.KMSWebServer.deprecateEnv(String, 
Configuration, String, String)  At 
KMSWebServer.java:org.apache.hadoop.crypto.key.kms.server.KMSWebServer.deprecateEnv(String,
 Configuration, String, String)  At KMSWebServer.java:[line 108] |
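
For anyone unfamiliar with the findbugs term: a dead store is an assignment to a local variable whose value is never subsequently read. A hypothetical illustration of the flagged shape (not the actual KMSWebServer.deprecateEnv source):

{code}
// Hypothetical sketch of a "dead store" (not the real KMSWebServer code):
// findbugs flags the assignment because propValue is never read afterwards.
class DeadStoreSketch {
  static void deprecateEnvSketch(String envName) {
    String propValue = System.getenv(envName); // dead store: never used below
    // ... method continues without ever reading propValue ...
  }
}
{code}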
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | HADOOP-14317 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863745/HADOOP-14317.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f9f27c9b2a54 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8dfcd95 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12112/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-kms.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12112/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12112/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HADOOP-14318) FileSystemShell markdown did not change with Java source

2017-04-18 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972206#comment-15972206
 ] 

Doris Gu commented on HADOOP-14318:
---

Fixed some errors in FileSystemShell.md; please check, thanks!

> FileSystemShell markdown did not change with Java source
> -
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HADOOP-14318.001.patch
>
>
> Fix the mistake in the "*setfattr*" documentation, which should not list the option "*-b*".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14318) FileSystemShell markdown did not change with Java source

2017-04-18 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HADOOP-14318:
--
Attachment: HADOOP-14318.001.patch

> FileSystemShell markdown did not change with Java source
> -
>
> Key: HADOOP-14318
> URL: https://issues.apache.org/jira/browse/HADOOP-14318
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HADOOP-14318.001.patch
>
>
> Fix the mistake in the "*setfattr*" documentation, which should not list the option "*-b*".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org