[jira] [Commented] (HADOOP-13827) Add reencryptEDEK interface for KMS

2016-11-30 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711237#comment-15711237
 ] 

Andrew Wang commented on HADOOP-13827:
--

Thanks for the rev Xiao. Some review comments; this looks pretty close:

* General note: we should try to use the more generic term EEK rather than 
EDEK, and not refer to "ezKey".
* KeyProvider: you can use an EqualsBuilder or Guava equivalent to simplify 
this (a sketch follows below).
* Thinking about it more, I guess there's no need to provide a different 
keyname at this point. We can compatibly add a two-parameter reencrypt method 
later when we need it. Sorry for causing this extra work.
* Agree on reusing GENERATE_EEK for authorization; I forgot that these are not 
per-op but actually classes of ops.
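
For the KeyProvider equals() point, a minimal sketch using Guava (the 
{{KeyVersion}} getters exist in KeyProvider, but treat the exact shape as 
illustrative rather than the patch itself):

{code}
import java.util.Arrays;
import com.google.common.base.Objects;

@Override
public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  if (obj == null || getClass() != obj.getClass()) {
    return false;
  }
  KeyVersion other = (KeyVersion) obj;
  // Objects.equal handles nulls; the byte[] material needs Arrays.equals.
  return Objects.equal(getName(), other.getName())
      && Objects.equal(getVersionName(), other.getVersionName())
      && Arrays.equals(getMaterial(), other.getMaterial());
}
{code}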

> Add reencryptEDEK interface for KMS
> ---
>
> Key: HADOOP-13827
> URL: https://issues.apache.org/jira/browse/HADOOP-13827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13827.02.patch, HDFS-11159.01.patch
>
>
> This is the KMS part. Please refer to HDFS-10899 for the design doc.






[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-11-30 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711073#comment-15711073
 ] 

Varun Vasudev commented on HADOOP-13835:


[~ajisakaa] - can you please review the latest patch? Thanks!

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.






[jira] [Reopened] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li reopened HADOOP-13849:
-

I

> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compression speed is almost the same. (I would expect system-native to 
> compress faster than java-builtin.)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enabled the native bzip2 library; the output of the 
> command "hadoop checknative -a" is as follows:
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so






[jira] [Issue Comment Deleted] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HADOOP-13849:

Comment: was deleted

(was: I)

> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compression speed is almost the same. (I would expect system-native to 
> compress faster than java-builtin.)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enabled the native bzip2 library; the output of the 
> command "hadoop checknative -a" is as follows:
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so






[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-11-30 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710947#comment-15710947
 ] 

Mingliang Liu commented on HADOOP-13449:


Sorry for the late reply. Thank you very much, [~fabbri], for running the 
integration tests and analyzing the failure. I can reproduce the unit test 
failure {{TestS3AGetFileStatus#testNotFound}}. I can also reproduce the 
integration failures in the US-standard region. I'll work on them tomorrow. 
Thanks for taking care of {{ITestS3AFileSystemContract}}.
{code}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractGetFileStatus
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractMkdir
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractSeek
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractRename
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractDelete
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractOpen
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.946 sec - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractMkdir
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.332 sec - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractOpen
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.372 sec - 
in org.apache.hadoop.fs.contract.s3n.ITestS3NContractCreate
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractDelete
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.455 sec - in 
org.apache.hadoop.fs.contract.s3n.ITestS3NContractDelete
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.375 sec - in 
org.apache.hadoop.fs.contract.s3n.ITestS3NContractMkdir
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractOpen
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.406 sec - in 
org.apache.hadoop.fs.contract.s3n.ITestS3NContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.478 sec - in 
org.apache.hadoop.fs.contract.s3n.ITestS3NContractOpen
Running org.apache.hadoop.fs.contract.s3n.ITestS3NContractSeek
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContext
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.313 sec - in 
org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContext
Tests run: 18, Failures: 0, Errors: 0, Skipped: 18, Time elapsed: 0.655 sec - 
in org.apache.hadoop.fs.contract.s3n.ITestS3NContractSeek
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextCreateMkdir
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.987 sec - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractRename
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextURI
Tests run: 10, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 73.829 sec - 
in org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextUtil
Tests run: 8, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 75.878 sec <<< 
FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractDelete
testDeleteNonEmptyDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractDelete)
  Time elapsed: 28.759 sec  <<< FAILURE!
java.lang.AssertionError: non recursive delete should have raised an exception, 
but completed with exit code true
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.fs.contract.AbstractContractDeleteTest.testDeleteNonEmptyDirNonRecursive(AbstractContractDeleteTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)

[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-11-30 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710879#comment-15710879
 ] 

Aaron Fabbri commented on HADOOP-13449:
---

FYI, so we don't duplicate effort: I'm looking at the 
ITestS3AFileSystemContract failure right now.  It looks like it may be a 
failure to delete from the DDB metadata store.

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, 
> HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, 
> HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, 
> HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, 
> HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.






[jira] [Updated] (HADOOP-13848) Missing auth-keys.xml prevents detecting test code build problem

2016-11-30 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13848:

Affects Version/s: 2.6.0

> Missing auth-keys.xml prevents detecting test code build problem
> 
>
> Key: HADOOP-13848
> URL: https://issues.apache.org/jira/browse/HADOOP-13848
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, test
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
>
> Both hadoop-aws and hadoop-openstack require the existence of the file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the absence of auth-keys.xml also prevents building the test code. 
> Unfortunately, this delays detection of build problems in the test code, 
> e.g., problems introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
> proposes a solution. Any time you want to run tests, you must do two things 
> instead of one:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra flag {{-DskipTests=false}}
> I would like the community to weigh in on this.
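
For reference, the {{Skipping by Default}} approach boils down to a pom.xml 
change roughly like the following (a minimal sketch based on the linked 
surefire documentation; treat it as illustrative, not the exact Hadoop change):

{code}
<!-- Default the skipTests property to true, so a plain "mvn install"
     skips tests; "mvn install -DskipTests=false" runs them. -->
<properties>
  <skipTests>true</skipTests>
</properties>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <skipTests>${skipTests}</skipTests>
      </configuration>
    </plugin>
  </plugins>
</build>
{code}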






[jira] [Updated] (HADOOP-13848) Missing auth-keys.xml prevents detecting test code build problem

2016-11-30 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13848:

Component/s: (was: fs/swift)

> Missing auth-keys.xml prevents detecting test code build problem
> 
>
> Key: HADOOP-13848
> URL: https://issues.apache.org/jira/browse/HADOOP-13848
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, test
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
>
> Both hadoop-aws and hadoop-openstack require the existence of the file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the absence of auth-keys.xml also prevents building the test code. 
> Unfortunately, this delays detection of build problems in the test code, 
> e.g., problems introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
> proposes a solution. Any time you want to run tests, you must do two things 
> instead of one:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra flag {{-DskipTests=false}}
> I would like the community to weigh in on this.






[jira] [Resolved] (HADOOP-13848) Missing auth-keys.xml prevents detecting test code build problem

2016-11-30 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-13848.
-
Resolution: Fixed

The problem in hadoop-aws is fixed by HADOOP-13446 in trunk, branch-2, and 
branch-2.8.

> Missing auth-keys.xml prevents detecting test code build problem
> 
>
> Key: HADOOP-13848
> URL: https://issues.apache.org/jira/browse/HADOOP-13848
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, test
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
>
> Both hadoop-aws and hadoop-openstack require the existence of the file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the absence of auth-keys.xml also prevents building the test code. 
> Unfortunately, this delays detection of build problems in the test code, 
> e.g., problems introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
> proposes a solution. Any time you want to run tests, you must do two things 
> instead of one:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra flag {{-DskipTests=false}}
> I would like the community to weigh in on this.






[jira] [Commented] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710658#comment-15710658
 ] 

Tao Li commented on HADOOP-13849:
-

Yes. I think "system-native" should have better compression/decompression 
performance than "java-builtin".
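
A quick way to double-check which implementation a job actually picks up (a 
minimal sketch, assuming the Hadoop 2.6 {{Bzip2Factory}} API):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.bzip2.Bzip2Factory;

public class CheckBzip2 {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("io.compression.codec.bzip2.library", "system-native");
    // True only if libbz2 was found and the native hook actually loaded;
    // otherwise the codec falls back to the Java implementation.
    System.out.println("native bzip2 loaded: "
        + Bzip2Factory.isNativeBzip2Loaded(conf));
  }
}
{code}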

> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compression speed is almost the same. (I would expect system-native to 
> compress faster than java-builtin.)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enabled the native bzip2 library; the output of the 
> command "hadoop checknative -a" is as follows:
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so






[jira] [Commented] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710655#comment-15710655
 ] 

Tao Li commented on HADOOP-13849:
-

[~ste...@apache.org]
1. I saw "using java-builtin" or "using system-native" in my test case logs, 
so I am sure my test cases are correct.
2. My hardware (CPU/memory/network bandwidth/disk bandwidth) is not the 
bottleneck.
3. I have also tested decompression speed. I even found that "java-builtin" 
is faster than "system-native".

> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compression speed is almost the same. (I would expect system-native to 
> compress faster than java-builtin.)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enabled the native bzip2 library; the output of the 
> command "hadoop checknative -a" is as follows:
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so






[jira] [Commented] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710650#comment-15710650
 ] 

Hudson commented on HADOOP-13840:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10922 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10922/])
HADOOP-13840. Implement getUsed() for ViewFileSystem. Contributed by (wang: rev 
1f7613be958bbdb735fd2b49e3f0b48e2c8b7c13)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java


> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13840.01.patch, HADOOP-13840.02.patch
>
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 
> implementation tries to run {{getContentSummary}}, which crashes on 
> seeing unexpected mount points under slash. 
> ViewFileSystem#getUsed() is not expected to collate all the space used from 
> all the mount points configured under "/". The proposal is to avoid invoking 
> FileSystem#getUsed() and throw NotInMountPointException until LinkMergeSlash 
> is supported.
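
A minimal sketch of the proposed behavior (this illustrates the proposal 
above, assuming the existing viewfs {{NotInMountpointException}} is reused; 
it is not the committed patch):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.NotInMountpointException;

// Override getUsed() in ViewFileSystem so the getContentSummary-based
// FileSystem default is never invoked for the slash root.
@Override
public long getUsed() throws IOException {
  // "/" is an internal mount-table root with multiple mount points under
  // it, so there is no single target file system to report usage for yet.
  throw new NotInMountpointException(new Path("/"), "getUsed");
}
{code}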






[jira] [Commented] (HADOOP-13818) TestLocalFileSystem#testSetTimes fails

2016-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710581#comment-15710581
 ] 

Hadoop QA commented on HADOOP-13818:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841189/HADOOP-13818.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e69aa5256467 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7226a71 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11173/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11173/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestLocalFileSystem#testSetTimes fails
> --
>
> Key: HADOOP-13818
> URL: https://issues.apache.org/jira/browse/HADOOP-13818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
> Environment: Mac OS Sierra, both OpenJDK 8u122-ea and Oracle JDK 8u112
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: 

[jira] [Updated] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-30 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13840:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks again for working on this Manoj!

> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13840.01.patch, HADOOP-13840.02.patch
>
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 
> implementation tries to run {{getContentSummary}}, which crashes on 
> seeing unexpected mount points under slash. 
> ViewFileSystem#getUsed() is not expected to collate all the space used from 
> all the mount points configured under "/". The proposal is to avoid invoking 
> FileSystem#getUsed() and throw NotInMountPointException until LinkMergeSlash 
> is supported.






[jira] [Commented] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710533#comment-15710533
 ] 

Hadoop QA commented on HADOOP-13840:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13840 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841186/HADOOP-13840.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4bfa602a9ce9 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7226a71 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11172/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11172/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13840.01.patch, HADOOP-13840.02.patch
>
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 

[jira] [Updated] (HADOOP-13818) TestLocalFileSystem#testSetTimes fails

2016-11-30 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13818:
---
Attachment: HADOOP-13818.004.patch

> TestLocalFileSystem#testSetTimes fails
> --
>
> Key: HADOOP-13818
> URL: https://issues.apache.org/jira/browse/HADOOP-13818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
> Environment: Mac OS Sierra, both OpenJDK 8u122-ea and Oracle JDK 8u112
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13818.001.patch, HADOOP-13818.002.patch, 
> HADOOP-13818.003.patch, HADOOP-13818.004.patch
>
>
> {noformat}
> Running org.apache.hadoop.fs.TestLocalFileSystem
> Tests run: 20, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 4.887 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.TestLocalFileSystem
> testSetTimes(org.apache.hadoop.fs.TestLocalFileSystem)  Time elapsed: 0.084 
> sec  <<< FAILURE!
> java.lang.AssertionError: expected:<23456000> but was:<1479176144000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.checkTimesStatus(TestLocalFileSystem.java:391)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.testSetTimes(TestLocalFileSystem.java:414)
> {noformat}






[jira] [Updated] (HADOOP-13818) TestLocalFileSystem#testSetTimes fails

2016-11-30 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13818:
---
Attachment: (was: HADOOP-13818.004.patch)

> TestLocalFileSystem#testSetTimes fails
> --
>
> Key: HADOOP-13818
> URL: https://issues.apache.org/jira/browse/HADOOP-13818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
> Environment: Mac OS Sierra, both OpenJDK 8u122-ea and Oracle JDK 8u112
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13818.001.patch, HADOOP-13818.002.patch, 
> HADOOP-13818.003.patch, HADOOP-13818.004.patch
>
>
> {noformat}
> Running org.apache.hadoop.fs.TestLocalFileSystem
> Tests run: 20, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 4.887 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.TestLocalFileSystem
> testSetTimes(org.apache.hadoop.fs.TestLocalFileSystem)  Time elapsed: 0.084 
> sec  <<< FAILURE!
> java.lang.AssertionError: expected:<23456000> but was:<1479176144000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.checkTimesStatus(TestLocalFileSystem.java:391)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.testSetTimes(TestLocalFileSystem.java:414)
> {noformat}






[jira] [Updated] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-30 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13840:

Attachment: HADOOP-13840.02.patch

Attaching v02 to address javadoc style issues. Thanks for the review 
[~andrew.wang].

> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13840.01.patch, HADOOP-13840.02.patch
>
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 
> implementation tries to run {{getContentSummary}}, which crashes on 
> seeing unexpected mount points under slash. 
> ViewFileSystem#getUsed() is not expected to collate all the space used from 
> all the mount points configured under "/". The proposal is to avoid invoking 
> FileSystem#getUsed() and throw NotInMountPointException until LinkMergeSlash 
> is supported.






[jira] [Updated] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-30 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13840:

Attachment: (was: HADOOP-13840.02.patch)

> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13840.01.patch
>
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 
> implementation tries to run {{getContentSummary}}, which crashes on 
> seeing unexpected mount points under slash. 
> ViewFileSystem#getUsed() is not expected to collate all the space used from 
> all the mount points configured under "/". The proposal is to avoid invoking 
> FileSystem#getUsed() and throw NotInMountPointException until LinkMergeSlash 
> is supported.






[jira] [Commented] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-30 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710304#comment-15710304
 ] 

Andrew Wang commented on HADOOP-13840:
--

LGTM, +1 aside from the flagged javadoc errors. Thanks for working on this, Manoj!

> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13840.01.patch, HADOOP-13840.02.patch
>
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 
> implementation tries to run {{getContentSummary}}, which crashes on 
> seeing unexpected mount points under slash. 
> ViewFileSystem#getUsed() is not expected to collate all the space used from 
> all the mount points configured under "/". The proposal is to avoid invoking 
> FileSystem#getUsed() and throw NotInMountPointException until LinkMergeSlash 
> is supported.






[jira] [Commented] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710252#comment-15710252
 ] 

Hadoop QA commented on HADOOP-13793:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
51s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
41s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13793 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841151/HADOOP-13793-HADOOP-13345.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bc3d50679ef0 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 5e93093 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11171/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11171/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13793-HADOOP-13345.001.patch, 
> HADOOP-13793-HADOOP-13345.002.patch
>
>
> Many of us share concerns that testing the consistency features of S3Guard 
> 

[jira] [Commented] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-30 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710216#comment-15710216
 ] 

Aaron Fabbri commented on HADOOP-13793:
---

All s3a integration tests passed with default endpoint in US West 2.

Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractDelete
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractGetFileStatus
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractMkdir
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractOpen
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractRename
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractSeek
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContext
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextCreateMkdir
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextURI
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextUtil
Running org.apache.hadoop.fs.s3a.ITestBlockingThreadPoolExecutorService
Running org.apache.hadoop.fs.s3a.ITestS3AAWSCredentialsProvider
Running org.apache.hadoop.fs.s3a.ITestS3ABlockOutputArray
Running org.apache.hadoop.fs.s3a.ITestS3ABlockOutputByteBuffer
Running org.apache.hadoop.fs.s3a.ITestS3ABlockOutputDisk
Running org.apache.hadoop.fs.s3a.ITestS3ABlocksize
Running org.apache.hadoop.fs.s3a.ITestS3AConfiguration
Running org.apache.hadoop.fs.s3a.ITestS3ACredentialsInURL
Running org.apache.hadoop.fs.s3a.ITestS3AEncryption
Running org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmPropagation
Running org.apache.hadoop.fs.s3a.ITestS3AEncryptionBlockOutputStream
Running org.apache.hadoop.fs.s3a.ITestS3AFailureHandling
Running org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost
Running org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract
Running org.apache.hadoop.fs.s3a.ITestS3AMiscOperations
Running org.apache.hadoop.fs.s3a.ITestS3ATemporaryCredentials
Running org.apache.hadoop.fs.s3a.ITestS3ATestUtils
Running org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency
Running org.apache.hadoop.fs.s3a.scale.ITestS3ADeleteFilesOneByOne
Running org.apache.hadoop.fs.s3a.scale.ITestS3ADeleteManyFiles
Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
Running org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesArrayBlocks
Running org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesByteBufferBlocks
Running org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesClassicOutput
Running org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesDiskBlocks
Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
Running org.apache.hadoop.fs.s3a.yarn.ITestS3A
Running org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster
Running org.apache.hadoop.fs.s3native.ITestInMemoryNativeS3FileSystemContract
...
Failed tests:
  
ITestJets3tNativeS3FileSystemContract>NativeS3FileSystemContractBaseTest.testListStatusForRoot:66
 Root directory is not empty;  expected:<0> but was:<4>
Tests in error:
  
ITestJets3tNativeS3FileSystemContract>FileSystemContractBaseTest.testLSRootDir:727->FileSystemContractBaseTest.assertListFilesFinds:742
 » FileNotFound
Tests run: 492, Failures: 1, Errors: 1, Skipped: 103

> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13793-HADOOP-13345.001.patch, 
> HADOOP-13793-HADOOP-13345.002.patch
>
>
> Many of us share concerns that testing the consistency features of S3Guard 
> will be difficult if we depend on the rare and unpredictable occurrence of 
> actual inconsistency in S3 to exercise those code paths.
> I think we should have a mechanism for injecting failures, to force the 
> consistency code paths in S3Guard to be exercised.
> Requirements:
> - Integration tests that cause S3A to see the types of inconsistency we 
> address with S3Guard.
> - These integration tests must be deterministic.
> Unit tests are possible as well, if we were to stub out the S3Client.  That 
> may be less bang for the buck, though.
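
To make the injection idea concrete, here is a minimal sketch (all names 
hypothetical, independent of any actual patch): a small tracker that a 
failure-injecting S3 client wrapper could consult to hide newly created keys 
from list results until a configurable delay elapses, deterministically 
simulating S3's eventual list consistency.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical helper: simulates delayed listing visibility of new keys. */
class DelayedVisibilityTracker {
  private final Map<String, Long> createTimes = new ConcurrentHashMap<>();
  private final long delayMs;

  DelayedVisibilityTracker(long delayMs) {
    this.delayMs = delayMs;
  }

  /** Record that a key was just written. */
  void recordCreate(String key) {
    createTimes.put(key, System.currentTimeMillis());
  }

  /** A wrapping client's list call would drop keys while this is false. */
  boolean isVisible(String key) {
    Long created = createTimes.get(key);
    return created == null || System.currentTimeMillis() - created >= delayMs;
  }
}
{code}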






[jira] [Commented] (HADOOP-13257) Improve Azure Data Lake contract tests.

2016-11-30 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15710143#comment-15710143
 ] 

Mingliang Liu commented on HADOOP-13257:


Thanks for providing a patch. It seems to address all of Steve's previous 
comments and looks good to me overall. I found it interesting to read through 
the {{fillUnicodes()}} methods. I also like the idea of using 
{{Parameterized}} tests.

# I don't have an Azure subscription; did you finish a successful full test 
run against the Azure Data Lake back-end with this patch?
# In {{TestAdlSupportedCharsetInPath}}, is {{failureReport}} ever reported? 
Private helper methods named {{assertTrue}} and {{assertFalse}} may be 
confused with the JUnit methods. We should choose different names.
# In {{TestMetadata.java}}, we can make the parent a static variable, as 
it's used in all test cases.
# Many {{assertEquals()}} calls should use the pattern {{assertEquals(expected, 
actual)}}, or else the failure message will be confusing.
# The license statement in {{TestAdlPermissionLive}} is ill-formatted.
# When generating {{Parameterized.Parameters}}, can we use loops? They're 
clearer for covering different cases (see the sketch after this list).
# The following method can be simplified
{code}
  private boolean contains(FileStatus[] statuses, String remotePath) {
    boolean contains = false;
    for (FileStatus status : statuses) {
      if (status.getPath().toString().equals(remotePath)) {
        contains = true;
      }
    }

    if (!contains) {
      for (FileStatus status : statuses) {
        LOG.debug("Directory Content");
        LOG.debug(status.getPath().toString());
      }
    }

    return contains;
  }
{code}
as
{code}
  private boolean contains(FileStatus[] statuses, String remotePath) {
for (FileStatus status : statuses) {
  LOG.debug("Directory Content: {}", status.getPath());
  if (status.getPath().toString().equals(remotePath)) {
return true;
  }
}
return false;
  }
{code}
# Checkstyle warnings are related; you can see them if you run 
{{mvn checkstyle:check}} locally with this patch (I can't open the Jenkins 
pre-commit report):
{code}
[ERROR] src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java[473] (blocks) 
NeedBraces: 'if' construct must use '{}'s.
[ERROR] 
src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileContextMainOperationsLive.java[28:8]
 (imports) UnusedImports: Unused import - org.apache.hadoop.fs.Path.
[ERROR] 
src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileContextMainOperationsLive.java[29:8]
 (imports) UnusedImports: Unused import - 
org.apache.hadoop.fs.permission.FsPermission.
[ERROR] 
src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileContextMainOperationsLive.java[30:8]
 (imports) UnusedImports: Unused import - 
org.apache.hadoop.security.AccessControlException.
[ERROR] 
src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileContextMainOperationsLive.java[31:8]
 (imports) UnusedImports: Unused import - org.junit.Assert.
[ERROR] 
src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileContextMainOperationsLive.java[40:15]
 (imports) UnusedImports: Unused import - 
org.apache.hadoop.fs.FileContextTestHelper.exists.
[ERROR] 
src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileSystemContractLive.java[25:8]
 (imports) UnusedImports: Unused import - 
org.apache.hadoop.security.AccessControlException.
[ERROR] 
src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileSystemContractLive.java[28:8]
 (imports) UnusedImports: Unused import - org.junit.Ignore.
{code}
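To illustrate point 4, a minimal JUnit sketch; the class and method names are 
hypothetical, not from the patch:

{code}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AssertArgumentOrderExample {

  // Hypothetical method under test, standing in for the ADL code.
  private String scheme() {
    return "adl";
  }

  @Test
  public void testArgumentOrder() {
    // Correct order: assertEquals(expected, actual). On failure JUnit
    // reports "expected:<adl> but was:<...>", which reads correctly.
    assertEquals("adl", scheme());

    // With the arguments swapped, the code still compiles, but a failure
    // would report the actual value as the expectation, which misleads:
    // assertEquals(scheme(), "adl");
  }
}
{code}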

> Improve Azure Data Lake contract tests.
> ---
>
> Key: HADOOP-13257
> URL: https://issues.apache.org/jira/browse/HADOOP-13257
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Nauroth
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-13257.001.patch
>
>
> HADOOP-12875 provided the initial implementation of the FileSystem contract 
> tests covering Azure Data Lake.  This issue tracks subsequent improvements on 
> those test suites for improved coverage and matching the specified semantics 
> more closely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-30 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13793:
--
Status: Patch Available  (was: Open)

> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13793-HADOOP-13345.001.patch, 
> HADOOP-13793-HADOOP-13345.002.patch
>
>
> Many of us share concerns that testing the consistency features of S3Guard 
> will be difficult if we depend on the rare and unpredictable occurrence of 
> actual inconsistency in S3 to exercise those code paths.
> I think we should have a mechanism for injecting failure to force exercising 
> of the consistency codepaths in S3Guard.
> Requirements:
> - Integration tests that cause S3A to see the types of inconsistency we 
> address with S3Guard.
> - These are deterministic integration tests.
> Unit tests are possible as well, if we were to stub out the S3Client.  That 
> may be less bang for the buck, though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-30 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13793:
--
Attachment: HADOOP-13793-HADOOP-13345.002.patch

Reattaching patch: removing "Created by fabbri on Date" comment my editor added 
to a class.

> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13793-HADOOP-13345.001.patch, 
> HADOOP-13793-HADOOP-13345.002.patch
>
>
> Many of us share concerns that testing the consistency features of S3Guard 
> will be difficult if we depend on the rare and unpredictable occurrence of 
> actual inconsistency in S3 to exercise those code paths.
> I think we should have a mechanism for injecting failure to force exercising 
> of the consistency codepaths in S3Guard.
> Requirements:
> - Integration tests that cause S3A to see the types of inconsistency we 
> address with S3Guard.
> - These are deterministic integration tests.
> Unit tests are possible as well, if we were to stub out the S3Client.  That 
> may be less bang for the buck, though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-30 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13793:
--
Attachment: (was: HADOOP-13793-HADOOP-13345.002.patch)

> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13793-HADOOP-13345.001.patch, 
> HADOOP-13793-HADOOP-13345.002.patch
>
>
> Many of us share concerns that testing the consistency features of S3Guard 
> will be difficult if we depend on the rare and unpredictable occurrence of 
> actual inconsistency in S3 to exercise those code paths.
> I think we should have a mechanism for injecting failure to force exercising 
> of the consistency codepaths in S3Guard.
> Requirements:
> - Integration tests that cause S3A to see the types of inconsistency we 
> address with S3Guard.
> - These are deterministic integration tests.
> Unit tests are possible as well, if we were to stub out the S3Client.  That 
> may be less bang for the buck, though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-30 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13793:
--
Status: Open  (was: Patch Available)

> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13793-HADOOP-13345.001.patch, 
> HADOOP-13793-HADOOP-13345.002.patch
>
>
> Many of us share concerns that testing the consistency features of S3Guard 
> will be difficult if we depend on the rare and unpredictable occurrence of 
> actual inconsistency in S3 to exercise those code paths.
> I think we should have a mechanism for injecting failure to force exercising 
> of the consistency codepaths in S3Guard.
> Requirements:
> - Integration tests that cause S3A to see the types of inconsistency we 
> address with S3Guard.
> - These are deterministic integration tests.
> Unit tests are possible as well, if we were to stub out the S3Client.  That 
> may be less bang for the buck, though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709992#comment-15709992
 ] 

Hadoop QA commented on HADOOP-13793:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
 5s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
45s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13793 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841140/HADOOP-13793-HADOOP-13345.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 118c61f408c8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 5e93093 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11170/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11170/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13793-HADOOP-13345.001.patch, 
> HADOOP-13793-HADOOP-13345.002.patch
>
>
> Many of us share concerns that testing the consistency features of S3Guard 
> 

[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-30 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709923#comment-15709923
 ] 

Xiao Chen commented on HADOOP-13597:


Thanks John.

Yes, agreed. Let's keep this jira focused, so new configs only. The old ones 
aren't public, so they can be changed in another jira.

Re: KMSHttpServer testing, I was thinking of making only the most basic checks, 
so that if something breaks it will be obvious. If it's all covered by MiniKMS 
and failures can be easily figured out, that's fine by me.

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13850) s3guard to log choice of metadata store at debug

2016-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709914#comment-15709914
 ] 

Hadoop QA commented on HADOOP-13850:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
47s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
36s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13850 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841134/HADOOP-13850-HADOOP-13345.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 723d4f09112c 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 5e93093 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11169/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11169/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> s3guard to log choice of metadata store at debug
> 
>
> Key: HADOOP-13850
> URL: https://issues.apache.org/jira/browse/HADOOP-13850
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Priority: Trivial
> Attachments: 

[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-30 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709900#comment-15709900
 ] 

John Zhuge commented on HADOOP-13597:
-

The HDFS/Common way of key naming makes sense; I will switch, and leave the 
old keys alone.

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-30 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13793:
--
Attachment: HADOOP-13793-HADOOP-13345.002.patch

Attaching updated patch: rebased on latest HADOOP-13345 feature branch.

> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13793-HADOOP-13345.001.patch, 
> HADOOP-13793-HADOOP-13345.002.patch
>
>
> Many of us share concerns that testing the consistency features of S3Guard 
> will be difficult if we depend on the rare and unpredictable occurrence of 
> actual inconsistency in S3 to exercise those code paths.
> I think we should have a mechanism for injecting failure to force exercising 
> of the consistency codepaths in S3Guard.
> Requirements:
> - Integration tests that cause S3A to see the types of inconsistency we 
> address with S3Guard.
> - These are deterministic integration tests.
> Unit tests are possible as well, if we were to stub out the S3Client.  That 
> may be less bang for the buck, though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-30 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13793:
--
Status: Patch Available  (was: Open)

> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13793-HADOOP-13345.001.patch, 
> HADOOP-13793-HADOOP-13345.002.patch
>
>
> Many of us share concerns that testing the consistency features of S3Guard 
> will be difficult if we depend on the rare and unpredictable occurrence of 
> actual inconsistency in S3 to exercise those code paths.
> I think we should have a mechanism for injecting failure to force exercising 
> of the consistency codepaths in S3Guard.
> Requirements:
> - Integration tests that cause S3A to see the types of inconsistency we 
> address with S3Guard.
> - These are deterministic integration tests.
> Unit tests are possible as well, if we were to stub out the S3Client.  That 
> may be less bang for the buck, though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests

2016-11-30 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13793:
--
Status: Open  (was: Patch Available)

> s3guard: add inconsistency injection, integration tests
> ---
>
> Key: HADOOP-13793
> URL: https://issues.apache.org/jira/browse/HADOOP-13793
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13793-HADOOP-13345.001.patch, 
> HADOOP-13793-HADOOP-13345.002.patch
>
>
> Many of us share concerns that testing the consistency features of S3Guard 
> will be difficult if we depend on the rare and unpredictable occurrence of 
> actual inconsistency in S3 to exercise those code paths.
> I think we should have a mechanism for injecting failure to force exercising 
> of the consistency codepaths in S3Guard.
> Requirements:
> - Integration tests that cause S3A to see the types of inconsistency we 
> address with S3Guard.
> - These are deterministic integration tests.
> Unit tests are possible as well, if we were to stub out the S3Client.  That 
> may be less bang for the buck, though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-11-30 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709888#comment-15709888
 ] 

Daryn Sharp commented on HADOOP-13709:
--

+1 assuming no objections from [~jlowe].

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked 
> in I/O, waiting for the return value of the subprocess that was spawned. We 
> need to allow the subprocess to be interrupted and killed when the shell 
> process gets killed. Currently the JVM will shut down and all of the 
> subprocesses will be orphaned rather than killed.
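The general technique is sketched below; this is only an illustration of 
tracking subprocesses and reaping them from a JVM shutdown hook, not the 
attached patch itself:

{code}
import java.io.IOException;
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SubprocessReaper {
  // All subprocesses that are still expected to be running.
  private static final Set<Process> CHILDREN =
      Collections.newSetFromMap(new ConcurrentHashMap<Process, Boolean>());

  static {
    // Destroy any still-tracked subprocess when the JVM exits, so children
    // are not orphaned when the parent shell process is killed.
    Runtime.getRuntime().addShutdownHook(new Thread() {
      @Override
      public void run() {
        for (Process p : CHILDREN) {
          p.destroy();
        }
      }
    });
  }

  public static Process launch(String... command) throws IOException {
    Process p = new ProcessBuilder(command).start();
    CHILDREN.add(p);
    return p;
  }

  public static void finished(Process p) {
    // Stop tracking once the subprocess has exited normally.
    CHILDREN.remove(p);
  }
}
{code}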



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13850) s3guard to log choice of metadata store at debug

2016-11-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709876#comment-15709876
 ] 

Steve Loughran commented on HADOOP-13850:
-

+1, pending yetus being happy (and ignoring the fact that there are no new tests)

> s3guard to log choice of metadata store at debug
> 
>
> Key: HADOOP-13850
> URL: https://issues.apache.org/jira/browse/HADOOP-13850
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-13850-HADOOP-13345.000.patch
>
>
> People not using s3guard really don't need to know this on every single use 
> of the S3A client. 
> {code}
> INFO  s3guard.S3Guard (S3Guard.java:getMetadataStore(77)) - Using 
> NullMetadataStore for s3a filesystem
> {code}
> downgrade to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709855#comment-15709855
 ] 

Hadoop QA commented on HADOOP-13709:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841127/HADOOP-13709.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a83da421c5e2 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4fca94f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11168/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11168/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch
>

[jira] [Updated] (HADOOP-13850) s3guard to log choice of metadata store at debug

2016-11-30 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13850:
---
Status: Patch Available  (was: Open)

> s3guard to log choice of metadata store at debug
> 
>
> Key: HADOOP-13850
> URL: https://issues.apache.org/jira/browse/HADOOP-13850
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-13850-HADOOP-13345.000.patch
>
>
> People not using s3guard really don't need to know this on every single use 
> of the S3A client. 
> {code}
> INFO  s3guard.S3Guard (S3Guard.java:getMetadataStore(77)) - Using 
> NullMetadataStore for s3a filesystem
> {code}
> downgrade to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13850) s3guard to log choice of metadata store at debug

2016-11-30 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13850:
---
Target Version/s: HADOOP-13345

> s3guard to log choice of metadata store at debug
> 
>
> Key: HADOOP-13850
> URL: https://issues.apache.org/jira/browse/HADOOP-13850
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-13850-HADOOP-13345.000.patch
>
>
> People not using s3guard really don't need to know this on every single use 
> of the S3A client. 
> {code}
> INFO  s3guard.S3Guard (S3Guard.java:getMetadataStore(77)) - Using 
> NullMetadataStore for s3a filesystem
> {code}
> downgrade to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13850) s3guard to log choice of metadata store at debug

2016-11-30 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13850:
---
Attachment: HADOOP-13850-HADOOP-13345.000.patch

> s3guard to log choice of metadata store at debug
> 
>
> Key: HADOOP-13850
> URL: https://issues.apache.org/jira/browse/HADOOP-13850
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-13850-HADOOP-13345.000.patch
>
>
> People not using s3guard really don't need to know this on every single use 
> of the S3A client. 
> {code}
> INFO  s3guard.S3Guard (S3Guard.java:getMetadataStore(77)) - Using 
> NullMetadataStore for s3a filesystem
> {code}
> downgrade to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13675) Bug in return value for delete() calls in WASB

2016-11-30 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709826#comment-15709826
 ] 

Mingliang Liu edited comment on HADOOP-13675 at 11/30/16 9:27 PM:
--

Thanks for updating the patch. The main change looks good to me.

# Can you run the tests against the live Azure Storage service, and state 
that you ran all of them successfully before posting the patch?
# Can you explicitly write javadoc for the {{delete}} operation, especially the 
return value? What does it mean for a delete to be successful? That changes 
were made because of this delete call?
# In the test {{TestNativeAzureFileSystemConcurrencyLive}}, the reason for a 
potential failure is lost. We can either log the encountered exception in the 
child thread, or fail fast in the verification code; for example:
{code}
boolean deleteSuccess = false;
for (int i = 0; i < threadCount; i++) {
  assertFalse("child thread encountered an exception",
      helperThreads[i].getExceptionEncounteredFlag());
  if (deleteSuccess) {
    assertFalse("more than one thread's delete() returned true",
        helperThreads[i].getDeleteSuccess());
  } else {
    deleteSuccess = helperThreads[i].getDeleteSuccess();
  }
}
{code}
replaces
{code}
boolean deleteSuccess = false, testSuccess = true;

for (int i = 0; i < threadCount; i++) {

  if (helperThreads[i].getExceptionEncounteredFlag()) {
    testSuccess = false;
    break;
  }

  if (helperThreads[i].getDeleteSuccess()) {
    if (deleteSuccess) {
      testSuccess = false;
      break;
    } else {
      deleteSuccess = true;
    }
  }
}

if (!deleteSuccess) {
  testSuccess = false;
}
assertTrue(testSuccess);
{code}
# The checkstyle warnings are related. Please kindly fix.


was (Author: liuml07):
Thanks for updating the patch. The main change looks good to me.

# Can you run the tests against the live Azure Storage service, and state 
that you ran all of them successfully before posting the patch?
# Can you explicitly write javadoc for the {{delete}} operation, especially the 
return value? What does it mean for a delete to be successful? That changes 
were made because of this delete call?
# In the test {{TestNativeAzureFileSystemConcurrencyLive}}, the reason for a 
potential failure is lost. We can either log the encountered exception in the 
child thread, or fail fast in the verification code; for example:
{code}
boolean deleteSuccess = false;
for (int i = 0; i < threadCount; i++) {
  assertFalse("child thread encountered an exception",
      helperThreads[i].getExceptionEncounteredFlag());
  if (deleteSuccess) {
    assertFalse("more than one thread's delete() returned true",
        helperThreads[i].getDeleteSuccess());
  } else {
    deleteSuccess = helperThreads[i].getDeleteSuccess();
  }
}
{code}
replaces
{code}
boolean deleteSuccess = false, testSuccess = true;

for (int i = 0; i < threadCount; i++) {

  if (helperThreads[i].getExceptionEncounteredFlag()) {
    testSuccess = false;
    break;
  }

  if (helperThreads[i].getDeleteSuccess()) {
    if (deleteSuccess) {
      testSuccess = false;
      break;
    } else {
      deleteSuccess = true;
    }
  }
}

if (!deleteSuccess) {
  testSuccess = false;
}
assertTrue(testSuccess);
{code}

> Bug in return value for delete() calls in WASB
> --
>
> Key: HADOOP-13675
> URL: https://issues.apache.org/jira/browse/HADOOP-13675
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, fs/azure
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.9.0
>
> Attachments: HADOOP-13675.001.patch, HADOOP-13675.002.patch
>
>
> The current implementation of WASB does not correctly handle multiple 
> threads/clients calling delete on the same file. The expected behavior in 
> such scenarios is that only one of the threads should delete the file and 
> return true, while all other threads should receive false. However, in the 
> current implementation, even though only one thread deletes the file, 
> multiple clients incorrectly get "true" as the return value from the 
> delete() call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For 

[jira] [Commented] (HADOOP-13675) Bug in return value for delete() calls in WASB

2016-11-30 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709826#comment-15709826
 ] 

Mingliang Liu commented on HADOOP-13675:


Thanks for updating the patch. The main change looks good to me.

# Can you run the tests against the live Azure Storage service, and state 
that you ran all of them successfully before posting the patch?
# Can you explicitly write javadoc for the {{delete}} operation, especially the 
return value? What does it mean for a delete to be successful? That changes 
were made because of this delete call? (A sketch of such javadoc follows after 
the code below.)
# In the test {{TestNativeAzureFileSystemConcurrencyLive}}, the reason for a 
potential failure is lost. We can either log the encountered exception in the 
child thread, or fail fast in the verification code; for example:
{code}
boolean deleteSuccess = false;
for (int i = 0; i < threadCount; i++) {
  assertFalse("child thread encountered an exception",
      helperThreads[i].getExceptionEncounteredFlag());
  if (deleteSuccess) {
    assertFalse("more than one thread's delete() returned true",
        helperThreads[i].getDeleteSuccess());
  } else {
    deleteSuccess = helperThreads[i].getDeleteSuccess();
  }
}
{code}
replaces
{code}
boolean deleteSuccess = false, testSuccess = true;

for (int i = 0; i < threadCount; i++) {

  if (helperThreads[i].getExceptionEncounteredFlag()) {
    testSuccess = false;
    break;
  }

  if (helperThreads[i].getDeleteSuccess()) {
    if (deleteSuccess) {
      testSuccess = false;
      break;
    } else {
      deleteSuccess = true;
    }
  }
}

if (!deleteSuccess) {
  testSuccess = false;
}
assertTrue(testSuccess);
{code}
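For the javadoc asked about in item 2, a sketch of what the {{delete}} contract 
could say; the wording is illustrative, not taken from the patch:

{code}
/**
 * Delete a file or directory.
 *
 * @param f the path to delete
 * @param recursive if true, delete a non-empty directory with its contents
 * @return true only if this call actually removed the path, i.e. the store
 *         was changed by this invocation; false if the path did not exist,
 *         including the case where a concurrent caller deleted it first.
 *         At most one of several concurrent delete() calls on the same path
 *         may return true.
 * @throws IOException on failures other than the path being absent
 */
public boolean delete(Path f, boolean recursive) throws IOException;
{code}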

> Bug in return value for delete() calls in WASB
> --
>
> Key: HADOOP-13675
> URL: https://issues.apache.org/jira/browse/HADOOP-13675
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure, fs/azure
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.9.0
>
> Attachments: HADOOP-13675.001.patch, HADOOP-13675.002.patch
>
>
> The current implementation of WASB does not correctly handle multiple 
> threads/clients calling delete on the same file. The expected behavior in 
> such scenarios is that only one of the threads should delete the file and 
> return true, while all other threads should receive false. However, in the 
> current implementation, even though only one thread deletes the file, 
> multiple clients incorrectly get "true" as the return value from the 
> delete() call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13830) Intermittent failure of ITestS3NContractRootDir#testRecursiveRootListing: "Can not create a Path from an empty string"

2016-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709817#comment-15709817
 ] 

Hudson commented on HADOOP-13830:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10919 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10919/])
 HADOOP-13830. Intermittent failure of (liuml07: rev 
3fd844b99fdfae6be6e5e261f371d175aad14229)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java


> Intermittent failure of ITestS3NContractRootDir#testRecursiveRootListing: 
> "Can not create a Path from an empty string"
> --
>
> Key: HADOOP-13830
> URL: https://issues.apache.org/jira/browse/HADOOP-13830
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13830-branch-2-001.patch
>
>
> Surfaced in HADOOP-13518: Intermittent test failure of 
> {{ITestS3NContractRootDir.testRecursiveRootListing}} and 
> {{ITestS3NContractRootDir.testRmEmptyRootDirNonRecursive}}, error text "Can 
> not create a Path from an empty string". 
> Looks like there's some confusion creating paths to subdirectories; I'd like 
> to know what's happening before blindly skipping the situation of an "empty 
> subdirectory path".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13830) intermittent failure of ITestS3NContractRootDir.testRecursiveRootListing: "Can not create a Path from an empty string"

2016-11-30 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13830:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

+1. We should just ignore the directory we have been asked to list.

Committed to {{trunk}} through {{branch-2.8}} branches. Thanks for your 
contribution, [~ste...@apache.org].

> intermittent failure of ITestS3NContractRootDir.testRecursiveRootListing: 
> "Can not create a Path from an empty string"
> --
>
> Key: HADOOP-13830
> URL: https://issues.apache.org/jira/browse/HADOOP-13830
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13830-branch-2-001.patch
>
>
> Surfaced in HADOOP-13518: Intermittent test failure of 
> {{ITestS3NContractRootDir.testRecursiveRootListing}} and 
> {{ITestS3NContractRootDir.testRmEmptyRootDirNonRecursive}}, error text "Can 
> not create a Path from an empty string". 
> Looks like there's some confusion creating paths to subdirectories; I'd like 
> to know what's happening before blindly skipping the situation of an "empty 
> subdirectory path".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13830) Intermittent failure of ITestS3NContractRootDir#testRecursiveRootListing: "Can not create a Path from an empty string"

2016-11-30 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13830:
---
Summary: Intermittent failure of 
ITestS3NContractRootDir#testRecursiveRootListing: "Can not create a Path from 
an empty string"  (was: intermittent failure of 
ITestS3NContractRootDir.testRecursiveRootListing: "Can not create a Path from 
an empty string")

> Intermittent failure of ITestS3NContractRootDir#testRecursiveRootListing: 
> "Can not create a Path from an empty string"
> --
>
> Key: HADOOP-13830
> URL: https://issues.apache.org/jira/browse/HADOOP-13830
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13830-branch-2-001.patch
>
>
> Surfaced in HADOOP-13518: Intermittent test failure of 
> {{ITestS3NContractRootDir.testRecursiveRootListing}} and 
> {{ITestS3NContractRootDir.testRmEmptyRootDirNonRecursive}}, error text "Can 
> not create a Path from an empty string". 
> Looks like there's some confusion creating paths to subdirectories; I'd like 
> to know what's happening before blindly skipping the situation of an "empty 
> subdirectory path".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-11-30 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-13709:
-
Attachment: HADOOP-13709.006.patch

Fixing minor checkstyle issues

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked 
> in I/O, waiting for the return value of the subprocess that was spawned. We 
> need to allow the subprocess to be interrupted and killed when the shell 
> process gets killed. Currently the JVM will shut down and all of the 
> subprocesses will be orphaned rather than killed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2016-11-30 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709695#comment-15709695
 ] 

Thomas Demoor commented on HADOOP-13826:


I think [~mackrorysd]'s implementation is heading in the right direction.

Some questions / suggestions:
* The {{controlTypes}} do not have a large memory and bandwidth impact, as they 
carry little payload. Consequently, I think we can allow a lot of active 
threads here, and the waiting room can be unbounded. I hope this would fix the 
issues [~mackrorysd] is still encountering. (In contrast to my earlier thinking 
above, I don't think the number of active threads needs to be shared between 
the two types; it seems unlikely that {{controlTypes}} will use significant 
resources.)
* The {{subTaskTypes}} have the potential to overwhelm memory and bandwidth 
usage and should thus be run from the bounded thread pool (see the sketch 
below). We need to take care that all relevant classes are captured here.
* I am not 100% sure whether what I propose here would eliminate all deadlocks; 
I do not yet entirely understand the deadlock scenario from the discussion 
above. If you have more insight, please help me out.
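To make the split concrete, a minimal sketch of the two-pool idea; the names 
and sizes are illustrative, and this is not [~mackrorysd]'s patch:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TransferPools {
  // Control tasks carry little payload: plenty of threads plus an unbounded
  // waiting room, so a control task is never stuck behind its own subtasks.
  private final ExecutorService controlPool =
      Executors.newFixedThreadPool(16);

  // Subtasks (the actual part uploads/copies) can overwhelm memory and
  // bandwidth: bound both the threads and the queue, and let the submitting
  // thread run the task itself when the queue is full instead of rejecting.
  private final ExecutorService subTaskPool = new ThreadPoolExecutor(
      8, 8, 60L, TimeUnit.SECONDS,
      new LinkedBlockingQueue<Runnable>(32),
      new ThreadPoolExecutor.CallerRunsPolicy());

  public void submitControl(Runnable task) {
    controlPool.submit(task);
  }

  public void submitSubTask(Runnable task) {
    subTaskPool.submit(task);
  }
}
{code}

The property this aims for is that control tasks can always make progress even 
when every subtask slot is busy, which is exactly the deadlock the 
TransferManager javadocs warn about.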

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709554#comment-15709554
 ] 

Hadoop QA commented on HADOOP-13709:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 54 unchanged - 0 fixed = 56 total (was 54) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 57s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestDelegationTokenRenewer |
|   | hadoop.fs.TestTrash |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.net.TestClusterTopology |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841089/HADOOP-13709.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux eef4591adf46 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7c84871 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11167/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11167/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11167/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11167/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

[jira] [Commented] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-11-30 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709530#comment-15709530
 ] 

Robert Kanter commented on HADOOP-12954:


Seems like a good idea to me.

> Add a way to change hadoop.security.token.service.use_ip
> 
>
> Key: HADOOP-12954
> URL: https://issues.apache.org/jira/browse/HADOOP-12954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12954.001.patch, HADOOP-12954.002.patch, 
> HADOOP-12954.003.patch
>
>
> Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
> {code:java}
>   static {
> Configuration conf = new Configuration();
> boolean useIp = conf.getBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
> setTokenServiceUseIp(useIp);
>   }
> {code}
> This is a problem for clients, such as Oozie, who don't add *-site.xml files 
> to their classpath.  Oozie normally creates a {{JobClient}} and passes a 
> {{Configuration}} to it with the proper configs we need.  However, because 
> {{hadoop.security.token.service.use_ip}} is specified in a static block like 
> this, and there's no API to change it, Oozie has no way to set it to the 
> non-default value.
> I propose we add a {{setConfiguration}} method which takes a 
> {{Configuration}} and rereads {{hadoop.security.token.service.use_ip}}.  
> There's a few other properties that are also loaded statically on startup 
> that can be reloaded here as well.  
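
For illustration, a minimal sketch of what the proposed {{setConfiguration}} hook could look like, reusing the lookup from the static block above (the method name comes from the description; the exact signature and the set of re-read properties are assumptions, not the committed patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical re-initialization hook, living in the same class as the
// static block: clients such as Oozie could call it with their own
// Configuration instead of relying on *-site.xml files on the classpath.
public static synchronized void setConfiguration(Configuration conf) {
  boolean useIp = conf.getBoolean(
      CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
      CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
  setTokenServiceUseIp(useIp);
  // ...any other statically-loaded properties would be re-read here too.
}
{code}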



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2016-11-30 Thread Luke Miner (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709517#comment-15709517
 ] 

Luke Miner commented on HADOOP-13811:
-

I've been looking through the classpath but can't find any obvious culprits. 
All I can think is that there is something wrong with my spark build. Is there 
anywhere that I could get a prebuilt version of spark with hadoop, and 
hadoop-aws? I've included the classpath returned by spark-submit in case 
anything jumps out at you.

{code}
System properties:
spark.hadoop.parquet.block.size -> 2147483648
spark.hadoop.fs.s3a.impl -> org.apache.hadoop.fs.s3a.S3AFileSystem
spark.local.dir -> /raid0/spark
spark.mesos.coarse -> false
spark.hadoop.parquet.enable.summary-metadata -> false
spark.hadoop.fs.s3a.access.key -> 
spark.network.timeout -> 600
spark.executor.memory -> 16G
spark.hadoop.fs.s3n.multipart.uploads.enabled -> true
spark.rpc.message.maxSize -> 500
SPARK_SUBMIT -> true
spark.hadoop.fs.s3a.secret.key -> 
spark.jars.packages -> 
com.databricks:spark-avro_2.11:3.0.1,com.amazonaws:aws-java-sdk:1.11.60
spark.mesos.constraints -> priority:1
spark.task.cpus -> 1
spark.executor.extraJavaOptions -> -XX:+UseG1GC -XX:MaxPermSize=1G 
-XX:+HeapDumpOnOutOfMemoryError
spark.speculation -> false
spark.app.name -> Json2Pq
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version -> 2
spark.jars -> 

[jira] [Commented] (HADOOP-12954) Add a way to change hadoop.security.token.service.use_ip

2016-11-30 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709522#comment-15709522
 ] 

Junping Du commented on HADOOP-12954:
-

Hi Robert and Steve, do we want to backport this issue to 2.8 as well?

> Add a way to change hadoop.security.token.service.use_ip
> 
>
> Key: HADOOP-12954
> URL: https://issues.apache.org/jira/browse/HADOOP-12954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12954.001.patch, HADOOP-12954.002.patch, 
> HADOOP-12954.003.patch
>
>
> Currently, {{hadoop.security.token.service.use_ip}} is set on JVM startup via:
> {code:java}
>   static {
> Configuration conf = new Configuration();
> boolean useIp = conf.getBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
> setTokenServiceUseIp(useIp);
>   }
> {code}
> This is a problem for clients, such as Oozie, who don't add *-site.xml files 
> to their classpath.  Oozie normally creates a {{JobClient}} and passes a 
> {{Configuration}} to it with the proper configs we need.  However, because 
> {{hadoop.security.token.service.use_ip}} is specified in a static block like 
> this, and there's no API to change it, Oozie has no way to set it to the 
> non-default value.
> I propose we add a {{setConfiguration}} method which takes a 
> {{Configuration}} and rereads {{hadoop.security.token.service.use_ip}}.  
> There's a few other properties that are also loaded statically on startup 
> that can be reloaded here as well.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13790) Make qbt script executable

2016-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709495#comment-15709495
 ] 

Hudson commented on HADOOP-13790:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10917 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10917/])
HADOOP-13790. Make qbt script executable. Contributed by Andrew Wang. 
(aajisaka: rev be5a757096246d5c4ef73da9d233adda67bd3d69)
* (edit) dev-support/bin/qbt


> Make qbt script executable
> --
>
> Key: HADOOP-13790
> URL: https://issues.apache.org/jira/browse/HADOOP-13790
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13790.001.patch
>
>
> Trivial, the qbt script isn't executable, unlike the other scripts in 
> {{dev-support/bin}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2016-11-30 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709465#comment-15709465
 ] 

Junping Du commented on HADOOP-13826:
-

Thanks Steve for nominating this for 2.8. I put this issue on our 2.8 tracking 
list: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release. 
[~mackrorysd], I just assigned this issue to you, given that you have already 
contributed several patches to it.

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2016-11-30 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-13826:

Assignee: Sean Mackrory

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13851) hadoop-openstack should build tests without auth-keys.xml

2016-11-30 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709446#comment-15709446
 ] 

John Zhuge commented on HADOOP-13851:
-

Essentially apply HADOOP-13446 to swift?
Is it possible to backport HADOOP-13446 to 2.7 and 2.6?

> hadoop-openstack should build tests without auth-keys.xml
> -
>
> Key: HADOOP-13851
> URL: https://issues.apache.org/jira/browse/HADOOP-13851
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift, test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> hadoop-openstack requires the existence of file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the use of 
> {{maven.test.skip}} in pom.xml, the non-existence of auth-keys.xml also 
> prevents building the test code. Unfortunately this leads to delayed 
> detection of build problems in test code, e.g., introduced by a mistake in 
> backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
>  suggests a different solution. Any time you want to run tests, you must do 2 
> things instead of 1:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra {{-DskipTests=false}}
> Would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-11-30 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-13709:
-
Attachment: HADOOP-13709.005.patch

Thanks for the review, [~daryn]! Attaching a new patch that uses a 
synchronizedMap and sets the type at allocation. 

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down due to being in I/O 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow for the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shut down and all of the subprocesses will 
> be orphaned and not killed.
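
For illustration, a minimal sketch of the interrupt handling the description asks for ({{builder}} stands for the already-configured ProcessBuilder in runCommand; this paraphrases the idea, not the attached patch):

{code:java}
// If the shell thread is interrupted (e.g. during JVM shutdown), destroy
// the child process instead of orphaning it, then restore the interrupt.
Process process = builder.start();
int exitCode;
try {
  exitCode = process.waitFor();
} catch (InterruptedException ie) {
  process.destroy();                   // kill the subprocess
  Thread.currentThread().interrupt();  // preserve the interrupt status
  throw new IOException("Interrupted while waiting for subprocess", ie);
}
{code}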



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13790) Make qbt script executable

2016-11-30 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13790:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~andrew.wang] for 
the contribution.

> Make qbt script executable
> --
>
> Key: HADOOP-13790
> URL: https://issues.apache.org/jira/browse/HADOOP-13790
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13790.001.patch
>
>
> Trivial, the qbt script isn't executable, unlike the other scripts in 
> {{dev-support/bin}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709305#comment-15709305
 ] 

Ravi Prakash commented on HADOOP-13849:
---

Hi Tao Li!

Thanks for your effort to benchmark the two implementations. Are you proposing 
to make one faster than the other?

> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compress speed is almost the same. (I think the system-native should have 
> better compress speed than java-builtin)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enabled the bzip2 native; the output of the command "hadoop 
> checknative -a" is as follows:
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13849.
-
Resolution: Invalid

> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compress speed is almost the same. (I think the system-native should have 
> better compress speed than java-builtin)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enabled the bzip2 native; the output of the command "hadoop 
> checknative -a" is as follows:
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709247#comment-15709247
 ] 

Steve Loughran commented on HADOOP-13849:
-

# You can see which implementation is used in the logs: if it says "java builtin" 
then it's using the Java one; if it says system, then it's using the 
system-native one.
# There are other factors in performance, such as disk bandwidth, so you may not 
see a speedup.
# Compare the decompress times too.

Closing as invalid, sorry
https://wiki.apache.org/hadoop/InvalidJiraIssues 

> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compress speed is almost the same. (I think the system-native should have 
> better compress speed than java-builtin)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enabled the bzip2 native; the output of the command "hadoop 
> checknative -a" is as follows:
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13851) hadoop-openstack should build tests without auth-keys.xml

2016-11-30 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13851:

Description: 
hadoop-openstack requires the existence of file 
{{src/test/resources/auth-keys.xml}} to run the tests. With the use of 
{{maven.test.skip}} in pom.xml, the non-existence of auth-keys.xml also 
prevents building the test code. Unfortunately this leads to delayed detection 
of build problems in test code, e.g., introduced by a mistake in backports.
{code}
<profile>
  <id>tests-off</id>
  <activation>
    <file>
      <missing>src/test/resources/auth-keys.xml</missing>
    </file>
  </activation>
  <properties>
    <maven.test.skip>true</maven.test.skip>
  </properties>
</profile>
<profile>
  <id>tests-on</id>
  <activation>
    <file>
      <exists>src/test/resources/auth-keys.xml</exists>
    </file>
  </activation>
  <properties>
    <maven.test.skip>false</maven.test.skip>
  </properties>
</profile>
{code}

Section {{Skipping by Default}} in 
http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
 suggests a different solution. Any time you want to run tests, you must do 2 
things instead of 1:
* Copy auth-keys.xml to src/test/resources
* Run {{mvn install}} with the extra {{-DskipTests=false}}

Would like the community to weigh in on this.

  was:
Both hadoop-aws and hadoop-openstack require the existence of file 
{{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
pom.xml, the non-existence of auth-keys.xml also prevents building the test 
code. Unfortunately this leads to delayed detection of build problems in test 
code, e.g., introduced by a mistake in backports.
{code}
<profile>
  <id>tests-off</id>
  <activation>
    <file>
      <missing>src/test/resources/auth-keys.xml</missing>
    </file>
  </activation>
  <properties>
    <maven.test.skip>true</maven.test.skip>
  </properties>
</profile>
<profile>
  <id>tests-on</id>
  <activation>
    <file>
      <exists>src/test/resources/auth-keys.xml</exists>
    </file>
  </activation>
  <properties>
    <maven.test.skip>false</maven.test.skip>
  </properties>
</profile>
{code}

Section {{Skipping by Default}} in 
http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
 proposes a solution. Any time you want to run tests, you must do 2 things 
instead of 1:
* Copy auth-keys.xml to src/test/resources
* Run {{mvn install}} with the extra {{-DskipTests=false}}

Would like the community to weigh in on this.


> hadoop-openstack should build tests without auth-keys.xml
> -
>
> Key: HADOOP-13851
> URL: https://issues.apache.org/jira/browse/HADOOP-13851
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift, test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> hadoop-openstack requires the existence of file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the use of 
> {{maven.test.skip}} in pom.xml, the non-existence of auth-keys.xml also 
> prevents building the test code. Unfortunately this leads to delayed 
> detection of build problems in test code, e.g., introduced by a mistake in 
> backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
>  suggests a different solution. Any time you want to run tests, you must do 2 
> things instead of 1:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra {{-DskipTests=false}}
> Would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13851) hadoop-openstack should build tests without auth-keys.xml

2016-11-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709235#comment-15709235
 ] 

Steve Loughran commented on HADOOP-13851:
-

Really it should copy the others and move to integration (IT) tests: renaming 
the test methods and using the Failsafe plugin.

> hadoop-openstack should build tests without auth-keys.xml
> -
>
> Key: HADOOP-13851
> URL: https://issues.apache.org/jira/browse/HADOOP-13851
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift, test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> Both hadoop-aws and hadoop-openstack require the existence of file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the non-existence of auth-keys.xml also prevents building the test 
> code. Unfortunately this leads to delayed detection of build problems in test 
> code, e.g., introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
>  proposes a solution. Any time you want to run tests, you must do 2 things 
> instead of 1:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra {{-DskipTests=false}}
> Would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13851) hadoop-openstack should build tests without auth-keys.xml

2016-11-30 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13851:

Priority: Minor  (was: Blocker)

> hadoop-openstack should build tests without auth-keys.xml
> -
>
> Key: HADOOP-13851
> URL: https://issues.apache.org/jira/browse/HADOOP-13851
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift, test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> Both hadoop-aws and hadoop-openstack require the existence of file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the non-existence of auth-keys.xml also prevents building the test 
> code. Unfortunately this leads to delayed detection of build problems in test 
> code, e.g., introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
>  proposes a solution. Any time you want to run tests, you must do 2 things 
> instead of 1:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra {{-DskipTests=false}}
> Would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13851) hadoop-openstack should build tests without auth-keys.xml

2016-11-30 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13851:

Target Version/s: 2.6.6  (was: 3.0.0-alpha2)

> hadoop-openstack should build tests without auth-keys.xml
> -
>
> Key: HADOOP-13851
> URL: https://issues.apache.org/jira/browse/HADOOP-13851
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift, test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
>
> Both hadoop-aws and hadoop-openstack require the existence of file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the non-existence of auth-keys.xml also prevents building the test 
> code. Unfortunately this leads to delayed detection of build problems in test 
> code, e.g., introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
>  proposes a solution. Any time you want to run tests, you must do 2 things 
> instead of 1:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra {{-DskipTests=false}}
> Would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13851) hadoop-openstack should build tests without auth-keys.xml

2016-11-30 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13851:
---

 Summary: hadoop-openstack should build tests without auth-keys.xml
 Key: HADOOP-13851
 URL: https://issues.apache.org/jira/browse/HADOOP-13851
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3, fs/swift, test
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Blocker


Both hadoop-aws and hadoop-openstack require the existence of file 
{{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
pom.xml, the non-existence of auth-keys.xml also prevents building the test 
code. Unfortunately this leads to delayed detection of build problems in test 
code, e.g., introduced by a mistake in backports.
{code}
<profile>
  <id>tests-off</id>
  <activation>
    <file>
      <missing>src/test/resources/auth-keys.xml</missing>
    </file>
  </activation>
  <properties>
    <maven.test.skip>true</maven.test.skip>
  </properties>
</profile>
<profile>
  <id>tests-on</id>
  <activation>
    <file>
      <exists>src/test/resources/auth-keys.xml</exists>
    </file>
  </activation>
  <properties>
    <maven.test.skip>false</maven.test.skip>
  </properties>
</profile>
{code}

Section {{Skipping by Default}} in 
http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
 proposes a solution. Any time you want to run tests, you must do 2 things 
instead of 1:
* Copy auth-keys.xml to src/test/resources
* Run {{mvn install}} with the extra {{-DskipTests=false}}

Would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13851) hadoop-openstack should build tests without auth-keys.xml

2016-11-30 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13851:

Component/s: (was: fs/s3)

> hadoop-openstack should build tests without auth-keys.xml
> -
>
> Key: HADOOP-13851
> URL: https://issues.apache.org/jira/browse/HADOOP-13851
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift, test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
>
> Both hadoop-aws and hadoop-openstack require the existence of file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the non-existence of auth-keys.xml also prevents building the test 
> code. Unfortunately this leads to delayed detection of build problems in test 
> code, e.g., introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
>  proposes a solution. Any time you want to run tests, you must do 2 things 
> instead of 1:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra {{-DskipTests=false}}
> Would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13848) Missing auth-keys.xml prevents detecting test code build problem

2016-11-30 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709215#comment-15709215
 ] 

John Zhuge commented on HADOOP-13848:
-

Thanks [~ste...@apache.org]. Split out a JIRA for swift since, after 
HADOOP-13446, aws has a different pom.xml from swift.

> Missing auth-keys.xml prevents detecting test code build problem
> 
>
> Key: HADOOP-13848
> URL: https://issues.apache.org/jira/browse/HADOOP-13848
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, fs/swift, test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
>
> Both hadoop-aws and hadoop-openstack require the existence of file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the non-existence of auth-keys.xml also prevents building the test 
> code. Unfortunately this leads to delayed detection of build problems in test 
> code, e.g., introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
>  proposes a solution. Any time you want to run tests, you must do 2 things 
> instead of 1:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra {{-DskipTests=false}}
> Would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2016-11-30 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709144#comment-15709144
 ] 

Sean Mackrory commented on HADOOP-13826:


I can do a patch for the separate thread pools for TM and BlockOutputStream, 
probably by the end of the week... I like it - sounds like there's consensus on 
that approach.
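
For illustration, a sketch of the separate-thread-pool idea under the constraint from the TransferManager javadocs: give the TM a dedicated executor whose work queue is unbounded, so control tasks can always enqueue their part-copy subtasks (pool sizes and class layout here are assumptions, not the eventual patch):

{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.transfer.TransferManager;

public class SeparateTransferPool {
  public static TransferManager createTransferManager(AmazonS3 s3Client) {
    // Bounded thread count but an *unbounded* work queue: multipart
    // "control" tasks can always hand their subtasks to the queue, so the
    // queue can never fill up with control tasks that starve the subtasks.
    ThreadPoolExecutor transferPool = new ThreadPoolExecutor(
        10, 10, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
    transferPool.allowCoreThreadTimeOut(true);
    // The block output stream would keep its own (possibly bounded)
    // executor, so the two workloads no longer compete for queue slots.
    return new TransferManager(s3Client, transferPool);
  }
}
{code}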

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13842) Update jackson from 1.9.13 to 2.x in hadoop-maven-plugins

2016-11-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709102#comment-15709102
 ] 

Steve Loughran commented on HADOOP-13842:
-

Being Maven and all, I need to do the due diligence: have you done a full Hadoop 
test run locally with this patch?

> Update jackson from 1.9.13 to 2.x in hadoop-maven-plugins
> -
>
> Key: HADOOP-13842
> URL: https://issues.apache.org/jira/browse/HADOOP-13842
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
> Attachments: HADOOP-13842.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-11-30 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709115#comment-15709115
 ] 

Aaron Fabbri commented on HADOOP-13449:
---

Finished a run of the S3A integration tests. I see that fixing the MockS3Client 
factory is not as simple as my last comment suggested, since you use it for the 
DynamoDBMetadataStore unit test. We can revisit this here or on HADOOP-13589.

Here are the integration test failures I see when I configure the 
DynamoDBMetadataStore via core-site.xml:

{code}
Failed tests: 
  
ITestS3AContractDelete>AbstractContractDeleteTest.testDeleteNonEmptyDirNonRecursive:78->Assert.fail:88
 non recursive delete should have raised an exception, but completed with exit 
code true
  
ITestS3AContractDelete>AbstractContractDeleteTest.testDeleteNonEmptyDirRecursive:94->AbstractFSContractTestBase.assertDeleted:349->Assert.fail:88
 Deleted file: unexpectedly found 
s3a://fabbri-dev/test/testDeleteNonEmptyDirNonRecursive as  
S3AFileStatus{path=s3a://fabbri-dev/test/testDeleteNonEmptyDirNonRecursive; 
isDirectory=true; modification_time=0; access_time=0; owner=fabbri; 
group=fabbri; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=false
  ITestS3AConfiguration.testUsernameFromUGI:481 owner in 
S3AFileStatus{path=s3a://fabbri-dev/; isDirectory=true; modification_time=0; 
access_time=0; owner=fabbri; group=fabbri; permission=rwxrwxrwx; 
isSymlink=false} isEmptyDirectory=false expected:<[alice]> but was:<[fabbri]>
  
ITestS3AFileOperationCost.testFakeDirectoryDeletion:254->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
 after rename(srcFilePath, destFilePath): directories_created expected:<1> but 
was:<0>
  
ITestS3AFileOperationCost.testCostOfGetFileStatusOnNonEmptyDir:139->Assert.fail:88
 FileStatus says directory isempty: 
S3AFileStatus{path=s3a://fabbri-dev/test/empty; isDirectory=true; 
modification_time=0; access_time=0; owner=fabbri; group=fabbri; 
permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=true
ls s3a://fabbri-dev/test/empty [00] 
S3AFileStatus{path=s3a://fabbri-dev/test/empty/simple.txt; isDirectory=false; 
length=0; replication=1; blocksize=33554432; modification_time=1480497225005; 
access_time=0; owner=fabbri; group=fabbri; permission=rw-rw-rw-; 
isSymlink=false} isEmptyDirectory=false

Tests in error: 
  
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive:116
 » PathIO
  
ITestS3AFileContextMainOperations>FileContextMainOperationsBaseTest.testRenameDirectoryAsNonExistentDirectory:1038->FileContextMainOperationsBaseTest.testRenameDirectoryAsNonExistentDirectory:1052->FileContextMainOperationsBaseTest.rename:1197
 » IO
  ITestS3AAWSCredentialsProvider.testAnonymousProvider:133 » AWSServiceIO 
initia...
  ITestS3AAWSCredentialsProvider.testBadCredentials:102->createFailingFS:76 » 
AWSServiceIO
  ITestS3ACredentialsInURL.testInstantiateFromURL:86 » AWSClientIO initializing 
...
  
ITestS3AFileSystemContract>FileSystemContractBaseTest.testWriteReadAndDeleteOneBlock:266->FileSystemContractBaseTest.writeReadAndDelete:285->FileSystemContractBaseTest.writeAndRead:815
 » FileAlreadyExists
  
ITestS3AFileSystemContract>FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed:656->FileSystemContractBaseTest.rename:512
 » AWSServiceIO
{code}


> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, 
> HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, 
> HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, 
> HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, 
> HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13850) s3guard to log choice of metadata store at debug

2016-11-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13850:
---

 Summary: s3guard to log choice of metadata store at debug
 Key: HADOOP-13850
 URL: https://issues.apache.org/jira/browse/HADOOP-13850
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran
Priority: Trivial


People not using s3guard really don't need to know this on every single use of 
the S3A client. 
{code}
INFO  s3guard.S3Guard (S3Guard.java:getMetadataStore(77)) - Using 
NullMetadataStore for s3a filesystem
{code}

Downgrade it to debug.
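
A one-line sketch of the change in {{S3Guard.getMetadataStore()}} (variable names here are assumptions):

{code:java}
// Was LOG.info(...): announced the store choice on every S3A client creation.
LOG.debug("Using {} for {} filesystem",
    msInstance.getClass().getSimpleName(), fs.getScheme());
{code}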



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-11-30 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15709098#comment-15709098
 ] 

Daryn Sharp commented on HADOOP-13709:
--

Please declare the map with types so you don't have to typecast the objects.  
That's probably the javac warning and style ding.
 
Minor comments/suggestions.  I'd call the method {{destroyAllProcesses}}.   You 
could remove the explicit syncs when adding/removing if you use 
{{Collections.synchronizedMap}}.
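
For illustration, a hedged sketch of both suggestions together (field and method names follow the comments above, not necessarily the final patch):

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// A typed map avoids casts on access, and Collections.synchronizedMap()
// makes individual put/remove calls thread-safe without explicit
// synchronized blocks at each call site.
private static final Map<Process, Object> CHILD_PROCESSES =
    Collections.synchronizedMap(new HashMap<Process, Object>());

public static void destroyAllProcesses() {
  // Iterating a synchronizedMap still requires locking the wrapper.
  synchronized (CHILD_PROCESSES) {
    for (Process p : CHILD_PROCESSES.keySet()) {
      p.destroy();
    }
  }
}
{code}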

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down due to being in I/O 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow for the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shut down and all of the subprocesses will 
> be orphaned and not killed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-11-30 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708839#comment-15708839
 ] 

Varun Vasudev commented on HADOOP-13835:


The ASF license warnings and the test failures are unrelated to the patch. The 
cc warnings are due to the google test code. [~aw] - is there a way to suppress 
the cc warnings?

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708829#comment-15708829
 ] 

Hadoop QA commented on HADOOP-13835:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 10m 53s{color} | 
{color:red} root generated 25 new + 7 unchanged - 0 fixed = 32 total (was 7) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 56s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
33s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
|   | hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13835 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841007/HADOOP-13835.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux 6c5cb17f0f7e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 51e6c1c |
| Default Java | 1.8.0_111 |
| cc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11166/artifact/patchprocess/diff-compile-cc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11166/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11166/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11166/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask
 . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11166/console |
| Powered by | Apache 

[jira] [Updated] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HADOOP-13849:

Description: 
I tested bzip2 java-builtin and system-native compression, and I found the 
compress speed is almost the same. (I think the system-native should have 
better compress speed than java-builtin)

My test case:
1. input file: 2.7GB text file without compression
2. after bzip2 java-builtin compress: 457MB, 12min 4sec
3. after bzip2 system-native compress: 457MB, 12min 19sec

My MapReduce Config:
conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
conf.set("mapreduce.output.fileoutputformat.compress", "true");
conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
conf.set("mapreduce.output.fileoutputformat.compress.codec", 
"org.apache.hadoop.io.compress.BZip2Codec");
conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
java-builtin
conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
system-native

And I am sure I have enabled the bzip2 native; the output of the command "hadoop 
checknative -a" is as follows:
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:true /lib64/libz.so.1
snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
lz4: true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so

  was:
I tested bzip2 java-builtin and system-native compression, and I found the 
compress speed is almost the same. (I think the system-native should have 
better compress speed than java-builtin)

My test case:
1. input file: 2.7GB text file without compression
2. after bzip2 java-builtin compress: 457MB, 12min 4sec
3. after bzip2 system-native compress: 457MB, 12min 19sec

My MapReduce Config:
conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
conf.set("mapreduce.output.fileoutputformat.compress", "true");
conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
conf.set("mapreduce.output.fileoutputformat.compress.codec", 
"org.apache.hadoop.io.compress.BZip2Codec");
conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
java-builtin
conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
system-native

And I am sure I have enabled the bzip2 native:
# hadoop checknative -a
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:true /lib64/libz.so.1
snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
lz4: true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so


> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compress speed is almost the same. (I think the system-native should have 
> better compress speed than java-builtin)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enabled the bzip2 native library; the output of the 
> command "hadoop checknative -a" is as follows:
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HADOOP-13849:

Description: 
I tested bzip2 java-builtin and system-native compression, and I found the 
compression speed is almost the same. (I expected the system-native library 
to compress faster than java-builtin.)

My test case:
1. input file: 2.7GB text file without compression
2. after bzip2 java-builtin compress: 457MB, 12min 4sec
3. after bzip2 system-native compress: 457MB, 12min 19sec

My MapReduce Config:
conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
conf.set("mapreduce.output.fileoutputformat.compress", "true");
conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
conf.set("mapreduce.output.fileoutputformat.compress.codec", 
"org.apache.hadoop.io.compress.BZip2Codec");
conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
java-builtin
conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
system-native

And I am sure I have enabled the bzip2 native library:
# hadoop checknative -a
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:true /lib64/libz.so.1
snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
lz4: true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so

  was:
I tested bzip2 java-builtin and system-native compression, and I found the 
compression speed is almost the same. (I expected the system-native library 
to compress faster than java-builtin.)

My test case:
1. input file: 2.7GB text file without compression
2. after bzip2 java-builtin compress: 457MB, 12min 4sec
3. after bzip2 system-native compress: 457MB, 12min 19sec

My MapReduce Config:
conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
conf.set("mapreduce.output.fileoutputformat.compress", "true");
conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
conf.set("mapreduce.output.fileoutputformat.compress.codec", 
"org.apache.hadoop.io.compress.BZip2Codec");
conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
java-builtin
conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
system-native



> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compression speed is almost the same. (I expected the system-native library 
> to compress faster than java-builtin.)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enabled the bzip2 native library:
> # hadoop checknative -a
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HADOOP-13849:

Description: 
I tested bzip2 java-builtin and system-native compression, and I found the 
compression speed is almost the same. (I expected the system-native library 
to compress faster than java-builtin.)

My test case:
1. input file: 2.7GB text file without compression
2. after bzip2 java-builtin compress: 457MB, 12min 4sec
3. after bzip2 system-native compress: 457MB, 12min 19sec

My MapReduce Config:
conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
conf.set("mapreduce.output.fileoutputformat.compress", "true");
conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
conf.set("mapreduce.output.fileoutputformat.compress.codec", 
"org.apache.hadoop.io.compress.BZip2Codec");
conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
java-builtin
conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
system-native


  was:
I tested bzip2 java-builtin and system-native compression, and I found the 
compression speed is almost the same. (I expected the system-native library 
to compress faster than java-builtin.)

My test case:
input: 2.7GB text file without compression
bzip2 java-builtin compress: 457MB, 12min 4sec
bzip2 system-native compress: 457MB, 12min 19sec

My MapReduce Config:
conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
conf.set("mapreduce.output.fileoutputformat.compress", "true");
conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
conf.set("mapreduce.output.fileoutputformat.compress.codec", 
"org.apache.hadoop.io.compress.BZip2Codec");
conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
java-builtin
conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
system-native



> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compression speed is almost the same. (I expected the system-native library 
> to compress faster than java-builtin.)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-11-30 Thread Tao Li (JIRA)
Tao Li created HADOOP-13849:
---

 Summary: Bzip2 java-builtin and system-native have almost the same 
compress speed
 Key: HADOOP-13849
 URL: https://issues.apache.org/jira/browse/HADOOP-13849
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 2.6.0
 Environment: os version: redhat6
hadoop version: 2.6.0
native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64

Reporter: Tao Li


I tested bzip2 java-builtin and system-native compression, and I found the 
compression speed is almost the same. (I expected the system-native library 
to compress faster than java-builtin.)

My test case:
input: 2.7GB text file without compression
bzip2 java-builtin compress: 457MB, 12min 4sec
bzip2 system-native compress: 457MB, 12min 19sec

My MapReduce Config:
conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
conf.set("mapreduce.output.fileoutputformat.compress", "true");
conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
conf.set("mapreduce.output.fileoutputformat.compress.codec", 
"org.apache.hadoop.io.compress.BZip2Codec");
conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
java-builtin
conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
system-native




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13846) S3A to implement rename(final Path src, final Path dst, final Rename... options)

2016-11-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708389#comment-15708389
 ] 

Steve Loughran commented on HADOOP-13846:
-

Yes. I will have to remove that marker in the process.

> S3A to implement rename(final Path src, final Path dst, final Rename... 
> options)
> 
>
> Key: HADOOP-13846
> URL: https://issues.apache.org/jira/browse/HADOOP-13846
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> S3a now raises exceptions on invalid rename operations, but these get lost. I 
> plan to use them in my s3guard committer HADOOP-13786.
> Rather than just making innerRename() private, S3A could implement 
> {{FileSystem.rename(final Path src, final Path dst, final Rename... 
> options)}} and so have an exception-raising rename which can be called 
> without going deeper into the internals. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2016-11-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708386#comment-15708386
 ] 

Steve Loughran commented on HADOOP-13811:
-

That message telling you off about the s3 filesystem is a quirk of how 
services are loaded; ignore it. We have pulled that from 2.8 (HADOOP-13323).

What it does do is warn me: you still have Hadoop 2.7.x on the classpath.

So does the stack trace; again, it's a 2.7.x stack:

{code}
maxKeys = conf.getInt(MAX_PAGING_KEYS, DEFAULT_MAX_PAGING_KEYS);
partSize = conf.getLong(MULTIPART_SIZE, DEFAULT_MULTIPART_SIZE);
multiPartThreshold = conf.getInt(MIN_MULTIPART_THRESHOLD,
  DEFAULT_MIN_MULTIPART_THRESHOLD);
{code}

By changing the property you've moved down one more line, but as the old code 
is still on your classpath, you aren't getting anywhere. Assume all these 
errors are classpath-related; fix them, and then there's a chance of the 
patched code being picked up.
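
A quick way to see which jar you are actually running (a sketch, not part of 
any patch here) is to print where the class was loaded from:
{code}
// Prints the jar S3AFileSystem really comes from; if it names a 2.7.x
// artifact, that's the problem.
public class WhichJar {
  public static void main(String[] args) {
    System.out.println(org.apache.hadoop.fs.s3a.S3AFileSystem.class
        .getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}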

> s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to 
> sanitize XML document destined for handler class
> -
>
> Key: HADOOP-13811
> URL: https://issues.apache.org/jira/browse/HADOOP-13811
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Sometimes, occasionally, getFileStatus() fails with a stack trace starting 
> with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document 
> destined for handler class}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2016-11-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13826:

Target Version/s: 2.8.0
Priority: Critical  (was: Major)

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}
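
The remedy the javadoc points at is an executor with an unbounded work queue; 
a minimal sketch under that assumption (not the attached patches):
{code}
// Build a TransferManager over a fixed-size pool with an unbounded work
// queue, so control tasks can always enqueue their subtasks. The pool
// size is purely illustrative.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.transfer.TransferManager;

public class UnboundedQueueTransferManager {
  public static TransferManager create(AmazonS3 s3, int threads) {
    ExecutorService pool = new ThreadPoolExecutor(
        threads, threads, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>());  // unbounded queue
    return new TransferManager(s3, pool);
  }
}
{code}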



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2016-11-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708342#comment-15708342
 ] 

Steve Loughran commented on HADOOP-13826:
-

Where are we with this? We're about to cut the 2.8 branch, and I can see this 
being something to get in.

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13848) Missing auth-keys.xml prevents detecting test code build problem

2016-11-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708336#comment-15708336
 ] 

Steve Loughran commented on HADOOP-13848:
-

aws has moved its tests to integration tests; the key check should be moved to 
skipping just those tests, rather than all tests. swift still has them all as 
simple tests; you'd probably have to move them over first.

> Missing auth-keys.xml prevents detecting test code build problem
> 
>
> Key: HADOOP-13848
> URL: https://issues.apache.org/jira/browse/HADOOP-13848
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, fs/swift, test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
>
> Both hadoop-aws and hadoop-openstack require the existence of file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the non-existence of auth-keys.xml also prevents building the test 
> code. Unfortunately this leads to delayed detection of build problems in test 
> code, e.g., introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
>  proposes a solution. Any time you want to run tests, you must do two things 
> instead of one:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra {{-DskipTests=false}}
> I would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel

2016-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708259#comment-15708259
 ] 

ASF GitHub Bot commented on HADOOP-13600:
-

Github user steveloughran closed the pull request at:

https://github.com/apache/hadoop/pull/167


> S3a rename() to copy files in a directory in parallel
> -
>
> Key: HADOOP-13600
> URL: https://issues.apache.org/jira/browse/HADOOP-13600
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Sahil Takiar
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the rename may be reducible to the duration of the longest 
> single copy. For a directory with many files, this will be significant.
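
A sketch of the idea (not S3A code; {{copyFile}} is a hypothetical stand-in 
for the single-object S3 COPY call):
{code}
// Launch the per-file copies in parallel and block until all finish, so
// the total time tends toward the duration of the longest single copy.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelCopySketch {
  // Hypothetical stand-in for one S3 COPY request.
  static void copyFile(String srcKey, String dstKey) { }

  public static void main(String[] args) throws Exception {
    List<String> keys = Arrays.asList("dir/a", "dir/b", "dir/c");
    ExecutorService pool = Executors.newFixedThreadPool(16);
    List<Future<?>> pending = new ArrayList<>();
    for (String key : keys) {
      pending.add(pool.submit(() -> copyFile(key, "dir2/" + key.substring(4))));
    }
    for (Future<?> f : pending) {
      f.get();  // rethrows the first copy failure, if any
    }
    pool.shutdown();
  }
}
{code}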



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13818) TestLocalFileSystem#testSetTimes fails

2016-11-30 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-13818:
---
Attachment: HADOOP-13818.004.patch

Thanks [~ajisakaa] for the review and comments. The comments look great. I 
treat the environment as another condition here: we will call 
{{getFileStatus}} only if either param is null and the environment is macOS; 
otherwise it will be skipped. One other comment from me: I think we should 
update the summary of this JIRA to make it clearer. Posted a new patch to 
address the comments.

> TestLocalFileSystem#testSetTimes fails
> --
>
> Key: HADOOP-13818
> URL: https://issues.apache.org/jira/browse/HADOOP-13818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
> Environment: Mac OS Sierra, both OpenJDK 8u122-ea and Oracle JDK 8u112
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13818.001.patch, HADOOP-13818.002.patch, 
> HADOOP-13818.003.patch, HADOOP-13818.004.patch
>
>
> {noformat}
> Running org.apache.hadoop.fs.TestLocalFileSystem
> Tests run: 20, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 4.887 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.TestLocalFileSystem
> testSetTimes(org.apache.hadoop.fs.TestLocalFileSystem)  Time elapsed: 0.084 
> sec  <<< FAILURE!
> java.lang.AssertionError: expected:<23456000> but was:<1479176144000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.checkTimesStatus(TestLocalFileSystem.java:391)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.testSetTimes(TestLocalFileSystem.java:414)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13818) TestLocalFileSystem#testSetTimes fails

2016-11-30 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13818:
---
Environment: Mac OS Sierra, both OpenJDK 8u122-ea and Oracle JDK 8u112  
(was: Mac OS Sierra, OpenJDK 8u122-ea)

> TestLocalFileSystem#testSetTimes fails
> --
>
> Key: HADOOP-13818
> URL: https://issues.apache.org/jira/browse/HADOOP-13818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
> Environment: Mac OS Sierra, both OpenJDK 8u122-ea and Oracle JDK 8u112
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13818.001.patch, HADOOP-13818.002.patch, 
> HADOOP-13818.003.patch
>
>
> {noformat}
> Running org.apache.hadoop.fs.TestLocalFileSystem
> Tests run: 20, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 4.887 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.TestLocalFileSystem
> testSetTimes(org.apache.hadoop.fs.TestLocalFileSystem)  Time elapsed: 0.084 
> sec  <<< FAILURE!
> java.lang.AssertionError: expected:<23456000> but was:<1479176144000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.checkTimesStatus(TestLocalFileSystem.java:391)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.testSetTimes(TestLocalFileSystem.java:414)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13818) TestLocalFileSystem#testSetTimes fails

2016-11-30 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15708015#comment-15708015
 ] 

Akira Ajisaka commented on HADOOP-13818:


Thanks Steve for the comment and thanks Yiqun for updating the patch. Would you 
add a comment such as the following?
{noformat}
// On some macOS environment, BasicFileAttributeView.setTimes
// does not set times correctly when the argument is null.
// TODO: Remove this after the issue is fixed.
{noformat}
In addition, can we use {{Shell.MAC}} to call {{getFileStatus}} only when the 
environment is macOS?
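
For reference, the JDK behaviour in question can be poked at standalone (a 
sketch, not the patch itself):
{code}
// Null arguments to BasicFileAttributeView.setTimes are specified to mean
// "leave that timestamp unchanged"; the failure above suggests some
// macOS/JDK combinations do not honour that.
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributeView;
import java.nio.file.attribute.BasicFileAttributes;
import java.nio.file.attribute.FileTime;
import java.util.concurrent.TimeUnit;

public class SetTimesNullDemo {
  public static void main(String[] args) throws Exception {
    Path p = Files.createTempFile("settimes", ".tmp");
    BasicFileAttributeView view =
        Files.getFileAttributeView(p, BasicFileAttributeView.class);
    // Set mtime to 23456000 ms (the value the failing test expects);
    // atime and creation time are null, so they should stay untouched.
    view.setTimes(FileTime.from(23456000, TimeUnit.MILLISECONDS), null, null);
    System.out.println(Files.readAttributes(p, BasicFileAttributes.class)
        .lastModifiedTime().toMillis());
    Files.delete(p);
  }
}
{code}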

> TestLocalFileSystem#testSetTimes fails
> --
>
> Key: HADOOP-13818
> URL: https://issues.apache.org/jira/browse/HADOOP-13818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
> Environment: Mac OS Sierra, OpenJDK 8u122-ea
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HADOOP-13818.001.patch, HADOOP-13818.002.patch, 
> HADOOP-13818.003.patch
>
>
> {noformat}
> Running org.apache.hadoop.fs.TestLocalFileSystem
> Tests run: 20, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 4.887 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.TestLocalFileSystem
> testSetTimes(org.apache.hadoop.fs.TestLocalFileSystem)  Time elapsed: 0.084 
> sec  <<< FAILURE!
> java.lang.AssertionError: expected:<23456000> but was:<1479176144000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.checkTimesStatus(TestLocalFileSystem.java:391)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.testSetTimes(TestLocalFileSystem.java:414)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-11-30 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-13835:
---
Attachment: HADOOP-13835.003.patch

Uploaded a new patch after MAPREDUCE-6743.

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-30 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704129#comment-15704129
 ] 

John Zhuge edited comment on HADOOP-13597 at 11/30/16 8:20 AM:
---

Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer so that all KMS unit tests exercise 
KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use Hadoop shell script framework
- Obsolete HTTP admin port for the Tomcat Manager GUI which does not seem to 
work anyway
- Obsolete {{kms.sh version}} that prints Tomcat version

TESTING DONE
- All hadoop-kms unit tests. MiniKMS equals full KMS.
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status
- JMX works
- /logs works

TODO
- Set HTTP request header size by env KMS_MAX_HTTP_HEADER_SIZE
- Add static web content /index.html
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001
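
For anyone curious what "based on HttpServer2" looks like, a rough sketch 
(the name, port, and wiring are illustrative, not the patch's actual 
KMSHttpServer):
{code}
// Minimal HttpServer2 bring-up; real code would add the KMS webapp,
// SSL wiring, and configuration-driven endpoints.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.HttpServer2;

public class HttpServerSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    HttpServer2 server = new HttpServer2.Builder()
        .setName("kms")
        .addEndpoint(URI.create("http://localhost:9600"))
        .setConf(conf)
        .build();
    server.start();  // exposes standard Hadoop servlets such as /jmx, /conf
  }
}
{code}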


was (Author: jzhuge):
Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use Hadoop shell script framework
- Obsolete HTTP admin port for the Tomcat Manager GUI which does not seem to 
work anyway
- Obsolete {{kms.sh version}} that prints Tomcat version

TESTING DONE
- All hadoop-kms unit tests. MiniKMS equals full KMS.
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status
- JMX works
- /logs works

TODO
- Set HTTP request header size by env KMS_MAX_HTTP_HEADER_SIZE
- Add static web content /index.html
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-30 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707922#comment-15707922
 ] 

John Zhuge commented on HADOOP-13597:
-

Great comments!

bq. Suggest to name the new config key names _KEY

In that case, we will have mixed naming styles in KMSConfiguration. Is that 
OK? There are far fewer new properties than old ones, especially after I move 
the SSL properties to ssl-server.xml.

bq. We should add basic testing to KMSHttpServer

KMSHttpServer is called by MiniKMS, so all its methods are exercised by the 
KMS unit tests. Should I add tests to ensure the legacy env variables are 
still supported?

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org