[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281212#comment-15281212
 ] 

Chris Nauroth commented on HADOOP-13028:


[~cmccabe], thank you for your review.

bq. Can we add a comment to toString stating that this output is not a stable 
API and should not be parsed?

Steve has done this in patch v011.

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> HADOOP-13028-branch-2-010.patch, HADOOP-13028-branch-2-011.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long things take. 
> This can be used downstream to measure the efficiency of the code (how often 
> connections are being made), connection reliability, etc.
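As an illustration of the proposal, here is a minimal sketch of per-stream 
counters and timers. All class, field, and method names below are hypothetical 
assumptions, not the instrumentation API the patch actually adds.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: thread-safe counters and a timer for one stream.
class StreamCounters {
  private final AtomicLong openOperations = new AtomicLong();
  private final AtomicLong closeOperations = new AtomicLong();
  private final AtomicLong reconnects = new AtomicLong();
  private final AtomicLong openNanos = new AtomicLong();

  // Time an open operation, recording both the count and the duration.
  long timedOpen(Runnable openAction) {
    long start = System.nanoTime();
    openAction.run();
    long elapsed = System.nanoTime() - start;
    openOperations.incrementAndGet();
    openNanos.addAndGet(elapsed);
    return elapsed;
  }

  void recordClose()     { closeOperations.incrementAndGet(); }
  void recordReconnect() { reconnects.incrementAndGet(); }
}
{code}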






[jira] [Commented] (HADOOP-12709) Deprecate s3:// in branch-2; cut from trunk

2016-05-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281191#comment-15281191
 ] 

Mingliang Liu commented on HADOOP-12709:


The {{hadoop.net.TestDNS}} failure is unrelated; it is tracked by [HADOOP-13101].

> Deprecate s3:// in branch-2; cut from trunk
> 
>
> Key: HADOOP-12709
> URL: https://issues.apache.org/jira/browse/HADOOP-12709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-12709.000.patch, HADOOP-12709.001.patch, 
> HADOOP-12709.002.patch, HADOOP-12709.003.patch, HADOOP-12709.004.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> I propose:
> # Mark the Java source as {{@deprecated}}.
> # Warn the first time in a JVM that an S3 instance is created: "deprecated - 
> will be removed in future releases".
> # In Hadoop trunk, cut it entirely. Maybe have an attic project (external?) 
> which holds it for anyone who still wants it. Or: retain the code but remove 
> the {{fs.s3.impl}} config option, so you have to explicitly add it for use.
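Point 2 of the proposal (warn once per JVM) can be implemented with a simple 
compare-and-set guard. A hedged sketch under assumed names; the real change 
to the s3:// code may differ:

{code}
import java.util.concurrent.atomic.AtomicBoolean;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class S3DeprecationWarning {
  private static final Logger LOG =
      LoggerFactory.getLogger(S3DeprecationWarning.class);
  // Flips to true exactly once, so the warning is logged once per JVM.
  private static final AtomicBoolean WARNED = new AtomicBoolean(false);

  static void warnOnce() {
    if (WARNED.compareAndSet(false, true)) {
      LOG.warn("The s3:// filesystem is deprecated and will be removed"
          + " in future releases; consider migrating to s3a://.");
    }
  }
}
{code}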






[jira] [Commented] (HADOOP-10230) GSetByHashMap breaks contract of GSet

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281168#comment-15281168
 ] 

Hadoop QA commented on HADOOP-10230:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
15s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 4s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 3s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803531/HADOOP-10230.001.patch
 |
| JIRA Issue | HADOOP-10230 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cccdae43171b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|

[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281102#comment-15281102
 ] 

Colin Patrick McCabe commented on HADOOP-13028:
---

That's a good point, [~cnauroth].  I guess as long as people don't start 
treating this output as a stable API, it's reasonable to have debugging 
information there.  Can we add a comment to toString stating that this output 
is not a stable API and should not be parsed?  +1 once that is done.

Thanks for working on this, [~steve_l]... it's going to be very helpful for 
running Hadoop on S3.
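For illustration, a minimal sketch of the kind of caveat being requested, 
with hypothetical counter fields; this is not the exact Javadoc from patch 
v011:

{code}
class InstrumentedStream {
  private long bytesRead; // hypothetical counter
  private long readOps;   // hypothetical counter

  /**
   * Includes low-level stream statistics for debugging and logging.
   * The format of this output is not a stable API and must not be parsed.
   */
  @Override
  public String toString() {
    return "InstrumentedStream{bytesRead=" + bytesRead
        + ", readOps=" + readOps + '}';
  }
}
{code}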

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> HADOOP-13028-branch-2-010.patch, HADOOP-13028-branch-2-011.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long things take. 
> This can be used downstream to measure the efficiency of the code (how often 
> connections are being made), connection reliability, etc.






[jira] [Commented] (HADOOP-10230) GSetByHashMap breaks contract of GSet

2016-05-11 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281093#comment-15281093
 ] 

Hiroshi Ikeda commented on HADOOP-10230:


GSetByHashMap internally uses HashMap, which supports null elements and does 
not throw NPE from {{put}}, {{contains}}, {{get}}, or {{remove}}.
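The fix direction, then, is to enforce the contract's NullPointerException 
before delegating to HashMap. A simplified sketch with abbreviated 
signatures, not copied from the actual GSet interface:

{code}
import java.util.HashMap;
import java.util.Map;

class NullCheckedSet<K, E> {
  private final Map<K, E> map = new HashMap<>();

  // Throw NPE eagerly, as the GSet contract requires.
  private static <T> T checkNotNull(T arg, String name) {
    if (arg == null) {
      throw new NullPointerException(name + " == null");
    }
    return arg;
  }

  E get(K key)            { return map.get(checkNotNull(key, "key")); }
  boolean contains(K key) { return map.containsKey(checkNotNull(key, "key")); }
  E remove(K key)         { return map.remove(checkNotNull(key, "key")); }
}
{code}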

> GSetByHashMap breaks contract of GSet
> -
>
> Key: HADOOP-10230
> URL: https://issues.apache.org/jira/browse/HADOOP-10230
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Hiroshi Ikeda
>Assignee: Andres Perez
>Priority: Trivial
> Attachments: HADOOP-10230.001.patch
>
>
> The contract of GSet says that many methods are guaranteed to throw 
> NullPointerException if a given argument is null, but GSetByHashMap doesn't. 
> I think just adding non-null preconditions is all that is required.






[jira] [Commented] (HADOOP-11505) Various native parts use bswap incorrectly and unportably

2016-05-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281046#comment-15281046
 ] 

Colin Patrick McCabe commented on HADOOP-11505:
---

I think it would be great to see build slaves with alternate architectures.  
Maybe a good place to start is by emailing the hadoop development list and 
talking to the infrastructure team.

> Various native parts use bswap incorrectly and unportably
> -
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Alan Burlison
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
> HADOOP-11505.004.patch, HADOOP-11505.005.patch, HADOOP-11505.006.patch, 
> HADOOP-11505.007.patch, HADOOP-11505.008.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.






[jira] [Commented] (HADOOP-13135) Encounter response code 500 when accessing /metrics endpoint

2016-05-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281013#comment-15281013
 ] 

Akira AJISAKA commented on HADOOP-13135:


Umm.. I tried to access the /metrics endpoint of the NameNode and the response 
code was 200. Maybe it is a problem in HBase?

> Encounter response code 500 when accessing /metrics endpoint
> 
>
> Key: HADOOP-13135
> URL: https://issues.apache.org/jira/browse/HADOOP-13135
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Ted Yu
>
> When accessing the /metrics endpoint on the HBase master through Hadoop 2.7.1, I got:
> {code}
> HTTP ERROR 500
> Problem accessing /metrics. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029)
>   at 
> org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> {code}
> [~ajisakaa] suggested that code 500 should be 404 (NOT FOUND).






[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280997#comment-15280997
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


{quote}
The javac warning still occurs because the Jenkins precommit ran with the 
patch in the GitHub PR.
{quote}
Oh, I didn't know that. I have also pushed it to the branch on GitHub. Thank 
you for the notification.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update the pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18. 
> The existing version is 1.8, which is quite old.






[jira] [Commented] (HADOOP-13126) Add Brotli compression codec

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280995#comment-15280995
 ] 

Tsuyoshi Ozawa commented on HADOOP-13126:
-

[~rdblue], thank you for the response. The benchmark results are interesting 
to me. Let me review it.

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch, HADOOP-13126.2.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].
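Once merged, a codec like this would typically be enabled through the 
standard {{io.compression.codecs}} property. A hedged sketch; the 
{{BrotliCodec}} class name is an assumption, since the patch is not yet 
committed:

{code}
import org.apache.hadoop.conf.Configuration;

public class BrotliCodecWiring {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // io.compression.codecs lists the codec classes that
    // CompressionCodecFactory will load and match by file extension.
    conf.set("io.compression.codecs",
        "org.apache.hadoop.io.compress.DefaultCodec,"
        + "org.apache.hadoop.io.compress.GzipCodec,"
        + "org.apache.hadoop.io.compress.BrotliCodec"); // hypothetical name
    System.out.println(conf.get("io.compression.codecs"));
  }
}
{code}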






[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280983#comment-15280983
 ] 

Chris Nauroth commented on HADOOP-13028:


[~ste...@apache.org], thank you for patch v011.  That addressed my feedback.  
There is a new JavaDoc warning on {{S3AInputStream#close}}.  I'd be +1 after 
that is cleaned up and a patch that applies to trunk is provided.

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> HADOOP-13028-branch-2-010.patch, HADOOP-13028-branch-2-011.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long things take. 
> This can be used downstream to measure the efficiency of the code (how often 
> connections are being made), connection reliability, etc.






[jira] [Updated] (HADOOP-10230) GSetByHashMap breaks contract of GSet

2016-05-11 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez updated HADOOP-10230:
--
Attachment: HADOOP-10230.001.patch

> GSetByHashMap breaks contract of GSet
> -
>
> Key: HADOOP-10230
> URL: https://issues.apache.org/jira/browse/HADOOP-10230
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Hiroshi Ikeda
>Assignee: Andres Perez
>Priority: Trivial
> Attachments: HADOOP-10230.001.patch
>
>
> The contract of GSet says that many methods are guaranteed to throw 
> NullPointerException if a given argument is null, but GSetByHashMap doesn't. 
> I think just adding non-null preconditions is all that is required.






[jira] [Updated] (HADOOP-10230) GSetByHashMap breaks contract of GSet

2016-05-11 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez updated HADOOP-10230:
--
Status: Patch Available  (was: Open)

Changed the exception type to NullPointerException, as specified in the GSet 
contract.

> GSetByHashMap breaks contract of GSet
> -
>
> Key: HADOOP-10230
> URL: https://issues.apache.org/jira/browse/HADOOP-10230
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Hiroshi Ikeda
>Assignee: Andres Perez
>Priority: Trivial
>
> The contract of GSet says that many methods are guaranteed to throw 
> NullPointerException if a given argument is null, but GSetByHashMap doesn't. 
> I think just adding non-null preconditions is all that is required.






[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280965#comment-15280965
 ] 

Hudson commented on HADOOP-13065:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9748 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9748/])
HADOOP-13065. Add a new interface for retrieving FS and FC Statistics (cmccabe: 
rev 687233f20d24c29041929dd0a99d963cec54b6df)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageStatistics.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/EmptyStorageStatistics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/UnionStorageStatistics.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemStorageStatistics.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GlobalStorageStatistics.java
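
Based on the classes in this commit, client code can enumerate the new 
statistics roughly as follows; the method names here are inferred from the 
file list above and should be checked against the committed sources:

{code}
import java.util.Iterator;
import org.apache.hadoop.fs.GlobalStorageStatistics;
import org.apache.hadoop.fs.StorageStatistics;

public class DumpStorageStatistics {
  public static void main(String[] args) {
    // Walk every registered statistics object and print each
    // long-valued counter it exposes.
    Iterator<StorageStatistics> all =
        GlobalStorageStatistics.INSTANCE.iterator();
    while (all.hasNext()) {
      StorageStatistics stats = all.next();
      Iterator<StorageStatistics.LongStatistic> values =
          stats.getLongStatistics();
      while (values.hasNext()) {
        StorageStatistics.LongStatistic v = values.next();
        System.out.println(stats.getName() + "." + v.getName()
            + " = " + v.getValue());
      }
    }
  }
}
{code}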


> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HADOOP-13065.009.patch, HADOOP-13065.010.patch, HADOOP-13065.011.patch, 
> HADOOP-13065.012.patch, HADOOP-13065.013.patch, HDFS-10175.000.patch, 
> HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, 
> HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, 
> TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> There is logic within DfsClient to map operations to these counters that can 
> be confusing; for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.






[jira] [Assigned] (HADOOP-10230) GSetByHashMap breaks contract of GSet

2016-05-11 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez reassigned HADOOP-10230:
-

Assignee: Andres Perez

> GSetByHashMap breaks contract of GSet
> -
>
> Key: HADOOP-10230
> URL: https://issues.apache.org/jira/browse/HADOOP-10230
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Hiroshi Ikeda
>Assignee: Andres Perez
>Priority: Trivial
>
> The contract of GSet says that many methods are guaranteed to throw 
> NullPointerException if a given argument is null, but GSetByHashMap doesn't. 
> I think just adding non-null preconditions is all that is required.






[jira] [Commented] (HADOOP-13126) Add Brotli compression codec

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280956#comment-15280956
 ] 

Hadoop QA commented on HADOOP-13126:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 43s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 26s 
{color} | {color:red} root: The patch generated 28 new + 0 unchanged - 0 fixed 
= 28 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 58s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 25s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF 

[jira] [Created] (HADOOP-13135) Encounter response code 500 when accessing /metrics endpoint

2016-05-11 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-13135:
---

 Summary: Encounter response code 500 when accessing /metrics 
endpoint
 Key: HADOOP-13135
 URL: https://issues.apache.org/jira/browse/HADOOP-13135
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Ted Yu


When accessing the /metrics endpoint on the HBase master through Hadoop 2.7.1, I got:
{code}
HTTP ERROR 500

Problem accessing /metrics. Reason:

INTERNAL_SERVER_ERROR
Caused by:

java.lang.NullPointerException
at 
org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029)
at 
org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
{code}
[~ajisakaa] suggested that code 500 should be 404 (NOT FOUND).
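
The suggested behavior change, returning 404 instead of surfacing the NPE as 
a 500, could look roughly like the guard below. This is a hedged sketch: the 
servlet class and the "metrics.system" attribute name are assumptions, not 
the actual HttpServer2/MetricsServlet code.

{code}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SafeMetricsServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    // Hypothetical guard: if the metrics system was never wired into this
    // server, report NOT FOUND instead of throwing NullPointerException.
    Object metrics = getServletContext().getAttribute("metrics.system");
    if (metrics == null) {
      resp.sendError(HttpServletResponse.SC_NOT_FOUND,
          "/metrics is not enabled on this server");
      return;
    }
    resp.setContentType("text/plain");
    resp.getWriter().println(metrics);
  }
}
{code}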






[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280918#comment-15280918
 ] 

Akira AJISAKA commented on HADOOP-9613:
---

The javac warning still occurs because the Jenkins precommit ran with the 
patch in the GitHub PR.
The latest patch looks good to me. Hi [~ste...@apache.org], would you review 
the latest patch?

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update the pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18. 
> The existing version is 1.8, which is quite old.






[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280908#comment-15280908
 ] 

Hadoop QA commented on HADOOP-9613:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 32 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
5s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 9s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 50s {color} 
| {color:red} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 1 new + 662 
unchanged - 0 fixed = 663 total (was 662) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 48s {color} 
| {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 672 
unchanged - 0 fixed = 673 total (was 672) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 30s 
{color} | {color:red} root: The patch generated 4 new + 377 unchanged - 51 
fixed = 381 total (was 428) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | 

[jira] [Commented] (HADOOP-11505) Various native parts use bswap incorrectly and unportably

2016-05-11 Thread Amir Sanjar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280900#comment-15280900
 ] 

Amir Sanjar commented on HADOOP-11505:
--

Colin,
Thanks for your guidance and due diligence here. This might be off-topic, but 
to avoid similar issues in the future, may I offer my help?
For example, we could contribute Power-based Jenkins slave(s) to the Apache 
Hadoop CI. We have successfully made a similar contribution to the Apache 
Bigtop CI. That way we could catch regressions earlier in the development 
cycle. I'd appreciate your guidance on this.

> Various native parts use bswap incorrectly and unportably
> -
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Alan Burlison
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
> HADOOP-11505.004.patch, HADOOP-11505.005.patch, HADOOP-11505.006.patch, 
> HADOOP-11505.007.patch, HADOOP-11505.008.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.






[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280881#comment-15280881
 ] 

Hadoop QA commented on HADOOP-12291:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
2 new + 45 unchanged - 0 fixed = 47 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 52s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 50s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
| JDK v1.7.0_95 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803471/HADOOP-12291.005.patch
 |
| JIRA Issue | HADOOP-12291 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Commented] (HADOOP-12709) Deprecate s3:// in branch-2; cut from trunk

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280869#comment-15280869
 ] 

Hadoop QA commented on HADOOP-12709:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 26 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 53s 
{color} | {color:green} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 0 new + 
657 unchanged - 5 fixed = 657 total (was 662) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s 
{color} | {color:green} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 0 new + 
667 unchanged - 5 fixed = 667 total (was 672) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
26s {color} | {color:green} root: The patch generated 0 new + 107 unchanged - 
134 fixed = 107 total (was 241) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-tools_hadoop-aws-jdk1.8.0_91 with JDK v1.8.0_91 
generated 0 new + 4 unchanged - 4 fixed = 4 total (was 8) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| 

[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280868#comment-15280868
 ] 

Mingliang Liu commented on HADOOP-13065:


Big TY [~cmccabe] for your insightful discussion, contributions to the new 
stats design, and review of the patch. Actually, I'd prefer a commit message 
like "contributed by Colin Patrick McCabe and Mingliang".

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HADOOP-13065.009.patch, HADOOP-13065.010.patch, HADOOP-13065.011.patch, 
> HADOOP-13065.012.patch, HADOOP-13065.013.patch, HDFS-10175.000.patch, 
> HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, 
> HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, 
> TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> There is logic within DfsClient to map operations to these counters that can 
> be confusing; for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.






[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280865#comment-15280865
 ] 

Chris Nauroth commented on HADOOP-13028:


I'm in favor of including the stream statistics in {{S3AInputStream#toString}}. 
 This is an extension of the stream state already provided.  I would like us to 
have the ability to evolve {{toString}} output for improved diagnostics like 
this.

Typical Java best practices advise using {{toString}} output as a debugging 
aid, not as a stable format suitable for UI display or object serialization.  
HDFS-9732 is an example of a patch where I have advised against using 
{{toString}} as a serialization format and recommended migrating to a different 
method that can provide a stability guarantee.  In the future, I will strongly 
consider -1'ing patches that introduce these kinds of dependencies on 
{{toString}} output.

While reflection-based approaches are viable, especially with some helpful 
libraries, I've never heard of those projects' contributors saying that they 
like writing their code that way.  Instead, I tend to hear that it makes their 
code more awkward or introduces potential performance risks from the extra 
indirection.

Another consideration is integration with logging.  SLF4J makes it easy to pass 
along template arguments, and then SLF4J will lazily call {{toString}} based on 
the configured logging level.  If the output is hidden behind a different 
method, or even requires reflection to access it, then applications will have 
to go back to coding their own conditional checks on the log level to avoid 
potentially costly method calls.
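
For illustration, a minimal sketch of that logging pattern (the logger class 
and stream variable are illustrative):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class StreamLoggingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(StreamLoggingExample.class);

  void logState(Object stream) {
    // SLF4J formats lazily: the stream's toString() is only invoked if
    // DEBUG is enabled, so a rich toString() costs nothing when it's off.
    LOG.debug("stream state: {}", stream);

    // Without template arguments, callers must guard the call themselves:
    if (LOG.isDebugEnabled()) {
      LOG.debug("stream state: " + stream);
    }
  }
}
{code}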

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> HADOOP-13028-branch-2-010.patch, HADOOP-13028-branch-2-011.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-13065:
--
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

Committed to 2.8.  Thanks, [~liuml07].

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HADOOP-13065.009.patch, HADOOP-13065.010.patch, HADOOP-13065.011.patch, 
> HADOOP-13065.012.patch, HADOOP-13065.013.patch, HDFS-10175.000.patch, 
> HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, 
> HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, 
> TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> There is logic within DfsClient to map operations to these counters that can 
> be confusing; for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-05-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-12749.
---
  Resolution: Fixed
   Fix Version/s: (was: 2.9.0)
  2.8.0
Target Version/s: 2.8.0

Backported to 2.8.

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.8.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-05-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reopened HADOOP-12749:
---

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13101) TestDNS#{testDefaultDnsServer,testNullDnsServer} failed intermittently

2016-05-11 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280810#comment-15280810
 ] 

Mingliang Liu commented on HADOOP-13101:


Hi [~demongaorui], please find this Jenkins pre-commit run (which happened 
within the last ~12 hours) for detailed logs: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9361/testReport/org.apache.hadoop.net/TestDNS/testDefaultDnsServer/

Echoing [~xyao]'s comment, this bug is an intermittent unit test failure and 
may not reproduce consistently, as it is likely platform-, configuration-, and 
timing-dependent.

> TestDNS#{testDefaultDnsServer,testNullDnsServer} failed intermittently
> --
>
> Key: HADOOP-13101
> URL: https://issues.apache.org/jira/browse/HADOOP-13101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Xiaoyu Yao
>
> The test failed intermittently on 
> [Jenkins|https://builds.apache.org/job/PreCommit-HDFS-Build/15366/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_91.txt]
>  with the following error.
> {code}
> Failed tests: 
>   TestDNS.testDefaultDnsServer:134 
> Expected: is "dd12a7999c74"
>  but: was "localhost"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280782#comment-15280782
 ] 

Colin Patrick McCabe edited comment on HADOOP-13028 at 5/11/16 8:43 PM:


In the past I've written code for Spark that used reflection to make use of 
APIs that may or may not be present in Hadoop.  HBase often does this as well, 
so that it can use multiple versions of Hadoop.  It seems like this wouldn't be 
a lot of code.  Is that feasible in this case?

I just find the argument that we should overload an existing, unrelated API to 
output statistics very off-putting.  I guess you could argue that the 
statistics are part of the stream state, and toString is intended to reflect 
stream state.  But it will result in very long output from toString, which 
probably isn't what most existing callers want.  And it's not consistent with 
the way any other Hadoop streams work, including other S3 ones like s3n.

[~andrew.wang], [~cnauroth], [~liuml07], what do you think about this?  Is it 
acceptable to overload {{toString}} in this way, to output statistics?  The 
argument seems to be that this is easier than using reflection to get the 
actual stream statistics object.


was (Author: cmccabe):
In the past I've written code for Spark that used reflection to make use of 
APIs that may or may not be present in Hadoop.  HBase often does this as well, 
so that it can use multiple versions of Hadoop.  It seems like this wouldn't be 
a lot of code.  Is that feasible in this case?

I just find the argument that we should overload an existing unrelated API to 
output statistics very off-putting.  It's like saying we should override 
hashCode to output the number of times the user called {{seek()}} on the stream.

I guess you could argue that the statistics are part of the stream state, and 
toString is intended to reflect stream state.  But it will result in very long 
output from toString, which probably isn't what most existing callers want.  
And it's not consistent with the way any other Hadoop streams work, including 
other S3 ones like s3n.
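
For illustration, a minimal sketch of the reflection pattern being suggested 
(the accessor name below is hypothetical):

{code:java}
import java.lang.reflect.Method;

class ReflectiveStatsAccess {
  // Calls an accessor that may or may not exist in the deployed Hadoop
  // version, in the shim style used by Spark and HBase.
  static Object tryGetStreamStatistics(Object stream) {
    try {
      // "getS3AStreamStatistics" is a hypothetical method name here.
      Method m = stream.getClass().getMethod("getS3AStreamStatistics");
      return m.invoke(stream);
    } catch (ReflectiveOperationException e) {
      return null;  // accessor absent: the caller falls back gracefully
    }
  }
}
{code}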

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> HADOOP-13028-branch-2-010.patch, HADOOP-13028-branch-2-011.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280790#comment-15280790
 ] 

Colin Patrick McCabe commented on HADOOP-13065:
---

+1 for version 13.  Thanks, [~liuml07].

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HADOOP-13065.009.patch, HADOOP-13065.010.patch, HADOOP-13065.011.patch, 
> HADOOP-13065.012.patch, HADOOP-13065.013.patch, HDFS-10175.000.patch, 
> HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, 
> HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, 
> TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> There is logic within DfsClient to map operations to these counters that can 
> be confusing; for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280782#comment-15280782
 ] 

Colin Patrick McCabe edited comment on HADOOP-13028 at 5/11/16 8:39 PM:


In the past I've written code for Spark that used reflection to make use of 
APIs that may or may not be present in Hadoop.  HBase often does this as well, 
so that it can use multiple versions of Hadoop.  It seems like this wouldn't be 
a lot of code.  Is that feasible in this case?

I just find the argument that we should overload an existing unrelated API to 
output statistics very off-putting.  It's like saying we should override 
hashCode to output the number of times the user called {{seek()}} on the stream.

I guess you could argue that the statistics are part of the stream state, and 
toString is intended to reflect stream state.  But it will result in very long 
output from toString, which probably isn't what most existing callers want.  
And it's not consistent with the way any other Hadoop streams work, including 
other S3 ones like s3n.


was (Author: cmccabe):
In the past I've written code for Spark that used reflection to make use of 
APIs that may or may not be present in Hadoop.  HBase often does this as well, 
so that it can use multiple versions of Hadoop.  It seems like this wouldn't be 
a lot of code.  Is that feasible in this case?

I just find the argument that we should overload an existing unrelated API to 
output statistics very off-putting.  It's like saying we should override 
hashCode to output the number of times the user called {{seek()}} on the 
stream.  I also find it concerning that this would be something unique to s3a 
and not present in the toString methods of any other filesystem (including the 
other s3 ones).  It feels like a gross hack.

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> HADOOP-13028-branch-2-010.patch, HADOOP-13028-branch-2-011.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280782#comment-15280782
 ] 

Colin Patrick McCabe commented on HADOOP-13028:
---

In the past I've written code for Spark that used reflection to make use of 
APIs that may or may not be present in Hadoop.  HBase often does this as well, 
so that it can use multiple versions of Hadoop.  It seems like this wouldn't be 
a lot of code.  Is that feasible in this case?

I just find the argument that we should overload an existing unrelated API to 
output statistics very off-putting.  It's like saying we should override 
hashCode to output the number of times the user called {{seek()}} on the 
stream.  I also find it concerning that this would be something unique to s3a 
and not present in the toString methods of any other filesystem (including the 
other s3 ones).  It feels like a gross hack.

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> HADOOP-13028-branch-2-010.patch, HADOOP-13028-branch-2-011.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13128) Manage Hadoop RPC resource usage via resource coupon

2016-05-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13128:

Attachment: HADOOP-13128-Proposal-20160511.pdf

Attaching a draft proposal for discussion. 

> Manage Hadoop RPC resource usage via resource coupon
> 
>
> Key: HADOOP-13128
> URL: https://issues.apache.org/jira/browse/HADOOP-13128
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13128-Proposal-20160511.pdf
>
>
> HADOOP-9640 added the RPC Fair Call Queue and HADOOP-10597 added RPC backoff 
> to ensure fair usage of HDFS namenode resources. YARN, the Hadoop cluster 
> resource manager, currently manages the CPU and memory resources for 
> jobs/tasks, but not storage resources such as HDFS namenode and datanode 
> usage directly. As a result, a high-priority YARN job may send too many RPC 
> requests to the HDFS namenode and get demoted into low-priority call queues 
> due to a lack of reservation/coordination. 
> To better support multi-tenancy use cases like the above, we propose to 
> manage RPC server resource usage via a coupon mechanism integrated with YARN. 
> The idea is to allow YARN to request HDFS storage resource coupons (e.g., 
> namenode RPC calls, datanode I/O bandwidth) from the namenode on behalf of 
> the job at submission time.  Once granted, the tasks will include the coupon 
> identifier in the RPC header for subsequent calls. The HDFS namenode RPC 
> scheduler maintains the state of coupon usage based on the scheduler policy 
> (fairness or priority) to match the RPC priority with the YARN scheduling 
> priority.
> I will post a proposal with more detail shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13008) Add XFS Filter for UIs to Hadoop Common

2016-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280622#comment-15280622
 ] 

Hudson commented on HADOOP-13008:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9746 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9746/])
HADOOP-13008. Add XFS Filter for UIs to Hadoop Common. Contributed by 
(cnauroth: rev dee279b532e7286362518b531c9daea9ae8606f4)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/http/TestXFrameOptionsFilter.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/XFrameOptionsFilter.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/package-info.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/http/TestRestCsrfPreventionFilter.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/RestCsrfPreventionFilter.java


> Add XFS Filter for UIs to Hadoop Common
> ---
>
> Key: HADOOP-13008
> URL: https://issues.apache.org/jira/browse/HADOOP-13008
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13008-001.patch, HADOOP-13008-002.patch, 
> HADOOP-13008-003.patch, HADOOP-13008-004.patch
>
>
> Cross Frame Scripting (XFS) prevention for UIs can be provided through a 
> common servlet filter. This filter will set the X-Frame-Options HTTP header 
> to DENY unless configured to another valid setting.
> There are a number of UIs that could just add this to their filters, as well 
> as the YARN webapp proxy, which could add it for all its proxied UIs - if 
> appropriate.
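
For illustration, a minimal sketch of such a filter (not the committed 
XFrameOptionsFilter, whose actual API may differ):

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Minimal X-Frame-Options filter: DENY by default, overridable via init-param.
public class SimpleXFrameOptionsFilter implements Filter {
  private String option = "DENY";

  @Override
  public void init(FilterConfig config) {
    String configured = config.getInitParameter("xframe-options");
    if (configured != null) {
      option = configured;  // e.g. SAMEORIGIN or ALLOW-FROM <uri>
    }
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse res,
      FilterChain chain) throws IOException, ServletException {
    // Set the header on every response before passing the request along.
    ((HttpServletResponse) res).setHeader("X-Frame-Options", option);
    chain.doFilter(req, res);
  }

  @Override
  public void destroy() {
  }
}
{code}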



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280621#comment-15280621
 ] 

Hudson commented on HADOOP-12942:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9746 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9746/])
HADOOP-12942. hadoop credential commands non-obviously use password of (lmccay: 
rev acb509b2fa0bbe6e00f8a90aec37f63a09463afa)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderFactory.java
* hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java


> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch, HADOOP-12942.008.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that an additional constructor 
> taking the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have logic 
> in the factory to try multiple providers, but I don't really see how multiple 
> providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".
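
For illustration, a rough sketch of the proposed factory change (the interface 
and method shapes here are hypothetical, not the committed API):

{code:java}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;

// Hypothetical sketch of the proposal: a second factory method that threads
// an explicit keystore password through to the provider, instead of letting
// the provider silently fall back to the default of "none".
interface PasswordAwareCredentialFactory {
  // Existing style: the password is resolved internally (env var or default).
  CredentialProvider getProvider(URI uri, Configuration conf)
      throws IOException;

  // Proposed style: the caller supplies the keystore password explicitly.
  CredentialProvider getProvider(URI uri, Configuration conf, char[] password)
      throws IOException;
}
{code}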




[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-11 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280602#comment-15280602
 ] 

Larry McCay commented on HADOOP-12942:
--

This has been committed to trunk, branch-2 and branch-2.8.

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch, HADOOP-12942.008.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that an additional constructor 
> taking the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have logic 
> in the factory to try multiple providers, but I don't really see how multiple 
> providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11505) Various native parts use bswap incorrectly and unportably

2016-05-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280588#comment-15280588
 ] 

Colin Patrick McCabe commented on HADOOP-11505:
---

The problematic part of this change was making all the subprojects depend on 
hadoop-common.  It seems like you could avoid doing that by putting all the 
le32toh, etc. definitions in a standalone header file and having the other 
projects include that file.

> Various native parts use bswap incorrectly and unportably
> -
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Alan Burlison
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
> HADOOP-11505.004.patch, HADOOP-11505.005.patch, HADOOP-11505.006.patch, 
> HADOOP-11505.007.patch, HADOOP-11505.008.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-11 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280583#comment-15280583
 ] 

Larry McCay commented on HADOOP-12942:
--

+1 - I will commit this to trunk, branch-2 and branch-2.8.
Thanks for the patch, [~yoderme]!

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch, HADOOP-12942.008.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and that an additional constructor 
> taking the password should exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have logic 
> in the factory to try multiple providers, but I don't really see how multiple 
> providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13126) Add Brotli compression codec

2016-05-11 Thread Ryan Blue (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Blue updated HADOOP-13126:
---
Attachment: HADOOP-13126.2.patch

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch, HADOOP-13126.2.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].
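
For illustration, a sketch of how a codec like this is typically wired into a 
Hadoop configuration; {{io.compression.codecs}} is the standard key, while the 
Brotli class name below is a placeholder for whatever the patch defines:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class RegisterBrotliCodec {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Append the new codec to the list of codec classes Hadoop will load;
    // "org.example.compress.BrotliCodec" is a hypothetical class name.
    conf.set("io.compression.codecs",
        "org.apache.hadoop.io.compress.DefaultCodec,"
        + "org.apache.hadoop.io.compress.GzipCodec,"
        + "org.example.compress.BrotliCodec");
  }
}
{code}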



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12709) Deprecate s3:// in branch-2,; cut from trunk

2016-05-11 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12709:
---
Attachment: (was: HADOOP-12709.004.patch)

> Deprecate s3:// in branch-2,; cut from trunk
> 
>
> Key: HADOOP-12709
> URL: https://issues.apache.org/jira/browse/HADOOP-12709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-12709.000.patch, HADOOP-12709.001.patch, 
> HADOOP-12709.002.patch, HADOOP-12709.003.patch, HADOOP-12709.004.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. while invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> I propose
> # Mark Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created, "deprecated 
> -will be removed in future releases"
> # in Hadoop trunk we really cut it. Maybe have an attic project (external?) 
> which holds it for anyone who still wants it. Or: retain the code but remove 
> the {{fs.s3.impl}} config option, so you have to explicitly add it for use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12709) Deprecate s3:// in branch-2,; cut from trunk

2016-05-11 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12709:
---
Attachment: HADOOP-12709.004.patch

> Deprecate s3:// in branch-2,; cut from trunk
> 
>
> Key: HADOOP-12709
> URL: https://issues.apache.org/jira/browse/HADOOP-12709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-12709.000.patch, HADOOP-12709.001.patch, 
> HADOOP-12709.002.patch, HADOOP-12709.003.patch, HADOOP-12709.004.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. while invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> I propose
> # Mark Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created, "deprecated 
> -will be removed in future releases"
> # in Hadoop trunk we really cut it. Maybe have an attic project (external?) 
> which holds it for anyone who still wants it. Or: retain the code but remove 
> the {{fs.s3.impl}} config option, so you have to explicitly add it for use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13008) Add XFS Filter for UIs to Hadoop Common

2016-05-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13008:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1 for patch v004.  I have committed this to trunk, branch-2 and branch-2.8.  
Larry, thank you for the patch.

> Add XFS Filter for UIs to Hadoop Common
> ---
>
> Key: HADOOP-13008
> URL: https://issues.apache.org/jira/browse/HADOOP-13008
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13008-001.patch, HADOOP-13008-002.patch, 
> HADOOP-13008-003.patch, HADOOP-13008-004.patch
>
>
> Cross Frame Scripting (XFS) prevention for UIs can be provided through a 
> common servlet filter. This filter will set the X-Frame-Options HTTP header 
> to DENY unless configured to another valid setting.
> There are a number of UIs that could just add this to their filters as well 
> as the Yarn webapp proxy which could add it for all it's proxied UIs - if 
> appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-11 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin updated HADOOP-12291:
---
Attachment: HADOOP-12291.005.patch

I have added the debug line, as requested.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.
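
For illustration, a rough sketch of the recursive lookup idea (this is not the 
actual LdapGroupsMapping internals; the Directory interface stands in for an 
LDAP query):

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of nested-group resolution: starting from a user's
// direct groups, walk "member of" edges breadth-first up to a depth limit,
// so jdoe in A, with A in B, yields {A, B} rather than just {A}.
class NestedGroupResolver {
  interface Directory {
    List<String> parentGroupsOf(String group);  // e.g. one LDAP query
  }

  Set<String> resolve(List<String> directGroups, Directory dir, int maxDepth) {
    Set<String> all = new HashSet<>(directGroups);
    List<String> frontier = new ArrayList<>(directGroups);
    for (int depth = 0; depth < maxDepth && !frontier.isEmpty(); depth++) {
      List<String> next = new ArrayList<>();
      for (String group : frontier) {
        for (String parent : dir.parentGroupsOf(group)) {
          if (all.add(parent)) {
            next.add(parent);  // newly discovered group: expand it next round
          }
        }
      }
      frontier = next;
    }
    return all;
  }
}
{code}

The depth limit bounds the number of LDAP round trips and also guards against 
cycles in the group graph.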



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-11 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin updated HADOOP-12291:
---
Status: In Progress  (was: Patch Available)

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-11 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin updated HADOOP-12291:
---
Status: Patch Available  (was: In Progress)

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So, for example, if user {{jdoe}} is part of group A, which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280536#comment-15280536
 ] 

Tsuyoshi Ozawa commented on HADOOP-12064:
-

Fortunately, {{mvn test -Dtest="Test*Web*"}} passed on my local machine. I'll 
come back here after merging HADOOP-9613 into trunk and checking the jdiff 
between guice 4.0 and 3.0.

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> Guice 3.0 doesn't work with lambda statements. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.
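
For illustration, the kind of Java 8 code that can trip Guice 3.0 (the module 
and binding below are hypothetical):

{code:java}
import com.google.inject.AbstractModule;
import com.google.inject.Provider;

// A Provider written as a lambda: fine on Guice 4.0, but Guice 3.0's
// bundled bytecode tooling cannot handle Java 8 class files like this.
public class ExampleModule extends AbstractModule {
  @Override
  protected void configure() {
    Provider<String> greeting = () -> "hello";
    bind(String.class).toProvider(greeting);
  }
}
{code}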



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12709) Deprecate s3:// in branch-2,; cut from trunk

2016-05-11 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12709:
---
Attachment: HADOOP-12709.004.patch

Thanks [~cnauroth] for your review and nice catches. Sorry I missed the places 
outside the {{hadoop-aws}} module. In the current patch, all the clean-ups you 
listed are addressed. The checkstyle and javadoc warnings are fixed as well.

Thanks [~ste...@apache.org] for the review and comment. I was kind of 
aggressive about changing the config keys/values, but I agree that we should 
keep the original names and mark them as deprecated if we can. This way, 
existing applications using s3n don't have to update their configurations to 
run. As to the implementation, the v4 patch employs 
{{Configuration#addDeprecations}} in the static block. I'm wondering if there 
is a better way.
Your 2nd and 3rd comments are very valid, though I was thinking of addressing 
them separately along with other existing places where inline strings were 
used. As those changes are closely related but not complex, I also think 
they're doable in this patch. See v4 to check whether I addressed them 
correctly.
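
For reference, the pattern in question looks roughly like this (the key names 
below are illustrative, not the ones in the patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class S3DeprecatedKeys {
  static {
    // Registered once when the class loads: reads of the old key resolve
    // to the new key, and Configuration logs a deprecation warning.
    Configuration.addDeprecations(new Configuration.DeprecationDelta[] {
        new Configuration.DeprecationDelta("fs.s3.old.example.key",
            "fs.s3a.new.example.key")  // illustrative key names
    });
  }

  private S3DeprecatedKeys() {
  }
}
{code}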

> Deprecate s3:// in branch-2,; cut from trunk
> 
>
> Key: HADOOP-12709
> URL: https://issues.apache.org/jira/browse/HADOOP-12709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-12709.000.patch, HADOOP-12709.001.patch, 
> HADOOP-12709.002.patch, HADOOP-12709.003.patch, HADOOP-12709.004.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. while invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> I propose
> # Mark Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created, "deprecated 
> -will be removed in future releases"
> # in Hadoop trunk we really cut it. Maybe have an attic project (external?) 
> which holds it for anyone who still wants it. Or: retain the code but remove 
> the {{fs.s3.impl}} config option, so you have to explicitly add it for use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13128) Manage Hadoop RPC resource usage via resource coupon

2016-05-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13128:

Description: 
HADOOP-9640 added the RPC Fair Call Queue and HADOOP-10597 added RPC backoff to 
ensure fair usage of HDFS namenode resources. YARN, the Hadoop cluster resource 
manager, currently manages the CPU and memory resources for jobs/tasks, but not 
storage resources such as HDFS namenode and datanode usage directly. As a 
result, a high-priority YARN job may send too many RPC requests to the HDFS 
namenode and get demoted into low-priority call queues due to a lack of 
reservation/coordination. 

To better support multi-tenancy use cases like the above, we propose to manage 
RPC server resource usage via a coupon mechanism integrated with YARN. The idea 
is to allow YARN to request HDFS storage resource coupons (e.g., namenode RPC 
calls, datanode I/O bandwidth) from the namenode on behalf of the job at 
submission time.  Once granted, the tasks will include the coupon identifier in 
the RPC header for subsequent calls. The HDFS namenode RPC scheduler maintains 
the state of coupon usage based on the scheduler policy (fairness or priority) 
to match the RPC priority with the YARN scheduling priority.

I will post a proposal with more detail shortly. 



  was:
HADOOP-9640 added the RPC Fair Call Queue and HADOOP-10597 added RPC backoff to 
ensure fair usage of HDFS namenode resources. YARN, the Hadoop cluster resource 
manager, currently manages the CPU and memory resources for jobs/tasks, but not 
storage resources such as HDFS namenode and datanode usage directly. As a 
result, a high-priority YARN job may send too many RPC requests to the HDFS 
namenode and get demoted into low-priority call queues due to a lack of 
reservation/coordination. 

To better support multi-tenancy use cases like the above, we propose to manage 
RPC server resource usage via a coupon mechanism integrated with YARN. The idea 
is to allow YARN to request HDFS storage resource coupons (e.g., namenode RPC 
calls, datanode I/O bandwidth) from the namenode on behalf of the job at 
submission time.  Once granted, the tasks will include the coupon identifier in 
the RPC header for subsequent calls. The HDFS namenode RPC scheduler maintains 
the state of coupon usage based on the scheduler policy (fairness or priority) 
to match the RPC priority with the YARN scheduling priority. 




> Manage Hadoop RPC resource usage via resource coupon
> 
>
> Key: HADOOP-13128
> URL: https://issues.apache.org/jira/browse/HADOOP-13128
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> HADOOP-9640 added the RPC Fair Call Queue and HADOOP-10597 added RPC backoff 
> to ensure fair usage of HDFS namenode resources. YARN, the Hadoop cluster 
> resource manager, currently manages the CPU and memory resources for 
> jobs/tasks, but not storage resources such as HDFS namenode and datanode 
> usage directly. As a result, a high-priority YARN job may send too many RPC 
> requests to the HDFS namenode and get demoted into low-priority call queues 
> due to a lack of reservation/coordination. 
> To better support multi-tenancy use cases like the above, we propose to 
> manage RPC server resource usage via a coupon mechanism integrated with YARN. 
> The idea is to allow YARN to request HDFS storage resource coupons (e.g., 
> namenode RPC calls, datanode I/O bandwidth) from the namenode on behalf of 
> the job at submission time.  Once granted, the tasks will include the coupon 
> identifier in the RPC header for subsequent calls. The HDFS namenode RPC 
> scheduler maintains the state of coupon usage based on the scheduler policy 
> (fairness or priority) to match the RPC priority with the YARN scheduling 
> priority.
> I will post a proposal with more detail shortly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280468#comment-15280468
 ] 

Hadoop QA commented on HADOOP-12064:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 29s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803438/HADOOP-12064.002.WIP.patch
 |
| JIRA Issue | HADOOP-12064 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 5015920bdfd4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 39f2bac |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9369/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 

[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Labels: UpgradeKeyLibrary maven  (was: maven)

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary, maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18.
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13131) add tests to verify that s3a supports SSE-S3 encryption

2016-05-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280371#comment-15280371
 ] 

Chris Nauroth commented on HADOOP-13131:


Hello [~ste...@apache.org].

I agree with the approach taken in the tests.  I think these new tests are good 
candidates for a JUnit {{Parameterized}} suite, to avoid writing redundant test 
methods. That's just subjective though, and I'm comfortable with whatever 
decision you make on that.  Aside from that, I think it's ready to go after 
working through the pre-commit checks.

bq. One thing to consider is "how do non-AWS implementations of S3 react here".

My intuition is that a non-AWS implementation would simply ignore the extra 
{{x-amz-server-side-encryption}} header in the requests.  I can't say for sure 
though since I don't have a non-AWS implementation to test against.  I suppose 
it would depend on whether or not that implementation chooses to strictly 
validate and reject unknown extended headers.
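
For illustration, here is a minimal sketch of the {{Parameterized}} shape 
suggested above (hypothetical class and value names; this is not the attached 
patch): one fixture runs the same round-trip per encryption setting.

{code}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

/** Hypothetical sketch of a parameterized SSE test, not the patch. */
@RunWith(Parameterized.class)
public class TestS3AEncryptionParameterizedSketch {

  @Parameterized.Parameters(name = "algorithm={0}")
  public static Collection<Object[]> algorithms() {
    return Arrays.asList(new Object[][] {
        {"AES256", true},  // SSE-S3: expect normal behaviour
        {"DES", false},    // invalid algorithm: expect 400 Bad Request
    });
  }

  private final String algorithm;
  private final boolean expectSuccess;

  public TestS3AEncryptionParameterizedSketch(String algorithm,
      boolean expectSuccess) {
    this.algorithm = algorithm;
    this.expectSuccess = expectSuccess;
  }

  @Test
  public void testCreateFileWithEncryptionSetting() throws Exception {
    // Sketch: configure the S3A server-side encryption algorithm from
    // `algorithm`, attempt to create a small file, then assert success
    // or a 400 failure according to expectSuccess. The actual
    // filesystem calls are elided here.
  }
}
{code}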

> add tests to verify that s3a supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a 
> file or directory to fail with a 400 "bad request" error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13132) LoadBalancingKMSClientProvider ClassCastException on AuthenticationException

2016-05-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-13132:


Assignee: Wei-Chiu Chuang

> LoadBalancingKMSClientProvider ClassCastException on AuthenticationException
> 
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Miklos Szurap
>Assignee: Wei-Chiu Chuang
>
> An Oozie job with a single shell action fails (this may not be important, 
> but if you need the exact details I can provide them) with an error message 
> coming from NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.<init>(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the oozie job fails
> - YARN logs are not reported/saved
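
One way to see the shape of a fix, as a sketch only (hypothetical helper 
names; this is not a patch): rethrow only the types the method can legally 
throw, and wrap everything else so the original cause survives.

{code}
import java.io.IOException;
import java.security.GeneralSecurityException;

/** Hypothetical helper showing the cast-free rethrow pattern. */
public final class SafeRethrowSketch {
  private SafeRethrowSketch() {}

  public static void rethrow(Exception e)
      throws IOException, GeneralSecurityException {
    if (e instanceof IOException) {
      throw (IOException) e;
    }
    if (e instanceof GeneralSecurityException) {
      throw (GeneralSecurityException) e;
    }
    // An AuthenticationException lands here and is wrapped, instead of
    // being blindly cast and triggering a ClassCastException.
    throw new IOException("Unexpected error while talking to KMS: " + e, e);
  }
}
{code}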



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12064:

Labels: UpgradeKeyLibrary  (was: )

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.
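
A minimal illustration of the failure surface, assuming the bytecode-reading 
problem described in the linked report (hypothetical class, not from Hadoop): 
a Guice module whose class file contains a Java 8 lambda.

{code}
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Provider;

/**
 * Hypothetical module containing a Java 8 lambda. The class file this
 * produces is the kind of input that the ASM bundled with guice 3.0
 * cannot read; guice 4.0 bundles a newer ASM.
 */
public class LambdaModuleSketch extends AbstractModule {
  @Override
  protected void configure() {
    Provider<String> greeting = () -> "hello";  // lambda in the class file
    bind(String.class).toProvider(greeting);
  }

  public static void main(String[] args) {
    Injector injector = Guice.createInjector(new LambdaModuleSketch());
    System.out.println(injector.getInstance(String.class));
  }
}
{code}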



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280342#comment-15280342
 ] 

Tsuyoshi Ozawa commented on HADOOP-13133:
-

[~sjlee0] Good suggestion. I added a label, UpgradeKeyLibrary, for tracking 
these kinds of changes. Thanks!

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13133:

Labels: UpgradeKeyLibrary  (was: UpgradeDependency)

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13133:

Labels: UpgradeDependency  (was: )

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeDependency
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13134) WASB's file delete still throwing Blob not found exception

2016-05-11 Thread Lin Chan (JIRA)
Lin Chan created HADOOP-13134:
-

 Summary: WASB's file delete still throwing Blob not found exception
 Key: HADOOP-13134
 URL: https://issues.apache.org/jira/browse/HADOOP-13134
 Project: Hadoop Common
  Issue Type: Bug
  Components: azure
Affects Versions: 2.7.1
Reporter: Lin Chan
Assignee: Dushyanth


WASB is still throwing a blob-not-found exception, as shown in the following 
stack trace. We need to catch it and convert it to a Boolean return code in 
WASB's delete; a sketch of that idea follows the stack trace below.

16/05/07 01:24:57 ERROR InsertIntoHadoopFsRelation: Aborting job.
org.apache.hadoop.fs.azure.AzureException: 
com.microsoft.azure.storage.StorageException: The specified blob does not exist.
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2682)
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2693)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.updateParentFolderLastModifiedTime(NativeAzureFileSystem.java:2495)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1860)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.cleanupJob(FileOutputCommitter.java:510)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:403)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:364)
at 
org.apache.parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:46)
at 
org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitJob(WriterContainer.scala:230)
at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:151)
at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
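
A minimal sketch of the requested handling, under the assumption that the 
service reports the {{BlobNotFound}} error code for this case (hypothetical 
helper and store abstraction, not the WASB source):

{code}
import java.io.IOException;

import com.microsoft.azure.storage.StorageException;

/** Hypothetical sketch: treat a vanished blob as "already deleted". */
public final class DeleteQuietlySketch {
  private DeleteQuietlySketch() {}

  /** Minimal store abstraction, just enough for the sketch. */
  public interface BlobStore {
    void delete(String key) throws StorageException;
  }

  /** Returns false instead of failing when the blob is already gone. */
  public static boolean deleteQuietly(BlobStore store, String key)
      throws IOException {
    try {
      store.delete(key);
      return true;
    } catch (StorageException e) {
      if ("BlobNotFound".equals(e.getErrorCode())) {
        return false;  // concurrent delete: the end state is the same
      }
      throw new IOException("Failed to delete " + key, e);
    }
  }
}
{code}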
 




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280290#comment-15280290
 ] 

Sangjin Lee commented on HADOOP-13133:
--

[~ozawa], [~ajisakaa], thanks for working on upgrading these key libraries for 
3.0. Is there an easy way to keep track of all the JIRAs of this nature? I know 
there were JIRAs to upgrade jersey, upgrade the JDK, and so on. It would be 
great if there were a simple way to see all of them, perhaps in the form of a 
label or an umbrella ticket with links. Thanks!

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12064:

Attachment: (was: HADOOP-12064.002.WIP.patch)

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12064:

Attachment: HADOOP-12064.002.WIP.patch

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13125) FS Contract tests don't report FS initialization errors well

2016-05-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280183#comment-15280183
 ] 

Hudson commented on HADOOP-13125:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9745 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9745/])
HADOOP-13125 FS Contract tests don't report FS initialization errors (stevel: 
rev 35532614009652d2bca7ded72f166ef6b3382598)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractBondedFSContract.java


> FS Contract tests don't report FS initialization errors well
> 
>
> Key: HADOOP-13125
> URL: https://issues.apache.org/jira/browse/HADOOP-13125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13125-001.patch
>
>
> If the {{FileSystem.initialize()}} method of an FS fails with an 
> IllegalArgumentException, the FS contract tests assume this is related to 
> the URI syntax and raise a new exception, 'invalid URI' + fsURI, retaining 
> the real cause only as error text in the nested exception.
> The exception text should be included in the message thrown, and the cause 
> changed to indicate that it is initialization-related.
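
The intent, as a sketch (hypothetical helper, not the committed patch): keep 
the original exception as the cause *and* copy its text into the new message.

{code}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

/** Hypothetical sketch of initialization error reporting. */
public final class ContractInitSketch {
  private ContractInitSketch() {}

  public static void initFS(FileSystem fs, URI fsURI, Configuration conf)
      throws IOException {
    try {
      fs.initialize(fsURI, conf);
    } catch (IllegalArgumentException e) {
      // Surface the real failure text instead of a bare "invalid URI".
      throw new IllegalArgumentException(
          "Unable to initialize filesystem " + fsURI + ": " + e, e);
    }
  }
}
{code}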



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12064:

Attachment: HADOOP-12064.002.WIP.patch

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12064:

Attachment: (was: HADOOP-12064.002.WIP.patch)

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12064.001.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280147#comment-15280147
 ] 

Tsuyoshi Ozawa commented on HADOOP-12064:
-

Attached a WIP patch for my local testing. I'll run all the tests and see what 
happens.

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12064:

Attachment: HADOOP-12064.002.WIP.patch

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Attachment: HADOOP-9613.017.incompatible.patch

Updated the patch based on Akira's comment on GitHub.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.017.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18.
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280124#comment-15280124
 ] 

Akira AJISAKA commented on HADOOP-13133:


Thanks Tsuyoshi.

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa resolved HADOOP-13133.
-
Resolution: Duplicate

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280121#comment-15280121
 ] 

Tsuyoshi Ozawa commented on HADOOP-13133:
-

OK, let's do this on HADOOP-12064. Closing this as a duplicate.

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280111#comment-15280111
 ] 

Akira AJISAKA commented on HADOOP-13133:


Agreed. I'm thinking we should upgrade guice as well because it depends on 
cglib.
https://github.com/google/guice/blob/master/pom.xml
guice -> cglib -> asm

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-05-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280110#comment-15280110
 ] 

Steve Loughran commented on HADOOP-12563:
-

AW: I do the paperwork to change these things when I see them: HADOOP-12913, 
HADOOP-11822 ... and as someone who builds downstream code (Spark, Slider) 
against branch-2 and branch-3, I'm often the first person complaining that 
things have broken.

I don't know what other things HBase, Hive, Flink, etc. have picked up; I don't 
know what expectations they have about the behaviour of bits of the code. All I 
know is that changes which break compatibility across versions *break my own 
code*. Yes, I try to address these problems before our customers get to see 
them, but (a) it's a pain, (b) if it changes binary signatures then it's a 
problem for any app designed to build across versions, and (c) semantic changes 
are the most subtle of all: these are the ones which lurk until production. And 
you don't want me to add extra work for the ops teams, do you?

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serialization, which is hard or impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old file format should still be supported for backward compatibility, 
> but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13125) FS Contract tests don't report FS initialization errors well

2016-05-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280101#comment-15280101
 ] 

Steve Loughran commented on HADOOP-13125:
-

thanks - committed.

> FS Contract tests don't report FS initialization errors well
> 
>
> Key: HADOOP-13125
> URL: https://issues.apache.org/jira/browse/HADOOP-13125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13125-001.patch
>
>
> If the {{FileSystem.initialize()}} method of an FS fails with an 
> IllegalArgumentException, the FS contract tests assume this is related to 
> the URI syntax and raise a new exception, 'invalid URI' + fsURI, retaining 
> the real cause only as error text in the nested exception.
> The exception text should be included in the message thrown, and the cause 
> changed to indicate that it is initialization-related.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13125) FS Contract tests don't report FS initialization errors well

2016-05-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13125:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> FS Contract tests don't report FS initialization errors well
> 
>
> Key: HADOOP-13125
> URL: https://issues.apache.org/jira/browse/HADOOP-13125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13125-001.patch
>
>
> If the {{FileSystem.initialize()}} method of an FS fails with an 
> IllegalArgumentException, the FS contract tests assume this is related to 
> the URI syntax and raise a new exception, 'invalid URI' + fsURI, retaining 
> the real cause only as error text in the nested exception.
> The exception text should be included in the message thrown, and the cause 
> changed to indicate that it is initialization-related.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280096#comment-15280096
 ] 

Tsuyoshi Ozawa commented on HADOOP-13133:
-

IIUC, we should upgrade asm and cglib at the same time, since cglib depends on 
asm 5.0.4: https://github.com/cglib/cglib/blob/master/pom.xml

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13133:

Summary: [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or 
later  (was: [JDK8] Upgrade asm to 5.0.3 or upper)

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or upper

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13133:

Summary: [JDK8] Upgrade asm to 5.0.3 or upper  (was: [JDK8] Upgrade asm to 
5.0.4 or upper)

> [JDK8] Upgrade asm to 5.0.3 or upper
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.4 or upper

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13133:

Summary: [JDK8] Upgrade asm to 5.0.4 or upper  (was: [JDK8] Upgrade asm to 
5.1)

> [JDK8] Upgrade asm to 5.0.4 or upper
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13133) [JDK8] Upgrade asm to 5.1

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa reassigned HADOOP-13133:
---

Assignee: Tsuyoshi Ozawa

> [JDK8] Upgrade asm to 5.1
> -
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11993) maven enforcer plugin to ban java 8 incompatible dependencies

2016-05-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280085#comment-15280085
 ] 

Akira AJISAKA commented on HADOOP-11993:


Thanks Tsuyoshi for the information. I filed a JIRA for upgrading asm 
(HADOOP-13133).

> maven enforcer plugin to ban java 8 incompatible dependencies
> -
>
> Key: HADOOP-11993
> URL: https://issues.apache.org/jira/browse/HADOOP-11993
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Priority: Minor
>
> It's possible to use maven-enforcer to ban dependencies; this can be used to 
> reject those known to be incompatible with Java 8
> [example|https://gist.github.com/HiJon89/65e34552c18e5ac9fd31]
> If we set maven enforcer to do this checking, it can ensure that the 2.7+ 
> codebase isn't pulling in any incompatible binaries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13133) [JDK8] Upgrade asm to 5.1

2016-05-11 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-13133:
--

 Summary: [JDK8] Upgrade asm to 5.1
 Key: HADOOP-13133
 URL: https://issues.apache.org/jira/browse/HADOOP-13133
 Project: Hadoop Common
  Issue Type: Task
Reporter: Akira AJISAKA


We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280079#comment-15280079
 ] 

Steve Loughran commented on HADOOP-13028:
-

For some more detail, here's a spark-cloud module (WiP) test run against 2.7.1; 
durations are measured in the tests, and stream info is printed as the test 
goes along. There's no meaningful string value.
{code}
= TEST OUTPUT FOR o.a.s.cloud.s3.S3aIOSuite: 'SeekReadFully: Cost of seek 
and ReadFully' =

2016-05-11 13:54:44,462 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of stat = 189,933,000 ns
2016-05-11 13:54:44,652 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of open = 189,144,000 ns
2016-05-11 13:54:44,652 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:54:45,099 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of read() [pos = 0] = 446,564,000 ns
2016-05-11 13:54:45,100 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:54:45,101 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:54:46,052 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of seek(256) [pos = 1] = 950,677,000 ns
2016-05-11 13:54:46,053 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:54:46,053 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:54:46,054 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of seek(256) [pos = 256] = 22,000 ns
2016-05-11 13:54:46,054 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:54:46,055 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:54:47,010 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of seek(EOF-2) [pos = 256] = 954,645,000 ns
2016-05-11 13:54:47,010 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:54:47,011 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:54:47,012 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of read() [pos = 21203389] = 397,000 ns
2016-05-11 13:54:47,012 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:54:47,013 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:54:49,213 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(1, byte[1]) [pos = 21203390] = 2,199,571,000 ns
2016-05-11 13:54:49,213 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:54:49,214 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:54:52,487 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(1, byte[256]) [pos = 21203390] = 3,272,746,000 ns
2016-05-11 13:54:52,487 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:54:52,488 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:54:55,092 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(260, byte[256]) [pos = 21203390] = 2,604,062,000 ns
2016-05-11 13:54:55,092 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:54:55,093 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:54:56,825 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(1024, byte[256]) [pos = 21203390] = 1,731,421,000 ns
2016-05-11 13:54:56,825 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:54:56,825 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:54:58,486 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(1536, byte[256]) [pos = 21203390] = 1,660,882,000 ns
2016-05-11 13:54:58,487 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:54:58,487 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:55:00,635 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(8192, byte[1024]) [pos = 21203390] = 2,147,589,000 ns
2016-05-11 13:55:00,635 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:55:00,636 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:55:02,333 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of readFully(9728, byte[1024]) [pos = 21203390] = 1,697,169,000 ns
2016-05-11 13:55:02,334 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) -   
org.apache.hadoop.fs.s3a.S3AInputStream@3d85fdbe
2016-05-11 13:55:02,334 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
2016-05-11 13:55:02,334 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Duration of 
{code}
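
For reference, a tiny sketch of how such per-operation durations can be taken 
(hypothetical helper, not the suite's code): wall-clock {{System.nanoTime()}} 
around each call.

{code}
import java.util.concurrent.Callable;

/** Hypothetical timing helper in the style of the output above. */
public final class DurationOfSketch {
  private DurationOfSketch() {}

  public static <T> T timed(String label, Callable<T> op) throws Exception {
    long start = System.nanoTime();
    T result = op.call();
    long ns = System.nanoTime() - start;
    // Matches the "Duration of <op> = N ns" lines in the log above.
    System.out.printf("Duration of %s = %,d ns%n", label, ns);
    return result;
  }

  public static void main(String[] args) throws Exception {
    byte[] buf = new byte[256];
    timed("fill byte[256]", () -> {
      java.util.Arrays.fill(buf, (byte) 1);
      return null;
    });
  }
}
{code}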

[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280031#comment-15280031
 ] 

ASF GitHub Bot commented on HADOOP-9613:


Github user aajisaka commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/76#discussion_r62836827
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestTimelineWebServices.java
 ---
@@ -81,11 +82,9 @@
   private static TimelineStore store;
   private static TimelineACLsManager timelineACLsManager;
   private static AdminACLsManager adminACLsManager;
-  private long beforeTime;
+  private static long beforeTime;
 
-  private Injector injector = Guice.createInjector(new ServletModule() {
-
-@SuppressWarnings("unchecked")
--- End diff --

It looks to me like the javac warning is related, because the patch removes 
`@SuppressWarnings("unchecked")`.


> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18.
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13132) LoadBalancingKMSClientProvider ClassCastException on AuthenticationException

2016-05-11 Thread Miklos Szurap (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szurap updated HADOOP-13132:
---
Component/s: kms

> LoadBalancingKMSClientProvider ClassCastException on AuthenticationException
> 
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Miklos Szurap
>
> An Oozie job with a single shell action fails (this may not be important, 
> but if you need the exact details I can provide them) with an error message 
> coming from NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.<init>(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the Oozie job fails
> - YARN logs are not reported/saved
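
Below is a minimal, self-contained Java sketch of the failing pattern and a guarded 
alternative. The nested classes are stand-ins for the real KMS and security classes, 
not the actual implementation; only the shape of the catch block is the point:
{code}
import java.io.IOException;
import java.security.GeneralSecurityException;

public class CastSketch {
  // Stand-in for the wrapper exception whose cause is blindly cast at line 189.
  static class WrapperException extends RuntimeException {
    WrapperException(Throwable cause) { super(cause); }
  }

  // Stand-in for AuthenticationException: it is NOT a GeneralSecurityException.
  static class AuthenticationException extends Exception {
    AuthenticationException(String msg) { super(msg); }
  }

  static void decrypt(boolean authFailure) throws IOException, GeneralSecurityException {
    try {
      throw new WrapperException(authFailure
          ? new AuthenticationException("KMS auth failed")
          : new GeneralSecurityException("bad key material"));
    } catch (WrapperException e) {
      Throwable cause = e.getCause();
      // Unguarded version: throw (GeneralSecurityException) cause;
      // which throws ClassCastException when the cause is an AuthenticationException.
      if (cause instanceof GeneralSecurityException) {
        throw (GeneralSecurityException) cause;
      }
      throw new IOException("KMS failure", cause); // keeps the real "caused by" visible
    }
  }

  public static void main(String[] args) {
    try {
      decrypt(true);
    } catch (Exception e) {
      System.out.println(e + ", caused by " + e.getCause());
    }
  }
}
{code}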



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-13132) LoadBalancingKMSClientProvider ClassCastException on AuthenticationException

2016-05-11 Thread Miklos Szurap (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szurap moved HDFS-10389 to HADOOP-13132:
---

Key: HADOOP-13132  (was: HDFS-10389)
Project: Hadoop Common  (was: Hadoop HDFS)

> LoadBalancingKMSClientProvider ClassCastException on AuthenticationException
> 
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szurap
>
> An Oozie job with a single shell action fails (this may not be important, but 
> if you need the exact details I can provide them) with an error message 
> coming from NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.<init>(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the Oozie job fails
> - YARN logs are not reported/saved



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13131) add tests to verify that s3a supports SSE-S3 encryption

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279997#comment-15279997
 ] 

Hadoop QA commented on HADOOP-13131:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 6 new + 9 
unchanged - 0 fixed = 15 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 1s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803413/HADOOP-13131-001.patch
 |
| JIRA Issue | HADOOP-13131 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 954a8dfa0eb4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d971bf2 |
| Default 

[jira] [Updated] (HADOOP-13131) add tests to verify that s3a supports SSE-S3 encryption

2016-05-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Attachment: HADOOP-13131-001.patch

Patch 001; tested locally against S3 Ireland. One thing to consider is "how do 
non-AWS implementations of S3 react here?" We may want to add an option to 
disable these tests.

> add tests to verify that s3a supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13131) add tests to verify that s3a supports SSE-S3 encryption

2016-05-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Status: Patch Available  (was: Open)

> add tests to verify that s3a supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13131) add tests to verify that s3a supports SSE-S3 encryption

2016-05-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Summary: add tests to verify that s3a supports SSE-S3 encryption  (was: add 
a test to verify that s3a supports SSE-S3 encryption)

> add tests to verify that s3a supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13131) add a test to verify that s3a supports SSE-S3 encryption

2016-05-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-13131:
---

Assignee: Steve Loughran

> add a test to verify that s3a supports SSE-S3 encryption
> 
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13131) add a test to verify that s3a supports SSE-S3 encryption

2016-05-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13131:
---

 Summary: add a test to verify that s3a supports SSE-S3 encryption
 Key: HADOOP-13131
 URL: https://issues.apache.org/jira/browse/HADOOP-13131
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.7.2
Reporter: Steve Loughran
Priority: Minor


Although S3A claims to support server-side S3 encryption (and does, if you set 
the option), we don't have any test to verify this. Of course, as the 
encryption is transparent, it's hard to test.

Here's what I propose:
# a test which sets encryption = AES256; expects things to work as normal.
# a test which sets encryption = DES and expects any operation creating a file 
or directory to fail with a 400 "bad request" error (both tests are sketched 
below)
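
A hedged sketch of both tests, as plain Java rather than the eventual JUnit 
suite. The configuration key is an assumption (confirm against 
org.apache.hadoop.fs.s3a.Constants), the bucket URI is illustrative, and the 
400 response is the expected S3 behaviour rather than anything verified here:
{code}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SseS3Sketch {
  // Assumed config key for the S3A server-side encryption algorithm.
  static final String SSE_KEY = "fs.s3a.server-side-encryption-algorithm";

  static void createWithAlgorithm(String algorithm) throws IOException {
    Configuration conf = new Configuration();
    conf.set(SSE_KEY, algorithm);
    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf); // illustrative bucket
    fs.create(new Path("/encryption-test/file")).close();
  }

  public static void main(String[] args) throws IOException {
    createWithAlgorithm("AES256");          // expected: works as normal, transparently encrypted
    try {
      createWithAlgorithm("DES");           // expected: S3 rejects this with a 400 "bad request"
      System.out.println("ERROR: DES was accepted");
    } catch (Exception expected) {
      System.out.println("DES rejected as expected: " + expected);
    }
  }
}
{code}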





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13130) s3a failures can surface as RTEs, not IOEs

2016-05-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279931#comment-15279931
 ] 

Steve Loughran commented on HADOOP-13130:
-

Here's an interesting example. A mkdir() operation is failing because the 
caller is (deliberately) requesting an unsupported encryption algorithm. 
{code}

testEncrypt256(org.apache.hadoop.fs.s3a.TestS3AEncryptionAlgorithmPropagation)  
Time elapsed: 3.555 sec  <<< ERROR!
com.amazonaws.services.s3.model.AmazonS3Exception: The encryption method 
specified is not supported (Service: Amazon S3; Status Code: 400; Error Code: 
InvalidArgument; Request ID: A7FEE89E7EB4FC6D)
at 
com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
at 
com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
at 
com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
at 
com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1472)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:1307)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:1284)
at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:981)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1894)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:323)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

> s3a failures can surface as RTEs, not IOEs
> --
>
> Key: HADOOP-13130
> URL: https://issues.apache.org/jira/browse/HADOOP-13130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>
> S3A failures happening in the AWS library surface as 
> {{AmazonClientException}} derivatives, rather than IOEs. As the amazon 
> exceptions are runtime exceptions, any code which catches IOEs for error 
> handling breaks.
> The fix will be to catch and wrap. The hard thing will be to wrap it with 
> meaningful exceptions rather than a generic IOE. Furthermore, if anyone has 
> been catching AWS exceptions, they are going to be disappointed. That means 
> that fixing this situation could be considered "incompatible", but only for 
> code which contains assumptions about the underlying FS and the exceptions 
> they raise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13130) s3a failures can surface as RTEs, not IOEs

2016-05-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13130:
---

 Summary: s3a failures can surface as RTEs, not IOEs
 Key: HADOOP-13130
 URL: https://issues.apache.org/jira/browse/HADOOP-13130
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.2
Reporter: Steve Loughran


S3A failures happening in the AWS library surface as {{AmazonClientException}} 
derivatives, rather than IOEs. As the amazon exceptions are runtime exceptions, 
any code which catches IOEs for error handling breaks.

The fix will be to catch and wrap. The hard thing will be to wrap it with 
meaningful exceptions rather than a generic IOE. Furthermore, if anyone has 
been catching AWS exceptions, they are going to be disappointed. That means 
that fixing this situation could be considered "incompatible", but only for 
code which contains assumptions about the underlying FS and the exceptions they 
raise.
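
A minimal sketch of the catch-and-wrap idea. The nested exception class stands 
in for the AWS SDK's AmazonClientException (a RuntimeException); a real fix 
would map specific failures to more meaningful IOEs rather than a generic one:
{code}
import java.io.IOException;

public class WrapSketch {
  // Stand-in for com.amazonaws.AmazonClientException, which is a RuntimeException.
  static class AmazonClientException extends RuntimeException {
    AmazonClientException(String msg) { super(msg); }
  }

  /** Wrap SDK runtime exceptions so callers can keep catching IOEs. */
  static void mkdirs(String path) throws IOException {
    try {
      // ... the AWS SDK call would go here ...
      throw new AmazonClientException("simulated SDK failure");
    } catch (AmazonClientException e) {
      // A richer version would inspect the failure (400/403/404, ...) and
      // throw more specific IOEs, e.g. FileNotFoundException for a 404.
      throw new IOException("mkdirs failed on " + path + ": " + e, e);
    }
  }

  public static void main(String[] args) {
    try {
      mkdirs("s3a://bucket/dir");
    } catch (IOException e) {
      System.out.println("Caught as an IOE: " + e);
    }
  }
}
{code}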



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279905#comment-15279905
 ] 

Steve Loughran commented on HADOOP-13028:
-

Because one place I'm using this to look at the logs and see how to tune 
performance is in Spark code, which doesn't have access to those internals and 
is built against Hadoop 2.6.x anyway. It lets me have code which can be run 
with -Dhadoop.version=2.7.1 and -Dhadoop.version=2.8.0-SNAPSHOT: I can not 
only measure the duration in the Spark code itself, I can also see the logged 
info, see what's been happening, and spot where things can be improved further.

We cannot do this if the only way to log this data is via a class which is 
package private and exists in Hadoop 2.8+ only. As requested, I've scoped that 
statistics class so that the only way to get at it is to inject code into the 
org.apache.hadoop.fs.s3a package. Do you really, really want me to do that in 
Spark code, and use introspection to get at a class it can't compile against?

Please, give me the string: it'll be better for all of us. As and when your 
colleagues sit down to look at Parquet performance on S3, they'll appreciate it.
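
For reference, a hedged sketch of that downstream pattern using only public 
FileSystem APIs. It assumes the wrapped S3A stream's toString() prints the 
counters discussed here; the bucket and the read sequence are illustrative. 
The printed text is diagnostics to read in logs, never to parse:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadDiagnostics {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("s3a://example-bucket/data.parquet"); // illustrative
    FileSystem fs = path.getFileSystem(conf);
    long start = System.nanoTime();
    try (FSDataInputStream in = fs.open(path)) {
      byte[] buf = new byte[8192];
      in.readFully(0, buf);     // positioned read at the start of the file
      in.seek(1 << 20);         // a forward seek that may force a reopen/reconnect
      in.read(buf);
      // Human-readable counters from the wrapped stream's toString();
      // explicitly not a stable API, so log it, don't parse it.
      System.out.println("Stream diagnostics: " + in.getWrappedStream());
    }
    System.out.println("Elapsed ns: " + (System.nanoTime() - start));
  }
}
{code}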

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> HADOOP-13028-branch-2-010.patch, HADOOP-13028-branch-2-011.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13113) Enable parallel test execution for hadoop-aws.

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279798#comment-15279798
 ] 

Hadoop QA commented on HADOOP-13113:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} hadoop-tools/hadoop-aws: The patch generated 0 new 
+ 1 unchanged - 10 fixed = 1 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803376/HADOOP-13113.003.patch
 |
| JIRA Issue | HADOOP-13113 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 4a30a38e7487 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 

[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279696#comment-15279696
 ] 

Hadoop QA commented on HADOOP-9613:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 32 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
58s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 10s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 28s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 49s {color} 
| {color:red} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 1 new + 662 
unchanged - 0 fixed = 663 total (was 662) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 51s {color} 
| {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 672 
unchanged - 0 fixed = 673 total (was 672) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 28s 
{color} | {color:red} root: The patch generated 4 new + 377 unchanged - 51 
fixed = 381 total (was 428) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} 

[jira] [Created] (HADOOP-13129) fix typo in dynamic subcommand docs

2016-05-11 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-13129:


 Summary: fix typo in dynamic subcommand docs
 Key: HADOOP-13129
 URL: https://issues.apache.org/jira/browse/HADOOP-13129
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: scripts
Affects Versions: 3.0.0
Reporter: Sean Busbey
Priority: Trivial


hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md line 
128 "funciton" should be "function"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13113) Enable parallel test execution for hadoop-aws.

2016-05-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13113:
---
Attachment: HADOOP-13113.003.patch

I'm attaching patch v003.  This adds the retry loop on 
{{TestS3AContractRootDir#testListEmptyRootDirectory}}.  I also took the 
opportunity to clear some Checkstyle nits in the files I'm touching.

bq. Do we need to rely on a new system property, or could we just rely on the 
test class name being unique? that is {{this.getClass().getName()}} could 
provide the path?

Currently the patch relies on overriding 
{{AbstractBondedFSContract#getTestPath}} as a convenient place to parameterize 
the test path per unique fork.  This layer of the code doesn't have direct 
access to the test suite class.  We could change the contract classes so that 
the test suite class needs to be passed in during construction, but then every 
test suite would need to be changed.  Overall, I prefer what the current patch 
is doing, but I could be convinced otherwise if you feel strongly about it.
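
To make that concrete, a minimal sketch of what the per-fork path computation 
could look like. The system property name is illustrative, standing in for 
whatever the build sets per forked JVM (e.g. from surefire's fork number):
{code}
import org.apache.hadoop.fs.Path;

public class ForkPathSketch {
  /** Roughly what an overridden getTestPath() could do per fork. */
  static Path getTestPath() {
    // Assumed to be set once per forked JVM by the build, e.g. from ${surefire.forkNumber}.
    String forkId = System.getProperty("test.unique.fork.id");
    return forkId == null ? new Path("/test") : new Path("/" + forkId, "test");
  }

  public static void main(String[] args) {
    System.out.println(getTestPath());                      // /test
    System.setProperty("test.unique.fork.id", "fork-0001");
    System.out.println(getTestPath());                      // /fork-0001/test
  }
}
{code}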


> Enable parallel test execution for hadoop-aws.
> --
>
> Key: HADOOP-13113
> URL: https://issues.apache.org/jira/browse/HADOOP-13113
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13113.001.patch, HADOOP-13113.002.patch, 
> HADOOP-13113.003.patch
>
>
> The full hadoop-aws test suite takes ~30 minutes to execute.  The tests spend 
> most of their time blocked on network I/O with the S3 back-end, but they 
> don't saturate the bandwidth of the NIC.  We can improve overall execution 
> time by enabling parallel test execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org