[jira] [Commented] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2016-06-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343756#comment-15343756
 ] 

Akira AJISAKA commented on HADOOP-12588:


Thanks [~iwasakims] for digging into the cause of the problem and updating the 
patch. LGTM, +1.

> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12588.001.patch, HADOOP-12588.addendum.02.patch, 
> HADOOP-12588.addendum.03.patch, HADOOP-12588.addendum.patch
>
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}






[jira] [Created] (HADOOP-13307) add rsync to Dockerfile so that precommit archive works

2016-06-21 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-13307:
-

 Summary: add rsync to Dockerfile so that precommit archive works
 Key: HADOOP-13307
 URL: https://issues.apache.org/jira/browse/HADOOP-13307
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Allen Wittenauer
Priority: Trivial


Apache Yetus 0.4.0 adds an archiving capability to store files from the build 
tree. To use it with the Hadoop Dockerfile, the rsync package needs to be 
added.






[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343742#comment-15343742
 ] 

Hadoop QA commented on HADOOP-11820:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
4s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
14s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  2m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809522/quickie.patch |
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux cc66f6dd81c5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d433b16 |
| shellcheck | v0.4.4 |
| modules | C:  U:  |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9850/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: 0001-domainsocket.patch, HADOOP-13245.02.patch, 
> YARN-5132-v1.patch, quickie.patch, quickie.patch, quickie.patch, socket.patch
>
>







[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343740#comment-15343740
 ] 

Hadoop QA commented on HADOOP-11820:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9850/console in case of 
problems.


> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: 0001-domainsocket.patch, HADOOP-13245.02.patch, 
> YARN-5132-v1.patch, quickie.patch, quickie.patch, quickie.patch, socket.patch
>
>







[jira] [Updated] (HADOOP-13306) add filter does not check if it exists

2016-06-21 Thread chillon_m (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chillon_m updated HADOOP-13306:
---
Description: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java,
line 705: defineFilter() does not check whether the filter already exists; we 
need to check before adding it.

Also, NO_CACHE_FILTER is added twice when an HttpServer2 object is created. I 
think the addNoCacheFilter() call in addDefaultApps() is unnecessary, because 
addNoCacheFilter() has already been invoked by createWebAppContext() in the 
HttpServer2 constructor.

  was:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java,
line 705: defineFilter() does not check whether the filter already exists; we 
need to check before adding it.

Also, NO_CACHE_FILTER is added twice when an HttpServer2 object is created. I 
think the addNoCacheFilter() call in addDefaultApps() is unnecessary.


> add filter does not check if it exists
> -
>
> Key: HADOOP-13306
> URL: https://issues.apache.org/jira/browse/HADOOP-13306
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.5.2
>Reporter: chillon_m
>Priority: Minor
>
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java,
> line 705: defineFilter() does not check whether the filter already exists; we 
> need to check before adding it.
> Also, NO_CACHE_FILTER is added twice when an HttpServer2 object is created. I 
> think the addNoCacheFilter() call in addDefaultApps() is unnecessary, because 
> addNoCacheFilter() has already been invoked by createWebAppContext() in the 
> HttpServer2 constructor.






[jira] [Updated] (HADOOP-13306) add filter does not check if it exists

2016-06-21 Thread chillon_m (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chillon_m updated HADOOP-13306:
---
Issue Type: Bug  (was: Improvement)

> add filter does not check if it exists
> -
>
> Key: HADOOP-13306
> URL: https://issues.apache.org/jira/browse/HADOOP-13306
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.5.2
>Reporter: chillon_m
>Priority: Minor
>
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java,
> line 705: defineFilter() does not check whether the filter already exists; we 
> need to check before adding it.
> Also, NO_CACHE_FILTER is added twice when an HttpServer2 object is created. I 
> think the addNoCacheFilter() call in addDefaultApps() is unnecessary.






[jira] [Updated] (HADOOP-13306) add filter does not check if it exists

2016-06-21 Thread chillon_m (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chillon_m updated HADOOP-13306:
---
Description: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java,
line 705: defineFilter() does not check whether the filter already exists; we 
need to check before adding it.

Also, NO_CACHE_FILTER is added twice when an HttpServer2 object is created. I 
think the addNoCacheFilter() call in addDefaultApps() is unnecessary.

  was:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java,
line 705: defineFilter() does not check whether the filter already exists; we 
need to check before adding it.

Also, NO_CACHE_FILTER is added twice when an HttpServer2 object is created.
I think the addNoCacheFilter() call in addDefaultApps() is unnecessary.


> add filter does not check if it exists
> -
>
> Key: HADOOP-13306
> URL: https://issues.apache.org/jira/browse/HADOOP-13306
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.5.2
>Reporter: chillon_m
>Priority: Minor
>
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java,
> line 705: defineFilter() does not check whether the filter already exists; we 
> need to check before adding it.
> Also, NO_CACHE_FILTER is added twice when an HttpServer2 object is created. I 
> think the addNoCacheFilter() call in addDefaultApps() is unnecessary.






[jira] [Created] (HADOOP-13306) add filter does not check if it exists

2016-06-21 Thread chillon_m (JIRA)
chillon_m created HADOOP-13306:
--

 Summary: add filter does not check if it exists
 Key: HADOOP-13306
 URL: https://issues.apache.org/jira/browse/HADOOP-13306
 Project: Hadoop Common
  Issue Type: Improvement
  Components: net
Affects Versions: 2.5.2
Reporter: chillon_m
Priority: Minor


hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java,
line 705: defineFilter() does not check whether the filter already exists; we 
need to check before adding it.

Also, NO_CACHE_FILTER is added twice when an HttpServer2 object is created.
I think the addNoCacheFilter() call in addDefaultApps() is unnecessary.
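
For illustration, a minimal sketch of the proposed existence check, using a 
plain set as a stand-in registry; the class and method names below are 
hypothetical, not the actual HttpServer2/Jetty code:

{code:java}
import java.util.HashSet;
import java.util.Set;

// Illustrative stand-in for the idempotency check this report asks for.
class FilterRegistry {
  private final Set<String> definedFilters = new HashSet<>();

  /** Returns true only the first time a given filter name is defined. */
  boolean defineFilterOnce(String name) {
    // Set.add() returns false when the name is already present, so a
    // duplicate definition (e.g. NO_CACHE_FILTER added twice) is skipped.
    return definedFilters.add(name);
  }
}
{code}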






[jira] [Commented] (HADOOP-11873) Include disk read/write time in FileSystem.Statistics

2016-06-21 Thread Chen Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343403#comment-15343403
 ] 

Chen Yang commented on HADOOP-11873:


I used Hadoop 2.6.0. I only got "ReadBlockOpNumOps", "ReadBlockOpAvgTime", 
"WriteBlockOpNumOps" and "WriteBlockOpAvgTime" via http://localhost:50075/jmx. 
According to their descriptions in Metrics.md, "ReadBlockOpNumOps" and 
"ReadBlockOpAvgTime" mean the "total number of read operations" and the 
"average time of read operations in milliseconds", respectively. So I assumed 
"TotalReadTime" should equal "ReadBlockOpNumOps" multiplied by 
"ReadBlockOpAvgTime". When I ran Spark reading from HDFS, I sampled 
"ReadBlockOpNumOps" and "ReadBlockOpAvgTime" once per second. Over time, 
"ReadBlockOpNumOps" kept increasing, but the computed "TotalReadTime" did not: 
most of the time a later "TotalReadTime" was less than the previous one, so the 
values fluctuated instead of growing monotonically. Maybe my computation is 
wrong. I want to know how to compute "TotalReadTime" and "TotalWriteTime", and 
how "ReadBlockOpAvgTime" is calculated in HDFS.

> Include disk read/write time in FileSystem.Statistics
> -
>
> Key: HADOOP-11873
> URL: https://issues.apache.org/jira/browse/HADOOP-11873
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Kay Ousterhout
>Priority: Minor
>
> Measuring the time spent blocking on reading / writing data from / to disk is 
> very useful for debugging performance problems in applications that read data 
> from Hadoop, and can give much more information (e.g., to reflect disk 
> contention) than just knowing the total amount of data read.  I'd like to add 
> something like "diskMillis" to FileSystem#Statistics to track this.
> For data read from HDFS, this can be done with very low overhead by adding 
> logging around calls to RemoteBlockReader2.readNextPacket (because this reads 
> larger chunks of data, the time added by the instrumentation is very small 
> relative to the time to actually read the data).  For data written to HDFS, 
> this can be done in DFSOutputStream.waitAndQueueCurrentPacket.
> As far as I know, if you want this information today, it is only currently 
> accessible by turning on HTrace. It looks like HTrace can't be selectively 
> enabled, so a user can't just turn on the tracing on 
> RemoteBlockReader2.readNextPacket for example, and instead needs to turn on 
> tracing everywhere (which then introduces a bunch of overhead -- so sampling 
> is necessary).  It would be hugely helpful to have native metrics for time 
> reading / writing to disk that are sufficiently low-overhead to be always on. 
> (Please correct me if I'm wrong here about what's possible today!)






[jira] [Commented] (HADOOP-13299) JMXJsonServlet is vulnerable to TRACE

2016-06-21 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343261#comment-15343261
 ] 

Haibo Chen commented on HADOOP-13299:
-

Hi [~steve_l], there is no specific CVE here; this was found in a network scan. 
Is there any component relying on TRACE? If not, we can disable it just in 
case, which is exactly what the patch does.
If this needs to be discussed on the security mailing list first, I can start a 
discussion there.
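
For illustration, one common way to reject TRACE/TRACK at the servlet level; 
this is a hedged sketch, not necessarily what hadoop13299.001.patch does:

{code:java}
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Reject TRACE (and the non-standard TRACK) before normal dispatch.
public class NoTraceServlet extends HttpServlet {
  @Override
  protected void service(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    String method = req.getMethod();
    if ("TRACE".equalsIgnoreCase(method) || "TRACK".equalsIgnoreCase(method)) {
      resp.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED); // 405
      return;
    }
    super.service(req, resp);
  }
}
{code}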

> JMXJsonServlet is vulnerable to TRACE 
> --
>
> Key: HADOOP-13299
> URL: https://issues.apache.org/jira/browse/HADOOP-13299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Attachments: hadoop13299.001.patch
>
>
> Nessus scan shows that JMXJsonServlet is vulnerable to TRACE/TRACK requests.  
> We could disable this to avoid such vulnerability.






[jira] [Commented] (HADOOP-13305) Define common statistics names across schemes

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343162#comment-15343162
 ] 

Hadoop QA commented on HADOOP-13305:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
19s{color} | {color:red} root: The patch generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 43s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812345/HADOOP-13305.000.patch
 |
| JIRA Issue | HADOOP-13305 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e03aa5530728 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d8107fc |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9849/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9849/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test 

[jira] [Updated] (HADOOP-13305) Define common statistics names across schemes

2016-06-21 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13305:
---
Status: Patch Available  (was: Open)

> Define common statistics names across schemes
> -
>
> Key: HADOOP-13305
> URL: https://issues.apache.org/jira/browse/HADOOP-13305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13305.000.patch
>
>
> The {{StorageStatistics}} class provides a pretty general interface, i.e. 
> {{getLong(name)}} and {{getLongStatistics()}}. There are no shared or standard 
> names for the storage statistics, so the names getLong(name) accepts are up to 
> each storage statistics implementation. The problems:
> # For the common statistics, downstream applications expect the same 
> statistics name across different storage statistics and/or file system 
> schemes. Chances are they have to use 
> {{DFSOpsCountStorageStatistics#getLong(“getStatus”)}} and 
> {{S3A.Statistics#getLong(“get_status”)}} for retrieving the getStatus 
> operation stat.
> # Moreover, probing per-operation stats is hard if there are no 
> standard/shared common names.
> It makes a lot of sense for different schemes to issue per-operation 
> stats under the same name. Meanwhile, every FS will have its own internal 
> things to count, which can't be centrally defined or managed. But there are 
> some common statistics that would be easier to manage if they all had the 
> same name.
> Another motivation is that having a common set of names here will encourage 
> uniform instrumentation of all filesystems; it will also make it easier to 
> analyze the output of runs, were the stats to be published to a "performance 
> log" similar to the audit log. See Steve's work for S3 (e.g. [HADOOP-13171]).
> This jira is to track the effort of defining common StorageStatistics entry 
> names. Thanks to [~cmccabe], [~ste...@apache.org], [~hitesh] and [~jnp] for 
> the offline discussion.






[jira] [Updated] (HADOOP-13305) Define common statistics names across schemes

2016-06-21 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13305:
---
Attachment: HADOOP-13305.000.patch

The v0 patch:
- Defines common file system operation related statistics in an interface (see 
the sketch below)
- Refers to the common names in the {{DFSOpsCountStatistics}} and 
{{s3a/Statistic}} classes
- Makes the {{StorageStatistics}} abstract class return its scheme if it's 
scheme specific (mostly it is, e.g. {{DFSOpsCountStatistics}}, 
{{s3a/Statistic}}, and {{FileSystemStorageStatistics}}). Since the common names 
are shared across different file system schemes, downstream applications need 
this information for easier interpretation and categorization.
- Adds a simple unit test for unique OpType names
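
A rough sketch of the shared-names idea; the enum and symbols below are 
illustrative assumptions, not the constants the patch actually defines:

{code:java}
// Shared symbolic names let callers probe any scheme's statistics with the
// same key, e.g. stats.getLong(OpType.GET_FILE_STATUS.getSymbol()).
enum OpType {
  GET_FILE_STATUS("op_get_file_status"),
  RENAME("op_rename"),
  DELETE("op_delete");

  private final String symbol;

  OpType(String symbol) {
    this.symbol = symbol;
  }

  public String getSymbol() {
    return symbol;
  }
}
{code}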

> Define common statistics names across schemes
> -
>
> Key: HADOOP-13305
> URL: https://issues.apache.org/jira/browse/HADOOP-13305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13305.000.patch
>
>
> The {{StorageStatistics}} class provides a pretty general interface, i.e. 
> {{getLong(name)}} and {{getLongStatistics()}}. There are no shared or standard 
> names for the storage statistics, so the names getLong(name) accepts are up to 
> each storage statistics implementation. The problems:
> # For the common statistics, downstream applications expect the same 
> statistics name across different storage statistics and/or file system 
> schemes. Chances are they have to use 
> {{DFSOpsCountStorageStatistics#getLong(“getStatus”)}} and 
> {{S3A.Statistics#getLong(“get_status”)}} for retrieving the getStatus 
> operation stat.
> # Moreover, probing per-operation stats is hard if there are no 
> standard/shared common names.
> It makes a lot of sense for different schemes to issue per-operation 
> stats under the same name. Meanwhile, every FS will have its own internal 
> things to count, which can't be centrally defined or managed. But there are 
> some common statistics that would be easier to manage if they all had the 
> same name.
> Another motivation is that having a common set of names here will encourage 
> uniform instrumentation of all filesystems; it will also make it easier to 
> analyze the output of runs, were the stats to be published to a "performance 
> log" similar to the audit log. See Steve's work for S3 (e.g. [HADOOP-13171]).
> This jira is to track the effort of defining common StorageStatistics entry 
> names. Thanks to [~cmccabe], [~ste...@apache.org], [~hitesh] and [~jnp] for 
> the offline discussion.






[jira] [Created] (HADOOP-13305) Define common statistics names across schemes

2016-06-21 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13305:
--

 Summary: Define common statistics names across schemes
 Key: HADOOP-13305
 URL: https://issues.apache.org/jira/browse/HADOOP-13305
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


The {{StorageStatistics}} class provides a pretty general interface, i.e. 
{{getLong(name)}} and {{getLongStatistics()}}. There are no shared or standard 
names for the storage statistics, so the names getLong(name) accepts are up to 
each storage statistics implementation. The problems:
# For the common statistics, downstream applications expect the same statistics 
name across different storage statistics and/or file system schemes. Chances 
are they have to use {{DFSOpsCountStorageStatistics#getLong(“getStatus”)}} and 
{{S3A.Statistics#getLong(“get_status”)}} for retrieving the getStatus operation 
stat.
# Moreover, probing per-operation stats is hard if there are no standard/shared 
common names.

It makes a lot of sense for different schemes to issue per-operation stats 
under the same name. Meanwhile, every FS will have its own internal things to 
count, which can't be centrally defined or managed. But there are some common 
statistics that would be easier to manage if they all had the same name.

Another motivation is that having a common set of names here will encourage 
uniform instrumentation of all filesystems; it will also make it easier to 
analyze the output of runs, were the stats to be published to a "performance 
log" similar to the audit log. See Steve's work for S3 (e.g. [HADOOP-13171]).

This jira is to track the effort of defining common StorageStatistics entry 
names.






[jira] [Updated] (HADOOP-13305) Define common statistics names across schemes

2016-06-21 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13305:
---
Description: 
The {{StorageStatistics}} class provides a pretty general interface, i.e. 
{{getLong(name)}} and {{getLongStatistics()}}. There are no shared or standard 
names for the storage statistics, so the names getLong(name) accepts are up to 
each storage statistics implementation. The problems:
# For the common statistics, downstream applications expect the same statistics 
name across different storage statistics and/or file system schemes. Chances 
are they have to use {{DFSOpsCountStorageStatistics#getLong(“getStatus”)}} and 
{{S3A.Statistics#getLong(“get_status”)}} for retrieving the getStatus operation 
stat.
# Moreover, probing per-operation stats is hard if there are no standard/shared 
common names.

It makes a lot of sense for different schemes to issue per-operation stats 
under the same name. Meanwhile, every FS will have its own internal things to 
count, which can't be centrally defined or managed. But there are some common 
statistics that would be easier to manage if they all had the same name.

Another motivation is that having a common set of names here will encourage 
uniform instrumentation of all filesystems; it will also make it easier to 
analyze the output of runs, were the stats to be published to a "performance 
log" similar to the audit log. See Steve's work for S3 (e.g. [HADOOP-13171]).

This jira is to track the effort of defining common StorageStatistics entry 
names. Thanks to [~cmccabe], [~ste...@apache.org], [~hitesh] and [~jnp] for 
the offline discussion.

  was:
The {{StorageStatistics}} class provides a pretty general interface, i.e. 
{{getLong(name)}} and {{getLongStatistics()}}. There are no shared or standard 
names for the storage statistics, so the names getLong(name) accepts are up to 
each storage statistics implementation. The problems:
# For the common statistics, downstream applications expect the same statistics 
name across different storage statistics and/or file system schemes. Chances 
are they have to use {{DFSOpsCountStorageStatistics#getLong(“getStatus”)}} and 
{{S3A.Statistics#getLong(“get_status”)}} for retrieving the getStatus operation 
stat.
# Moreover, probing per-operation stats is hard if there are no standard/shared 
common names.

It makes a lot of sense for different schemes to issue per-operation stats 
under the same name. Meanwhile, every FS will have its own internal things to 
count, which can't be centrally defined or managed. But there are some common 
statistics that would be easier to manage if they all had the same name.

Another motivation is that having a common set of names here will encourage 
uniform instrumentation of all filesystems; it will also make it easier to 
analyze the output of runs, were the stats to be published to a "performance 
log" similar to the audit log. See Steve's work for S3 (e.g. [HADOOP-13171]).

This jira is to track the effort of defining common StorageStatistics entry 
names.


> Define common statistics names across schemes
> -
>
> Key: HADOOP-13305
> URL: https://issues.apache.org/jira/browse/HADOOP-13305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
>
> The {{StorageStatistics}} class provides a pretty general interface, i.e. 
> {{getLong(name)}} and {{getLongStatistics()}}. There are no shared or standard 
> names for the storage statistics, so the names getLong(name) accepts are up to 
> each storage statistics implementation. The problems:
> # For the common statistics, downstream applications expect the same 
> statistics name across different storage statistics and/or file system 
> schemes. Chances are they have to use 
> {{DFSOpsCountStorageStatistics#getLong(“getStatus”)}} and 
> {{S3A.Statistics#getLong(“get_status”)}} for retrieving the getStatus 
> operation stat.
> # Moreover, probing per-operation stats is hard if there are no 
> standard/shared common names.
> It makes a lot of sense for different schemes to issue per-operation 
> stats under the same name. Meanwhile, every FS will have its own internal 
> things to count, which can't be centrally defined or managed. But there are 
> some common statistics that would be easier to manage if they all had the 
> same name.
> Another motivation is that having a common set of names here will encourage 
> uniform instrumentation of all filesystems; it will also make it easier to 
> analyze the output of runs, were the stats to be published to a "performance 
> log" similar to the audit log. See Steve's work for S3 (e.g. [HADOOP-13171]).
> This jira is to track the effort of defining common StorageStatistics entry 
> names. Thanks to [~cmccabe], [~ste...@apache.org], 

[jira] [Commented] (HADOOP-13295) Possible Vulnerability in DataNodes via SSH

2016-06-21 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343032#comment-15343032
 ] 

Joep Rottinghuis commented on HADOOP-13295:
---

bq. I don't think it is directly related to Hadoop at all: it doesn't use SSH 
at all.
Agreed, probably not Hadoop. SSH isn't used on the DN side. The only place I 
can imagine SSH being used is a possible fencing script used to fence an NN HA 
pair with the failover controller setup.

> Possible Vulnerability in DataNodes via SSH
> ---
>
> Key: HADOOP-13295
> URL: https://issues.apache.org/jira/browse/HADOOP-13295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mobin Ranjbar
>
> I suspected something weird in my Hadoop cluster. When I run datanodes, after 
> a while my servers (except the namenode) go down due to SSH max attempts. When 
> I checked 'systemctl status ssh', I found invalid username/password attempts 
> via SSH; the SSH daemon blocked all incoming connections and I got connection 
> refused.
> I have no problem when my datanodes are not running.






[jira] [Commented] (HADOOP-12803) RunJar should allow overriding the manifest Main-Class via a cli parameter.

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343008#comment-15343008
 ] 

Hadoop QA commented on HADOOP-12803:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-12803 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792781/HADOOP-12803.003.patch
 |
| JIRA Issue | HADOOP-12803 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9848/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RunJar should allow overriding the manifest Main-Class via a cli parameter.
> ---
>
> Key: HADOOP-12803
> URL: https://issues.apache.org/jira/browse/HADOOP-12803
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12803.001.patch, HADOOP-12803.002.patch, 
> HADOOP-12803.003.patch
>
>
> Currently there is no way to override the main class in the manifest, even 
> though a main class can be passed as a parameter.
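
For context, a hedged sketch of the resolution order being described; the 
method and variable names are approximate, not RunJar's exact code:

{code:java}
import java.io.IOException;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

// A Main-Class in the jar manifest wins, so a class name on the command
// line is treated as an ordinary program argument whenever the manifest
// sets one.
public class MainClassResolution {
  static String resolveMainClass(String jarPath, String[] args)
      throws IOException {
    try (JarFile jarFile = new JarFile(jarPath)) {
      Manifest manifest = jarFile.getManifest();
      if (manifest != null) {
        String fromManifest =
            manifest.getMainAttributes().getValue("Main-Class");
        if (fromManifest != null) {
          return fromManifest;   // any class name in args is ignored here
        }
      }
    }
    return args[1];              // fallback: the token after the jar name
  }
}
{code}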






[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342983#comment-15342983
 ] 

Hadoop QA commented on HADOOP-13263:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
35s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 31s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Inconsistent synchronization of 
org.apache.hadoop.security.Groups$GroupCacheLoader.executorService; locked 66% 
of time  Unsynchronized access at Groups.java:66% of time  Unsynchronized 
access at Groups.java:[line 342] |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812316/HADOOP-13263.005.patch
 |
| JIRA Issue | HADOOP-13263 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fe600a1facfd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d8107fc |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9847/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9847/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9847/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Commented] (HADOOP-13303) Detailed Information on KMS High Availability

2016-06-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342943#comment-15342943
 ] 

Xiao Chen commented on HADOOP-13303:


Hi,
Arun had some replies in HADOOP-11862, which should answer the above questions. 
Thanks.

> Detailed Information on KMS High Availability
> -
>
> Key: HADOOP-13303
> URL: https://issues.apache.org/jira/browse/HADOOP-13303
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha, kms
>Affects Versions: 2.7.2
>Reporter: qiushi fan
>
> I have some confusion about KMS HA recently. 
> 1. We can set up multiple KMS instances behind a load balancer. Among all 
> these KMS instances there is only one master KMS; the others are slaves. The 
> master KMS handles key create/store/rollover/delete operations by directly 
> contacting the JCE keystore file, while a slave KMS handles those operations 
> by delegating them to the master KMS.
> So although we set up multiple KMS instances, there is only one JCE keystore 
> file, and only the master KMS can access this file. Neither the JCE keystore 
> file nor the master KMS has a backup; if either of them dies, there is no way 
> to avoid losing data.
> Is all of the above true? Does KMS have no solution for handling the failure 
> of the master KMS and the JCE keystore file?
> 2. I heard of another way to achieve KMS HA: making use of 
> LoadBalancingKMSClientProvider, but I can't find detailed information about 
> LoadBalancingKMSClientProvider. Why can LoadBalancingKMSClientProvider 
> achieve KMS HA?
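
On question 2, a hedged sketch based on the Hadoop KMS documentation: listing 
several hosts in one provider URI is expected to make the client construct a 
LoadBalancingKMSClientProvider that spreads requests across the instances and 
fails over between them. The host names are placeholders and the config key 
shown is the 2.8+ one, so treat both as assumptions:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Multiple KMS hosts in a single provider URI; the client is expected to
// build a LoadBalancingKMSClientProvider over kms01/kms02.
public class KmsHaConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.key.provider.path",
        "kms://http@kms01.example.com;kms02.example.com:16000/kms");
    System.out.println(conf.get("hadoop.security.key.provider.path"));
  }
}
{code}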






[jira] [Comment Edited] (HADOOP-12803) RunJar should allow overriding the manifest Main-Class via a cli parameter.

2016-06-21 Thread Shlomi Vaknin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342911#comment-15342911
 ] 

Shlomi Vaknin edited comment on HADOOP-12803 at 6/21/16 10:30 PM:
--

Until this patch makes it to production, I would highly recommend adding a note 
about this behavior to the tool's help message.

I just spent a few hours trying to force an EMR cluster to use a main class 
other than the one specified in the manifest, not understanding why it treated 
it as an argument.

Thanks,
Shlomi


was (Author: shlomiv):
Until this patch makes it to production, I would highly recommend adding a note 
of this behavior to the tool's help message.

I spent a few hours trying to force an EMR cluster to use a main class other 
than the one specified in the manifest, not understanding why it treated it as 
an argument.

Thanks,
Shlomi

> RunJar should allow overriding the manifest Main-Class via a cli parameter.
> ---
>
> Key: HADOOP-12803
> URL: https://issues.apache.org/jira/browse/HADOOP-12803
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12803.001.patch, HADOOP-12803.002.patch, 
> HADOOP-12803.003.patch
>
>
> Currently there is no way to override the main class in the manifest, even 
> though a main class can be passed as a parameter.






[jira] [Commented] (HADOOP-12803) RunJar should allow overriding the manifest Main-Class via a cli parameter.

2016-06-21 Thread Shlomi Vaknin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342911#comment-15342911
 ] 

Shlomi Vaknin commented on HADOOP-12803:


Until this patch makes it to production, I would highly recommend adding a note 
of this behavior to the tool's help message.

I spent a few hours trying to force an EMR cluster to use a main class other 
than the one specified in the manifest, not understanding why it treated it as 
an argument.

Thanks,
Shlomi

> RunJar should allow overriding the manifest Main-Class via a cli parameter.
> ---
>
> Key: HADOOP-12803
> URL: https://issues.apache.org/jira/browse/HADOOP-12803
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12803.001.patch, HADOOP-12803.002.patch, 
> HADOOP-12803.003.patch
>
>
> Currently there is no way to override the main class in the manifest, even 
> though a main class can be passed as a parameter.






[jira] [Updated] (HADOOP-13263) Reload cached groups in background after expiry

2016-06-21 Thread Stephen O'Donnell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HADOOP-13263:
---
Attachment: HADOOP-13263.005.patch

Additional patch to address style issues

> Reload cached groups in background after expiry
> ---
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch, 
> HADOOP-13263.003.patch, HADOOP-13263.004.patch, HADOOP-13263.005.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the 
> Namenode group cache to run in the background, avoiding many slow group 
> lookups. Even with this change, I have seen quite a few clusters with issues 
> due to slow group lookups. The problem is most prevalent in HA clusters, 
> where a slow group lookup on the hdfs user can fail to return for over 45 
> seconds causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user 
> blocks until it returns. Any subsequent threads requesting that user block 
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry 
> blocks. While it is blocked, other threads will return the old value.
> I feel it is this blocking thread that still gives the Namenode issues on 
> slow group lookups. If the call from the FC is the one that blocks and 
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background, 
> where the first thread that hits an expired key schedules a background cache 
> reload, but still returns the old value. Then the cache is eventually 
> updated. This patch introduces this background reload feature. There are two 
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - default false to keep the 
> current behaviour. Set to true to enable a small thread pool and background 
> refresh for expired keys
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if 
> the above is set to true. Controls how many threads are in the background 
> refresh pool. Default is 1, which is likely to be enough.
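
A hedged sketch of the Guava pattern described above; lookupGroups() is a 
hypothetical stand-in for the real group-mapping call, and the pool size 
mirrors the proposed background.reload.threads default of 1:

{code:java}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class BackgroundGroupCache {
  // Small pool, per the proposed background.reload.threads default of 1.
  private final ListeningExecutorService pool =
      MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(1));

  final LoadingCache<String, List<String>> cache = CacheBuilder.newBuilder()
      .refreshAfterWrite(300, TimeUnit.SECONDS)
      .build(new CacheLoader<String, List<String>>() {
        @Override
        public List<String> load(String user) throws Exception {
          return lookupGroups(user); // first access still blocks
        }

        @Override
        public ListenableFuture<List<String>> reload(String user,
            List<String> oldValue) {
          // Expired entries are refreshed here; callers keep getting
          // oldValue until the background lookup completes.
          return pool.submit(() -> lookupGroups(user));
        }
      });

  private List<String> lookupGroups(String user) {
    // Hypothetical stand-in for the real (possibly slow) group lookup.
    return Collections.singletonList(user + "-group");
  }
}
{code}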






[jira] [Commented] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem

2016-06-21 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342775#comment-15342775
 ] 

Junping Du commented on HADOOP-10048:
-

I would prefer not to backport this to 2.6 and 2.7, for the same reason as 
[~jlowe] mentioned. This is a performance gain rather than a bug fix, and per 
our previous practice such changes should be kept out of maintenance releases.

> LocalDirAllocator should avoid holding locks while accessing the filesystem
> ---
>
> Key: HADOOP-10048
> URL: https://issues.apache.org/jira/browse/HADOOP-10048
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Fix For: 2.8.0
>
> Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, 
> HADOOP-10048.005.patch, HADOOP-10048.006.patch, HADOOP-10048.patch, 
> HADOOP-10048.trunk.patch
>
>
> As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a 
> bottleneck for multithreaded setups like the ShuffleHandler.  We should 
> consider moving to a lockless design or minimizing the critical sections to a 
> very small amount of time that does not involve I/O operations.
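
A hedged sketch of the "minimize the critical section" direction; the class and 
field names are illustrative, not LocalDirAllocator's actual internals:

{code:java}
import java.io.File;
import java.io.IOException;

// Copy shared state under a short lock, then do the disk checks lock-free,
// so slow I/O on one thread no longer blocks every other caller.
class DirPicker {
  private final Object lock = new Object();
  private String[] localDirs = { "/data/1/tmp", "/data/2/tmp" };

  String pickDir(long requiredBytes) throws IOException {
    String[] dirs;
    synchronized (lock) {        // critical section: snapshot state only
      dirs = localDirs.clone();
    }
    for (String dir : dirs) {    // filesystem access without the lock
      if (new File(dir).getUsableSpace() > requiredBytes) {
        return dir;
      }
    }
    throw new IOException("No local directory has enough space");
  }
}
{code}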






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342641#comment-15342641
 ] 

Xiao Chen commented on HADOOP-12893:


bq. Am I right in understanding that L&N files are not to be edited by hand but 
generated completely by the Python scripts?
Yep, generally we update the spreadsheet, then parse it to get a new L&N. After 
that we'll manually merge it into the existing L&N. This way all the 
information is tracked in the spreadsheet history, and individual efforts can 
be combined.

If you want to do a quick edit on the L&N directly, make sure to also put it in 
the spreadsheet so it doesn't get overwritten by the next run. Thanks.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342624#comment-15342624
 ] 

Arpit Agarwal commented on HADOOP-12893:


Thanks for the clarifications, Sean and Xiao.

bq. Arpit Agarwal, could you edit the spreadsheet, and generate + merge the new 
L&N? That way we don't lose any changes during the iterations. I can also do a 
pass this week after you're done. Thank you.
Hi [~xiaochen], I can do a scan by next week in case there are any more 
additions to NOTICE apart from jcip, and I will update the spreadsheet. Am I 
right in understanding that the L&N files are not to be edited by hand but 
generated completely by the Python scripts?

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342561#comment-15342561
 ] 

Xiao Chen commented on HADOOP-12893:


Yep, that's the rule we used. Thanks for helping to explain, Sean.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.

2016-06-21 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342479#comment-15342479
 ] 

Ravi Prakash commented on HADOOP-13287:
---

Sorry! I was trying to run the tests, but got distracted before I could figure 
out how. By the way, my handle is raviprak ;-)

> TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains 
> '+'.
> ---
>
> Key: HADOOP-13287
> URL: https://issues.apache.org/jira/browse/HADOOP-13287
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13287.001.patch, HADOOP-13287.002.patch
>
>
> HADOOP-3733 fixed accessing S3A with credentials on the command line for an 
> AWS secret key containing a '/'.  The patch added a new test suite: 
> {{TestS3ACredentialsInURL}}.  One of the tests fails if your AWS secret key 
> contains a '+'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342477#comment-15342477
 ] 

Hudson commented on HADOOP-13287:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9996 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9996/])
HADOOP-13287. TestS3ACredentials#testInstantiateFromURL fails if AWS (cnauroth: 
rev b2c596cdda7c129951074bc53b4b9ecfedbf080a)
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ACredentialsInURL.java


> TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains 
> '+'.
> ---
>
> Key: HADOOP-13287
> URL: https://issues.apache.org/jira/browse/HADOOP-13287
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13287.001.patch, HADOOP-13287.002.patch
>
>
> HADOOP-3733 fixed accessing S3A with credentials on the command line for an 
> AWS secret key containing a '/'.  The patch added a new test suite: 
> {{TestS3ACredentialsInURL}}.  One of the tests fails if your AWS secret key 
> contains a '+'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13203:
---
Hadoop Flags: Reviewed

+1 for patch 010, pending pre-commit.  I'll also kick off another full test run 
against S3.

bq. failure of TestS3AContractRootDir which went away when run standalone ... 
some race conditions/consistency condition to look at there

If this was a run with both {{-Pparallel-tests}} and {{-Dtest=TestS3A*}}, then 
it's probably the problem we discussed elsewhere that passing these arguments 
would erroneously include {{TestS3AContractRootDir}} in the parallel testing 
phase.

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, HADOOP-13203-branch-2-009.patch, 
> HADOOP-13203-branch-2-010.patch, stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342433#comment-15342433
 ] 

Sean Busbey commented on HADOOP-12893:
--

LICENSE already contains the ASLv2 license covering the aggregate work, 
including Okio. You need not list the individual components of an aggregate 
work made up of ASLv2 licensed works so long as you comply with their ASLv2 
notifications, which are contained in NOTICE. Okio does not appear to specify a 
NOTICE, so none is needed (provided that both their source repo and the 
specific jar we incorporate agree).

ref [licensing howto for guidelines that agree with the 
above|http://www.apache.org/dev/licensing-howto.html#alv2-dep]

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342430#comment-15342430
 ] 

Hadoop QA commented on HADOOP-13203:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 9s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
19s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
33s{color} | {color:red} root: The patch generated 20 new + 43 unchanged - 9 
fixed = 63 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 50 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  2s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_101 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:d1c475d |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342417#comment-15342417
 ] 

Arpit Agarwal commented on HADOOP-12893:


bq. This is ASFv2, and I don't see any NOTICE on https://github.com/square/okio 
?
Sorry, I meant to say LICENSE.txt. Here is what the guidelines say:
bq. In LICENSE, add a pointer to the dependency's license within the 
distribution and a short note summarizing its licensing:

So it sounds like we need a pointer in LICENSE.txt; for okio, that might be 
something like:
{code}
This product bundles Okio 1.4.0, which is available under an
Apache license.  For details, see 
https://github.com/square/okio/blob/master/LICENSE.txt.
{code}


> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.

2016-06-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13287:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Steve, thank you for the review.  I committed this to trunk, branch-2 and 
branch-2.8.  [~raviprakash], if you do notice any problems when you get around 
to trying the tests, please let me know.

> TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains 
> '+'.
> ---
>
> Key: HADOOP-13287
> URL: https://issues.apache.org/jira/browse/HADOOP-13287
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13287.001.patch, HADOOP-13287.002.patch
>
>
> HADOOP-3733 fixed accessing S3A with credentials on the command line for an 
> AWS secret key containing a '/'.  The patch added a new test suite: 
> {{TestS3ACredentialsInURL}}.  One of the tests fails if your AWS secret key 
> contains a '+'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342336#comment-15342336
 ] 

Steve Loughran commented on HADOOP-13203:
-

BTW, this patch enhances the range validation checks in {{FSInputStream}} so 
that on a block read where the length > buffer capacity, the details of the 
request are included in the exception. You'll appreciate this if you ever have 
problems here.
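
For readers following along, here is a minimal sketch of what such a check 
might look like, assuming {{java.io.EOFException}} and Guava's 
{{Preconditions}} are imported; the method name and messages are illustrative 
assumptions, not the actual patch:

{code}
// Sketch only: a positioned-read validation that surfaces the full
// request details when the requested length exceeds the buffer capacity.
protected void validatePositionedReadArgs(long position, byte[] buffer,
    int offset, int length) throws EOFException {
  Preconditions.checkArgument(length >= 0, "length is negative");
  if (position < 0) {
    throw new EOFException("position is negative: " + position);
  }
  if (buffer.length - offset < length) {
    throw new IndexOutOfBoundsException(
        "Requested length " + length + " at position " + position
        + " exceeds remaining buffer capacity " + (buffer.length - offset));
  }
}
{code}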

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, HADOOP-13203-branch-2-009.patch, 
> HADOOP-13203-branch-2-010.patch, stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342326#comment-15342326
 ] 

Steve Loughran commented on HADOOP-13203:
-

Parallel test run against S3 Ireland: completed in < 9 minutes; a failure of 
{{TestS3AContractRootDir}} which went away when run standalone ... some race 
condition/consistency issue to look at there.

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, HADOOP-13203-branch-2-009.patch, 
> HADOOP-13203-branch-2-010.patch, stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342317#comment-15342317
 ] 

Xiao Chen commented on HADOOP-12893:


Thanks for the comments, all, and Sean for the explanation. Looks like we need 
more changes for Category B then.

[~arpitagarwal], could you edit the 
[spreadsheet|https://docs.google.com/spreadsheets/d/1HL2b4PSdQMZDVJmum1GIKrteFr2oainApTLiJTPnfd4/edit?usp=sharing],
 and generate + merge the new L? That way we don't lose any changes during 
the iterations. I can also do a pass this week after you're done. Thank you.

bq. e.g. we bundle okio but there is no reference to it in NOTICE.txt. I still 
haven't done a full audit so there may be more. I can help fix this too
This is ASFv2, and I don't see any NOTICE on https://github.com/square/okio ?

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-06-21 Thread Oscar Morante (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342291#comment-15342291
 ] 

Oscar Morante commented on HADOOP-13075:


Hi Federico,
Would you share your patch for 2.7.2 in the meantime? I would love to try it.

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Attachment: HADOOP-13203-branch-2-010.patch

Patch 010: addresses Chris's review comments, plus one other IDE complaint 
about mixed-synchronization use of a field.

Test run in progress.

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, HADOOP-13203-branch-2-009.patch, 
> HADOOP-13203-branch-2-010.patch, stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Status: Open  (was: Patch Available)

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, HADOOP-13203-branch-2-009.patch, 
> stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Attachment: HADOOP-13203-branch-2-009.patch

Patch 009: docs, checkstyle and findbugs fixes.

I have not addressed the checkstyle complaints about the constants named 
{{_128K}} in the tests, as it is test-only code and the name is the value.

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, HADOOP-13203-branch-2-009.patch, 
> stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Status: Patch Available  (was: Open)

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, HADOOP-13203-branch-2-009.patch, 
> stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Status: Open  (was: Patch Available)

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-21 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342205#comment-15342205
 ] 

Daniel Templeton commented on HADOOP-13254:
---

Thanks, [~yufeigu].  Almost there.  We're now down to quibbles:

In {{TestBasicDiskValidator.checkDirs()}}, the try-finally needs to start 
immediately after you create the file (see the sketch at the end of this 
comment).

{code}
   * Returns {@link DiskValidator} instance corresponding to its name.
   * DiskValidator can be "basic" for BasicDiskValidator;
   * @param diskValidator canonical class name, e.g. "basic"
{code}

has a couple of issues.  In the second line, it would be clearer if 
"DiskValidator" were "The parameter" or "The diskValidator parameter".  Also in 
the second line, "BasicDiskValidator" should be a link.  In the @param tag, 
"e.g." should be "for example" per the javadoc guidelines.

{code}
 * A {@link DiskValidator} is the interface of a disk validator.
{code}

should not use a link for {{DiskValidator}} since the comment is in the 
{{DiskValidator}} class.

{code}
/**
 * The basic DiskValidator do the same thing as existing DiskChecker do.
 */
{code}

This comment would be clearer if it said that {{BasicDiskValidator}} is a 
wrapper around {{DiskChecker}}.
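
On the first point, the shape being asked for is roughly this, as a sketch 
with hypothetical names rather than the actual test code:

{code}
// Sketch only: the try-finally opens immediately after the file is
// created, so the temp file is deleted even if an assertion fails.
File file = File.createTempFile("diskValidator", ".tmp");
try {
  checkDirs(file, true);  // hypothetical helper under discussion
} finally {
  file.delete();
}
{code}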

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch, HADOOP-13254.005.patch, 
> HADOOP-13254.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342185#comment-15342185
 ] 

Chris Nauroth commented on HADOOP-13203:


Steve and Rajesh, this looks great to me.  We'll get the best of both worlds.  
Thank you very much.

All of the random vs. sequential logic looks correct to me.  All tests passed 
for me against a bucket in US-west-2, barring the known failure related to a 
secret with a '\+' in it, which is tracked elsewhere.  I only have a few minor 
nitpicks on patch 008.

1. Please add audience and stability annotations to {{S3AInputPolicy}} (a 
sketch follows at the end of this list).

{code}
   * Optimised purely for random seek+reed/positionedRead operations;
{code}

2. s/reed/read

{code}
// Better to set it to the value requested by higher level layer.
// In case this is set to contentLength, expect lots of connection
// closes when backwards-seeks are executed.
// Note that abort would force the internal connection to be
// closed and makes it un-usable.
{code}

3. I think that comment can be removed.  I don't think it's relevant anymore.

{code}
LOG.info("Stream Statistics\n{}", streamStatistics);
{code}

4. I suggest changing to this for platform-agnostic line endings:

{code}
LOG.info(String.format("Stream Statistics%n{}"), streamStatistics);
{code}
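
Returning to item 1, the conventional form of those annotations would be 
something like the following; the Private/Evolving choice and the constant 
names are assumptions for illustration, not the committed patch:

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Sketch of the requested annotations on the input policy enum.
@InterfaceAudience.Private
@InterfaceStability.Evolving
public enum S3AInputPolicy {
  Normal,
  Sequential,
  Random
}
{code}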


> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342191#comment-15342191
 ] 

Hadoop QA commented on HADOOP-13203:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
40s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
21s{color} | {color:red} root: The patch generated 33 new + 43 unchanged - 9 
fixed = 76 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 50 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
26s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.S3AInputStream.contentRangeFinish; locked 75% of time  

[jira] [Commented] (HADOOP-13296) Cleanup javadoc for Path

2016-06-21 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342144#comment-15342144
 ] 

Daniel Templeton commented on HADOOP-13296:
---

Thanks for the commit, [~ajisakaa]!

> Cleanup javadoc for Path
> 
>
> Key: HADOOP-13296
> URL: https://issues.apache.org/jira/browse/HADOOP-13296
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13296.001.patch
>
>
> The javadoc in the Path class needs lots of help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342115#comment-15342115
 ] 

Arpit Agarwal commented on HADOOP-12893:


Thanks for the explanation, [~xiaochen]. For jcip, their licensing terms appear 
to require that we include the copyright and license notice. I can make this 
update if we agree it is necessary.
{code}
Any republication or derived work distributed in source code form
must include this copyright and license notice.
{code}

Also I think we need to include references to bundled permissively-licensed 
works in LICENSE.txt as Sean described above.
bq. That's correct, you should not include anything from BSD or MIT licensed 
deps in NOTICE. That includes bundled JARs; in both source and bundled binary 
cases you should have a reference in the LICENSE file for the included work. 
(ref licensing howto on permissive licenses)
http://www.apache.org/dev/licensing-howto.html#permissive-deps

e.g. we bundle okio but there is no reference to it in NOTICE.txt. I still 
haven't done a full audit so there may be more. I can help fix this too.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Status: Patch Available  (was: Open)

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13203:

Attachment: HADOOP-13203-branch-2-008.patch

Patch 008; tested against S3 Ireland.

This revision has the test to demonstrate what I suspected: reads spanning 
block boundaries were going to have problems. It also has the fix, which 
consists of always calling {{seekInStream(pos, len)}} before a read, even if 
{{targetPos==currentPos}}, and in that situation closing the current stream 
if currentPos is at the end of the current request range (i.e. there's no 
seek, but no data either). The test does block-spanning reads on a file built 
up with the byte at each position being {{(position % 64)}}; this is used to 
verify that the bytes returned really are the bytes in the file at the 
specific read positions.

BTW, note that some of the -Len fields in the input stream now refer to range 
start and finish; "Len" isn't appropriate now that the range of the HTTP 
request may be less than the length of the actual blob. It was getting 
confusing.
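
As a concrete illustration of that dataset trick, a sketch rather than the 
test source itself (assuming a JUnit static import of {{assertEquals}}):

{code}
// Sketch: byte i of the test file is (i % 64), so any byte read can be
// verified against its absolute position in the file.
byte[] dataset = new byte[128 * 1024];
for (int i = 0; i < dataset.length; i++) {
  dataset[i] = (byte) (i % 64);
}
// later, after reading byte b from absolute position pos:
assertEquals("byte at position " + pos, (byte) (pos % 64), b);
{code}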

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, 
> HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, 
> HADOOP-13203-branch-2-006.patch, HADOOP-13203-branch-2-007.patch, 
> HADOOP-13203-branch-2-008.patch, stream_stats.tar.gz
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once aws tests passes in my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341782#comment-15341782
 ] 

Sean Busbey commented on HADOOP-12893:
--

{quote}
Per ASF requirement, NOTICE file is intended for explicit NOTICE by the 
dependencies, not listing copyright info. (I had similar misconception too when 
working on the spreadsheet). So for jcip, we're correct to have it's NOTICE to 
be empty.
{quote}

That is specifically talking about the copyright notifications from Category A 
licenses (like MIT and BSD-3). CC-BY is a Category B license and should get 
called out [ref cat b|http://apache.org/legal/resolved#category-b]. The 
call-out can be in either README or NOTICE, but so far it looks like we're 
taking the NOTICE route. (It also needs to be in the LICENSE file for any 
artifacts that bundle it, like all non-ASLv2 bundled third-party works.)

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13296) Cleanup javadoc for Path

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341611#comment-15341611
 ] 

Hudson commented on HADOOP-13296:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9994 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9994/])
HADOOP-13296. Cleanup javadoc for Path. Contributed by Daniel Templeton. 
(aajisaka: rev e15cd43369eb6d478844f25897e4a86065c62168)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java


> Cleanup javadoc for Path
> 
>
> Key: HADOOP-13296
> URL: https://issues.apache.org/jira/browse/HADOOP-13296
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13296.001.patch
>
>
> The javadoc in the Path class needs lots of help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341599#comment-15341599
 ] 

Hadoop QA commented on HADOOP-12588:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812131/HADOOP-12588.addendum.03.patch
 |
| JIRA Issue | HADOOP-12588 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 83bb80077649 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f2ac132 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9844/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9844/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12588.001.patch, HADOOP-12588.addendum.02.patch, 
> HADOOP-12588.addendum.03.patch, HADOOP-12588.addendum.patch
>
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, 

[jira] [Updated] (HADOOP-13296) Cleanup javadoc for Path

2016-06-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13296:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to branch-2.8 and above. Thanks [~templedf] for the contribution!

> Cleanup javadoc for Path
> 
>
> Key: HADOOP-13296
> URL: https://issues.apache.org/jira/browse/HADOOP-13296
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13296.001.patch
>
>
> The javadoc in the Path class needs lots of help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13296) Cleanup javadoc for Path

2016-06-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13296:
---
Hadoop Flags: Reviewed
 Component/s: documentation

LGTM, +1.

> Cleanup javadoc for Path
> 
>
> Key: HADOOP-13296
> URL: https://issues.apache.org/jira/browse/HADOOP-13296
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HADOOP-13296.001.patch
>
>
> The javadoc in the Path class needs lots of help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13296) Cleanup javadoc for Path

2016-06-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341560#comment-15341560
 ] 

Akira AJISAKA commented on HADOOP-13296:


Copied from https://builds.apache.org/job/PreCommit-HADOOP-Build/9835/console

-1 overall

|| Vote ||  Subsystem ||  Runtime   || Comment
|   0  |reexec  |   0m 18s   | Docker mode activated. 
|  +1  |   @author  |   0m  0s   | The patch does not contain any @author 
|  ||| tags.
|  -1  |test4tests  |   0m  0s   | The patch doesn't appear to include any 
|  ||| new or modified tests. Please justify why
|  ||| no new tests are needed for this patch.
|  ||| Also please list what manual steps were
|  ||| performed to verify this patch.
|  +1  |mvninstall  |   8m  3s   | trunk passed 
|  +1  |   compile  |   7m 31s   | trunk passed 
|  +1  |checkstyle  |   0m 24s   | trunk passed 
|  +1  |   mvnsite  |   0m 56s   | trunk passed 
|  +1  |mvneclipse  |   0m 13s   | trunk passed 
|  +1  |  findbugs  |   1m 23s   | trunk passed 
|  +1  |   javadoc  |   0m 46s   | trunk passed 
|  +1  |mvninstall  |   0m 40s   | the patch passed 
|  +1  |   compile  |   6m 30s   | the patch passed 
|  +1  | javac  |   6m 30s   | the patch passed 
|  +1  |checkstyle  |   0m 24s   | hadoop-common-project/hadoop-common: The 
|  ||| patch generated 0 new + 22 unchanged - 3
|  ||| fixed = 22 total (was 25)
|  +1  |   mvnsite  |   0m 53s   | the patch passed 
|  +1  |mvneclipse  |   0m 12s   | the patch passed 
|  +1  |whitespace  |   0m  0s   | The patch has no whitespace issues. 
|  +1  |  findbugs  |   1m 37s   | the patch passed 
|  +1  |   javadoc  |   0m 46s   | the patch passed 
|  +1  |  unit  |   7m 42s   | hadoop-common in the patch passed. 
|  +1  |asflicense  |   0m 22s   | The patch does not generate ASF License 
|  ||| warnings.
|  ||  39m 27s   | 

|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811863/HADOOP-13296.001.patch
 |
| JIRA Issue | HADOOP-13296 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 32255c9870ae 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5107a96 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9835/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9835/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


> Cleanup javadoc for Path
> 
>
> Key: HADOOP-13296
> URL: https://issues.apache.org/jira/browse/HADOOP-13296
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HADOOP-13296.001.patch
>
>
> The javadoc in the Path class needs lots of help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-06-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341555#comment-15341555
 ] 

Akira AJISAKA commented on HADOOP-12064:


LGTM, +1. I ran {{mvn test -Dtest=\*Web\*}} and all the tests passed.
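
For reference, the failure mode is triggered by any class whose bytecode 
contains a lambda; an illustrative sketch only, not from the patch:

{code}
import com.google.inject.AbstractModule;
import com.google.inject.Guice;

// Illustrative only: under Guice 3.0 the bundled ASM/cglib cannot read
// Java 8 bytecode, so modules/classes containing lambdas (invokedynamic)
// can fail at injection time; Guice 4.0 bundles a Java-8-aware ASM.
public class LambdaModule extends AbstractModule {
  @Override
  protected void configure() {
    // the lambda below is the kind of construct older ASM versions choke on
    bind(Runnable.class).toInstance(() -> System.out.println("run"));
  }

  public static void main(String[] args) {
    Guice.createInjector(new LambdaModule()).getInstance(Runnable.class).run();
  }
}
{code}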

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2016-06-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12588:
---
Status: Patch Available  (was: Open)

> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12588.001.patch, HADOOP-12588.addendum.02.patch, 
> HADOOP-12588.addendum.03.patch, HADOOP-12588.addendum.patch
>
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2016-06-21 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12588:
--
Attachment: HADOOP-12588.addendum.03.patch

> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12588.001.patch, HADOOP-12588.addendum.02.patch, 
> HADOOP-12588.addendum.03.patch, HADOOP-12588.addendum.patch
>
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13302) Remove unused variable in TestRMWebServicesForCSWithPartitions#setupQueueConfiguration

2016-06-21 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341430#comment-15341430
 ] 

Brahma Reddy Battula commented on HADOOP-13302:
---

[~ajisakaa] I think this should be raised in the YARN project, shouldn't it?

> Remove unused variable in 
> TestRMWebServicesForCSWithPartitions#setupQueueConfiguration
> --
>
> Key: HADOOP-13302
> URL: https://issues.apache.org/jira/browse/HADOOP-13302
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
>
> {code}
>   private static void setupQueueConfiguration(
>   CapacitySchedulerConfiguration config, ResourceManager resourceManager) 
> {
> {code}
> {{resourceManager}} is not used, so it can be removed.
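
The fix should simply drop the unused parameter; a sketch of the expected 
signature after the change:

{code}
  private static void setupQueueConfiguration(
      CapacitySchedulerConfiguration config) {
{code}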



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13299) JMXJsonServlet is vulnerable to TRACE

2016-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341429#comment-15341429
 ] 

Steve Loughran commented on HADOOP-13299:
-

Is there a specific CVE here?

> JMXJsonServlet is vulnerable to TRACE 
> --
>
> Key: HADOOP-13299
> URL: https://issues.apache.org/jira/browse/HADOOP-13299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Attachments: hadoop13299.001.patch
>
>
> A Nessus scan shows that JMXJsonServlet is vulnerable to TRACE/TRACK requests. 
> We could disable these methods to avoid the vulnerability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13295) Possible Vulnerability in DataNodes via SSH

2016-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13295:

Component/s: security

> Possible Vulnerability in DataNodes via SSH
> ---
>
> Key: HADOOP-13295
> URL: https://issues.apache.org/jira/browse/HADOOP-13295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mobin Ranjbar
>
> I suspected something weird in my Hadoop cluster. When I run datanodes, after 
> a while my servers (except the namenode) go down after hitting the SSH max 
> attempts. When I checked 'systemctl status ssh', I found that there were some 
> invalid username/password attempts via SSH; the SSH daemon had blocked all 
> incoming connections and I got connection refused.
> I have no problem when my datanodes are not running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13295) Possible Vulnerability in DataNodes via SSH

2016-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341425#comment-15341425
 ] 

Steve Loughran commented on HADOOP-13295:
-

I don't think this is directly related to Hadoop at all: Hadoop itself doesn't 
use SSH.

How are you deploying it?

> Possible Vulnerability in DataNodes via SSH
> ---
>
> Key: HADOOP-13295
> URL: https://issues.apache.org/jira/browse/HADOOP-13295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mobin Ranjbar
>
> I suspected something weird in my Hadoop cluster. When I run datanodes, after 
> a while my servers (except the namenode) go down after hitting the SSH max 
> attempts. When I checked 'systemctl status ssh', I found that there were some 
> invalid username/password attempts via SSH; the SSH daemon had blocked all 
> incoming connections and I got connection refused.
> I have no problem when my datanodes are not running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2016-06-21 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341422#comment-15341422
 ] 

Masatake Iwasaki commented on HADOOP-12588:
---

If metrics are published by the timer in MetricsSystemImpl after the sinks are 
registered and before the metrics system is stopped, we get more metrics than 
expected, though I could not reproduce this without adding an artificial delay 
in my environment.

{code}
// register the sinks
ms.register("gsink30", "gsink30 desc", gsink30);
ms.register("gsink31", "gsink31 desc", gsink31);
ms.publishMetricsNow(); // publish the metrics

ms.stop();
{code}

We can avoid this situation by setting a long publishing interval. Since we 
publish metrics manually by calling {{publishMetricsNow}}, we don't need 
periodic publishing. The configuration key used to set the interval must be 
{{*.period}} rather than {{default.period}}.

I will upload a patch addressing this.
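
A minimal sketch of the idea (assuming the {{ConfigBuilder}} test helper that 
TestGangliaMetrics already uses; the actual patch may differ):

{code}
// Sketch only: make the periodic timer effectively inert so that the only
// publication is the explicit publishMetricsNow() call below.
new ConfigBuilder()
    .add("*.period", 120)  // unlike default.period, "*." covers every prefix
    .save(TestMetricsConfig.getTestFilename("hadoop-metrics2-test"));

MetricsSystemImpl ms = new MetricsSystemImpl("Test");
ms.start();
// ... register sources and the two Ganglia sinks ...
ms.publishMetricsNow();  // deterministic, single publication
ms.stop();
{code}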


> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12588.001.patch, HADOOP-12588.addendum.02.patch, 
> HADOOP-12588.addendum.patch
>
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13304) distributed database for store , mapreduce for compute

2016-06-21 Thread jiang hehui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiang hehui updated HADOOP-13304:
-
Description: 
In Hadoop, HDFS is responsible for storage and MapReduce is responsible for 
compute.
My idea is to store data in a distributed database and to run computation over 
it in a MapReduce-like way.

h2. How would it work?

* insert: 
using two-phase commit, according to the split policy, just execute the insert 
on the relevant nodes

* delete: 
using two-phase commit, according to the split policy, just execute the delete 
on the relevant nodes

* update:
using two-phase commit, according to the split policy: if the record's node 
does not change, just execute the update on the nodes; if the record's node 
changes, first delete the old value on the source node, then insert the new 
value on the destination node.
* select:
** a simple select (data on just one node, or no data fusion across multiple 
nodes needed) works just like on a standalone database server;
** a complex select (distinct, group by, order by, sub query, join across 
multiple nodes) is what we call a job
{panel}
{color:red}A job is parsed into stages; stages have lineage, and all the 
stages in a job make up a DAG (Directed Acyclic Graph). Every stage contains a 
mapsql, a shuffle, and a reducesql.
When a SQL request is received, the execution plan containing the DAG 
(including the mapsql, shuffle, and reducesql of each stage) is generated 
according to the metadata; the plan is then executed and the result is 
returned to the client.

It is the same as in Spark: an RDD is a table, and a job is a job.
It is the same as MapReduce in Hadoop: mapsql is map, shuffle is shuffle, and 
reducesql is reduce.
{color}
{panel}


h2. Architecture:
!http://images2015.cnblogs.com/blog/439702/201606/439702-2016062112414-32823985.png!

* client : the user interface 
* master : a master, like the nameserver in HDFS
* meta database : contains the base information about the system, nodes, 
tables, and so on 
* store node : a database node where data is stored; insert, delete, and 
update are always executed on store nodes
* calculate node : where selects are executed; the source data is on the store 
nodes, and the remaining tasks run on the calculate nodes. A calculate node 
may be the same as a store node in practice.

h2. Features & Advantages

{panel}
{color:green}
Data is stored across nodes, split by field values according to a policy; 
this feature is very useful for full-text indexing (like Solr or 
Elasticsearch).
{color}
{panel}

{panel}
{color:green}
In contrast with HDFS, the data location can be worked out in your head 
rather than obtained by running a command.
{color}
{panel}

{panel}
{color:green}
When inserting, updating, or deleting multiple records, XA transactions can 
be used to guarantee consistency.
{color}
{panel}

{panel}
{color:green}
We know that random read/write is not supported in HDFS, so update and delete 
are difficult, and inserts are normally batched.
{color}
{panel}

{panel}
{color:green}
So storing data in a database has a big advantage:
SQL across multiple nodes is supported, including group by, order by, having, 
and especially sub queries and joins;
because data is stored in a database, indexes can speed up our queries and 
data can be cached in memory automatically.
{color}
{panel}

{panel}
{color:green}
We can fetch a handful of records out of billions very quickly using indexes 
and the cache, where using Hadoop is very slow.
When using Hadoop, moving data from the online database to the offline data 
warehouse is hard for updates and deletes, and is delayed by data merges.
{color}
{panel}

{panel}
{color:green}
If we use a database to store the data, data sync is very simple and happens 
in real time: just use replication, and all of these issues are resolved.
As you can see, both online and offline workloads can use this system; refer 
to the application architecture (online & offline).
{color}
{panel}


h2. Example (group by):

{panel}

sql :
select age,count(u_id) v from tab_user_info t where u_reg_dt>=? and u_reg_dt<=? 
group by age


The execution plan may be:
stage0:
{quote}
mapsql:
select age,count(u_id) v from tab_user_info t where u_reg_dt>=? 
and u_reg_dt<=? group by age

shuffle: 
shuffle by age with a range policy; 
for example, if the number of reduce nodes is N, then every node 
holds (max(age)-min(age))/N of the key range; 
reduce nodes have ids, and a node with a small id stores the data 
with a small range of age, so we can group by within each node

reducesql: 
select age,sum(v) from t group by age
{quote}
note:
we must execute the group by on the reduce nodes because data coming from 
different mapsqls needs to be aggregated (see the sketch after this panel)
{panel}
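
A hypothetical illustration of that range policy (the names and code here are 
mine, not part of the proposal):

{code}
// Split the key space [min, max] into N contiguous buckets, one per reduce
// node; nodes with small ids receive small key ranges.
static int rangePartition(int key, int min, int max, int n) {
  int width = (max - min + 1 + n - 1) / n;  // ceiling division
  return Math.min((key - min) / width, n - 1);
}
{code}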

h2. Example (join):

{panel}
sql:
select t1.u_id,t1.u_name,t2.login_product 
from tab_user_info t1 join tab_login_info t2 
on (t1.u_id=t2.u_id and t1.u_reg_dt>=? and t1.u_reg_dt<=?)

The execution plan may be:

stage0:
{quote}
mapsql:
select u_id,u_name from tab_user_info t where u_reg_dt>=? and 
u_reg_dt<=? ;
select u_id, login_product from tab_login_info t ;


  

[jira] [Updated] (HADOOP-13304) distributed database for store , mapreduce for compute

2016-06-21 Thread jiang hehui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiang hehui updated HADOOP-13304:
-
Description: 
In Hadoop, HDFS is responsible for storage and MapReduce is responsible for 
compute.
My idea is to store data in a distributed database and to run computation over 
it in a MapReduce-like way.

h2. How would it work?

* insert: 
using two-phase commit, according to the split policy, just execute the insert 
on the relevant nodes

* delete: 
using two-phase commit, according to the split policy, just execute the delete 
on the relevant nodes

* update:
using two-phase commit, according to the split policy: if the record's node 
does not change, just execute the update on the nodes; if the record's node 
changes, first delete the old value on the source node, then insert the new 
value on the destination node.
* select:
** a simple select (data on just one node, or no data fusion across multiple 
nodes needed) works just like on a standalone database server;
** a complex select (distinct, group by, order by, sub query, join across 
multiple nodes) is what we call a job
{panel}
{color:red}A job is parsed into stages; stages have lineage, and all the 
stages in a job make up a DAG (Directed Acyclic Graph). Every stage contains a 
mapsql, a shuffle, and a reducesql.
When a SQL request is received, the execution plan containing the DAG 
(including the mapsql, shuffle, and reducesql of each stage) is generated 
according to the metadata; the plan is then executed and the result is 
returned to the client.

It is the same as in Spark: an RDD is a table, and a job is a job.
It is the same as MapReduce in Hadoop: mapsql is map, shuffle is shuffle, and 
reducesql is reduce.
{color}
{panel}


h2. Architecture:
!http://images2015.cnblogs.com/blog/439702/201606/439702-2016062112414-32823985.png!

* client : the user interface 
* master : a master, like the nameserver in HDFS
* meta database : contains the base information about the system, nodes, 
tables, and so on 
* store node : a database node where data is stored; insert, delete, and 
update are always executed on store nodes
* calculate node : where selects are executed; the source data is on the store 
nodes, and the remaining tasks run on the calculate nodes. A calculate node 
may be the same as a store node in practice.


h2. example:

{panel}
select age,count(u_id) v from tab_user_info t where u_reg_dt>=? and u_reg_dt<=? 
group by age


The execution plan may be:
stage0:
mapsql:
select age,count(u_id) v from tab_user_info t where u_reg_dt>=? 
and u_reg_dt<=? group by age

shuffle: 
shuffle by age with a range policy; 
for example, if the number of reduce nodes is N, then every node 
holds (max(age)-min(age))/N of the key range; 
reduce nodes have ids, and a node with a small id stores the data 
with a small range of age, so we can group by within each node

reducesql: 
select age,sum(v) from t group by age

note:
we must execute the group by on the reduce nodes because data coming from 
different mapsqls needs to be aggregated



{panel}


{panel}

select t1.u_id,t1.u_name,t2.login_product 
from tab_user_info t1 join tab_login_info t2 
on (t1.u_id=t2.u_id and t1.u_reg_dt>=? and t1.u_reg_dt<=?)

The execution plan may be:

stage0:
mapsql:
select u_id,u_name from tab_user_info t where u_reg_dt>=? and 
u_reg_dt<=? ;
select u_id, login_product from tab_login_info t ;


shuffle: 
shuffle by u_id with a range policy; 
for example, if the number of reduce nodes is N, then every node 
holds (max(u_id)-min(u_id))/N of the key range; 
reduce nodes have ids, and a node with a small id stores the data 
with a small range of u_id, so we can join within each node

reducesql: 
select t1.u_id,t1.u_name,t2.login_product 
from tab_user_info t1 join tab_login_info t2 
on (t1.u_id=t2.u_id)


note:
because of the join, each table's records need to be tagged so that the 
reduce can determine which table each record belongs to


{panel}

  was:
In Hadoop, HDFS is responsible for storage and MapReduce is responsible for 
compute.
My idea is to store data in a distributed database and to run computation over 
it in a MapReduce-like way.

!http://images2015.cnblogs.com/blog/439702/201606/439702-2016062112414-32823985.png!

* insert: 
using two-phase commit, according to the split policy, just execute the insert 
on the relevant nodes

* delete: 
using two-phase commit, according to the split policy, just execute the delete 
on the relevant nodes

* update:
using two-phase commit, according to the split policy: if the record's node 
does not change, just execute the update on the nodes; if the record's node 
changes, first delete the old value on the source node, then insert the new 
value on the destination node.
* select:
** a simple select (data on just one node, or no data fusion across multiple 
nodes needed) works just like on a standalone database server;
** a complex select (distinct, group by, order by, sub query, join across 
multiple nodes) is what we call a job 

[jira] [Created] (HADOOP-13304) distributed database for store , mapreduce for compute

2016-06-21 Thread jiang hehui (JIRA)
jiang hehui created HADOOP-13304:


 Summary: distributed database for store , mapreduce for compute
 Key: HADOOP-13304
 URL: https://issues.apache.org/jira/browse/HADOOP-13304
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.6.4
Reporter: jiang hehui


In Hadoop, HDFS is responsible for storage and MapReduce is responsible for 
compute.
My idea is to store data in a distributed database and to run computation over 
it in a MapReduce-like way.

!http://images2015.cnblogs.com/blog/439702/201606/439702-2016062112414-32823985.png!

* insert: 
using two-phase commit, according to the split policy, just execute the insert 
on the relevant nodes

* delete: 
using two-phase commit, according to the split policy, just execute the delete 
on the relevant nodes

* update:
using two-phase commit, according to the split policy: if the record's node 
does not change, just execute the update on the nodes; if the record's node 
changes, first delete the old value on the source node, then insert the new 
value on the destination node.
* select:
** a simple select (data on just one node, or no data fusion across multiple 
nodes needed) works just like on a standalone database server;
** a complex select (distinct, group by, order by, sub query, join across 
multiple nodes) is what we call a job
{panel}
{color:red}A job is parsed into stages; stages have lineage, and all the 
stages in a job make up a DAG (Directed Acyclic Graph). Every stage contains a 
mapsql, a shuffle, and a reducesql.
When a SQL request is received, the execution plan containing the DAG 
(including the mapsql, shuffle, and reducesql of each stage) is generated 
according to the metadata; the plan is then executed and the result is 
returned to the client.

It is the same as in Spark: an RDD is a table, and a job is a job.
It is the same as MapReduce in Hadoop: mapsql is map, shuffle is shuffle, and 
reducesql is reduce.
{color}
{panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-21 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341328#comment-15341328
 ] 

Tsuyoshi Ozawa commented on HADOOP-12893:
-

Thanks all for taking a look at this issue.

[~xiaochen] I'm very sorry for the delay. 

{quote}
I should have mentioned that the current jdiff scope provided doesn't bundle it 
into the jars. Making the scope of jdiff to "compile" does make it show up 
though, so I didn't include that change in the latest patch. Is it okay to have 
it in our deps, but not bundled? (That is, the as-is option in my above comment)
{quote}

You're right. Thank you for fixing it.


> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12893-addendum-branch-2.7.01.patch, 
> HADOOP-12893.002.patch, HADOOP-12893.003.patch, HADOOP-12893.004.patch, 
> HADOOP-12893.005.patch, HADOOP-12893.006.patch, HADOOP-12893.007.patch, 
> HADOOP-12893.008.patch, HADOOP-12893.009.patch, HADOOP-12893.01.patch, 
> HADOOP-12893.011.patch, HADOOP-12893.012.patch, HADOOP-12893.10.patch, 
> HADOOP-12893.branch-2.01.patch, HADOOP-12893.branch-2.6.01.patch, 
> HADOOP-12893.branch-2.7.01.patch, HADOOP-12893.branch-2.7.02.patch, 
> HADOOP-12893.branch-2.7.3.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13303) Detail Informations of KMS High Avalibale

2016-06-21 Thread qiushi fan (JIRA)
qiushi fan created HADOOP-13303:
---

 Summary: Detail Informations of KMS High Avalibale
 Key: HADOOP-13303
 URL: https://issues.apache.org/jira/browse/HADOOP-13303
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha, kms
Affects Versions: 2.7.2
Reporter: qiushi fan


I have had some confusion about KMS HA recently. 

1. We can set up multiple KMS instances behind a load balancer. Among all 
these KMS instances there is only one master KMS; the others are slave KMS 
instances. The master KMS handles key create/store/rollover/delete operations 
by directly accessing the JCE keystore file. A slave KMS handles key 
create/store/rollover/delete operations by delegating them to the master KMS.

So although we set up multiple KMS instances, there is only one JCE keystore 
file, and only the master KMS can access this file. Neither the JCE keystore 
file nor the master KMS has a backup. If either of them dies, there is no way 
to avoid losing data.

Is all of the above true? Does KMS have no solution to handle the failure of 
the master KMS and the JCE keystore file?

2. I heard of another way to achieve KMS HA: using 
LoadBalancingKMSClientProvider. But I can't find detailed information about 
LoadBalancingKMSClientProvider. So how does LoadBalancingKMSClientProvider 
achieve KMS HA?
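
For reference, my current understanding (a sketch; please correct me if this 
is wrong) is that listing several KMS hosts, semicolon-separated, in the key 
provider URI is what makes the client build a LoadBalancingKMSClientProvider 
that spreads requests across the instances and fails over between them:

{code}
<!-- hdfs-site.xml sketch (2.7-era property name): three KMS instances
     behind one logical provider URI -->
<property>
  <name>dfs.encryption.key.provider.uri</name>
  <value>kms://http@kms01;kms02;kms03:16000/kms</value>
</property>
{code}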



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341298#comment-15341298
 ] 

Hadoop QA commented on HADOOP-12064:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
1s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
6s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812110/HADOOP-12064.002.patch
 |
| JIRA Issue | HADOOP-12064 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 2b73f254da60 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 46f1602 |
| Default Java | 1.8.0_91 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9843/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9843/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-06-21 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12064:

Attachment: HADOOP-12064.002.patch

Rebasing v2 patch on trunk.

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341273#comment-15341273
 ] 

Hadoop QA commented on HADOOP-10101:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-10101 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12699092/HADOOP-10101-011.patch
 |
| JIRA Issue | HADOOP-10101 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9842/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update guava dependency to the latest version
> -
>
> Key: HADOOP-10101
> URL: https://issues.apache.org/jira/browse/HADOOP-10101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Rakesh R
>Assignee: Vinayakumar B
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
> HADOOP-10101-005.patch, HADOOP-10101-006.patch, HADOOP-10101-007.patch, 
> HADOOP-10101-008.patch, HADOOP-10101-009.patch, HADOOP-10101-009.patch, 
> HADOOP-10101-010.patch, HADOOP-10101-010.patch, HADOOP-10101-011.patch, 
> HADOOP-10101.patch, HADOOP-10101.patch
>
>
> The existing guava version is 11.0.2, which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-06-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341251#comment-15341251
 ] 

Akira AJISAKA commented on HADOOP-12064:


bq. Hence, I prefer 4.0.0 to 4.1.0 at this point.
Agreed.

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-06-21 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341231#comment-15341231
 ] 

Tsuyoshi Ozawa commented on HADOOP-12064:
-

I surveyed whether Guice 4.1.0 is acceptable for Hadoop: the answer is that we 
cannot upgrade to Guice 4.1.0 because it depends on Guava 19.0. On the other 
hand, Guice 4.0.0 uses Guava 16.0.1, which is an acceptable version for Hadoop 
for now. 

Hence, I prefer 4.0.0 to 4.1.0 at this point. 
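
A sketch of the corresponding change (assuming the version is managed in 
hadoop-project/pom.xml, which the precommit run above touches; the exact 
location may differ from the actual patch):

{code}
<dependency>
  <groupId>com.google.inject</groupId>
  <artifactId>guice</artifactId>
  <version>4.0</version>
</dependency>
{code}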

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch
>
>
> guice 3.0 doesn't work with lambda expressions. 
> https://github.com/google/guice/issues/757
> We should upgrade to 4.0, which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org