[jira] [Updated] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12700:
---
Target Version/s: 2.8.0
Priority: Minor  (was: Trivial)
Hadoop Flags: Reviewed

> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ, 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by the precommit check because test source files 
> are not checked by checkstyle by default: the Maven checkstyle plugin 
> parameter *includeTestSourceDirectory* is *false* by default.
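A minimal sketch of the fix this observation suggests: enabling checkstyle on test sources via the Maven checkstyle plugin. The parameter name is the real one cited above; the surrounding pom.xml structure is abbreviated.

{code}
<!-- pom.xml (abbreviated): run checkstyle over src/test/java as well -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <configuration>
    <!-- defaults to false, which is why the unused import slipped through -->
    <includeTestSourceDirectory>true</includeTestSourceDirectory>
  </configuration>
</plugin>
{code}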



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098249#comment-15098249
 ] 

Akira AJISAKA commented on HADOOP-12700:


LGTM, +1.

> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ, 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by the precommit check because test source files 
> are not checked by checkstyle by default: the Maven checkstyle plugin 
> parameter *includeTestSourceDirectory* is *false* by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12700:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~jzhuge] for the 
contribution.

> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ, 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by the precommit check because test source files 
> are not checked by checkstyle by default: the Maven checkstyle plugin 
> parameter *includeTestSourceDirectory* is *false* by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098343#comment-15098343
 ] 

John Zhuge commented on HADOOP-12700:
-

Thanks [~ajisakaa] for the review and commit.

> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ, 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by the precommit check because test source files 
> are not checked by checkstyle by default: the Maven checkstyle plugin 
> parameter *includeTestSourceDirectory* is *false* by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12707) key of FileSystem inner class Cache contains UGI.hashCode which uses the default hashCode method, leading to a memory leak

2016-01-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098415#comment-15098415
 ] 

Steve Loughran commented on HADOOP-12707:
-

...we could consider using weak references, or at least having some idleness

> key of FileSystem inner class Cache contains UGI.hashCode which uses the 
> default hashCode method, leading to a memory leak
> --
>
> Key: HADOOP-12707
> URL: https://issues.apache.org/jira/browse/HADOOP-12707
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: sunhaitao
>Assignee: sunhaitao
>
> The FileSystem.get(conf) method by default gets the fs object from the CACHE, 
> but the key of the CACHE contains ugi.hashCode, which uses the default 
> (identity) hashCode of the Subject instead of the hashCode method overridden 
> by the Subject:
> @Override
> public int hashCode() {
>   return (scheme + authority).hashCode() + ugi.hashCode() + (int)unique;
> }
> In this case, even for the same user, calling FileSystem.get(conf) twice 
> creates two different keys. Over a long duration, this leads to a memory leak.
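A minimal sketch of the behavior described above, assuming two UserGroupInformation instances for the same logical user; the class, user name, and setup are illustrative only.

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class CacheKeyDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Two distinct UGI instances for the same logical user.
    UserGroupInformation ugi1 = UserGroupInformation.createRemoteUser("alice");
    UserGroupInformation ugi2 = UserGroupInformation.createRemoteUser("alice");

    FileSystem fs1 = ugi1.doAs(
        (PrivilegedExceptionAction<FileSystem>) () -> FileSystem.get(conf));
    FileSystem fs2 = ugi2.doAs(
        (PrivilegedExceptionAction<FileSystem>) () -> FileSystem.get(conf));

    // UGI.hashCode()/equals() are identity-based on the wrapped Subject, so
    // each UGI produces a different Cache.Key and a second FileSystem is
    // created and cached.
    System.out.println(fs1 == fs2);  // false: two cache entries
  }
}
{code}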



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12707) key of FileSystem inner class Cache contains UGI.hashCode which uses the default hashCode method, leading to a memory leak

2016-01-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098508#comment-15098508
 ] 

Chris Nauroth commented on HADOOP-12707:


bq. ...we could consider using weak references, or at least having some idleness

I thought about weak references at one point but abandoned it for a few reasons:

# Weak references would provide different semantics compared to the current 
cache implementation.  Right now, applications can get a {{FileSystem}}, use 
it, abandon it for a long time (no longer holding a reference to it), and then 
come back and still find that instance sitting in the cache.  With weak 
references, there would be a race condition if a GC swept away the instance 
before the application came back to claim it again.  This could change 
performance and load patterns in negative ways if clients need to reconnect 
sockets.  Whether or not it would really be problematic in practice is unclear, 
but there is a ton of ecosystem and application code that would need testing.  
Any such testing would be difficult and perhaps unconvincing, because it would 
be dependent on external factors like heap configuration, and ultimately, the 
timing of GC is non-deterministic anyway.
# The cache goes beyond just resource management and is actually coupled to 
some logic that implements the API contract.  Delete-on-exit becomes 
problematic.  With weak references, I suppose we'd have to trigger the deletes 
from {{finalize}}.  I've experienced a lot of bugs in other codebases that rely 
on a finalizer to do significant work, so I have an immediate aversion to this. 
 I also worry about tricky interactions with the {{ClientFinalizer}} shutdown 
hook.  I guess an alternative could be to somehow "resurrect" a reaped instance 
that still has pending delete-on-exit work, but I expect this would be complex.
# Even assuming we do a correct bug-free implementation of delete-on-exit from 
a finalizer, it still changes the semantics.  The deletions are triggered from 
the {{close}} method.  Many applications don't bother calling {{close}} at all. 
 In that case, the deletes would happen during the {{ClientFinalizer}} shutdown 
hook.  Effectively, these applications expect that delete-on-exit means 
delete-on-process-exit.  They might even be calling {{cancelDeleteOnExit}} to 
cancel a prior queued delete.  If a weak-referenced instance drops out because 
of GC, then the deletes would happen at an unexpected time, all of that prior 
state would be lost, and the application has lost its opportunity to call 
{{cancelDeleteOnExit}}.

An idleness policy would have similar challenges.  It's hard to identify 
idleness correctly within the {{FileSystem}} layer, because current 
applications expect they can come back to the cache at any arbitrary time and 
get the same instance again.

Effectively, this cache isn't just a cache.  It's really a set of global 
variables where clients expect that they can do stateful interactions.  
Pitfalls of global variables, blah blah blah...  :-)

I don't like the current implementation, but I also don't see a safe way 
forward that is backwards-compatible.  If I had my preference, we'd 
re-implement it with explicit reference counting, require by contract that all 
callers call {{close}} when finished, and perhaps try to scrap the 
delete-on-exit feature entirely.  There has been pushback on the 
reference-counting proposal in the past, but I don't remember the exact JIRA.

Maybe explore backwards-incompatible changes for 3.x?
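For illustration only, a compact sketch of the reference-counting idea (not Hadoop code; all names are invented): acquire() increments a per-key count, release() decrements it, and the entry is evicted and closed only when the count reaches zero.

{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

class RefCountedCache<K, V extends Closeable> {
  private static final class Entry<V> {
    final V value;
    int refs;
    Entry(V value) { this.value = value; }
  }

  private final Map<K, Entry<V>> map = new HashMap<>();

  // Returns the cached value for key, creating it on first use, and
  // records one more outstanding reference.
  synchronized V acquire(K key, Supplier<V> factory) {
    Entry<V> e = map.computeIfAbsent(key, k -> new Entry<>(factory.get()));
    e.refs++;
    return e.value;
  }

  // Drops one reference; the resource is closed only when no caller still
  // holds it, so cleanup runs at a well-defined time instead of GC time.
  synchronized void release(K key) throws IOException {
    Entry<V> e = map.get(key);
    if (e != null && --e.refs == 0) {
      map.remove(key);
      e.value.close();
    }
  }
}
{code}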

> key of FileSystem inner class Cache contains UGI.hashCode which uses the 
> default hashCode method, leading to a memory leak
> --
>
> Key: HADOOP-12707
> URL: https://issues.apache.org/jira/browse/HADOOP-12707
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: sunhaitao
>Assignee: sunhaitao
>
> The FileSystem.get(conf) method by default gets the fs object from the CACHE, 
> but the key of the CACHE contains ugi.hashCode, which uses the default 
> (identity) hashCode of the Subject instead of the hashCode method overridden 
> by the Subject:
> @Override
> public int hashCode() {
>   return (scheme + authority).hashCode() + ugi.hashCode() + (int)unique;
> }
> In this case, even for the same user, calling FileSystem.get(conf) twice 
> creates two different keys. Over a long duration, this leads to a memory leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2016-01-14 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097806#comment-15097806
 ] 

Masatake Iwasaki commented on HADOOP-12588:
---

Hmm...  The submitted patch addresses the issue that the expected result is not 
found. {{Mismatch in record count:  expected:<6> but was:<27>}} means it got 
more results than expected. I think that is a different problem, but maybe 
related. I'm digging into it and will address that issue here too.

> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12588.001.patch, HADOOP-12588.addendum.02.patch, 
> HADOOP-12588.addendum.patch
>
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12708) SSL-based embedded jetty for hadoop services should allow configuration to select cipher suites

2016-01-14 Thread Vijay Singh (JIRA)
Vijay Singh created HADOOP-12708:


 Summary: SSL-based embedded jetty for hadoop services should allow 
configuration to select cipher suites
 Key: HADOOP-12708
 URL: https://issues.apache.org/jira/browse/HADOOP-12708
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vijay Singh


Currently all hadoop services use an embedded jetty server. The effort to 
exclude cipher suites is being tracked under HADOOP-12668. However, hadoop 
should also allow restricting the cipher suites to an explicit list. The 
current pom dependency does not expose an interface for selecting cipher 
suites. As a result, the options include the following:
1) Upgrade jetty from 6.x to 8.x or 9.x
2) Contemplate and implement a WAR-based implementation for these services
3) Include the ability to select specific cipher suites (see the sketch below)
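A sketch of what option 3 might look like in ssl-server.xml. The property name below is hypothetical (HADOOP-12668 only defines an exclude list), and the cipher suite name is just an example.

{code}
<!-- hypothetical include-list counterpart to ssl.server.exclude.cipher.list -->
<property>
  <name>ssl.server.include.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256</value>
</property>
{code}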



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2016-01-14 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-12588:
---
Fix Version/s: (was: 2.7.3)
   (was: 2.8.0)

> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12588.001.patch, HADOOP-12588.addendum.02.patch, 
> HADOOP-12588.addendum.patch
>
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12668) Modify HDFS embedded jetty server logic in HttpServer2.java to exclude weak ciphers through ssl-server.conf

2016-01-14 Thread Vijay Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097974#comment-15097974
 ] 

Vijay Singh commented on HADOOP-12668:
--

Hi [~wheat9],
Based on the current jetty dependency in the pom (release 6), 
setIncludeCipherSuites is not yet available on SSLSocketConnector. Please refer 
to the Maven entry for the current jetty dependency: 
http://mvnrepository.com/artifact/org.mortbay.jetty/jetty/6.1.26

I suggest we open a new ticket to include cipher suites when we upgrade the 
jetty version in the pom. I am going to open two jira issues: 1) to upgrade 
jetty and 2) to include cipher suites based on configuration.
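For reference, a sketch of what becomes possible after a jetty upgrade; this uses the Jetty 8/9 SslContextFactory API, which the Jetty 6 dependency above does not provide, and the cipher suite names are illustrative.

{code}
// Jetty 8/9 style (org.eclipse.jetty.util.ssl.SslContextFactory):
SslContextFactory sslContextFactory = new SslContextFactory();
sslContextFactory.setIncludeCipherSuites(
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384");
{code}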

> Modify HDFS embedded jetty server logic in HttpServer2.java to exclude weak 
> ciphers through ssl-server.conf
> --
>
> Key: HADOOP-12668
> URL: https://issues.apache.org/jira/browse/HADOOP-12668
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Vijay Singh
>Assignee: Vijay Singh
>Priority: Critical
>  Labels: common, ha, hadoop, hdfs, security
> Attachments: Hadoop-12668.006.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the embedded jetty server used across all hadoop services is 
> configured through the ssl-server.xml file from their respective 
> configuration section. However, the SSL/TLS protocol used by these jetty 
> servers can be downgraded to weak cipher suites. This code change aims to add 
> the following functionality:
> 1) Add logic in hadoop common (HttpServer2.java and associated interfaces) to 
> spawn jetty servers with the ability to exclude weak cipher suites. I propose 
> we make this configurable through ssl-server.xml so each service can choose 
> to disable specific ciphers.
> 2) Modify DFSUtil.java used by HDFS code to supply the new parameter 
> ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude 
> the ciphers supplied through this key.
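A minimal sketch of the proposed knob in ssl-server.xml, using the ssl.server.exclude.cipher.list key named above; the cipher suite names are illustrative examples of weak suites.

{code}
<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>SSL_RSA_WITH_RC4_128_MD5,TLS_ECDHE_RSA_WITH_RC4_128_SHA</value>
</property>
{code}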



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12668) Modify HDFS embedded jetty server logic in HttpServer2.java to exclude weak ciphers through ssl-server.conf

2016-01-14 Thread Vijay Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097989#comment-15097989
 ] 

Vijay Singh commented on HADOOP-12668:
--

Added [HADOOP-12708|https://issues.apache.org/jira/browse/HADOOP-12708] to 
track the request for configuring specific ciphers.

> Modify HDFS embedded jetty server logic in HttpServer2.java to exclude weak 
> ciphers through ssl-server.conf
> --
>
> Key: HADOOP-12668
> URL: https://issues.apache.org/jira/browse/HADOOP-12668
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Vijay Singh
>Assignee: Vijay Singh
>Priority: Critical
>  Labels: common, ha, hadoop, hdfs, security
> Attachments: Hadoop-12668.006.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the embedded jetty server used across all hadoop services is 
> configured through the ssl-server.xml file from their respective 
> configuration section. However, the SSL/TLS protocol used by these jetty 
> servers can be downgraded to weak cipher suites. This code change aims to add 
> the following functionality:
> 1) Add logic in hadoop common (HttpServer2.java and associated interfaces) to 
> spawn jetty servers with the ability to exclude weak cipher suites. I propose 
> we make this configurable through ssl-server.xml so each service can choose 
> to disable specific ciphers.
> 2) Modify DFSUtil.java used by HDFS code to supply the new parameter 
> ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude 
> the ciphers supplied through this key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098272#comment-15098272
 ] 

Hudson commented on HADOOP-12700:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9110 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9110/])
HADOOP-12700. Remove unused import in TestCompressorDecompressor.java. 
(aajisaka: rev ff8758377cf68b650c5f119a377ff86055b8d3f2)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressorDecompressor.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ, 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by the precommit check because test source files 
> are not checked by checkstyle by default: the Maven checkstyle plugin 
> parameter *includeTestSourceDirectory* is *false* by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12707) key of FileSystem inner class Cache contains UGI.hashCode which uses the default hashCode method, leading to a memory leak

2016-01-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098366#comment-15098366
 ] 

Chris Nauroth commented on HADOOP-12707:


bq. For the current design, even for the same user, if the user calls this 
method twice then the filesystem object will be created twice.

Just a small clarification: it happens only if those 2 calls use different 
{{UserGroupInformation}} instances that really represent the same "logical" 
underlying user ({{Subject}}).  This is a much less common usage pattern 
compared to usage of a single {{UserGroupInformation}} instance that always 
represents that "logical" user.

There are no plans to change this behavior at this time.  The standard solution 
is to do one of the two things I described in my last comment: disable the 
cache in configuration or use {{FileSystem#closeAllForUGI}}.

There is a lot of legacy behind the current behavior of the {{FileSystem}} 
cache.  Some of it is described in HADOOP-6670, and some of it is described in 
other old JIRAs.  I wish we could make some big changes there, but a lot of 
applications are coded to expect the current behavior.
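A minimal sketch of those two standard workarounds; the scheme key and method names are real Hadoop APIs, while the class and variable names are illustrative.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

class CacheWorkarounds {
  // 1) Disable the cache for a scheme; every FileSystem.get() then returns
  //    a fresh instance that the caller fully owns and must close.
  static FileSystem uncachedFs(Configuration conf) throws IOException {
    conf.setBoolean("fs.hdfs.impl.disable.cache", true);
    return FileSystem.get(conf);
  }

  // 2) Or, when all work on behalf of a UGI is finished, close every cached
  //    FileSystem instance that was created for it.
  static void cleanup(UserGroupInformation ugi) throws IOException {
    FileSystem.closeAllForUGI(ugi);
  }
}
{code}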

> key of FileSystem inner class Cache contains UGI.hashCode which uses the 
> default hashCode method, leading to a memory leak
> --
>
> Key: HADOOP-12707
> URL: https://issues.apache.org/jira/browse/HADOOP-12707
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: sunhaitao
>Assignee: sunhaitao
>
> The FileSystem.get(conf) method by default gets the fs object from the CACHE, 
> but the key of the CACHE contains ugi.hashCode, which uses the default 
> (identity) hashCode of the Subject instead of the hashCode method overridden 
> by the Subject:
> @Override
> public int hashCode() {
>   return (scheme + authority).hashCode() + ugi.hashCode() + (int)unique;
> }
> In this case, even for the same user, calling FileSystem.get(conf) twice 
> creates two different keys. Over a long duration, this leads to a memory leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101045#comment-15101045
 ] 

Akira AJISAKA commented on HADOOP-12700:


bq. I'm reverting this change from branch-2.8 for now.
Thanks [~yzhangal] for reverting this. As HADOOP-12590 is committed to 
branch-2.8, I'll commit this again shortly.

> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ, 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by the precommit check because test source files 
> are not checked by checkstyle by default: the Maven checkstyle plugin 
> parameter *includeTestSourceDirectory* is *false* by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12579) Deprecate and remove WritableRPCEngine

2016-01-14 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101077#comment-15101077
 ] 

Kai Zheng commented on HADOOP-12579:


Ping [~daryn] in case I missed more points. 

> Deprecate and remove WritableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
> Attachments: HADOOP-12579-v1.patch
>
>
> The {{WritableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to 
> security vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WritableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate {{WritableRPCEngine}} 
> in branch-2 and to remove it in trunk.
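For context, a one-line sketch of how a protocol is bound to the protobuf engine; RPC.setProtocolEngine is a real org.apache.hadoop.ipc API, while the protocol class name here is illustrative.

{code}
RPC.setProtocolEngine(conf, MyProtocolPB.class, ProtobufRpcEngine.class);
{code}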



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101139#comment-15101139
 ] 

John Zhuge commented on HADOOP-12700:
-

Thanks [~ajisakaa], really appreciate it !

> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ, 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by the precommit check because test source files 
> are not checked by checkstyle by default: the Maven checkstyle plugin 
> parameter *includeTestSourceDirectory* is *false* by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12590) TestCompressorDecompressor failing without stack traces

2016-01-14 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101138#comment-15101138
 ] 

John Zhuge commented on HADOOP-12590:
-

Thanks [~ajisakaa]

> TestCompressorDecompressor failing without stack traces
> ---
>
> Key: HADOOP-12590
> URL: https://issues.apache.org/jira/browse/HADOOP-12590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12590.001.patch
>
>
> Jenkins is failing on {{TestCompressorDecompressor}}.
> The exception is being caught and converted to a fail, *so there is no stack 
> trace of any value*:
> {code}
> testCompressorDecompressor error !!!java.lang.NullPointerException
> Stacktrace
> java.lang.AssertionError: testCompressorDecompressor error 
> !!!java.lang.NullPointerException
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor(TestCompressorDecompressor.java:69)
> {code}
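A sketch of the anti-pattern behind the useless report and the straightforward fix; the test body shown is illustrative, not the actual test code.

{code}
// Anti-pattern: the stack trace is discarded, leaving only the message.
try {
  runCompressDecompressRound();  // hypothetical test body
} catch (Exception ex) {
  fail("testCompressorDecompressor error !!!" + ex);
}

// Preferred: declare `throws Exception` and let the exception propagate,
// so JUnit reports the full stack trace of the original failure.
{code}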



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12713) Disable spurious checkstyle checks

2016-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099013#comment-15099013
 ] 

Colin Patrick McCabe commented on HADOOP-12713:
---

+1 pending jenkins.  Thanks, [~awang].

> Disable spurious checkstyle checks
> --
>
> Key: HADOOP-12713
> URL: https://issues.apache.org/jira/browse/HADOOP-12713
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-12713.001.patch, HADOOP-12713.002.patch
>
>
> Some of the checkstyle checks are not realistic (like the line length check), 
> leading to spurious -1 votes in precommit. Let's disable them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12712) Fix some cmake plugin and native build warnings

2016-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099030#comment-15099030
 ] 

Hadoop QA commented on HADOOP-12712:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
28s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 1s 
{color} | {color:red} root in trunk failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 8s 
{color} | {color:red} root in trunk failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
48s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-maven-plugins in trunk failed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
23s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 0s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 1m 0s {color} | 
{color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 0s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 4s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 7m 4s {color} | 
{color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 4s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-maven-plugins in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-maven-plugins in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 20s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 47s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.8.0_66. 

[jira] [Created] (HADOOP-12714) Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is not buggy

2016-01-14 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12714:
-

 Summary: Fix hadoop-mapreduce-client-nativetask unit test which 
fails when glibc is not buggy
 Key: HADOOP-12714
 URL: https://issues.apache.org/jira/browse/HADOOP-12714
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is not 
buggy.  It attempts to open a "glibc bug spill" file which doesn't exist unless 
glibc has a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12590) TestCompressorDecompressor failing without stack traces

2016-01-14 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099109#comment-15099109
 ] 

John Zhuge commented on HADOOP-12590:
-

Really appreciate the help.

Please commit this patch to branch-2 and branch-2.8, and please commit the 
followup HADOOP-12700 to branch-2 and branch-2.8 as well.

And while at it, could someone review HADOOP-12701 so that problems like 
HADOOP-12700 never happen again?

Thanks,
John

> TestCompressorDecompressor failing without stack traces
> ---
>
> Key: HADOOP-12590
> URL: https://issues.apache.org/jira/browse/HADOOP-12590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12590.001.patch
>
>
> Jenkins is failing on {{TestCompressorDecompressor}}.
> The exception is being caught and converted to a fail, *so there is no stack 
> trace of any value*:
> {code}
> testCompressorDecompressor error !!!java.lang.NullPointerException
> Stacktrace
> java.lang.AssertionError: testCompressorDecompressor error 
> !!!java.lang.NullPointerException
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor(TestCompressorDecompressor.java:69)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12706) TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out occasionally

2016-01-14 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099124#comment-15099124
 ] 

Jason Lowe commented on HADOOP-12706:
-

+1 committing this.

> TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out 
> occasionally
> ---
>
> Key: HADOOP-12706
> URL: https://issues.apache.org/jira/browse/HADOOP-12706
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Attachments: HADOOP-12706.002.patch, HADOOP-12706.01.patch, 
> HADOOP-12706.03.patch
>
>
> TestLocalFsFCStatistics has been failing sometimes, and when it fails it 
> appears to be from FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp. 
>  The test is timing out when it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12590) TestCompressorDecompressor failing without stack traces

2016-01-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101028#comment-15101028
 ] 

Akira AJISAKA commented on HADOOP-12590:


Committed this to branch-2.8 to fix the build failure after HADOOP-12700. 
Thanks Vinod and John.
FYI: This patch has already been committed to branch-2.

> TestCompressorDecompressor failing without stack traces
> ---
>
> Key: HADOOP-12590
> URL: https://issues.apache.org/jira/browse/HADOOP-12590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12590.001.patch
>
>
> Jenkins is failing on {{TestCompressorDecompressor}}.
> The exception is being caught and converted to a fail, *so there is no stack 
> trace of any value*:
> {code}
> testCompressorDecompressor error !!!java.lang.NullPointerException
> Stacktrace
> java.lang.AssertionError: testCompressorDecompressor error 
> !!!java.lang.NullPointerException
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor(TestCompressorDecompressor.java:69)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101031#comment-15101031
 ] 

Akira AJISAKA commented on HADOOP-12700:


bq. Could someone commit HADOOP-12590 to branch-2.8?
Done. Thanks Yongjun and John for the comments.

> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ, 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by the precommit check because test source files 
> are not checked by checkstyle by default: the Maven checkstyle plugin 
> parameter *includeTestSourceDirectory* is *false* by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101047#comment-15101047
 ] 

Akira AJISAKA commented on HADOOP-12700:


Committed this to branch-2.8.

> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ, 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by the precommit check because test source files 
> are not checked by checkstyle by default: the Maven checkstyle plugin 
> parameter *includeTestSourceDirectory* is *false* by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12579) Deprecate and remove WritableRPCEngine

2016-01-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101062#comment-15101062
 ] 

Haohui Mai commented on HADOOP-12579:
-

I consider {{WritableRPCEngine}} dead code. I suggest cleaning up the tests 
instead of putting major functionality ({{WritableRPCEngine}}) in the tests.

> Deprecate and remove WritableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
> Attachments: HADOOP-12579-v1.patch
>
>
> The {{WritableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to 
> security vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WritableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate {{WritableRPCEngine}} 
> in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12712) Fix some cmake plugin and native build warnings

2016-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101069#comment-15101069
 ] 

Hadoop QA commented on HADOOP-12712:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
12s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-maven-plugins in trunk failed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 5m 15s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 5m 15s {color} | 
{color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 15s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 2m 2s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 2m 2s {color} | 
{color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 2s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-maven-plugins in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 26s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 38s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.8.0_66. {color} |
| 

[jira] [Commented] (HADOOP-12579) Deprecate and remove WritableRPCEngine

2016-01-14 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101075#comment-15101075
 ] 

Kai Zheng commented on HADOOP-12579:


Sure [~wheat9], I can do that: porting the test code that is still valuable to 
the protocol buffer engine and discarding the useless parts. Thanks!

> Deprecate and remove WritableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
> Attachments: HADOOP-12579-v1.patch
>
>
> The {{WritableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to 
> security vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WritableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate {{WritableRPCEngine}} 
> in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101074#comment-15101074
 ] 

Yongjun Zhang commented on HADOOP-12700:


Thanks [~ajisakaa]!


> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ, 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by the precommit check because test source files 
> are not checked by checkstyle by default: the Maven checkstyle plugin 
> parameter *includeTestSourceDirectory* is *false* by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12590) TestCompressorDecompressor failing without stack traces

2016-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15101084#comment-15101084
 ] 

Hudson commented on HADOOP-12590:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9117 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9117/])
Move HADOOP-12590 from 2.9.0 to 2.8.0 in CHANGES.txt. (aajisaka: rev 
1da762c745fa2bbb0a7a6d16bdc58ec9d4eb670d)
* hadoop-common-project/hadoop-common/CHANGES.txt


> TestCompressorDecompressor failing without stack traces
> ---
>
> Key: HADOOP-12590
> URL: https://issues.apache.org/jira/browse/HADOOP-12590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12590.001.patch
>
>
> Jenkins is failing on {{TestCompressorDecompressor}}.
> The exception is being caught and converted to a fail, *so there is no stack 
> trace of any value*:
> {code}
> testCompressorDecompressor error !!!java.lang.NullPointerException
> Stacktrace
> java.lang.AssertionError: testCompressorDecompressor error 
> !!!java.lang.NullPointerException
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor(TestCompressorDecompressor.java:69)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2016-01-14 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-12107:

Fix Version/s: 2.6.4
   2.7.3

I committed this to branch-2.7 and branch-2.6.

> long running apps may have a huge number of StatisticsData instances under 
> FileSystem
> -
>
> Key: HADOOP-12107
> URL: https://issues.apache.org/jira/browse/HADOOP-12107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Fix For: 2.8.0, 2.7.3, 2.6.4
>
> Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
> HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch
>
>
> We observed with some of our apps (non-mapreduce apps that use filesystems) 
> that they end up accumulating a huge memory footprint coming from 
> {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
> {{Statistics}}).
> Although the thread reference from {{StatisticsData}} is a weak reference, 
> and thus can get cleared once a thread goes away, the actual 
> {{StatisticsData}} instances in the list won't get cleared until any of these 
> following methods is called on {{Statistics}}:
> - {{getBytesRead()}}
> - {{getBytesWritten()}}
> - {{getReadOps()}}
> - {{getLargeReadOps()}}
> - {{getWriteOps()}}
> - {{toString()}}
> It is quite possible to have an application that interacts with a filesystem 
> but does not call any of these methods on the {{Statistics}}. If such an 
> application runs for a long time and has a large amount of thread churn, the 
> memory footprint will grow significantly.
> The current workaround is either to limit the thread churn or to invoke these 
> operations occasionally to pare down the memory. However, this is still a 
> deficiency with {{FileSystem$Statistics}} itself in that the memory is 
> controlled only as a side effect of those operations.
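A minimal sketch of the workaround described above: periodically calling one of the aggregating getters forces {{Statistics}} to walk {{allData}} and drop entries for threads that have exited. FileSystem.getAllStatistics() and Statistics.getBytesRead() are real APIs; the scheduling context is assumed.

{code}
// Periodic housekeeping, e.g. from a scheduled task:
for (FileSystem.Statistics stats : FileSystem.getAllStatistics()) {
  stats.getBytesRead();  // side effect: prunes StatisticsData of dead threads
}
{code}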



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12714) Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is not buggy

2016-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099238#comment-15099238
 ] 

Hadoop QA commented on HADOOP-12714:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-mapreduce-client-nativetask in trunk failed with 
JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 30s 
{color} | {color:red} hadoop-mapreduce-client-nativetask in the patch failed 
with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 30s {color} | 
{color:red} hadoop-mapreduce-client-nativetask in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 30s {color} 
| {color:red} hadoop-mapreduce-client-nativetask in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 53s 
{color} | {color:red} 
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask-jdk1.7.0_91
 with JDK v1.7.0_91 generated 1 new issues (was 0, now 1). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 57s {color} 
| {color:red} hadoop-mapreduce-client-nativetask in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 28s {color} 
| {color:red} hadoop-mapreduce-client-nativetask in the patch failed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12782398/HADOOP-12714.001.patch
 |
| JIRA Issue | HADOOP-12714 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux afa5b3db6f74 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cdf8895 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8413/artifact/patchprocess/branch-compile-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask-jdk1.7.0_91.txt
 |
| compile | 

[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-01-14 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-12563:
--
Status: Open  (was: Patch Available)

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serialization, which is hard or impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old file format should still be supported for backward compatibility, 
> but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.
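For context, a minimal sketch of reading and writing a token file with the 
existing {{Credentials}} API that such a utility would wrap; the paths here 
are made up for illustration, and the proposed protobuf format would sit on 
top of this:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

public class TokenFileExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Read an existing token file (Java-serialization based today).
    Credentials creds = Credentials.readTokenStorageFile(
        new Path("file:///tmp/tokens.bin"), conf);
    // Inspect the tokens it holds.
    creds.getAllTokens().forEach(t ->
        System.out.println(t.getKind() + " " + t.getService()));
    // Write the credentials back out, e.g. after appending tokens.
    creds.writeTokenStorageFile(new Path("file:///tmp/tokens-copy.bin"), conf);
  }
}
{code}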



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2016-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099221#comment-15099221
 ] 

Hudson commented on HADOOP-12107:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9116 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9116/])
Update CHANGES.txt for commit of HADOOP-12107 to branch-2.7 and (jlowe: rev 
651c23e8ef8aeafd999249ce57b31e689bd2ece6)
* hadoop-common-project/hadoop-common/CHANGES.txt


> long running apps may have a huge number of StatisticsData instances under 
> FileSystem
> -
>
> Key: HADOOP-12107
> URL: https://issues.apache.org/jira/browse/HADOOP-12107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Fix For: 2.8.0, 2.7.3, 2.6.4
>
> Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
> HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch
>
>
> We observed with some of our apps (non-mapreduce apps that use filesystems) 
> that they end up accumulating a huge memory footprint coming from 
> {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
> {{Statistics}}).
> Although the thread reference from {{StatisticsData}} is a weak reference, 
> and thus can get cleared once a thread goes away, the actual 
> {{StatisticsData}} instances in the list won't get cleared until any of these 
> following methods is called on {{Statistics}}:
> - {{getBytesRead()}}
> - {{getBytesWritten()}}
> - {{getReadOps()}}
> - {{getLargeReadOps()}}
> - {{getWriteOps()}}
> - {{toString()}}
> It is quite possible to have an application that interacts with a filesystem 
> but does not call any of these methods on the {{Statistics}}. If such an 
> application runs for a long time and has a large amount of thread churn, the 
> memory footprint will grow significantly.
> The current workaround is either to limit the thread churn or to invoke these 
> operations occasionally to pare down the memory. However, this is still a 
> deficiency with {{FileSystem$Statistics}} itself in that the memory is 
> controlled only as a side effect of those operations.
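As a rough illustration of that workaround, a background thread can 
periodically invoke one of the aggregating getters so that entries for dead 
threads get folded up; the interval and scheduling below are arbitrary:
{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.fs.FileSystem;

public class StatisticsPruner {
  public static void start() {
    ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
    ses.scheduleAtFixedRate(() -> {
      // Any aggregating getter walks allData and drops entries whose
      // thread reference has been cleared, bounding the memory footprint.
      for (FileSystem.Statistics stats : FileSystem.getAllStatistics()) {
        stats.getBytesRead();
      }
    }, 10, 10, TimeUnit.MINUTES);
  }
}
{code}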



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12706) TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out occasionally

2016-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099222#comment-15099222
 ] 

Hudson commented on HADOOP-12706:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9116 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9116/])
HADOOP-12706. (jlowe: rev cdf88952599a43b1ef5adda792bfb195c7529fad)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FCStatisticsBaseTest.java


> TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out 
> occasionally
> ---
>
> Key: HADOOP-12706
> URL: https://issues.apache.org/jira/browse/HADOOP-12706
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Fix For: 2.7.3, 2.6.4
>
> Attachments: HADOOP-12706.002.patch, HADOOP-12706.01.patch, 
> HADOOP-12706.03.patch
>
>
> TestLocalFsFCStatistics has been failing sometimes, and when it fails it 
> appears to be from FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp. 
>  The test is timing out when it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12712) Fix some cmake plugin and native build warnings

2016-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12712:
--
Attachment: HADOOP-12712.003.patch

v3:
* Remove the TODO about using the number of processors, since that was already 
done (thanks, [~andrew.wang]).
* Fix the javadoc in Exec.java (this was actually busted even prior to 
HADOOP-8887).

> Fix some cmake plugin and native build warnings
> ---
>
> Key: HADOOP-12712
> URL: https://issues.apache.org/jira/browse/HADOOP-12712
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-12712.001.patch, HADOOP-12712.002.patch, 
> HADOOP-12712.003.patch
>
>
> Fix some native build warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2016-01-14 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099066#comment-15099066
 ] 

Benoy Antony commented on HADOOP-12587:
---

committed to branch-2.8.

> Hadoop AuthToken refuses to work without a maxinactive attribute in issued 
> token
> 
>
> Key: HADOOP-12587
> URL: https://issues.apache.org/jira/browse/HADOOP-12587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
> Environment: OSX heimdal kerberos client against a Linux KDC, talking 
> to a Hadoop 2.6.0 cluster
>Reporter: Steve Loughran
>Assignee: Benoy Antony
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-12587-001.patch, HADOOP-12587-002.patch, 
> HADOOP-12587-003.patch
>
>
> If you don't have a max-inactive attribute in the auth token returned from 
> the web site, AuthToken will raise an exception. This stops callers without 
> this attribute from being able to submit jobs to a secure Hadoop 2.6 YARN 
> cluster with the timeline server enabled. 
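The tolerant behaviour being asked for is to treat the attribute as optional 
when parsing; a hypothetical sketch (the attribute key and wire format below 
are illustrative, not the actual AuthToken format):
{code}
import java.util.HashMap;
import java.util.Map;

public class TokenAttributeParsing {
  /** Parse "k1=v1&k2=v2" style attributes, tolerating absent keys. */
  static long maxInactive(String tokenStr) {
    Map<String, String> attrs = new HashMap<>();
    for (String pair : tokenStr.split("&")) {
      String[] kv = pair.split("=", 2);
      if (kv.length == 2) {
        attrs.put(kv[0], kv[1]);
      }
    }
    // A missing attribute means "not enforced" rather than an exception.
    String i = attrs.get("i");
    return i == null ? -1L : Long.parseLong(i);
  }
}
{code}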



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12714) Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is not buggy

2016-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12714:
--
Attachment: HADOOP-12714.001.patch

> Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is 
> not buggy
> 
>
> Key: HADOOP-12714
> URL: https://issues.apache.org/jira/browse/HADOOP-12714
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-12714.001.patch
>
>
> Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is 
> not buggy.  It attempts to open a "glibc bug spill" file which doesn't exist 
> unless glibc has a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12714) Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is not buggy

2016-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12714:
--
Status: Patch Available  (was: Open)

> Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is 
> not buggy
> 
>
> Key: HADOOP-12714
> URL: https://issues.apache.org/jira/browse/HADOOP-12714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-12714.001.patch
>
>
> Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is 
> not buggy.  It attempts to open a "glibc bug spill" file which doesn't exist 
> unless glibc has a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12714) Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is not buggy

2016-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12714:
--
Affects Version/s: 3.0.0
 Target Version/s: 3.0.0
  Component/s: native

> Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is 
> not buggy
> 
>
> Key: HADOOP-12714
> URL: https://issues.apache.org/jira/browse/HADOOP-12714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-12714.001.patch
>
>
> Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is 
> not buggy.  It attempts to open a "glibc bug spill" file which doesn't exist 
> unless glibc has a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12712) Fix some cmake plugin and native build warnings

2016-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099140#comment-15099140
 ] 

Hadoop QA commented on HADOOP-12712:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
4s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 46s 
{color} | {color:red} root in trunk failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 43s 
{color} | {color:red} root in trunk failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
10s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-maven-plugins in trunk failed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
7s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 46s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 46s {color} | 
{color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 46s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 2m 9s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 2m 9s {color} | 
{color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 9s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
49s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-maven-plugins in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-maven-plugins in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 53s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 38s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 

[jira] [Updated] (HADOOP-12708) SSL based embedded jetty for hadoop services should allow configuration to select cipher suites

2016-01-14 Thread Vijay Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay Singh updated HADOOP-12708:
-
Target Version/s: 2.9.0  (was: 2.8.0)

> SSL based embedded jetty for hadoop services should allow configuration to 
> select cipher suites
> --
>
> Key: HADOOP-12708
> URL: https://issues.apache.org/jira/browse/HADOOP-12708
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vijay Singh
>
> Currently all hadoop services use an embedded jetty server. The effort to 
> exclude cipher suites is being tracked under HADOOP-12668. However, hadoop 
> should also allow restricting the cipher suites to an explicit list. 
> Currently, the pom dependency does not expose an interface to include select 
> cipher suites. As a result, the options include the following:
> 1) Upgrade jetty from 6.x to 8.x or 9.x
> 2) Contemplate and implement a WAR-based implementation for these services
> 3) Include the ability to select specific cipher suites (see the sketch below)
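For reference on option 1, newer Jetty versions expose this directly; a 
minimal sketch assuming Jetty 9's 
{{org.eclipse.jetty.util.ssl.SslContextFactory}} (the suite names are 
examples only):
{code}
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class CipherSuiteConfig {
  static SslContextFactory restrictedFactory() {
    SslContextFactory factory = new SslContextFactory();
    // Allow only an explicit whitelist of suites instead of merely
    // excluding known-bad ones.
    factory.setIncludeCipherSuites(
        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384");
    return factory;
  }
}
{code}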



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-01-14 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099171#comment-15099171
 ] 

Larry McCay commented on HADOOP-12563:
--

[~aw] - thanks for the response - somehow I missed it earlier.

The ability to have multiple formats would be great.
There has been some other similar discussion around using JWT as a normalized 
authentication token.
I'd like to dig into this ability and make sure it is accounted for in the 
current design.

I envision an hinit command for authentication that results in a protected 
(JWT) token file that can be used for authentication.
This is very much in line with dtutil - apart from the current token format.

There is a filter available in trunk for use with the UIs that accepts cookies 
with JWT tokens. It leverages the Nimbus library for JWT support.

So, can we talk about the ability to have different formats now, or do we have 
to talk about adding that ability in a follow-up to this?

Thanks again!

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serialization, which is hard or impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old file format should still be supported for backward compatibility, 
> but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12706) TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out occasionally

2016-01-14 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-12706:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.6.4
   2.7.3
   Status: Resolved  (was: Patch Available)

Thanks everyone for the quick analysis, fix, and reviews!  I committed this to 
trunk, branch-2, branch-2.8, branch-2.7, and branch-2.6.

> TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out 
> occasionally
> ---
>
> Key: HADOOP-12706
> URL: https://issues.apache.org/jira/browse/HADOOP-12706
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Fix For: 2.7.3, 2.6.4
>
> Attachments: HADOOP-12706.002.patch, HADOOP-12706.01.patch, 
> HADOOP-12706.03.patch
>
>
> TestLocalFsFCStatistics has been failing sometimes, and when it fails it 
> appears to be from FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp. 
>  The test is timing out when it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12714) Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is not buggy

2016-01-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099241#comment-15099241
 ] 

Andrew Wang commented on HADOOP-12714:
--

Based on my understanding of the test, we've checked in a test file which, when 
read, reproduces this glibc bug. That file should always be present, though; if 
it's not, we have some problem finding test resources.
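The test itself is native code, but the guard being debated amounts to 
skipping rather than failing when that file cannot be found; in JUnit terms, 
with a hypothetical resource path, the idea looks like:
{code}
import static org.junit.Assume.assumeTrue;

import java.io.File;

import org.junit.Test;

public class GlibcBugSpillGuard {
  @Test
  public void testGlibcBugSpill() {
    // Hypothetical resource path; skip (don't fail) when the checked-in
    // spill file that reproduces the glibc bug cannot be located.
    File spill = new File("target/test-classes/glibc-bug-spill.txt");
    assumeTrue(spill.exists());
    // ... read the file and assert on the buggy behaviour ...
  }
}
{code}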

> Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is 
> not buggy
> 
>
> Key: HADOOP-12714
> URL: https://issues.apache.org/jira/browse/HADOOP-12714
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-12714.001.patch
>
>
> Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is 
> not buggy.  It attempts to open a "glibc bug spill" file which doesn't exist 
> unless glibc has a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2016-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8887:
-
   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to 2.9, thanks.

> Use a Maven plugin to build the native code using CMake
> ---
>
> Key: HADOOP-8887
> URL: https://issues.apache.org/jira/browse/HADOOP-8887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
> HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch, 
> HADOOP-8887.006.patch, HADOOP-8887.008.patch, HADOOP-8887.011.patch, 
> HADOOP-8887.012.patch, HADOOP-8887.013.patch, HADOOP-8887.014.patch
>
>
> Currently, we build the native code using ant-build invocations.  Although 
> this works, it has some limitations:
> * compiler warning messages are hidden, which can cause people to check in 
> code with warnings unintentionally
> * there is no framework for running native unit tests; instead, we use ad-hoc 
> constructs involving shell scripts
> * the antrun code is very platform specific
> * there is no way to run a specific native unit test
> * it's more or less impossible for scripts like test-patch.sh to separate a 
> native test failing from the build itself failing (no files are created) or 
> to enumerate which native tests failed.
> Using a native Maven plugin would overcome these limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-01-14 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12711:


 Summary: Remove dependency on commons-httpclient for ServletUtil
 Key: HADOOP-12711
 URL: https://issues.apache.org/jira/browse/HADOOP-12711
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


This is a branch-2 only change, as ServletUtil for trunk removes the code that 
depends on commons-httpclient.

We need to retire the use of commons-httpclient in Hadoop to address the 
security concern in CVE-2012-5783 
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.

{noformat}
import org.apache.commons.httpclient.URIException;
import org.apache.commons.httpclient.util.URIUtil;
{noformat}
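In branch-2, the two imports above could plausibly be replaced with plain JDK 
calls; a minimal sketch, assuming the helper name and that only path encoding 
is needed (the actual patch may differ):
{code}
import java.net.URI;
import java.net.URISyntaxException;

public class ServletUtilUriHelper {
  /** A stand-in for URIUtil.encodePath() using only the JDK. */
  static String encodePath(String path) throws URISyntaxException {
    // Let java.net.URI apply RFC 2396 escaping to the path component;
    // the path is expected to be empty or to begin with a slash.
    return new URI(null, null, path, null).toASCIIString();
  }
}
{code}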



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-01-14 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12711:
-
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-10105

> Remove dependency on commons-httpclient for ServletUtil
> ---
>
> Key: HADOOP-12711
> URL: https://issues.apache.org/jira/browse/HADOOP-12711
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>
> This is a branch-2 only change, as ServletUtil for trunk removes the code 
> that depends on commons-httpclient.
> We need to retire the use of commons-httpclient in Hadoop to address the 
> security concern in CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> {noformat}
> import org.apache.commons.httpclient.URIException;
> import org.apache.commons.httpclient.util.URIUtil;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2016-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098709#comment-15098709
 ] 

Hadoop QA commented on HADOOP-11613:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-11613 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12701990/HADOOP-11613-003.patch
 |
| JIRA Issue | HADOOP-11613 |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8409/console |


This message was automatically generated.



> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2016-01-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098661#comment-15098661
 ] 

Wei-Chiu Chuang commented on HADOOP-10105:
--

Guys, due to the security vulnerability issue CVE-2012-5783 
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783, I would highly 
suggest that we move away from commons-httpclient. At this point, there are 
still a few uncommitted pieces. 
Thanks!

> remove httpclient dependency
> 
>
> Key: HADOOP-10105
> URL: https://issues.apache.org/jira/browse/HADOOP-10105
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
> HADOOP-10105.part2.patch, HADOOP-10105.patch
>
>
> httpclient is now end-of-life and is no longer being developed.  Now that we 
> have a dependency on {{httpcore}}, we should phase out our use of the old 
> discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
> {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12706) TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out occasionally

2016-01-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098668#comment-15098668
 ] 

Andrew Wang commented on HADOOP-12706:
--

LGTM +1

> TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out 
> occasionally
> ---
>
> Key: HADOOP-12706
> URL: https://issues.apache.org/jira/browse/HADOOP-12706
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Attachments: HADOOP-12706.002.patch, HADOOP-12706.01.patch, 
> HADOOP-12706.03.patch
>
>
> TestLocalFsFCStatistics has been failing sometimes, and when it fails it 
> appears to be from FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp. 
>  The test is timing out when it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack

2016-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098703#comment-15098703
 ] 

Hadoop QA commented on HADOOP-11614:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} HADOOP-11614 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12700103/HADOOP-11614.patch |
| JIRA Issue | HADOOP-11614 |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8408/console |


This message was automatically generated.



> Remove httpclient dependency from hadoop-openstack
> --
>
> Key: HADOOP-11614
> URL: https://issues.apache.org/jira/browse/HADOOP-11614
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11614.patch
>
>
> Remove httpclient dependency from hadoop-openstack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack

2016-01-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098587#comment-15098587
 ] 

Wei-Chiu Chuang commented on HADOOP-11614:
--

Hi Brahma Reddy Battula and Akira AJISAKA, thanks for working on this. Are 
there any updates on this work? I'd like to join the effort and push this 
change into the codebase.

> Remove httpclient dependency from hadoop-openstack
> --
>
> Key: HADOOP-11614
> URL: https://issues.apache.org/jira/browse/HADOOP-11614
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11614.patch
>
>
> Remove httpclient dependency from hadoop-openstack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2016-01-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098656#comment-15098656
 ] 

Wei-Chiu Chuang commented on HADOOP-10105:
--

Hi guys. I filed a new JIRA, HADOOP-12710 (Remove dependency on 
commons-httpclient for TestHttpServerLogs), and listed it as a sub-task of 
this JIRA.

Thanks for the effort.

> remove httpclient dependency
> 
>
> Key: HADOOP-10105
> URL: https://issues.apache.org/jira/browse/HADOOP-10105
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
> HADOOP-10105.part2.patch, HADOOP-10105.patch
>
>
> httpclient is now end-of-life and is no longer being developed.  Now that we 
> have a dependency on {{httpcore}}, we should phase out our use of the old 
> discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
> {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12713) Disable spurious checkstyle checks

2016-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099166#comment-15099166
 ] 

Hadoop QA commented on HADOOP-12713:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 6s 
{color} | {color:green} hadoop-build-tools in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 6s 
{color} | {color:green} hadoop-build-tools in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 54s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12782379/HADOOP-12713.002.patch
 |
| JIRA Issue | HADOOP-12713 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux bb1537183018 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cdf8895 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8412/testReport/ |
| modules | C: hadoop-build-tools U: hadoop-build-tools |
| Max memory used | 76MB |
| Powered by | Apache Yetus 

[jira] [Commented] (HADOOP-12706) TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out occasionally

2016-01-14 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15099165#comment-15099165
 ] 

Sangjin Lee commented on HADOOP-12706:
--

Thanks Jason for reporting the issue and others for the reviews!

> TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out 
> occasionally
> ---
>
> Key: HADOOP-12706
> URL: https://issues.apache.org/jira/browse/HADOOP-12706
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Fix For: 2.7.3, 2.6.4
>
> Attachments: HADOOP-12706.002.patch, HADOOP-12706.01.patch, 
> HADOOP-12706.03.patch
>
>
> TestLocalFsFCStatistics has been failing sometimes, and when it fails it 
> appears to be from FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp. 
>  The test is timing out when it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2016-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098749#comment-15098749
 ] 

Colin Patrick McCabe commented on HADOOP-8887:
--

So, the two issues from the plugin I saw were this deprecation warning:
{code}
[WARNING] 
/testptch/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/TestMojo.java:[207,30]
 [deprecation] getExecutionProperties() in MavenSession has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/TestMojo.java:[225,29]
 [deprecation] getExecutionProperties() in MavenSession has been deprecated
{code}

and a complaint about the use of a hard tab in {{TestMojo.java}}.  I opened 
HADOOP-12712 to fix these warnings.

Aside from that, we have the usual Jenkins weirdness on one of the test runs.
{code}
java.lang.RuntimeException: Error while running command to get file permissions 
: ExitCodeException exitCode=127: /bin/ls: error while loading shared 
libraries: libdl.so.2: failed to map segment from shared object: Permission 
denied
{code}
{{libdl.so.2}} is a Linux system library; it has nothing to do with anything in 
this patch.  Best guess is that someone ran "apt-get update" concurrently with 
Jenkins and hosed this particular unit test.

Does that cover all the issues?

> Use a Maven plugin to build the native code using CMake
> ---
>
> Key: HADOOP-8887
> URL: https://issues.apache.org/jira/browse/HADOOP-8887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
> HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch, 
> HADOOP-8887.006.patch, HADOOP-8887.008.patch, HADOOP-8887.011.patch, 
> HADOOP-8887.012.patch, HADOOP-8887.013.patch, HADOOP-8887.014.patch
>
>
> Currently, we build the native code using ant-build invocations.  Although 
> this works, it has some limitations:
> * compiler warning messages are hidden, which can cause people to check in 
> code with warnings unintentionally
> * there is no framework for running native unit tests; instead, we use ad-hoc 
> constructs involving shell scripts
> * the antrun code is very platform specific
> * there is no way to run a specific native unit test
> * it's more or less impossible for scripts like test-patch.sh to separate a 
> native test failing from the build itself failing (no files are created) or 
> to enumerate which native tests failed.
> Using a native Maven plugin would overcome these limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12691) Add CSRF Filter for REST APIs to Hadoop Common

2016-01-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12691:
---
Issue Type: New Feature  (was: Bug)

> Add CSRF Filter for REST APIs to Hadoop Common
> --
>
> Key: HADOOP-12691
> URL: https://issues.apache.org/jira/browse/HADOOP-12691
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 3.0.0
>
> Attachments: CSRFProtectionforRESTAPIs.pdf, HADOOP-12691-001.patch, 
> HADOOP-12691-002.patch, HADOOP-12691-003.patch
>
>
> CSRF prevention for REST APIs can be provided through a common servlet 
> filter. This filter would check for the existence of an expected 
> (configurable) HTTP header - such as X-XSRF-Header.
> Because CSRF attacks are entirely browser based, the above approach can 
> ensure that requests come either from applications served by the same origin 
> as the REST API, or from another origin whose policy configuration explicitly 
> allows setting a header on XmlHttpRequest.
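A minimal sketch of such a filter, assuming the default header name above 
(this is not the committed RestCsrfPreventionFilter, just the idea):
{code}
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CsrfHeaderFilter implements Filter {
  private String header;

  @Override
  public void init(FilterConfig conf) {
    String configured = conf.getInitParameter("custom-header");
    header = configured != null ? configured : "X-XSRF-Header";
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse res,
      FilterChain chain) throws IOException, ServletException {
    HttpServletRequest httpReq = (HttpServletRequest) req;
    // Browsers cannot set custom headers cross-origin without explicit
    // CORS consent, so requiring one blocks classic CSRF attacks.
    if (httpReq.getHeader(header) == null) {
      ((HttpServletResponse) res).sendError(
          HttpServletResponse.SC_BAD_REQUEST, "Missing required header");
      return;
    }
    chain.doFilter(req, res);
  }

  @Override
  public void destroy() {
  }
}
{code}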



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2016-01-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098770#comment-15098770
 ] 

Andrew Wang commented on HADOOP-8887:
-

There's a bunch of checkstyle stuff, and findbugs looks like it's broken 
because of some kafka stuff.

The cc warnings I'm guessing you're handling in HADOOP-12712, which I'll 
review.

Overall it seems okay; it's just that there are a lot more "-1"s than I'm used 
to in a precommit run, and it's not just flaky unit tests. The 
whitespace/checkstyle stuff is onerous, but one more precommit run will fix it.

If you want to take care of all this in HADOOP-12712, I'm okay with that. 
Nothing functionally wrong in what was committed, so it's okay to do it in a 
follow-on.

> Use a Maven plugin to build the native code using CMake
> ---
>
> Key: HADOOP-8887
> URL: https://issues.apache.org/jira/browse/HADOOP-8887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
> HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch, 
> HADOOP-8887.006.patch, HADOOP-8887.008.patch, HADOOP-8887.011.patch, 
> HADOOP-8887.012.patch, HADOOP-8887.013.patch, HADOOP-8887.014.patch
>
>
> Currently, we build the native code using ant-build invocations.  Although 
> this works, it has some limitations:
> * compiler warning messages are hidden, which can cause people to check in 
> code with warnings unintentionally
> * there is no framework for running native unit tests; instead, we use ad-hoc 
> constructs involving shell scripts
> * the antrun code is very platform specific
> * there is no way to run a specific native unit test
> * it's more or less impossible for scripts like test-patch.sh to separate a 
> native test failing from the build itself failing (no files are created) or 
> to enumerate which native tests failed.
> Using a native Maven plugin would overcome these limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-10965) Confusing error message in hadoop fs -copyFromLocal

2016-01-14 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-10965:
---

Assignee: John Zhuge

> Confusing error message in hadoop fs -copyFromLocal
> ---
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. Even 
> better would be if hadoop could restore the old 1.x behaviour, where 
> copyFromLocal would just create the directories if they are missing.
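For reference, the workaround today is to create the home directory first, 
continuing the transcript above:
{code}
$ hadoop fs -mkdir -p /user/fs111
$ hadoop fs -copyFromLocal data data
{code}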



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12712) Fix some cmake plugin and native build warnings

2016-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12712:
--
Summary: Fix some cmake plugin and native build warnings  (was: Fix some 
native build warnings and cmake plugin warnings)

> Fix some cmake plugin and native build warnings
> ---
>
> Key: HADOOP-12712
> URL: https://issues.apache.org/jira/browse/HADOOP-12712
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-12712.001.patch
>
>
> Fix some native build warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2016-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098787#comment-15098787
 ] 

Hudson commented on HADOOP-8887:


FAILURE: Integrated in Hadoop-trunk-Commit #9113 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9113/])
HADOOP-8887. Use a Maven plugin to build the native code using CMake (cmccabe: 
rev b1ed28fa77cb2fab80c54f9dfeb5d8b7139eca34)
* hadoop-common-project/hadoop-common/pom.xml
* 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/TestMojo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml
* BUILDING.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/cmakebuilder/CompileMojo.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
* 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java
* hadoop-tools/hadoop-pipes/pom.xml


> Use a Maven plugin to build the native code using CMake
> ---
>
> Key: HADOOP-8887
> URL: https://issues.apache.org/jira/browse/HADOOP-8887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
> HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch, 
> HADOOP-8887.006.patch, HADOOP-8887.008.patch, HADOOP-8887.011.patch, 
> HADOOP-8887.012.patch, HADOOP-8887.013.patch, HADOOP-8887.014.patch
>
>
> Currently, we build the native code using ant-build invocations.  Although 
> this works, it has some limitations:
> * compiler warning messages are hidden, which can cause people to check in 
> code with warnings unintentionally
> * there is no framework for running native unit tests; instead, we use ad-hoc 
> constructs involving shell scripts
> * the antrun code is very platform specific
> * there is no way to run a specific native unit test
> * it's more or less impossible for scripts like test-patch.sh to separate a 
> native test failing from the build itself failing (no files are created) or 
> to enumerate which native tests failed.
> Using a native Maven plugin would overcome these limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12691) Add CSRF Filter for REST APIs to Hadoop Common

2016-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098788#comment-15098788
 ] 

Hudson commented on HADOOP-12691:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9113 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9113/])
HADOOP-12691. Add CSRF Filter for REST APIs to Hadoop Common. (cnauroth: rev 
06f4ac0ccdc623283106f258644148d5e003f75c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common/src/test/java/org/apache/hadoop/security/http/TestRestCsrfPreventionFilter.java
* 
hadoop-common/src/main/java/org/apache/hadoop/security/http/RestCsrfPreventionFilter.java


> Add CSRF Filter for REST APIs to Hadoop Common
> --
>
> Key: HADOOP-12691
> URL: https://issues.apache.org/jira/browse/HADOOP-12691
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.9.0
>
> Attachments: CSRFProtectionforRESTAPIs.pdf, HADOOP-12691-001.patch, 
> HADOOP-12691-002.patch, HADOOP-12691-003.patch
>
>
> CSRF prevention for REST APIs can be provided through a common servlet 
> filter. This filter would check for the existence of an expected 
> (configurable) HTTP header - such as X-XSRF-Header.
> The fact that CSRF attacks are entirely browser based means that the above 
> approach can ensure that requests are coming from either: applications served 
> by the same origin as the REST API or that there is explicit policy 
> configuration that allows the setting of a header on XmlHttpRequest from 
> another origin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2016-01-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098861#comment-15098861
 ] 

Wei-Chiu Chuang commented on HADOOP-11613:
--

Edit: EncodingUtil.getBytes() and EncodingUtil.getAsciiString() are in httpcore 
as well, so no need to reimplement, which is great.

> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098868#comment-15098868
 ] 

Yongjun Zhang commented on HADOOP-12700:


Hi Guys,

I found that the commit to 2.8 breaks the build. The same file actually use the 
removed import in 2.8.

I'm reverting this change from branch-2.8 for now. I guess the reason is that 
some changes done in trunk/branch-2 are not in 2.8.

Thanks.

> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by precommit check because test source files are 
> not checked by checkstyle by default. Maven checkstyle plugin parameter 
> *includeTestSourceDirectory* is *false* by default.
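For what it's worth, covering test sources is a one-line plugin setting; a 
sketch of the relevant pom fragment (whether to enable it across Hadoop's 
build is a separate question):
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <configuration>
    <!-- false by default, which is why test sources were skipped -->
    <includeTestSourceDirectory>true</includeTestSourceDirectory>
  </configuration>
</plugin>
{code}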



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12709) Deprecate s3:// in branch-2; cut from trunk

2016-01-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098875#comment-15098875
 ] 

Haohui Mai commented on HADOOP-12709:
-

+1 on removing in trunk.

> Deprecate s3:// in branch-2; cut from trunk
> 
>
> Key: HADOOP-12709
> URL: https://issues.apache.org/jira/browse/HADOOP-12709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable in its day, s3n and 
> especially s3a render it obsolete except for reading existing data.
> I propose:
> # Mark the Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created: "deprecated 
> - will be removed in future releases"
> # In Hadoop trunk, cut it entirely. Maybe have an attic project (external?) 
> which holds it for anyone who still wants it. Or: retain the code but remove 
> the {{fs.s3.impl}} config option, so you have to explicitly add it for use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12710) Remove dependency on commons-httpclient for TestHttpServerLogs

2016-01-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098879#comment-15098879
 ] 

Haohui Mai commented on HADOOP-12710:
-

If there are known security issues and they are not going to be fixed in 
commons-httpclient as it is EOL, we should probably just clean up the code 
and cut the dependency.

> Remove dependency on commons-httpclient for TestHttpServerLogs
> --
>
> Key: HADOOP-12710
> URL: https://issues.apache.org/jira/browse/HADOOP-12710
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12710.001.patch
>
>
> Commons-httpclient has long been EOL. Critically, it has several security 
> vulnerabilities: CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> I saw a recent commit that depends on commons-httpclient for 
> TestHttpServerLogs (HADOOP-12625). This JIRA intends to replace the 
> dependency with httpclient APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12700) Remove unused import in TestCompressorDecompressor.java

2016-01-14 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098884#comment-15098884
 ] 

John Zhuge commented on HADOOP-12700:
-

This is because this JIRA depends on 
[HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590], which is 
only committed to trunk:
{noformat}
$ git log --grep HADOOP-12590 asflive/trunk
commit d7ed04758c1bdb1c7caf5cf3a03da3ad81701957
Author: Steve Loughran 
Date:   Sat Jan 9 11:10:20 2016 +

HADOOP-12590. TestCompressorDecompressor failing without stack traces  
(John Zhuge via stevel)
$ git log --grep HADOOP-12590 asflive/branch-2.7
$ git log --grep HADOOP-12590 asflive/branch-2.8
{noformat}

Could someone commit HADOOP-12590 to branch-2.8?

> Remove unused import in TestCompressorDecompressor.java
> ---
>
> Key: HADOOP-12700
> URL: https://issues.apache.org/jira/browse/HADOOP-12700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12700.001.patch
>
>
> The fix for [HADOOP-12590|https://issues.apache.org/jira/browse/HADOOP-12590] 
> left an unused import in TestCompressorDecompressor.java.
> After uploading the patch for HADOOP-12590, I spotted the problem in IntelliJ 
> which marked the unused import *gray*, but it was too late.
> The problem was not detected by precommit check because test source files are 
> not checked by checkstyle by default. Maven checkstyle plugin parameter 
> *includeTestSourceDirectory* is *false* by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2016-01-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098892#comment-15098892
 ] 

Wei-Chiu Chuang commented on HADOOP-11613:
--

If the replacement code is short, then there's no need to create an extra 
wrapper method. Let me now work on HADOOP-12711 to see if that's necessary. 
Thanks.

> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2016-01-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098821#comment-15098821
 ] 

Wei-Chiu Chuang commented on HADOOP-11613:
--

Did a quick review:
Would it be possible to use URLCodec.encode() instead of URLEncoder.encode()? 
This is how URIUtil.encode() was originally implemented. You would also need 
to reimplement EncodingUtil.getBytes() and EncodingUtil.getAsciiString().

Also, I want to add that ServletUtil.java (in branch-2) also uses 
URIUtil.encode()/URIUtil.decode(), and it needs to be replaced as well. Maybe 
it's better to make a utility method/class to avoid duplicating the work.
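
A minimal sketch of the suggested replacement, assuming commons-codec's 
org.apache.commons.codec.net.URLCodec is on the classpath:

{noformat}
import org.apache.commons.codec.DecoderException;
import org.apache.commons.codec.EncoderException;
import org.apache.commons.codec.net.URLCodec;

public class UrlCodecDemo {
  public static void main(String[] args)
      throws EncoderException, DecoderException {
    URLCodec codec = new URLCodec(); // defaults to UTF-8
    // Encode a path-like string, then decode it back.
    String encoded = codec.encode("dir name/file name");
    String decoded = codec.decode(encoded);
    System.out.println(encoded + " -> " + decoded);
  }
}
{noformat}

One caveat worth checking during review: URLCodec applies www-form-urlencoded 
rules (a space becomes "+"), while URIUtil's path escaping produces "%20", so 
the two are not byte-for-byte interchangeable for path segments.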

> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2016-01-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098585#comment-15098585
 ] 

Wei-Chiu Chuang commented on HADOOP-11613:
--

Hi [~brahmareddy] and [~ajisakaa], thanks for working on this. Are there any 
updates? It would be really great if we could remove commons-httpclient from 
Hadoop entirely, because it's old and has a few security vulnerabilities.

> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12712) Fix some native build warnings

2016-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12712:
--
Attachment: HADOOP-12712.001.patch

> Fix some native build warnings
> --
>
> Key: HADOOP-12712
> URL: https://issues.apache.org/jira/browse/HADOOP-12712
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-12712.001.patch
>
>
> Fix some native build warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12712) Fix some native build warnings

2016-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12712:
--
Status: Patch Available  (was: Open)

> Fix some native build warnings
> --
>
> Key: HADOOP-12712
> URL: https://issues.apache.org/jira/browse/HADOOP-12712
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-12712.001.patch
>
>
> Fix some native build warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12691) Add CSRF Filter for REST APIs to Hadoop Common

2016-01-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12691:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 3.0.0)
   2.9.0
   Status: Resolved  (was: Patch Available)

+1 for patch v003.  I agree with deferring documentation until subsequent 
JIRAs, where individual components will start using the filter.  I have 
committed this to trunk and branch-2.  [~lmccay], thank you for contributing 
this patch.

> Add CSRF Filter for REST APIs to Hadoop Common
> --
>
> Key: HADOOP-12691
> URL: https://issues.apache.org/jira/browse/HADOOP-12691
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.9.0
>
> Attachments: CSRFProtectionforRESTAPIs.pdf, HADOOP-12691-001.patch, 
> HADOOP-12691-002.patch, HADOOP-12691-003.patch
>
>
> CSRF prevention for REST APIs can be provided through a common servlet 
> filter. This filter would check for the existence of an expected 
> (configurable) HTTP header - such as X-XSRF-Header.
> Because CSRF attacks are entirely browser-based, this approach ensures that 
> requests come either from applications served by the same origin as the 
> REST API, or from another origin whose policy configuration explicitly 
> allows setting such a header on XmlHttpRequest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2016-01-14 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098761#comment-15098761
 ] 

Benoy Antony commented on HADOOP-12587:
---

[~vinodkv], I will merge the changes to branch-2.8. Is it okay to cherry-pick 
from trunk? That will make the dates slightly off, as this change is from 
Dec 22.

> Hadoop AuthToken refuses to work without a maxinactive attribute in issued 
> token
> 
>
> Key: HADOOP-12587
> URL: https://issues.apache.org/jira/browse/HADOOP-12587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
> Environment: OSX heimdal kerberos client against Linux KDC -talking 
> to a Hadoop 2.6.0 cluster
>Reporter: Steve Loughran
>Assignee: Benoy Antony
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-12587-001.patch, HADOOP-12587-002.patch, 
> HADOOP-12587-003.patch
>
>
> If you don't have a max-inactive attribute in the auth token returned from 
> the web site, AuthToken will raise an exception. This stops callers without 
> this token being able to submit jobs to a secure Hadoop 2.6 YARN cluster with 
> timeline server enabled. 
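
For illustration only - not the actual AuthToken code or the committed patch 
- this is the lenient behavior a fix could take: treat a missing max-inactive 
attribute as "no inactivity limit" instead of throwing. The attribute key and 
method names below are hypothetical:

{noformat}
import java.util.Map;

/** Illustrative tolerant handling of an optional token attribute. */
public final class TokenAttributeParser {

  /** Hypothetical attribute key; the real token wire format may differ. */
  private static final String MAX_INACTIVES = "i";

  private TokenAttributeParser() {
  }

  public static long parseMaxInactives(Map<String, String> attributes) {
    String value = attributes.get(MAX_INACTIVES);
    if (value == null) {
      // Tokens issued by older servers omit the attribute entirely;
      // default to -1 (disabled) rather than throwing.
      return -1L;
    }
    return Long.parseLong(value);
  }
}
{noformat}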



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12712) Fix some native build warnings and cmake plugin warnings

2016-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12712:
--
Summary: Fix some native build warnings and cmake plugin warnings  (was: 
Fix some native build warnings)

> Fix some native build warnings and cmake plugin warnings
> 
>
> Key: HADOOP-12712
> URL: https://issues.apache.org/jira/browse/HADOOP-12712
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-12712.001.patch
>
>
> Fix some native build warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12712) Fix some cmake plugin and native build warnings

2016-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12712:
--
Attachment: HADOOP-12712.002.patch

Remove some unused imports, clean up some whitespace, and so on.

> Fix some cmake plugin and native build warnings
> ---
>
> Key: HADOOP-12712
> URL: https://issues.apache.org/jira/browse/HADOOP-12712
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-12712.001.patch, HADOOP-12712.002.patch
>
>
> Fix some native build warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12691) Add CSRF Filter for REST APIs to Hadoop Common

2016-01-14 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098778#comment-15098778
 ] 

Larry McCay commented on HADOOP-12691:
--

Thank you, [~cnauroth]!

> Add CSRF Filter for REST APIs to Hadoop Common
> --
>
> Key: HADOOP-12691
> URL: https://issues.apache.org/jira/browse/HADOOP-12691
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.9.0
>
> Attachments: CSRFProtectionforRESTAPIs.pdf, HADOOP-12691-001.patch, 
> HADOOP-12691-002.patch, HADOOP-12691-003.patch
>
>
> CSRF prevention for REST APIs can be provided through a common servlet 
> filter. This filter would check for the existence of an expected 
> (configurable) HTTP header - such as X-XSRF-Header.
> Because CSRF attacks are entirely browser-based, this approach ensures that 
> requests come either from applications served by the same origin as the 
> REST API, or from another origin whose policy configuration explicitly 
> allows setting such a header on XmlHttpRequest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2016-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098790#comment-15098790
 ] 

Colin Patrick McCabe commented on HADOOP-8887:
--

Yes, let's take care of the whitespace / etc. stuff on HADOOP-12712.  There are 
also some flaky native unit tests, which I also think should be fixed in a 
follow-on.

> Use a Maven plugin to build the native code using CMake
> ---
>
> Key: HADOOP-8887
> URL: https://issues.apache.org/jira/browse/HADOOP-8887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
> HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch, 
> HADOOP-8887.006.patch, HADOOP-8887.008.patch, HADOOP-8887.011.patch, 
> HADOOP-8887.012.patch, HADOOP-8887.013.patch, HADOOP-8887.014.patch
>
>
> Currently, we build the native code using ant-build invocations.  Although 
> this works, it has some limitations:
> * compiler warning messages are hidden, which can cause people to check in 
> code with warnings unintentionally
> * there is no framework for running native unit tests; instead, we use ad-hoc 
> constructs involving shell scripts
> * the antrun code is very platform specific
> * there is no way to run a specific native unit test
> * it's more or less impossible for scripts like test-patch.sh to separate a 
> native test failing from the build itself failing (no files are created) or 
> to enumerate which native tests failed.
> Using a native Maven plugin would overcome these limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12706) TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out occasionally

2016-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098793#comment-15098793
 ] 

Colin Patrick McCabe commented on HADOOP-12706:
---

+1 as well.  Thanks, all.

> TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out 
> occasionally
> ---
>
> Key: HADOOP-12706
> URL: https://issues.apache.org/jira/browse/HADOOP-12706
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Attachments: HADOOP-12706.002.patch, HADOOP-12706.01.patch, 
> HADOOP-12706.03.patch
>
>
> TestLocalFsFCStatistics has been failing sometimes, and when it fails it 
> appears to be from FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp. 
>  The test is timing out when it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12709) Deprecate s3:// in branch-2; cut from trunk

2016-01-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12709:
---

 Summary: Deprecate s3:// in branch-2; cut from trunk
 Key: HADOOP-12709
 URL: https://issues.apache.org/jira/browse/HADOOP-12709
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran


The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
shows that it's not being used. While invaluable in its day, s3n and 
especially s3a render it obsolete except for reading existing data.

I propose:

# Mark the Java source as {{@deprecated}}
# Warn the first time in a JVM that an S3 instance is created: "deprecated 
- will be removed in future releases" (see the sketch below)
# In Hadoop trunk, cut it entirely. Maybe have an attic project (external?) 
which holds it for anyone who still wants it. Or: retain the code but remove 
the {{fs.s3.impl}} config option, so you have to explicitly add it for use.
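
A sketch of proposals 1 and 2, assuming a one-time warning per JVM is what is 
wanted; the class name is illustrative, not the actual S3 filesystem source:

{noformat}
import java.util.concurrent.atomic.AtomicBoolean;

/** Illustrative deprecated class that warns once per JVM on creation. */
@Deprecated
public class LegacyS3Client {

  private static final AtomicBoolean WARNED = new AtomicBoolean(false);

  public LegacyS3Client() {
    // compareAndSet guarantees at most one warning per JVM, even when
    // many instances are created concurrently.
    if (WARNED.compareAndSet(false, true)) {
      System.err.println(
          "s3:// is deprecated - will be removed in future releases");
    }
  }
}
{noformat}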



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12710) Remove dependency on commons-httpclient for TestHttpServerLogs

2016-01-14 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12710:


 Summary: Remove dependency on commons-httpclient for 
TestHttpServerLogs
 Key: HADOOP-12710
 URL: https://issues.apache.org/jira/browse/HADOOP-12710
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


Commons-httpclient has long been EOL. Critically, it has several security 
vulnerabilities: CVE-2012-5783 
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.

I saw a recent commit that depends on commons-httpclient for 
TestHttpServerLogs (HADOOP-12625). This JIRA intends to replace the 
dependency with httpclient APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12710) Remove dependency on commons-httpclient for TestHttpServerLogs

2016-01-14 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12710:
-
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-10105

> Remove dependency on commons-httpclient for TestHttpServerLogs
> --
>
> Key: HADOOP-12710
> URL: https://issues.apache.org/jira/browse/HADOOP-12710
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> Commons-httpclient has long been EOL. Critically, it has several security 
> vulnerabilities: CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> I saw a recent commit that depends on commons-httpclient for 
> TestHttpServerLogs (HADOOP-12625). This JIRA intends to replace the 
> dependency with httpclient APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12712) Fix some native build warnings

2016-01-14 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12712:
-

 Summary: Fix some native build warnings
 Key: HADOOP-12712
 URL: https://issues.apache.org/jira/browse/HADOOP-12712
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


Fix some native build warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2016-01-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098678#comment-15098678
 ] 

Andrew Wang commented on HADOOP-8887:
-

Hey Colin, I think the commit was a little premature, since precommit flagged 
a bunch of issues. Could you take a look?

> Use a Maven plugin to build the native code using CMake
> ---
>
> Key: HADOOP-8887
> URL: https://issues.apache.org/jira/browse/HADOOP-8887
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch, 
> HADOOP-8887.003.patch, HADOOP-8887.004.patch, HADOOP-8887.005.patch, 
> HADOOP-8887.006.patch, HADOOP-8887.008.patch, HADOOP-8887.011.patch, 
> HADOOP-8887.012.patch, HADOOP-8887.013.patch, HADOOP-8887.014.patch
>
>
> Currently, we build the native code using ant-build invocations.  Although 
> this works, it has some limitations:
> * compiler warning messages are hidden, which can cause people to check in 
> code with warnings unintentionally
> * there is no framework for running native unit tests; instead, we use ad-hoc 
> constructs involving shell scripts
> * the antrun code is very platform specific
> * there is no way to run a specific native unit test
> * it's more or less impossible for scripts like test-patch.sh to separate a 
> native test failing from the build itself failing (no files are created) or 
> to enumerate which native tests failed.
> Using a native Maven plugin would overcome these limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12710) Remove dependency on commons-httpclient for TestHttpServerLogs

2016-01-14 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12710:
-
Attachment: HADOOP-12710.001.patch

Rev01: a small change to use the httpcore status code.

> Remove dependency on commons-httpclient for TestHttpServerLogs
> --
>
> Key: HADOOP-12710
> URL: https://issues.apache.org/jira/browse/HADOOP-12710
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12710.001.patch
>
>
> Commons-httpclient has long been EOL. Critically, it has several security 
> vulnerabilities: CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> I saw a recent commit that depends on commons-httpclient for 
> TestHttpServerLogs (HADOOP-12625). This JIRA intends to replace the 
> dependency with httpclient APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12706) TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out occasionally

2016-01-14 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098409#comment-15098409
 ] 

Sangjin Lee commented on HADOOP-12706:
--

Can I get your review? The test failures are unrelated.

> TestLocalFsFCStatistics#testStatisticsThreadLocalDataCleanUp times out 
> occasionally
> ---
>
> Key: HADOOP-12706
> URL: https://issues.apache.org/jira/browse/HADOOP-12706
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Attachments: HADOOP-12706.002.patch, HADOOP-12706.01.patch, 
> HADOOP-12706.03.patch
>
>
> TestLocalFsFCStatistics has been failing sometimes, and when it fails it 
> appears to be from FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp. 
>  The test is timing out when it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12713) Disable spurious checkstyle checks

2016-01-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12713:
-
Attachment: HADOOP-12713.001.patch

Turn off the line-length check and the qualifier-ordering check (which I 
think is pretty nitpicky).

Note you need to run "mvn install" before these changes get picked up.

> Disable spurious checkstyle checks
> --
>
> Key: HADOOP-12713
> URL: https://issues.apache.org/jira/browse/HADOOP-12713
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-12713.001.patch
>
>
> Some of the checkstyle checks are not realistic (like the line length), 
> leading to spurious -1 in precommit. Let's disable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12713) Disable spurious checkstyle checks

2016-01-14 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-12713:


 Summary: Disable spurious checkstyle checks
 Key: HADOOP-12713
 URL: https://issues.apache.org/jira/browse/HADOOP-12713
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Andrew Wang
Assignee: Andrew Wang


Some of the checkstyle checks are not realistic (like the line length), leading 
to spurious -1 in precommit. Let's disable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12713) Disable spurious checkstyle checks

2016-01-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12713:
-
Status: Patch Available  (was: Open)

> Disable spurious checkstyle checks
> --
>
> Key: HADOOP-12713
> URL: https://issues.apache.org/jira/browse/HADOOP-12713
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-12713.001.patch
>
>
> Some of the checkstyle checks are not realistic (like the line length), 
> leading to spurious -1 in precommit. Let's disable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12712) Fix some cmake plugin and native build warnings

2016-01-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15098993#comment-15098993
 ] 

Andrew Wang commented on HADOOP-12712:
--

One other thing I noticed: there's a TODO in CompileMojo about setting the 
number of processors; it can be deleted, since we consult Runtime now.
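
For context, a minimal sketch of the standard call meant by "consult 
Runtime"; once it is used, a hand-configured processor count (and the TODO 
about one) is unnecessary:

{noformat}
public class ProcessorCount {
  public static void main(String[] args) {
    // The JVM reports the available processor count directly.
    System.out.println(Runtime.getRuntime().availableProcessors());
  }
}
{noformat}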

> Fix some cmake plugin and native build warnings
> ---
>
> Key: HADOOP-12712
> URL: https://issues.apache.org/jira/browse/HADOOP-12712
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-12712.001.patch, HADOOP-12712.002.patch
>
>
> Fix some native build warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12713) Disable spurious checkstyle checks

2016-01-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12713:
-
Attachment: HADOOP-12713.002.patch

Disable the TODO check too; there's nothing wrong with a TODO.

> Disable spurious checkstyle checks
> --
>
> Key: HADOOP-12713
> URL: https://issues.apache.org/jira/browse/HADOOP-12713
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-12713.001.patch, HADOOP-12713.002.patch
>
>
> Some of the checkstyle checks are not realistic (like the line length), 
> leading to spurious -1 in precommit. Let's disable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)