[jira] [Commented] (HADOOP-11790) Testcase failures in PowerPC due to leveldbjni artifact

2015-11-05 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991896#comment-14991896
 ] 

Alan Burlison commented on HADOOP-11790:


Solaris has exactly the same issue. If leveldbjni is built locally, where does it 
need to be installed in order to be found?

> Testcase failures in PowerPC due to leveldbjni artifact
> ---
>
> Key: HADOOP-11790
> URL: https://issues.apache.org/jira/browse/HADOOP-11790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
> Environment: PowerPC64LE
>Reporter: Ayappan
>Priority: Minor
>
> The leveldbjni artifact in the Maven repository has been built only for the x86 
> architecture, which causes some of the testcases to fail on PowerPC. The 
> leveldbjni community has no plans to support other platforms [ 
> https://github.com/fusesource/leveldbjni/issues/54 ]. Right now, the 
> approach is to build leveldbjni locally prior to running the Hadoop 
> testcases. Pushing a PowerPC-specific leveldbjni artifact to the central Maven 
> repository and making pom.xml pick it up when running on PowerPC is another 
> option, but I don't know whether this is a suitable one. Is there any other 
> alternative/solution?
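The pom.xml option mentioned above could look roughly like the following Maven profile. This is only a sketch: the `ppc64le` classifier is hypothetical, and an artifact with these coordinates would first have to be published somewhere the build can resolve it.

```xml
<!-- Sketch only: swap in an alternative leveldbjni artifact when the
     build runs on ppc64le. Coordinates follow the upstream leveldbjni
     project; the classifier is an assumption. -->
<profile>
  <id>leveldbjni-ppc64le</id>
  <activation>
    <os>
      <arch>ppc64le</arch>
    </os>
  </activation>
  <dependencies>
    <dependency>
      <groupId>org.fusesource.leveldbjni</groupId>
      <artifactId>leveldbjni-all</artifactId>
      <version>1.8</version>
      <classifier>ppc64le</classifier>
    </dependency>
  </dependencies>
</profile>
```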



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11687) Ignore x-emc-* headers when copying an Amazon S3 object

2015-11-05 Thread Aaron Peterson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991941#comment-14991941
 ] 

Aaron Peterson commented on HADOOP-11687:
-

Implementing a different cloning method seems to alleviate the COPY issue. 
Replace

{code:java}
final ObjectMetadata dstom = srcom.clone();
{code}

with

{code:java}
ObjectMetadata dstom = copyOfObjMetadata(srcom);
{code}

and add the following helper:

{code:java}
  // Constructs a copy of the object metadata, carrying over only the
  // standard attributes (proprietary headers such as x-emc-* are dropped).
  private ObjectMetadata copyOfObjMetadata(ObjectMetadata source) {
    ObjectMetadata ret = new ObjectMetadata();
    ret.setCacheControl(source.getCacheControl());
    ret.setContentDisposition(source.getContentDisposition());
    ret.setContentEncoding(source.getContentEncoding());
    ret.setContentLength(source.getContentLength());
    ret.setContentMD5(source.getContentMD5());
    ret.setContentType(source.getContentType());
    ret.setExpirationTime(source.getExpirationTime());
    ret.setExpirationTimeRuleId(source.getExpirationTimeRuleId());
    ret.setHttpExpiresDate(source.getHttpExpiresDate());
    ret.setLastModified(source.getLastModified());
    ret.setOngoingRestore(source.getOngoingRestore());
    ret.setRestoreExpirationTime(source.getRestoreExpirationTime());
    ret.setSSEAlgorithm(source.getSSEAlgorithm());
    ret.setSSECustomerAlgorithm(source.getSSECustomerAlgorithm());
    ret.setSSECustomerKeyMd5(source.getSSECustomerKeyMd5());
    if (!source.getUserMetadata().isEmpty()) {
      Map<String, String> smd = source.getUserMetadata();
      for (Map.Entry<String, String> entry : smd.entrySet()) {
        ret.addUserMetadata(entry.getKey(), entry.getValue());
      }
    }
    return ret;
  }
{code}

> Ignore x-emc-* headers when copying an Amazon S3 object
> ---
>
> Key: HADOOP-11687
> URL: https://issues.apache.org/jira/browse/HADOOP-11687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Denis Jannot
>
> The EMC ViPR/ECS object storage platform uses proprietary headers starting with 
> x-emc-* (like Amazon does with x-amz-*).
> Headers starting with x-emc-* should be included in the signature computation, 
> but the Amazon S3 Java SDK does not do this (the EMC S3 SDK does).
> When s3a copies an object it copies all the headers, but when the object 
> includes x-emc-* headers, this generates a signature mismatch.
> Removing the x-emc-* headers from the copy would allow s3a to be compatible 
> with the EMC ViPR/ECS object storage platform.
> Removing all x-* headers which aren't x-amz-* headers from the copy would allow 
> s3a to be compatible with any object storage platform that uses proprietary 
> headers.
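The second proposal above (drop any x-* header that isn't x-amz-*) amounts to a simple prefix filter over the copied metadata. A self-contained sketch of that idea; the helper name and the plain string map are illustrative, not the actual s3a code:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.TreeMap;

public class HeaderFilter {
    // Keep a header on COPY only if it is not proprietary: anything that
    // starts with "x-" must also start with "x-amz-" to survive.
    static Map<String, String> stripProprietary(Map<String, String> headers) {
        Map<String, String> kept = new HashMap<>();
        for (Map.Entry<String, String> e : headers.entrySet()) {
            String key = e.getKey().toLowerCase(Locale.ROOT);
            if (!key.startsWith("x-") || key.startsWith("x-amz-")) {
                kept.put(e.getKey(), e.getValue());
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, String> headers = new HashMap<>();
        headers.put("x-emc-vpool", "pool1");       // proprietary: dropped
        headers.put("x-amz-meta-owner", "alice");  // Amazon: kept
        headers.put("Content-Type", "text/plain"); // standard: kept
        System.out.println(new TreeMap<>(stripProprietary(headers)).keySet());
        // prints [Content-Type, x-amz-meta-owner]
    }
}
```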





[jira] [Commented] (HADOOP-11790) Testcase failures in PowerPC & Solaris due to leveldbjni artifact

2015-11-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992072#comment-14992072
 ] 

Allen Wittenauer commented on HADOOP-11790:
---

Local maven repo or some other repo defined by maven's settings.xml.
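For the local-repository route, the install step could look like the sketch below. The coordinates and version are assumptions (check the leveldbjni pom in your checkout), and the script only prints the command so it can be reviewed before running:

```shell
# Hedged sketch: publish a locally built leveldbjni jar into the local
# Maven repository so the Hadoop build resolves it instead of the
# x86-only artifact from Central. Adjust the jar path and version.
JAR=leveldbjni-all-1.8.jar
CMD="mvn install:install-file -Dfile=$JAR \
  -DgroupId=org.fusesource.leveldbjni -DartifactId=leveldbjni-all \
  -Dversion=1.8 -Dpackaging=jar"
echo "$CMD"   # inspect, then run with: eval "$CMD"
```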

> Testcase failures in PowerPC & Solaris due to leveldbjni artifact
> -
>
> Key: HADOOP-11790
> URL: https://issues.apache.org/jira/browse/HADOOP-11790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
> Environment: PowerPC64LE Solaris
>Reporter: Ayappan
>Priority: Minor
>
> The leveldbjni artifact in the Maven repository has been built only for the x86 
> architecture, which causes some of the testcases to fail on PowerPC. The 
> leveldbjni community has no plans to support other platforms [ 
> https://github.com/fusesource/leveldbjni/issues/54 ]. Right now, the 
> approach is to build leveldbjni locally prior to running the Hadoop 
> testcases. Pushing a PowerPC-specific leveldbjni artifact to the central Maven 
> repository and making pom.xml pick it up when running on PowerPC is another 
> option, but I don't know whether this is a suitable one. Is there any other 
> alternative/solution?





[jira] [Updated] (HADOOP-11790) Testcase failures in PowerPC due to leveldbjni artifact

2015-11-05 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-11790:
---
Environment: PowerPC64LE Solaris  (was: PowerPC64LE)

> Testcase failures in PowerPC due to leveldbjni artifact
> ---
>
> Key: HADOOP-11790
> URL: https://issues.apache.org/jira/browse/HADOOP-11790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
> Environment: PowerPC64LE Solaris
>Reporter: Ayappan
>Priority: Minor
>
> The leveldbjni artifact in the Maven repository has been built only for the x86 
> architecture, which causes some of the testcases to fail on PowerPC. The 
> leveldbjni community has no plans to support other platforms [ 
> https://github.com/fusesource/leveldbjni/issues/54 ]. Right now, the 
> approach is to build leveldbjni locally prior to running the Hadoop 
> testcases. Pushing a PowerPC-specific leveldbjni artifact to the central Maven 
> repository and making pom.xml pick it up when running on PowerPC is another 
> option, but I don't know whether this is a suitable one. Is there any other 
> alternative/solution?





[jira] [Commented] (HADOOP-12079) Make 'X-Newest' header a configurable

2015-11-05 Thread ramtin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991970#comment-14991970
 ] 

ramtin commented on HADOOP-12079:
-

+1 (non-binding)

> Make 'X-Newest' header a configurable
> -
>
> Key: HADOOP-12079
> URL: https://issues.apache.org/jira/browse/HADOOP-12079
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.6.0, 3.0.0
>Reporter: Gil Vernik
>Assignee: Gil Vernik
> Attachments: x-newest-optional0001.patch, 
> x-newest-optional0002.patch, x-newest-optional0003.patch, 
> x-newest-optional0004.patch, x-newest-optional0005.patch
>
>
> The current code always sends the X-Newest header to Swift. While it's true 
> that Swift is eventually consistent and X-Newest will always get the newest 
> version from Swift, in practice this header makes Swift responses very slow. 
> This header should be optional, so that it is possible to access Swift 
> without it and get much better performance. 
> This patch doesn't modify the current behavior. Everything works as is, but 
> there is an option to provide fs.swift.service.useXNewest = false. 
> Some background on Swift and X-Newest: 
> When a GET or HEAD request is made to an object, the default behavior is to 
> get the data from one of the replicas (it could be any of them). The downside 
> is that if there are older versions of the object (due to eventual 
> consistency) it is possible to get an older version of the object. The upside 
> is that for the majority of use cases this isn't an issue. The small subset 
> of use cases that need to make sure they get the latest version of the 
> object can set the "X-Newest" header to "True". If this is set, the proxy 
> server will check all replicas of the object and only return the newest one. 
> The downside is that the request can take longer, since it has to contact 
> all the replicas. It is also more expensive for the backend, so it is only 
> recommended when absolutely needed.
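With the patch, disabling the header would presumably be a core-site.xml entry like the one below. This is a sketch based on the property name given above; the exact key may differ across patch revisions.

```xml
<!-- Sketch: opt out of X-Newest for faster (possibly stale) Swift reads. -->
<property>
  <name>fs.swift.service.useXNewest</name>
  <value>false</value>
</property>
```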





[jira] [Updated] (HADOOP-11790) Testcase failures in PowerPC & Solaris due to leveldbjni artifact

2015-11-05 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-11790:
---
Summary: Testcase failures in PowerPC & Solaris due to leveldbjni artifact  
(was: Testcase failures in PowerPC due to leveldbjni artifact)

> Testcase failures in PowerPC & Solaris due to leveldbjni artifact
> -
>
> Key: HADOOP-11790
> URL: https://issues.apache.org/jira/browse/HADOOP-11790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
> Environment: PowerPC64LE Solaris
>Reporter: Ayappan
>Priority: Minor
>
> The leveldbjni artifact in the Maven repository has been built only for the x86 
> architecture, which causes some of the testcases to fail on PowerPC. The 
> leveldbjni community has no plans to support other platforms [ 
> https://github.com/fusesource/leveldbjni/issues/54 ]. Right now, the 
> approach is to build leveldbjni locally prior to running the Hadoop 
> testcases. Pushing a PowerPC-specific leveldbjni artifact to the central Maven 
> repository and making pom.xml pick it up when running on PowerPC is another 
> option, but I don't know whether this is a suitable one. Is there any other 
> alternative/solution?





[jira] [Updated] (HADOOP-10668) TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails

2015-11-05 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-10668:
-
Fix Version/s: 2.6.3

Cherry-picked the fix to branch-2.6.

> TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails
> 
>
> Key: HADOOP-10668
> URL: https://issues.apache.org/jira/browse/HADOOP-10668
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Ted Yu
>Assignee: Ming Ma
>  Labels: test
> Fix For: 2.7.0, 2.6.3
>
> Attachments: HADOOP-10668-2.patch, HADOOP-10668.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/4018//testReport/org.apache.hadoop.ha/TestZKFailoverControllerStress/testExpireBackAndForth/
>  :
> {code}
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
>   at org.apache.zookeeper.server.DataTree.getData(DataTree.java:648)
>   at org.apache.zookeeper.server.ZKDatabase.getData(ZKDatabase.java:371)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireActiveLockHolder(MiniZKFCCluster.java:199)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireAndVerifyFailover(MiniZKFCCluster.java:234)
>   at 
> org.apache.hadoop.ha.TestZKFailoverControllerStress.testExpireBackAndForth(TestZKFailoverControllerStress.java:84)
> {code}





[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11887:
--
  Resolution: Fixed
  Fix Version/s: 2.8.0
  Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

Committed to 2.8.  Thanks, [~drankye].

> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.8.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, like *.so on 
> *nix and *.dll on Windows).
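The dynamic-loading approach can be illustrated with the usual guarded System.loadLibrary pattern. This is a generic sketch, not the actual Hadoop loader, and the library name is an assumption:

```java
public class NativeCoderLoader {
    private static final boolean AVAILABLE;

    static {
        boolean ok;
        try {
            // Resolves libisal.so on *nix or isal.dll on Windows;
            // the library name here is hypothetical.
            System.loadLibrary("isal");
            ok = true;
        } catch (UnsatisfiedLinkError e) {
            ok = false; // fall back to the pure-Java coder
        }
        AVAILABLE = ok;
    }

    public static boolean isAvailable() {
        return AVAILABLE;
    }

    public static void main(String[] args) {
        // Output is environment-dependent: "native" only if the library exists.
        System.out.println(isAvailable() ? "native" : "fallback");
    }
}
```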





[jira] [Updated] (HADOOP-11267) TestSecurityUtil fails when run with JDK8 because of empty principal names

2015-11-05 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-11267:
-
Fix Version/s: 2.6.3

Cherry-picked the fix to branch-2.6.

> TestSecurityUtil fails when run with JDK8 because of empty principal names
> --
>
> Key: HADOOP-11267
> URL: https://issues.apache.org/jira/browse/HADOOP-11267
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Stephen Chu
>Priority: Minor
> Fix For: 2.7.0, 2.6.3
>
> Attachments: HADOOP-11267.1.patch, HADOOP-11267.2.patch, 
> HADOOP-11267.2.patch, HADOOP-11267.4.patch
>
>
> Running {{TestSecurityUtil}} on JDK8 will fail:
> {code}
> java.lang.IllegalArgumentException: Empty nameString not allowed
>   at 
> sun.security.krb5.PrincipalName.validateNameStrings(PrincipalName.java:171)
>   at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:393)
>   at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:460)
>   at 
> javax.security.auth.kerberos.KerberosPrincipal.<init>(KerberosPrincipal.java:120)
>   at 
> org.apache.hadoop.security.TestSecurityUtil.isOriginalTGTReturnsCorrectValues(TestSecurityUtil.java:57)
> {code}
> In JDK8, PrincipalName checks that its name is not empty and throws an 
> IllegalArgumentException if it is empty. This didn't happen in JDK6/7.
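The failure can be reproduced outside the test suite with a minimal repro: on JDK8 and later, constructing a KerberosPrincipal from an empty name throws, where JDK6/7 accepted it. A small sketch:

```java
import javax.security.auth.kerberos.KerberosPrincipal;

public class EmptyPrincipalDemo {
    public static void main(String[] args) {
        try {
            // On JDK8+, PrincipalName rejects an empty name string.
            new KerberosPrincipal("");
            System.out.println("constructed");
        } catch (IllegalArgumentException e) {
            System.out.println("IllegalArgumentException");
        }
    }
}
```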





[jira] [Commented] (HADOOP-8065) discp should have an option to compress data while copying.

2015-11-05 Thread Stephen Veiss (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992212#comment-14992212
 ] 

Stephen Veiss commented on HADOOP-8065:
---

The remaining checkstyle issues are either not actually in code modified by 
this patch, or (the 'hides a field' warnings) addressing them would mean 
breaking with the existing style in the file.

> discp should have an option to compress data while copying.
> ---
>
> Key: HADOOP-8065
> URL: https://issues.apache.org/jira/browse/HADOOP-8065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.20.2
>Reporter: Suresh Antony
>Priority: Minor
>  Labels: distcp
> Fix For: 0.20.2
>
> Attachments: HADOOP-8065-trunk_2015-11-03.patch, 
> HADOOP-8065-trunk_2015-11-04.patch, patch.distcp.2012-02-10
>
>
> We would like to compress the data while transferring it from our source 
> system to the target system. One way to do this is to write a map/reduce job 
> to compress it before/after it is transferred, but that looks inefficient. 
> Since distcp is already reading and writing the data, it would be better if 
> it could compress while doing so. 
> The flip side is that the distcp -update option cannot check file size 
> before copying data; it can only check for the existence of the file. 
> So I propose that if the -compress option is given, the file size is not 
> checked.
> Also, when we copy a file, the appropriate extension needs to be added 
> depending on the compression type.
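The extension point in the last sentence could be a simple codec-to-suffix mapping. A hypothetical helper (the names and the codec set are illustrative, not part of the patch):

```java
import java.util.Map;

public class CodecExtensions {
    // Illustrative mapping from a -compress codec choice to a file suffix.
    private static final Map<String, String> EXT = Map.of(
            "gzip", ".gz",
            "bzip2", ".bz2",
            "lz4", ".lz4",
            "snappy", ".snappy");

    static String withExtension(String path, String codec) {
        String ext = EXT.get(codec);
        if (ext == null) {
            throw new IllegalArgumentException("unknown codec: " + codec);
        }
        // Avoid double-appending when the source already carries the suffix.
        return path.endsWith(ext) ? path : path + ext;
    }

    public static void main(String[] args) {
        System.out.println(withExtension("/data/part-00000", "gzip"));
        // prints /data/part-00000.gz
    }
}
```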





[jira] [Commented] (HADOOP-12482) Race condition in JMX cache update

2015-11-05 Thread Tony Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992260#comment-14992260
 ] 

Tony Wu commented on HADOOP-12482:
--

The failed test hadoop.ipc.TestDecayRpcScheduler is not relevant to the change.

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However the patch introduced a 
> race condition. In updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarily advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # the metrics source is updated with a new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However because lastRecs is already updated (!= null), getAllMetrics will 
> not be set to true. So updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such case getMBeanInfo() may never be able to 
> retrieve the new record.
> The desired behavior should be that updateJmxCache() will guarantee to call 
> updateInfoCache() once after jmxCacheTTL, if lastRecs has been set to null by 
> updateJmxCache() itself.
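The interleaving above can be reenacted single-threaded with a stripped-down model of the cache logic (a sketch of the quoted code, not the real MetricsSourceAdapter):

```java
public class JmxRaceDemo {
    static Object lastRecs = null;
    static int infoCacheUpdates = 0;

    // Stripped-down updateJmxCache(): refreshes the info cache only when
    // lastRecs is null, exactly like the quoted code.
    static void updateJmxCache() {
        boolean getAllMetrics = (lastRecs == null);
        if (getAllMetrics) {
            infoCacheUpdates++; // stands in for updateInfoCache()
        }
        lastRecs = null;        // "in case regular interval update is not running"
    }

    // Stands in for getMetrics() called from the timer/publish thread.
    static void getMetrics() {
        lastRecs = new Object();
    }

    public static void main(String[] args) {
        updateJmxCache(); // 1. info cache refreshed, lastRecs set to null
        getMetrics();     // 2-3. source changes; another thread refills lastRecs
        updateJmxCache(); // 4-5. TTL passed, but lastRecs != null: refresh skipped
        System.out.println(infoCacheUpdates); // prints 1, not the expected 2
    }
}
```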





[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992276#comment-14992276
 ] 

Zhe Zhang commented on HADOOP-11887:


Thanks Kai for the great work and Colin for the review! Looking forward to 
improved EC I/O performance powered by ISA-L.

> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, like *.so on 
> *nix and *.dll on Windows).





[jira] [Commented] (HADOOP-12547) Deprecate hadoop-pipes

2015-11-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992305#comment-14992305
 ] 

Colin Patrick McCabe commented on HADOOP-12547:
---

bq. I'm pretty sure streaming supports JVM languages too since I'd be surprised 
if Java couldn't read and write from stdin and stdout... which, by your own 
argument, means we should drop the Java client APIs too. After all, that would 
reduce the code footprint, limit the testing needs, etc, etc, too right?

The Java client APIs provide significant advantages that neither streaming nor 
pipes provide.  Strong type checking, for example, and many opportunities to 
avoid serialization overhead.

bq. At this point, both. Perhaps deprecation in trunk if the native task stuff 
actually works.

OK.

bq. 
https://www.quora.com/If-my-current-job-involves-purely-C-C++-coding-what-are-the-best-ways-to-learn-hadoop-and-contribute-to-the-apache-hadoop-project-I-understand-most-of-hadoop-code-is-Java-Are-there-any-C-C++-bindings-for-hadoop-used-in-production-clusters

These links don't answer the question of why you would use pipes instead of 
streaming.  In fact, they don't even mention streaming at all.  Again I ask, 
why would you personally ever use or recommend pipes when streaming is 
available?

> Deprecate hadoop-pipes
> --
>
> Key: HADOOP-12547
> URL: https://issues.apache.org/jira/browse/HADOOP-12547
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> Development appears to have stopped on hadoop-pipes upstream for the last few 
> years, aside from very basic maintenance.  Hadoop streaming seems to be a 
> better alternative, since it supports more programming languages and is 
> better implemented.
> There were no responses to a message on the mailing list asking for users of 
> Hadoop pipes... and in my experience, I have never seen anyone use this.  We 
> should remove it to reduce our maintenance burden and build times.





[jira] [Commented] (HADOOP-12547) Deprecate hadoop-pipes

2015-11-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992364#comment-14992364
 ] 

Allen Wittenauer commented on HADOOP-12547:
---

bq. The Java client APIs provide significant advantages that neither streaming 
nor pipes provide. 

This is a false statement. Partitioning, for example, can't be done natively in 
streaming code but can in pipes.  In streaming, you can only provide a Java 
class.

bq. In fact, they don't even mention streaming at all. Again I ask, why would 
you personally ever use or recommend pipes when streaming is available?

Correct. Because if the MR code is being written in C++, why would one use the 
less functional streaming API?  If one believes that MR jobs consist of nothing 
but reading and writing KVs I could see that, but there's a lot more going on 
under the hood in more advanced jobs.  That functionality is just flat-out not 
available in streaming.

BTW, thanks to this discourse, I realize that yes, not deprecating until trunk 
is completely the correct thing to do.  It clearly fills a gap not fulfilled by 
any other APIs.  So, I'm more convinced than ever that a -1 is appropriate here.

> Deprecate hadoop-pipes
> --
>
> Key: HADOOP-12547
> URL: https://issues.apache.org/jira/browse/HADOOP-12547
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> Development appears to have stopped on hadoop-pipes upstream for the last few 
> years, aside from very basic maintenance.  Hadoop streaming seems to be a 
> better alternative, since it supports more programming languages and is 
> better implemented.
> There were no responses to a message on the mailing list asking for users of 
> Hadoop pipes... and in my experience, I have never seen anyone use this.  We 
> should remove it to reduce our maintenance burden and build times.





[jira] [Updated] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-05 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-11684:
--
Attachment: (was: HADOOP-11684-006.patch)

> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch
>
>
> Currently, if fs.s3a.max.total.tasks tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a thread pool that blocks clients, nicely throttling them, 
> rather than throwing an exception. For instance, something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java
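The linked BlockingThreadPoolExecutorService boils down to bounding submissions with a semaphore, so saturated callers block instead of getting a RejectedExecutionException. A self-contained sketch of that pattern (not the s3a patch itself):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BlockingSubmitter {
    private final ExecutorService pool;
    private final Semaphore slots;

    BlockingSubmitter(int threads, int maxQueued) {
        this.pool = Executors.newFixedThreadPool(threads);
        // Permits = running + queued tasks allowed before callers block.
        this.slots = new Semaphore(threads + maxQueued);
    }

    void submit(Runnable task) throws InterruptedException {
        slots.acquire(); // blocks the caller when the pool is saturated
        try {
            pool.execute(() -> {
                try {
                    task.run();
                } finally {
                    slots.release();
                }
            });
        } catch (RejectedExecutionException e) {
            slots.release();
            throw e;
        }
    }

    void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        BlockingSubmitter s = new BlockingSubmitter(2, 2);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 10; i++) {
            s.submit(done::incrementAndGet); // never rejected, only delayed
        }
        s.shutdown();
        System.out.println(done.get()); // prints 10
    }
}
```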





[jira] [Updated] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-05 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-11684:
--
Attachment: HADOOP-11684-006.patch

Re-attaching 006 patch.

> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a thread pool that blocks clients, nicely throttling them, 
> rather than throwing an exception. For instance, something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java





[jira] [Commented] (HADOOP-11790) Testcase failures in PowerPC & Solaris due to leveldbjni artifact

2015-11-05 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992127#comment-14992127
 ] 

Alan Burlison commented on HADOOP-11790:


Thanks, it looks like all the necessary bits will build on Solaris, after some 
pummelling.

> Testcase failures in PowerPC & Solaris due to leveldbjni artifact
> -
>
> Key: HADOOP-11790
> URL: https://issues.apache.org/jira/browse/HADOOP-11790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
> Environment: PowerPC64LE Solaris
>Reporter: Ayappan
>Priority: Minor
>
> The leveldbjni artifact in the Maven repository has been built only for the x86 
> architecture, which causes some of the testcases to fail on PowerPC. The 
> leveldbjni community has no plans to support other platforms [ 
> https://github.com/fusesource/leveldbjni/issues/54 ]. Right now, the 
> approach is to build leveldbjni locally prior to running the Hadoop 
> testcases. Pushing a PowerPC-specific leveldbjni artifact to the central Maven 
> repository and making pom.xml pick it up when running on PowerPC is another 
> option, but I don't know whether this is a suitable one. Is there any other 
> alternative/solution?





[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11887:
--
Summary: Introduce Intel ISA-L erasure coding library for native erasure 
encoding support  (was: Introduce Intel ISA-L erasure coding library for the 
native support)

> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, like *.so on 
> *nix and *.dll on Windows).





[jira] [Commented] (HADOOP-12547) Deprecate hadoop-pipes

2015-11-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992190#comment-14992190
 ] 

Colin Patrick McCabe commented on HADOOP-12547:
---

bq. Why do people use streaming instead of the Java MR API? Or even, why do 
people use Java MR instead of streaming?

Because the Java MR API only supports Java (and possibly other JVM languages), 
whereas streaming supports Perl, Python, Ruby, C, C++, and any other non-JVM 
programming language you can think of.

bq. we have people actually using it

Who specifically is using it?  [~kihwal] said he'd check if they were still 
using it, but didn't return with that information yet.  There was a post on 
stack overflow where a newbie tried to use it and failed.

bq. we haven't removed or deprecated MRv1 yet either, and these two seem fairly 
tied together given the history of why it exists

Hmm.  How are they tied together?  It seems like pipes could run against MRv2 
as well as MRv1.

In general, the comparisons of hadoop-pipes with mapreduce itself don't seem 
fair.  Users and customers run mapreduce jobs on a daily basis: for disaster 
recovery with DistCp, to benchmark with Teragen, Teraread, Teravalidate, or the 
Pi jobs, and so on.  While there are good reasons to write new jobs in Spark, 
there are also a lot of MR jobs out there.  The same can't be said for 
hadoop-pipes, for which we are still searching for an actual user.

bq. So yeah, I'm definitely -1 at this point.

What specifically are you -1 on?  Removal, deprecation, or both?

Can you explain when you would advise one of your customers to use pipes 
instead of streaming?

If you feel that pipes is worth maintaining, can you file JIRAs to reinstate 
the documentation, fix the compiler warnings, and fix the security bugs?

Thanks.

> Deprecate hadoop-pipes
> --
>
> Key: HADOOP-12547
> URL: https://issues.apache.org/jira/browse/HADOOP-12547
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> Development appears to have stopped on hadoop-pipes upstream for the last few 
> years, aside from very basic maintenance.  Hadoop streaming seems to be a 
> better alternative, since it supports more programming languages and is 
> better implemented.
> There were no responses to a message on the mailing list asking for users of 
> Hadoop pipes... and in my experience, I have never seen anyone use this.  We 
> should remove it to reduce our maintenance burden and build times.





[jira] [Commented] (HADOOP-12547) Deprecate hadoop-pipes

2015-11-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992218#comment-14992218
 ] 

Allen Wittenauer commented on HADOOP-12547:
---

bq. Because the Java MR API only supports Java (and possibly other JVM 
languages), whereas streaming supports Perl, Python, Ruby, C, C++, and any 
other non-JVM programming language you can think of.

I'm pretty sure streaming supports JVM languages too since I'd be surprised if 
Java couldn't read and write from stdin and stdout... which, by your own 
argument, means we should drop the Java client APIs too.  After all, that would 
reduce the code footprint, limit the testing needs, etc, etc, too right?

bq. What specifically are you -1 on? Removal, deprecation, or both?

At this point, both.  Perhaps deprecation in trunk if the native task stuff 
actually works.

bq. Can you explain when you would advise one of your customers to use pipes 
instead of streaming?

https://www.quora.com/Why-would-anyone-use-Hadoop-Pipes

https://www.quora.com/If-my-current-job-involves-purely-C-C++-coding-what-are-the-best-ways-to-learn-hadoop-and-contribute-to-the-apache-hadoop-project-I-understand-most-of-hadoop-code-is-Java-Are-there-any-C-C++-bindings-for-hadoop-used-in-production-clusters

bq. If you feel that pipes is worth maintaining, can you file JIRAs to 
reinstate the documentation, fix the compiler warnings, and fix the security 
bugs?

Sure, I'll file JIRAs for these.

> Deprecate hadoop-pipes
> --
>
> Key: HADOOP-12547
> URL: https://issues.apache.org/jira/browse/HADOOP-12547
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> Development appears to have stopped on hadoop-pipes upstream for the last few 
> years, aside from very basic maintenance.  Hadoop streaming seems to be a 
> better alternative, since it supports more programming languages and is 
> better implemented.
> There were no responses to a message on the mailing list asking for users of 
> Hadoop pipes... and in my experience, I have never seen anyone use this.  We 
> should remove it to reduce our maintenance burden and build times.





[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11887:
--
Target Version/s: 3.0.0  (was: 2.8.0)
   Fix Version/s: (was: 2.8.0)
  3.0.0

Let's move this to trunk only until / unless we commit the other EC stuff to 
branch-2.

> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).





[jira] [Commented] (HADOOP-12548) read s3 creds from a file

2015-11-05 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992278#comment-14992278
 ] 

Larry McCay commented on HADOOP-12548:
--

Hmmm - I notice that the override semantics in S3Credentials are not aligned 
with what I proposed here.
The current implementation overrides the credentials provided in the userInfo 
of the URI with those found through the credential provider API.

We should decide which behavior we want in S3A.

It seems to me that if you indicate credentials in the URI when launching 
distcp, you are being more specific, and those should override whatever may 
have been previously provisioned in a credential store.
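That proposed precedence can be sketched as follows; the class and method names are illustrative, not the actual S3Credentials/S3A resolution code:

```java
import java.net.URI;
import java.util.Optional;

/**
 * Sketch of the override semantics proposed above: credentials embedded in
 * the URI userInfo, being the more specific source, win over those found
 * through the credential provider API. Names here are assumptions for this
 * example; the real S3A lookup logic differs.
 */
public class CredentialPrecedenceSketch {

    /** Secret from the URI userInfo ("accessKey:secret@host"), if any. */
    static Optional<String> secretFromUri(URI uri) {
        String userInfo = uri.getUserInfo();
        if (userInfo == null) {
            return Optional.empty();
        }
        int colon = userInfo.indexOf(':');
        return colon < 0 ? Optional.empty()
                         : Optional.of(userInfo.substring(colon + 1));
    }

    /** URI credential first, then whatever the credential store provided. */
    static String resolveSecret(URI uri, String providerSecret) {
        return secretFromUri(uri).orElse(providerSecret);
    }
}
```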

Thoughts?

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
> Attachments: CredentialProviderAPIforS3FS.pdf
>
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.





[jira] [Assigned] (HADOOP-12548) read s3 creds from a file

2015-11-05 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay reassigned HADOOP-12548:


Assignee: Larry McCay

> read s3 creds from a file
> -
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>Assignee: Larry McCay
> Attachments: CredentialProviderAPIforS3FS.pdf
>
>
> It would be good if we could read s3 creds from a file rather than via a java 
> property.





[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992367#comment-14992367
 ] 

Hudson commented on HADOOP-11887:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8762 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8762/])
HADOOP-11887. Introduce Intel ISA-L erasure coding library for native (cmccabe: 
rev 482e35c55a4bec27fa62b29d9e5f125816f1d8bd)
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/erasure_code_native.c
* hadoop-common-project/hadoop-common/src/config.h.cmake
* hadoop-common-project/hadoop-common/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/org_apache_hadoop_io_erasurecode_ErasureCodeNative.h
* hadoop-project-dist/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/io/erasurecode/erasure_code_test.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/gf_util.h
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_code.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
* BUILDING.txt
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeNative.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/erasure_code.h
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/CMakeLists.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
CHANGES.txt: move HADOOP-11887 to trunk (cmccabe: rev 
21c0e3eda56b19a6552ffdb59deb7ebeddee8aae)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).





[jira] [Updated] (HADOOP-12480) Run precommit javadoc only for changed modules

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12480:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Closing this as won't fix now that Yetus is here.

> Run precommit javadoc only for changed modules
> --
>
> Key: HADOOP-12480
> URL: https://issues.apache.org/jira/browse/HADOOP-12480
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-12480-01.patch, HADOOP-12480-02.patch, 
> HADOOP-12480-03.patch, HADOOP-12480-04.patch
>
>
> Currently Precommit javadoc check will happen on root of hadoop,
> IMO Its sufficient to run for only changed modules.
> This way Pre-commit will take even lesser time as Javadoc will take 
> significant time compare to other checks





[jira] [Commented] (HADOOP-12451) Setting HADOOP_HOME explicitly should be allowed

2015-11-05 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992765#comment-14992765
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-12451:
--

[~kasha], looked at the original patch for HADOOP-11464, I think there are more 
bugs:
 - We should also fix the following line?
{code}+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.home.dir=$HADOOP_HOME"{code}
If it isn't cygwin, it is supposed to point hadoop.home.dir to HADOOP_PREFIX as 
it was doing before.
 - The same fix (your patch and the change I suggest above) definitely needs to 
also happen in hadoop-yarn-project/hadoop-yarn/bin/yarn?

> Setting HADOOP_HOME explicitly should be allowed
> 
>
> Key: HADOOP-12451
> URL: https://issues.apache.org/jira/browse/HADOOP-12451
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.1
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Fix For: 2.7.2
>
> Attachments: HADOOP-12451-branch-2.1.patch, 
> HADOOP-12451-branch-2.addendum-1.patch
>
>
> HADOOP-11464 reinstates cygwin support. In the process, it sets HADOOP_HOME 
> explicitly in hadoop-config.sh without checking if it has already been set. 





[jira] [Commented] (HADOOP-10532) Jenkins test-patch timed out on a large patch touching files in multiple modules.

2015-11-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992631#comment-14992631
 ] 

Allen Wittenauer commented on HADOOP-10532:
---

Closing as a dupe.  test-patch has been replaced by Yetus.

> Jenkins test-patch timed out on a large patch touching files in multiple 
> modules.
> -
>
> Key: HADOOP-10532
> URL: https://issues.apache.org/jira/browse/HADOOP-10532
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Jean-Pierre Matsumoto
> Attachments: PreCommit-HADOOP-Build-3821-consoleText.txt.gz
>
>
> On HADOOP-10503, I had posted a consolidated patch touching multiple files 
> across all sub-modules: Hadoop, HDFS, YARN and MapReduce.  The Jenkins 
> test-patch runs for these consolidated patches timed out.  I also 
> experimented with a dummy patch that simply added one-line comment changes to 
> files.  This patch also timed out, which seems to indicate a bug in our 
> automation rather than a problem with any patch in particular.





[jira] [Resolved] (HADOOP-10532) Jenkins test-patch timed out on a large patch touching files in multiple modules.

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-10532.
---
Resolution: Duplicate

> Jenkins test-patch timed out on a large patch touching files in multiple 
> modules.
> -
>
> Key: HADOOP-10532
> URL: https://issues.apache.org/jira/browse/HADOOP-10532
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Jean-Pierre Matsumoto
> Attachments: PreCommit-HADOOP-Build-3821-consoleText.txt.gz
>
>
> On HADOOP-10503, I had posted a consolidated patch touching multiple files 
> across all sub-modules: Hadoop, HDFS, YARN and MapReduce.  The Jenkins 
> test-patch runs for these consolidated patches timed out.  I also 
> experimented with a dummy patch that simply added one-line comment changes to 
> files.  This patch also timed out, which seems to indicate a bug in our 
> automation rather than a problem with any patch in particular.





[jira] [Resolved] (HADOOP-12288) test-patch.sh does not give any detail on its -1 findbugs warning report.

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12288.
---
Resolution: Fixed

> test-patch.sh does not give any detail on its -1 findbugs warning report.
> -
>
> Key: HADOOP-12288
> URL: https://issues.apache.org/jira/browse/HADOOP-12288
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>
> test-patch.sh does not give any detail on its -1 warning report. This has 
> been seen in Jenkins run of HDFS-8830.
> https://builds.apache.org/job/PreCommit-HDFS-Build/11872/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
> {code}
> Summary
> Warning Type  Number
> Total 0
> Warnings
> Click on a warning row to see full context information.
> Details
> {code}





[jira] [Resolved] (HADOOP-9263) findbugs reports extra warnings on branch-1

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9263.
--
Resolution: Won't Fix

Closing as won't fix since branch-1 is basically dead.

> findbugs reports extra warnings on branch-1
> ---
>
> Key: HADOOP-9263
> URL: https://issues.apache.org/jira/browse/HADOOP-9263
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.1.2
>Reporter: Chris Nauroth
>
> Currently on branch-1, src/test/test-patch.properties defines 
> {{OK_FINDBUGS_WARNINGS=211}}, but findbugs actually finds 225 warnings.  This 
> means that any patch will get -1 from ant test-patch due to 14 new findbugs 
> warnings, even though the warnings are unrelated to that patch.





[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992685#comment-14992685
 ] 

Hudson commented on HADOOP-11887:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2513 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2513/])
HADOOP-11887. Introduce Intel ISA-L erasure coding library for native (cmccabe: 
rev 482e35c55a4bec27fa62b29d9e5f125816f1d8bd)
* BUILDING.txt
* hadoop-project/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/io/erasurecode/erasure_code_test.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/gf_util.h
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_code.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/erasure_code_native.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/CMakeLists.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/erasure_code.h
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/src/config.h.cmake
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* hadoop-project-dist/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeNative.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/org_apache_hadoop_io_erasurecode_ErasureCodeNative.h


> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).





[jira] [Resolved] (HADOOP-12517) Findbugs reported 0 issues, but summary stated -1 for findbugs

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12517.
---
Resolution: Won't Fix

test-patch is no longer being developed inside Hadoop.

Closing as won't fix.


> Findbugs reported 0 issues, but summary stated -1 for findbugs
> --
>
> Key: HADOOP-12517
> URL: https://issues.apache.org/jira/browse/HADOOP-12517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Yongjun Zhang
>
> https://issues.apache.org/jira/browse/HDFS-9231?focusedCommentId=14975559=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975559
> stated -1 for findbugs (The patch appears to introduce 1 new Findbugs 
> (version 3.0.0) warnings.), however, 
> https://builds.apache.org/job/PreCommit-HDFS-Build/13205/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
> says 0.
> Thanks a lot for looking into it.





[jira] [Resolved] (HADOOP-11937) Guarantee a full build of all native code during pre-commit.

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11937.
---
Resolution: Fixed

Closing this as fixed. The hadoop personality for Yetus does this.

> Guarantee a full build of all native code during pre-commit.
> 
>
> Key: HADOOP-11937
> URL: https://issues.apache.org/jira/browse/HADOOP-11937
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Chris Nauroth
>
> Some of the native components of the build are considered optional and either 
> will not build at all without passing special flags to Maven or will allow a 
> build to proceed if dependencies are missing from the build machine.  If 
> these components do not get built, then pre-commit isn't really providing 
> full coverage of the build.  This issue proposes to update test-patch.sh so 
> that it does a full build of all native components.





[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992675#comment-14992675
 ] 

Hadoop QA commented on HADOOP-11684:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 12s 
{color} | {color:red} Patch generated 6 new checkstyle issues in root (total 
was 64, now 61). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-aws in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 1s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 29s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.ipc.TestIPC |
|   | hadoop.ha.TestZKFailoverController |
|   | 

[jira] [Updated] (HADOOP-11996) Native erasure coder basic facilities with an illustration sample

2015-11-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11996:
---
Status: Patch Available  (was: In Progress)

> Native erasure coder basic facilities with an illustration sample
> -
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch
>
>
> While working on HADOOP-11540 etc., it was found useful to write the basic 
> facilities based on the Intel ISA-L library separately from the JNI code so 
> they can be used to compose a useful sample coder. Such a sample coder can 
> serve as a good illustration of how to use the ISA-L library; meanwhile it is 
> easy to debug and troubleshoot, as no JNI or Java code is involved.





[jira] [Resolved] (HADOOP-9965) ant test-patch fails with Required Args Missing.

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9965.
--
Resolution: Won't Fix

Closing as won't fix.  test-patch has been replaced by Yetus.

> ant test-patch fails with Required Args Missing.
> 
>
> Key: HADOOP-9965
> URL: https://issues.apache.org/jira/browse/HADOOP-9965
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Affects Versions: 1.2.1
>Reporter: Chris Nauroth
>
> Running the ant test-patch process on branch-1 fails with the message 
> "Required Args Missing".  It appears that test-patch.sh is not parsing the 
> arguments from the command line correctly.





[jira] [Updated] (HADOOP-12002) test-patch.sh needs to verify all of the findbugs tools exist

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12002:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Closing as won't fix.  test-patch has been replaced by Yetus.  This bug is 
already fixed there.

> test-patch.sh needs to verify all of the findbugs tools exist
> -
>
> Key: HADOOP-12002
> URL: https://issues.apache.org/jira/browse/HADOOP-12002
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Sidharta Seethana
>Assignee: Kengo Seki
>Priority: Critical
>  Labels: test-patch
> Attachments: HADOOP-12002.001.patch, HADOOP-12002.002.patch
>
>
> {{test-patch.sh}} was used with {{FINDBUGS_HOME}} set. See below for an 
> example - there were 4 findbugs warnings generated - however, 
> {{test-patch.sh}} doesn't seem to realize that there are missing findbugs 
> tools and +1s the findbugs check. 
> {quote}
>  Running findbugs in 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
> mvn clean test findbugs:findbugs -DskipTests -DhadoopPatchProcess > 
> /private/tmp/hadoop-test-patch/71089/patchFindBugsOutputhadoop-yarn-server-nodemanager.txt
>  2>&1
> hadoop/dev-support/test-patch.sh: line 1907: 
> /usr/local/Cellar/findbugs/3.0.0/bin/setBugDatabaseInfo: No such file or 
> directory
> hadoop/dev-support/test-patch.sh: line 1915: 
> /usr/local/Cellar/findbugs/3.0.0/bin/filterBugs: No such file or directory
> Found  Findbugs warnings 
> (hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/findbugsXml.xml)
> hadoop/dev-support/test-patch.sh: line 1921: 
> /usr/local/Cellar/findbugs/3.0.0/bin/convertXmlToText: No such file or 
> directory
> [Mon May 18 18:08:52 PDT 2015 DEBUG]: Stop clock
> Elapsed time:   0m 38s
> {quote}
> Findbugs check reported as successful : 
> {quote}
> |  +1  |   findbugs  |  0m 38s| The patch does not introduce any 
> |  | || new Findbugs (version 3.0.0)
> |  | || warnings.
> |  | |  23m 51s   | 
> {quote}





[jira] [Assigned] (HADOOP-7678) Nightly build+test should run with "continue on error" for automated testing after successful build

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-7678:


Assignee: Allen Wittenauer  (was: Giridharan Kesavan)

> Nightly build+test should run with "continue on error" for automated testing 
> after successful build
> ---
>
> Key: HADOOP-7678
> URL: https://issues.apache.org/jira/browse/HADOOP-7678
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: Matt Foley
>Assignee: Allen Wittenauer
>
> It appears that scripts for nightly build in Apache Jenkins will stop after 
> unit testing if any unit tests fail.  Therefore, contribs, schedulers, and 
> some other system-level automated tests don't ever run until the unit tests 
> are clean.  This results in two-phase cleanup of broken builds, which wastes 
> developers' time.  Please change them to run even in the presence of unit 
> test errors, as long as compile+packaging succeed.
> This jira does not relate to CI builds, which emphasize test-patch execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11218) Add TLSv1.1,TLSv1.2 to KMS, HttpFS, SSLFactory

2015-11-05 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11218:
-
Target Version/s: 2.7.3  (was: 2.7.2)

Moving this out into 2.7.3 in the interest of 2.7.2's progress.

> Add TLSv1.1,TLSv1.2 to KMS, HttpFS, SSLFactory
> --
>
> Key: HADOOP-11218
> URL: https://issues.apache.org/jira/browse/HADOOP-11218
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.1
>Reporter: Robert Kanter
>Assignee: Vijay Singh
>Priority: Critical
> Attachments: 
> Enable_TLSv1_1_and_TLSv1_2_for_HttpFS_and_KMS_services.patch
>
>
> HADOOP-11217 required us to specifically list the versions of TLS that KMS 
> supports. With Hadoop 2.7 dropping support for Java 6 and Java 7 supporting 
> TLSv1.1 and TLSv1.2, we should add them to the list.
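As a quick illustration of what explicitly enabling these protocol versions looks like in plain JSSE (a generic sketch only, not the attached patch, which wires this through the KMS/HttpFS SSLFactory configuration; the class name here is made up):

```java
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class TlsProtocolsSketch {
    // Returns the protocols an engine would speak after we explicitly
    // enable TLSv1 through TLSv1.2 (mirroring the list HADOOP-11217 began).
    public static String[] enabledProtocols() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, null, null);                 // default key/trust managers
        SSLEngine engine = ctx.createSSLEngine();
        engine.setEnabledProtocols(new String[] {"TLSv1", "TLSv1.1", "TLSv1.2"});
        return engine.getEnabledProtocols();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Arrays.toString(enabledProtocols()));
    }
}
```

The point of the patch is simply that the enabled-protocols list must name TLSv1.1 and TLSv1.2 explicitly once Java 6 support is dropped.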



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)

2015-11-05 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12053:
-
Target Version/s: 2.8.0, 2.7.3  (was: 2.8.0, 2.7.2)

Moving this out into 2.7.3 in the interest of 2.7.2's progress.

> Harfs defaulturiport should be Zero ( should not -1)
> 
>
> Key: HADOOP-12053
> URL: https://issues.apache.org/jira/browse/HADOOP-12053
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Gera Shegalov
>Priority: Critical
> Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch, 
> HADOOP-12053.003.patch
>
>
> The harfs overrides the "getUriDefaultPort" method of AbstractFileSystem and 
> returns "-1". But "-1" can't pass the "checkPath" method when 
> {{fs.defaultfs}} is set without a port (like hdfs://hacluster).
>  *Test Code :* 
> {code}
> for (FileStatus file : files) {
>   String[] edges = file.getPath().getName().split("-");
>   if (applicationId.toString().compareTo(edges[0]) >= 0 && 
> applicationId.toString().compareTo(edges[1]) <= 0) {
> Path harPath = new Path("har://" + 
> file.getPath().toUri().getPath());
> harPath = harPath.getFileSystem(conf).makeQualified(harPath);
> remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
> harPath, applicationId, appOwner,
> LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
> if 
> (FileContext.getFileContext(remoteAppDir.toUri()).util().exists(remoteAppDir))
>  {
> remoteDirSet.add(remoteAppDir);
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10672) Add support for pushing metrics to OpenTSDB

2015-11-05 Thread Venkata Ganji (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992794#comment-14992794
 ] 

Venkata Ganji commented on HADOOP-10672:


[~kamal_ebay] Did you test your connector on any other version of Hadoop? 
I am looking at using your connector (TSDBSink-2.4.1.java) on Hadoop 
version 2.0.0. 

> Add support for pushing metrics to OpenTSDB
> ---
>
> Key: HADOOP-10672
> URL: https://issues.apache.org/jira/browse/HADOOP-10672
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 0.21.0
>Reporter: Kamaldeep Singh
>Priority: Minor
>
> We wish to add support for pushing metrics to OpenTSDB from Hadoop. 
> Code and instructions at https://github.com/eBay/hadoop-tsdb-connector



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992637#comment-14992637
 ] 

Hudson commented on HADOOP-11887:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #643 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/643/])
HADOOP-11887. Introduce Intel ISA-L erasure coding library for native (cmccabe: 
rev 482e35c55a4bec27fa62b29d9e5f125816f1d8bd)
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/gf_util.h
* BUILDING.txt
* hadoop-common-project/hadoop-common/src/config.h.cmake
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/erasure_code_native.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/erasure_code.h
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/org_apache_hadoop_io_erasurecode_ErasureCodeNative.h
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeNative.java
* hadoop-project-dist/pom.xml
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* hadoop-project/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/io/erasurecode/erasure_code_test.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
* hadoop-common-project/hadoop-common/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_code.c
* hadoop-common-project/hadoop-common/src/CMakeLists.txt
CHANGES.txt: move HADOOP-11887 to trunk (cmccabe: rev 
21c0e3eda56b19a6552ffdb59deb7ebeddee8aae)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, like *.so on 
> *nix and *.dll on Windows).
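The dynamic-loading pattern the description refers to can be sketched as below. This is illustrative only: the library name "isal" and the class are assumptions, not the actual Hadoop {{ErasureCodeNative}} code; the real implementation lives in the patched native sources.

```java
// Sketch: probe for a native module at runtime and record whether it loaded,
// so callers can fall back to a pure-Java coder when it is absent.
public class NativeProbe {
    private static final boolean AVAILABLE;

    static {
        boolean ok = false;
        try {
            // Resolved against java.library.path (libisal.so / isal.dll);
            // the library name here is a placeholder.
            System.loadLibrary("isal");
            ok = true;
        } catch (UnsatisfiedLinkError e) {
            // Native coder unavailable; callers should use the Java coder.
        }
        AVAILABLE = ok;
    }

    public static boolean isNativeCodeLoaded() { return AVAILABLE; }

    public static void main(String[] args) {
        System.out.println("native coder loaded: " + isNativeCodeLoaded());
    }
}
```

Because the probe happens once in a static initializer, a missing *.so/*.dll costs one failed lookup rather than a crash, which is what lets the library remain optional at build and run time.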



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7654) Change maven setup to not run tests by default

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7654.
--
Resolution: Won't Fix

Closing this.

> Change maven setup to not run tests by default
> --
>
> Key: HADOOP-7654
> URL: https://issues.apache.org/jira/browse/HADOOP-7654
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: build
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>
> I find that I now type "-DskipTests" many, many times per day when working on 
> Hadoop. We should change the default to not run tests except when explicitly 
> given a test target.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-6430) Need indication if FindBugs' warnings level isn't 0

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6430.
--
Resolution: Fixed

Fixed long ago.

> Need indication if FindBugs' warnings level isn't 0
> ---
>
> Key: HADOOP-6430
> URL: https://issues.apache.org/jira/browse/HADOOP-6430
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>
> It'd be great to have some indication (also reflected by Hudson) if the level 
> of FindBugs warnings during a particular build wasn't 0 (zero), i.e. the 
> build might be marked unstable, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11937) Guarantee a full build of all native code during pre-commit.

2015-11-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992677#comment-14992677
 ] 

Chris Nauroth commented on HADOOP-11937:


Thanks, Allen!

> Guarantee a full build of all native code during pre-commit.
> 
>
> Key: HADOOP-11937
> URL: https://issues.apache.org/jira/browse/HADOOP-11937
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Chris Nauroth
>
> Some of the native components of the build are considered optional and either 
> will not build at all without passing special flags to Maven or will allow a 
> build to proceed if dependencies are missing from the build machine.  If 
> these components do not get built, then pre-commit isn't really providing 
> full coverage of the build.  This issue proposes to update test-patch.sh so 
> that it does a full build of all native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992534#comment-14992534
 ] 

Hudson commented on HADOOP-11887:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1366 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1366/])
HADOOP-11887. Introduce Intel ISA-L erasure coding library for native (cmccabe: 
rev 482e35c55a4bec27fa62b29d9e5f125816f1d8bd)
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/erasure_code.h
* hadoop-common-project/hadoop-common/src/config.h.cmake
* hadoop-common-project/hadoop-common/src/CMakeLists.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeNative.java
* BUILDING.txt
* hadoop-project-dist/pom.xml
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/erasure_code_native.c
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/io/erasurecode/erasure_code_test.c
* hadoop-common-project/hadoop-common/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_code.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/gf_util.h
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/org_apache_hadoop_io_erasurecode_ErasureCodeNative.h
CHANGES.txt: move HADOOP-11887 to trunk (cmccabe: rev 
21c0e3eda56b19a6552ffdb59deb7ebeddee8aae)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, like *.so on 
> *nix and *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992542#comment-14992542
 ] 

Hudson commented on HADOOP-11887:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #632 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/632/])
HADOOP-11887. Introduce Intel ISA-L erasure coding library for native (cmccabe: 
rev 482e35c55a4bec27fa62b29d9e5f125816f1d8bd)
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/gf_util.h
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/org_apache_hadoop_io_erasurecode_ErasureCodeNative.h
* BUILDING.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_code.c
* hadoop-project-dist/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/io/erasurecode/erasure_code_test.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/erasure_code.h
* hadoop-common-project/hadoop-common/src/config.h.cmake
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/erasure_code_native.c
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeNative.java
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/src/CMakeLists.txt
CHANGES.txt: move HADOOP-11887 to trunk (cmccabe: rev 
21c0e3eda56b19a6552ffdb59deb7ebeddee8aae)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, like *.so on 
> *nix and *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12552) Fix undeclared/unused dependency to httpclient

2015-11-05 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12552:
--
Status: Patch Available  (was: Open)

> Fix undeclared/unused dependency to httpclient
> --
>
> Key: HADOOP-12552
> URL: https://issues.apache.org/jira/browse/HADOOP-12552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-12552.001.patch
>
>
> hadoop-common uses httpclient as an undeclared dependency and has an unused 
> dependency on commons-httpclient. Vice versa in hadoop-azure and 
> hadoop-openstack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993099#comment-14993099
 ] 

Hudson commented on HADOOP-11684:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #635 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/635/])
HADOOP-11684. S3a to use thread pool that blocks clients. (Thomas Demoor (lei: 
rev bff7c90a5686de106ca7a866982412c5dfa01632)
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFastOutputStream.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestBlockingThreadPoolExecutorService.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BlockingThreadPoolExecutorService.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockingThreadPool.java


> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a threadpool that blocks clients, nicely throttling them, 
> rather than throwing an exception, e.g. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java
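The blocking behavior described above can be sketched with a {{Semaphore}} guarding submission, so that saturated callers wait instead of receiving {{RejectedExecutionException}}. This is a minimal sketch of the technique, not the actual {{BlockingThreadPoolExecutorService}} from the patch; the class name and sizing are assumptions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class BlockingPoolSketch {
    private final ExecutorService pool;
    private final Semaphore permits;

    // Allow `threads` running tasks plus `maxQueued` waiting ones;
    // any further submitter blocks in submit() until a permit frees up.
    public BlockingPoolSketch(int threads, int maxQueued) {
        pool = Executors.newFixedThreadPool(threads);
        permits = new Semaphore(threads + maxQueued);
    }

    public void submit(Runnable task) {
        permits.acquireUninterruptibly();   // blocks instead of rejecting
        try {
            pool.execute(() -> {
                try { task.run(); } finally { permits.release(); }
            });
        } catch (RejectedExecutionException e) {
            permits.release();              // keep permit accounting consistent
            throw e;
        }
    }

    public void shutdown() {
        pool.shutdown();
        try { pool.awaitTermination(30, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) {
        BlockingPoolSketch p = new BlockingPoolSketch(2, 4);
        for (int i = 0; i < 10; i++) {
            final int n = i;
            p.submit(() -> System.out.println("task " + n));
        }
        p.shutdown();
    }
}
```

For S3A, this turns a burst of part-uploads beyond fs.s3a.max.total.tasks into back-pressure on the writer thread rather than a hard failure.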



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12552) Fix undeclared/unused dependency to httpclient

2015-11-05 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-12552:
-

 Summary: Fix undeclared/unused dependency to httpclient
 Key: HADOOP-12552
 URL: https://issues.apache.org/jira/browse/HADOOP-12552
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor


hadoop-common uses httpclient as an undeclared dependency and has an unused 
dependency on commons-httpclient. Vice versa in hadoop-azure and 
hadoop-openstack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12552) Fix undeclared/unused dependency to httpclient

2015-11-05 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12552:
--
Attachment: HADOOP-12552.001.patch

> Fix undeclared/unused dependency to httpclient
> --
>
> Key: HADOOP-12552
> URL: https://issues.apache.org/jira/browse/HADOOP-12552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-12552.001.patch
>
>
> hadoop-common uses httpclient as an undeclared dependency and has an unused 
> dependency on commons-httpclient. Vice versa in hadoop-azure and 
> hadoop-openstack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12041) Implement another Reed-Solomon coder in pure Java

2015-11-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12041:
---
Attachment: HADOOP-12041-v4.patch

> Implement another Reed-Solomon coder in pure Java
> -
>
> Key: HADOOP-12041
> URL: https://issues.apache.org/jira/browse/HADOOP-12041
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12041-v1.patch, HADOOP-12041-v2.patch, 
> HADOOP-12041-v3.patch, HADOOP-12041-v4.patch
>
>
> Currently existing Java RS coders based on the {{GaloisField}} implementation 
> have some drawbacks or limitations:
> * The decoder unnecessarily computes units that are not actually erased 
> (HADOOP-11871);
> * The decoder requires parity units + data units order for the inputs in the 
> decode API (HADOOP-12040);
> * Need to support or align with native erasure coders regarding concrete 
> coding algorithms and matrix, so Java coders and native coders can be easily 
> swapped in/out and transparent to HDFS (HADOOP-12010);
> * It's unnecessarily flexible, which incurs some overhead; as HDFS erasure 
> coding is entirely byte-based, we don't need to consider symbol sizes other 
> than 256.
> This proposes implementing another RS coder in pure Java, in addition to the 
> existing {{GaloisField}} one from HDFS-RAID. The new Java RS coder will be 
> favored and used by default to resolve the related issues. The old 
> HDFS-RAID-originated coder will still be there for comparison and for 
> converting old data from HDFS-RAID systems.
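For readers unfamiliar with the arithmetic involved: byte-oriented Reed-Solomon coders work in GF(2^8). A toy multiply is sketched below using the common 0x11B reduction polynomial; this is only an illustration of the field arithmetic, and the polynomial/table strategy of the actual Hadoop and ISA-L coders may differ.

```java
// Toy GF(2^8) multiplication (Russian-peasant style) with the
// x^8 + x^4 + x^3 + x + 1 (0x11B) reduction polynomial.
public class GF256 {
    static int mul(int a, int b) {
        int p = 0;
        for (int i = 0; i < 8; i++) {
            if ((b & 1) != 0) p ^= a;       // add (XOR) current multiple
            boolean hi = (a & 0x80) != 0;
            a = (a << 1) & 0xFF;            // multiply a by x
            if (hi) a ^= 0x1B;              // reduce modulo the polynomial
            b >>= 1;
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(mul(2, 128));    // prints 27
    }
}
```

Production coders precompute log/exp or multiplication tables over this field rather than looping per bit, which is one reason a fixed symbol size of 256 keeps a pure-Java coder fast and simple.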



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993207#comment-14993207
 ] 

Hudson commented on HADOOP-11684:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #645 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/645/])
HADOOP-11684. S3a to use thread pool that blocks clients. (Thomas Demoor (lei: 
rev bff7c90a5686de106ca7a866982412c5dfa01632)
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BlockingThreadPoolExecutorService.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFastOutputStream.java
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestBlockingThreadPoolExecutorService.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockingThreadPool.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a threadpool that blocks clients, nicely throttling them, 
> rather than throwing an exception, e.g. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992895#comment-14992895
 ] 

Hudson commented on HADOOP-11887:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2573 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2573/])
HADOOP-11887. Introduce Intel ISA-L erasure coding library for native (cmccabe: 
rev 482e35c55a4bec27fa62b29d9e5f125816f1d8bd)
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/org_apache_hadoop_io_erasurecode_ErasureCodeNative.h
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/erasure_code.h
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/gf_util.h
* BUILDING.txt
* hadoop-common-project/hadoop-common/src/config.h.cmake
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/erasure_code_native.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_code.c
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeNative.java
* hadoop-project-dist/pom.xml
* hadoop-common-project/hadoop-common/src/CMakeLists.txt
* hadoop-project/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/io/erasurecode/erasure_code_test.c
CHANGES.txt: move HADOOP-11887 to trunk (cmccabe: rev 
21c0e3eda56b19a6552ffdb59deb7ebeddee8aae)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, like *.so on 
> *nix and *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12415) hdfs and nfs builds broken on -missing compile-time dependency on netty

2015-11-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992971#comment-14992971
 ] 

Hadoop QA commented on HADOOP-12415:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s 
{color} | {color:green} hadoop-nfs in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 29s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-nfs in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 32s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 131m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-06 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12761458/HADOOP-12415.patch |
| JIRA Issue | HADOOP-12415 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  xml  compile  
|
| uname | Linux 5ac3fa9e8e69 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-ee5baeb/precommit/personality/hadoop.sh
 |
| git revision | trunk / 286cc64 |
| 

[jira] [Updated] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-11684:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   2.8.0
   Status: Resolved  (was: Patch Available)

+1. Thanks for working on this, [~Thomas Demoor] and [~fabbri].

Committed.

> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a thread pool that blocks clients, nicely throttling them, 
> rather than throwing an exception, e.g. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java
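The blocking-submit idea described above can be sketched with a Semaphore that bounds the number of queued plus running tasks. This is a minimal, hypothetical illustration of the technique, not the BlockingThreadPoolExecutorService committed with the patch; the class and method names here are made up:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a Semaphore caps queued + running tasks, so submit()
// blocks the caller instead of throwing RejectedExecutionException.
public class BlockingSubmitDemo {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);
    private final Semaphore permits;

    public BlockingSubmitDemo(int maxTasks) {
        this.permits = new Semaphore(maxTasks);
    }

    public Future<?> submit(Runnable task) throws InterruptedException {
        permits.acquire();                  // caller waits here when the pool is saturated
        try {
            return pool.submit(() -> {
                try {
                    task.run();
                } finally {
                    permits.release();      // free a slot once the task completes
                }
            });
        } catch (RejectedExecutionException e) {
            permits.release();              // don't leak the permit if the pool is shut down
            throw e;
        }
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        BlockingSubmitDemo demo = new BlockingSubmitDemo(4);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 20; i++) {
            demo.submit(done::incrementAndGet);  // never rejected, only throttled
        }
        demo.shutdown();
        System.out.println(done.get());          // all 20 tasks ran
    }
}
```

With maxTasks permits and a small worker pool, a burst of uploads simply slows the submitting client down rather than failing it.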



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993072#comment-14993072
 ] 

Hudson commented on HADOOP-11684:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #1368 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1368/])
HADOOP-11684. S3a to use thread pool that blocks clients. (Thomas Demoor (lei: 
rev bff7c90a5686de106ca7a866982412c5dfa01632)
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestBlockingThreadPoolExecutorService.java
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BlockingThreadPoolExecutorService.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockingThreadPool.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFastOutputStream.java


> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a thread pool that blocks clients, nicely throttling them, 
> rather than throwing an exception, e.g. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for open and getFileStatus API's in WASB

2015-11-05 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-12551:
-
Fix Version/s: (was: 2.8.0)

> Introduce FileNotFoundException for open and getFileStatus API's in WASB
> 
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek API for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time of the call, but do not 
> throw the same exception if another thread/process deletes the file during 
> their execution. This Jira fixes that behavior.
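The desired contract can be illustrated with a small, self-contained sketch. This is a hypothetical in-memory model (no Azure SDK, invented class names), showing why a single atomic lookup avoids the check-then-act gap so that a concurrent delete surfaces as FileNotFoundException:

```java
import java.io.FileNotFoundException;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory model of the desired contract: getFileStatus must
// raise FileNotFoundException even when another thread deletes the file
// between the caller's existence check and the metadata fetch.
public class FnfeContractDemo {
    private final ConcurrentHashMap<String, Long> blobs = new ConcurrentHashMap<>();

    public void create(String key, long len) { blobs.put(key, len); }
    public void delete(String key) { blobs.remove(key); }

    // One atomic lookup: no check-then-act window, so a concurrent delete
    // surfaces as FileNotFoundException rather than a null or an NPE.
    public long getFileStatus(String key) throws FileNotFoundException {
        Long len = blobs.get(key);
        if (len == null) {
            throw new FileNotFoundException(key);
        }
        return len;
    }

    public static void main(String[] args) {
        FnfeContractDemo fs = new FnfeContractDemo();
        fs.create("/a", 10);
        fs.delete("/a");                 // simulate another process deleting the file
        try {
            fs.getFileStatus("/a");
        } catch (FileNotFoundException e) {
            System.out.println("FileNotFoundException: " + e.getMessage());
        }
    }
}
```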





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for open and getFileStatus API's in WASB

2015-11-05 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-12551:
-
Affects Version/s: (was: 2.8.0)
   2.7.1

> Introduce FileNotFoundException for open and getFileStatus API's in WASB
> 
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek API for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time of the call, but do not 
> throw the same exception if another thread/process deletes the file during 
> their execution. This Jira fixes that behavior.





[jira] [Updated] (HADOOP-11996) Native erasure coder basic facilities with an illustration sample

2015-11-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11996:
---
Attachment: HADOOP-11996-v4.patch

Transformed the sample program into a test.

> Native erasure coder basic facilities with an illustration sample
> -
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch
>
>
> While working on HADOOP-11540 etc., it was found useful to write the 
> basic facilities based on the Intel ISA-L library separately from the JNI 
> stuff so they can be utilized to compose a useful sample coder. Such a sample 
> coder can serve as a good illustration of how to use the ISA-L library; 
> meanwhile, it is easy to debug and troubleshoot, as no JNI or Java code is involved.





[jira] [Updated] (HADOOP-11996) Native erasure coder facilities based on ISA-L

2015-11-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11996:
---
Description: While working on HADOOP-11540 and etc., it was found useful to 
write the basic facilities based on Intel ISA-L library separately from JNI 
stuff. It's also easy to debug and troubleshooting, as no JNI or Java stuffs 
are involved.  (was: While working on HADOOP-11540 and etc., it was found 
useful to write the basic facilities based on Intel ISA-L library separately 
from JNI stuff so they can be utilized to compose a useful sample coder. Such 
sample coder can serve as a good illustration for how to use the ISA-L library, 
meanwhile it's easy to debug and troubleshooting, as no JNI or Java stuffs are 
involved.)
Summary: Native erasure coder facilities based on ISA-L  (was: Native 
erasure coder basic facilities with an illustration sample)

> Native erasure coder facilities based on ISA-L
> --
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch
>
>
> While working on HADOOP-11540 etc., it was found useful to write the 
> basic facilities based on the Intel ISA-L library separately from the JNI 
> stuff. It is also easy to debug and troubleshoot, as no JNI or Java code is involved.





[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992933#comment-14992933
 ] 

Hudson commented on HADOOP-11887:
-

ABORTED: Integrated in Hadoop-Hdfs-trunk-Java8 #575 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/575/])
HADOOP-11887. Introduce Intel ISA-L erasure coding library for native (cmccabe: 
rev 482e35c55a4bec27fa62b29d9e5f125816f1d8bd)
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/org_apache_hadoop_io_erasurecode_ErasureCodeNative.h
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/erasure_code.h
* hadoop-common-project/hadoop-common/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeNative.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_code.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/io/erasurecode/erasure_code_test.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
* hadoop-project/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/coder/erasure_code_native.c
* hadoop-project-dist/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/include/gf_util.h
* BUILDING.txt
* hadoop-common-project/hadoop-common/src/config.h.cmake
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/CMakeLists.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
CHANGES.txt: move HADOOP-11887 to trunk (cmccabe: rev 
21c0e3eda56b19a6552ffdb59deb7ebeddee8aae)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, such as *.so on 
> *nix and *.dll on Windows).
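The dynamic-loading mechanism mentioned above can be sketched as follows. This is a hedged illustration of the general System.loadLibrary pattern with graceful fallback, not the actual ErasureCodeNative or NativeCodeLoader source; the library name "isal" is an assumption for the example:

```java
// Hedged sketch of the optional-native-library pattern: attempt
// System.loadLibrary once in a static initializer, record availability, and
// let callers fall back to the pure-Java code path. "isal" is an assumed
// library name; this is not the actual ErasureCodeNative/NativeCodeLoader code.
public class OptionalNativeLoader {
    private static final boolean AVAILABLE;

    static {
        boolean ok = false;
        try {
            System.loadLibrary("isal");   // resolves libisal.so on *nix, isal.dll on Windows
            ok = true;
        } catch (UnsatisfiedLinkError e) {
            // Library absent from java.library.path: stay on the Java fallback.
        }
        AVAILABLE = ok;
    }

    public static boolean isNativeCodeLoaded() {
        return AVAILABLE;
    }

    public static void main(String[] args) {
        System.out.println(isNativeCodeLoaded() ? "native coder" : "pure-Java fallback");
    }
}
```

Because the check happens once at class-load time, callers can branch cheaply on isNativeCodeLoaded() without repeating the load attempt.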





[jira] [Commented] (HADOOP-12415) hdfs and nfs builds broken on -missing compile-time dependency on netty

2015-11-05 Thread Tom Zeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993049#comment-14993049
 ] 

Tom Zeng commented on HADOOP-12415:
---

Again, the test failures do not seem to be related to the commit. Is there a 
way to verify that all tests pass without the patch?

> hdfs and nfs builds broken on -missing compile-time dependency on netty
> ---
>
> Key: HADOOP-12415
> URL: https://issues.apache.org/jira/browse/HADOOP-12415
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
> Environment: Bigtop, plain Linux distro of any kind
>Reporter: Konstantin Boudnik
>Assignee: Tom Zeng
> Attachments: HADOOP-12415.patch
>
>
> As discovered in BIGTOP-2049, {{hadoop-nfs}} module compilation is broken. 
> It looks like HADOOP-11489 is the root cause.





[jira] [Updated] (HADOOP-12114) Make hadoop-tools/hadoop-pipes Native code -Wall-clean

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12114:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-11985)

> Make hadoop-tools/hadoop-pipes Native code -Wall-clean
> --
>
> Key: HADOOP-12114
> URL: https://issues.apache.org/jira/browse/HADOOP-12114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: pipes
>Affects Versions: 2.7.0
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: HADOOP-12114.001.patch, HADOOP-12114.002.patch
>
>
> As we specify -Wall as a default compilation flag, it would be helpful if the 
> native code were -Wall-clean.





[jira] [Commented] (HADOOP-10672) Add support for pushing metrics to OpenTSDB

2015-11-05 Thread Kamaldeep Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993046#comment-14993046
 ] 

Kamaldeep Singh commented on HADOOP-10672:
--

Yes, we tested the connector with Hadoop 1.2.1 and Hadoop 2.4.1. 
Please check the README at https://github.com/eBay/hadoop-tsdb-connector

> Add support for pushing metrics to OpenTSDB
> ---
>
> Key: HADOOP-10672
> URL: https://issues.apache.org/jira/browse/HADOOP-10672
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 0.21.0
>Reporter: Kamaldeep Singh
>Priority: Minor
>
> We wish to add support for pushing metrics to OpenTSDB from Hadoop. 
> Code and instructions at https://github.com/eBay/hadoop-tsdb-connector





[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for native erasure encoding support

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993059#comment-14993059
 ] 

Hudson commented on HADOOP-11887:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2514 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2514/])
CHANGES.txt: move HADOOP-11887 to trunk (cmccabe: rev 
21c0e3eda56b19a6552ffdb59deb7ebeddee8aae)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Introduce Intel ISA-L erasure coding library for native erasure encoding 
> support
> 
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v10, HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, 
> HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, 
> HADOOP-11887-v7.patch, HADOOP-11887-v8.patch, HADOOP-11887-v9.patch
>
>
> This is to introduce the Intel ISA-L erasure coding library for native 
> support, via a dynamic loading mechanism (a dynamic module, such as *.so on 
> *nix and *.dll on Windows).





[jira] [Commented] (HADOOP-11996) Native erasure coder basic facilities with an illustration sample

2015-11-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992981#comment-14992981
 ] 

Hadoop QA commented on HADOOP-11996:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 18s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 56s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.metrics2.impl.TestMetricsSystemImpl 
|
|   | hadoop.ipc.TestDecayRpcScheduler |
|   | hadoop.ipc.TestIPC |
| JDK v1.7.0_79 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-06 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770286/HADOOP-11996-v3.patch 
|
| JIRA Issue | HADOOP-11996 |
| Optional Tests |  asflicense  cc  unit  javac  javadoc  mvninstall  compile  |
| uname | Linux f3ec90a4fa91 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-ee5baeb/precommit/personality/hadoop.sh
 |
| git revision | trunk / 19a0c26 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| unit | 

[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993005#comment-14993005
 ] 

Hudson commented on HADOOP-11684:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8765 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8765/])
HADOOP-11684. S3a to use thread pool that blocks clients. (Thomas Demoor (lei: 
rev bff7c90a5686de106ca7a866982412c5dfa01632)
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockingThreadPool.java
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestBlockingThreadPoolExecutorService.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFastOutputStream.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BlockingThreadPoolExecutorService.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a thread pool that blocks clients, nicely throttling them, 
> rather than throwing an exception, e.g. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for open and getFileStatus API's in WASB

2015-11-05 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12551:
---
Assignee: Dushyanth

> Introduce FileNotFoundException for open and getFileStatus API's in WASB
> 
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
>
> HADOOP-12533 introduced FileNotFoundException in the read and seek APIs for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time of the call, but do not 
> throw the same exception if another thread/process deletes the file during 
> their execution. This Jira fixes that behavior.





[jira] [Created] (HADOOP-12551) Introduce FileNotFoundException for open and getFileStatus API's in WASB

2015-11-05 Thread Dushyanth (JIRA)
Dushyanth created HADOOP-12551:
--

 Summary: Introduce FileNotFoundException for open and 
getFileStatus API's in WASB
 Key: HADOOP-12551
 URL: https://issues.apache.org/jira/browse/HADOOP-12551
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.8.0
Reporter: Dushyanth
 Fix For: 2.8.0


HADOOP-12533 introduced FileNotFoundException in the read and seek APIs for 
WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
correctly when the file does not exist at the time of the call, but do not 
throw the same exception if another thread/process deletes the file during 
their execution. This Jira fixes that behavior.





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for open and getFileStatus API's in WASB

2015-11-05 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12551:
---
Description: HADOOP-12533 introduced FileNotFoundException to the read and 
seek API for WASB. The open and getFileStatus api currently throws 
FileNotFoundException correctly when the file does not exists when the API is 
called but does not throw the same exception if there is another thread/process 
deletes the file during its execution. This Jira fixes that behavior.  (was: 
HADOOP-12533 introduced FileNotFoundException to be introduced in read and seek 
API for WASB. The open and getFileStatus api currently throws 
FileNotFoundException correctly when the file does not exists when the API is 
called but does not throw the same exception if there is another thread/process 
deletes the file during its execution. This Jira fixes that behavior.)

> Introduce FileNotFoundException for open and getFileStatus API's in WASB
> 
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek API for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time of the call, but do not 
> throw the same exception if another thread/process deletes the file during 
> their execution. This Jira fixes that behavior.





[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993273#comment-14993273
 ] 

Hudson commented on HADOOP-11684:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #576 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/576/])
HADOOP-11684. S3a to use thread pool that blocks clients. (Thomas Demoor (lei: 
rev bff7c90a5686de106ca7a866982412c5dfa01632)
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BlockingThreadPoolExecutorService.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestBlockingThreadPoolExecutorService.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFastOutputStream.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockingThreadPool.java


> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a thread pool that blocks clients, nicely throttling them, 
> rather than throwing an exception, e.g. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java





[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-11-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993280#comment-14993280
 ] 

Hudson commented on HADOOP-11684:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2575 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2575/])
HADOOP-11684. S3a to use thread pool that blocks clients. (Thomas Demoor (lei: 
rev bff7c90a5686de106ca7a866982412c5dfa01632)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFastOutputStream.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestBlockingThreadPoolExecutorService.java
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockingThreadPool.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/BlockingThreadPoolExecutorService.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch, 
> HADOOP-11684-003.patch, HADOOP-11684-004.patch, HADOOP-11684-005.patch, 
> HADOOP-11684-006.patch
>
>
> Currently, if fs.s3a.max.total.tasks tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a thread pool that blocks clients, throttling them nicely, 
> rather than throwing an exception, e.g. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java





[jira] [Commented] (HADOOP-11996) Native erasure coder facilities based on ISA-L

2015-11-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993296#comment-14993296
 ] 

Hadoop QA commented on HADOOP-11996:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 29s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 32s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.metrics2.impl.TestMetricsSystemImpl 
|
|   | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.fs.TestLocalFsFCStatistics |
| JDK v1.7.0_79 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.fs.shell.TestCopyPreserveFlag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-06 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770960/HADOOP-11996-v4.patch 
|
| JIRA Issue | HADOOP-11996 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  xml  compile  
cc  |
| uname | Linux b10c06a5c91c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build@2/patchprocess/apache-yetus-ee5baeb/precommit/personality/hadoop.sh
 |
| git revision | trunk / 66c0967 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12552) Fix undeclared/unused dependency to httpclient

2015-11-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14993279#comment-14993279
 ] 

Hadoop QA commented on HADOOP-12552:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 48s 
{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 58s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 27s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-06 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770961/HADOOP-12552.001.patch
 |
| JIRA Issue | HADOOP-12552 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  xml  compile  
|
| uname | Linux 3546e1c85455 3.13.0-36-lowlatency #63-Ubuntu SMP 

[jira] [Updated] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2015-11-05 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12550:
---
Status: Open  (was: Patch Available)

[~xyao], thank you for reviewing.  I had missed that point in the MSDN doc.  
Oddly, it seems to work anyway, because I have run tests that cover a directory 
rename through {{NativeIO#renameTo}}.  Even so, I wouldn't want to depend on 
undocumented behavior.  I'm canceling the patch while I sort this out.

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.
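The "replace an existing file at the destination" semantics the issue wants can be seen at the JDK level with {{Files.move}} and {{REPLACE_EXISTING}}, which matches the POSIX rename(2) behaviour described above. This is an illustration of the desired semantics, not the NativeIO patch itself; the class and method names are made up for the example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/**
 * Illustrates rename-with-replace semantics: with REPLACE_EXISTING the
 * move succeeds even when the destination already exists, as POSIX
 * rename(2) does, instead of failing as plain Win32 MoveFile would.
 */
public class ReplacingRename {
    public static void rename(Path src, Path dst) throws IOException {
        Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
    }
}
```

Without the {{REPLACE_EXISTING}} option, {{Files.move}} throws {{FileAlreadyExistsException}} on every platform, which is analogous to the MoveFile behaviour the issue is fixing.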





[jira] [Commented] (HADOOP-11954) Solaris does not support RLIMIT_MEMLOCK as in Linux

2015-11-05 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14991650#comment-14991650
 ] 

Alan Burlison commented on HADOOP-11954:


Whilst returning 0 from NativeIO_getMemlockLimit0 may be OK on Windows, which 
doesn't support the mlock() syscall, it doesn't work on Solaris, which supports 
the syscall but not the RLIMIT_MEMLOCK resource limit. On Solaris we should 
probably return MAXINT instead.

> Solaris does not support RLIMIT_MEMLOCK as in Linux
> ---
>
> Key: HADOOP-11954
> URL: https://issues.apache.org/jira/browse/HADOOP-11954
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.2.0, 2.3.0, 2.4.1, 2.5.2, 2.6.0, 2.7.0
>Reporter: Malcolm Kavalsky
>Assignee: Alan Burlison
> Attachments: HADOOP-11954.001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> This affects the JNI call to NativeIO_getMemlockLimit0.
> We can just return 0, as Windows does which also does not support this 
> feature.





[jira] [Commented] (HADOOP-11954) Solaris does not support RLIMIT_MEMLOCK as in Linux

2015-11-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14991694#comment-14991694
 ] 

Hadoop QA commented on HADOOP-11954:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 31s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 16s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 59s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-05 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12741632/HADOOP-11954.001.patch
 |
| JIRA Issue | HADOOP-11954 |
| Optional Tests |  asflicense  cc  unit  javac  compile  |
| uname | Linux 724d96493f14 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-e8bd3ad/precommit/personality/hadoop.sh
 |
| git revision | trunk / ea5bb48 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| JDK v1.7.0_79  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8034/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Max memory used | 227MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8034/console |


This message was automatically generated.



> Solaris does not support RLIMIT_MEMLOCK as in Linux
> ---
>
> Key: HADOOP-11954
> URL: https://issues.apache.org/jira/browse/HADOOP-11954
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.2.0, 2.3.0, 2.4.1, 2.5.2, 2.6.0, 2.7.0
>Reporter: Malcolm Kavalsky
>Assignee: Alan Burlison
> Attachments: