[jira] [Resolved] (HADOOP-7427) syntax error in smart-apply-patch.sh

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7427.
--
Resolution: Cannot Reproduce

Closing as cannot reproduce.

 syntax error in smart-apply-patch.sh 
 -

 Key: HADOOP-7427
 URL: https://issues.apache.org/jira/browse/HADOOP-7427
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Tsz Wo Nicholas Sze

 {noformat}
  [exec] Finished build.
  [exec] hdfs/src/test/bin/smart-apply-patch.sh: line 60: syntax error in 
 conditional expression: unexpected token `('
 BUILD FAILED
 hdfs/build.xml:1595: exec returned: 1
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-9636) UNIX like sort options for ls shell command

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HADOOP-9636:
--

I'm re-opening this one, since the patch is more recent.  It just needs a 
rebase.

 UNIX like sort options for ls shell command
 ---

 Key: HADOOP-9636
 URL: https://issues.apache.org/jira/browse/HADOOP-9636
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
Reporter: Varun Dhussa
Priority: Minor
 Attachments: HADOOP-9636-001.patch


 Add support for unix-ls-like sort options in fs -ls (a sketch follows this list):
 -t : sort by modification time
 -S : sort by file size
 -r : reverse the sort order
 -u : sort by access time
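 A minimal sketch of how those flags could map onto comparators over the listed
 FileStatus entries (illustrative only; the class and method names are
 assumptions, not the attached patch; -u would use getAccessTime() similarly):
 {code}
 import java.util.Arrays;
 import java.util.Comparator;

 import org.apache.hadoop.fs.FileStatus;

 public class LsSortSketch {
   // Hypothetical helper: order entries the way ls -t / -S / -r would.
   static void sort(FileStatus[] items, boolean byTime, boolean bySize,
                    boolean reverse) {
     Comparator<FileStatus> cmp;
     if (byTime) {
       // -t: newest first
       cmp = Comparator.comparingLong(FileStatus::getModificationTime).reversed();
     } else if (bySize) {
       // -S: largest first
       cmp = Comparator.comparingLong(FileStatus::getLen).reversed();
     } else {
       // default: lexicographic by name
       cmp = Comparator.comparing((FileStatus s) -> s.getPath().getName());
     }
     if (reverse) {
       cmp = cmp.reversed(); // -r flips whichever order was chosen
     }
     Arrays.sort(items, cmp);
   }
 }
 {code}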



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294328#comment-14294328
 ] 

Kai Zheng commented on HADOOP-11514:


Zhe and Tsz, as I'm traveling today and don't have convenient access to my dev 
environment, I'm not able to update the patch to change the package name. Maybe 
I can get it done in a follow-up JIRA?

 Raw Erasure Coder API for concrete encoding and decoding
 

 Key: HADOOP-11514
 URL: https://issues.apache.org/jira/browse/HADOOP-11514
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-EC

 Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
 HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
 HDFS-7353-v6.patch, HDFS-7353-v7.patch


 This is to abstract and define a raw erasure coder API across different coding 
 algorithms such as RS, XOR, etc. Such an API can be implemented by utilizing 
 various library support, such as the Intel ISA library and the Jerasure library.
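 As a rough illustration of the kind of abstraction being discussed (a
 hypothetical shape; the real interfaces are whatever the patch defines):
 {code}
 import java.nio.ByteBuffer;

 // Sketch: a codec-neutral raw coder contract that an RS, XOR, ISA-L-backed,
 // or Jerasure-backed implementation could all satisfy.
 interface RawErasureEncoder {
   /** Compute parity units from the given data units. */
   void encode(ByteBuffer[] inputs, ByteBuffer[] outputs);
 }

 interface RawErasureDecoder {
   /** Reconstruct the units named in erasedIndexes from the surviving inputs. */
   void decode(ByteBuffer[] inputs, int[] erasedIndexes, ByteBuffer[] outputs);
 }
 {code}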



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10560) Update NativeS3FileSystem to issue copy commands for files within a directory with a configurable number of threads

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293305#comment-14293305
 ] 

Hadoop QA commented on HADOOP-10560:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643032/HADOOP-10560-1.patch
  against trunk revision 6f9fe76.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5503//console

This message is automatically generated.

 Update NativeS3FileSystem to issue copy commands for files within a 
 directory with a configurable number of threads
 

 Key: HADOOP-10560
 URL: https://issues.apache.org/jira/browse/HADOOP-10560
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Ted Malaska
Assignee: Ted Malaska
Priority: Minor
  Labels: performance
 Attachments: HADOOP-10560-1.patch, HADOOP-10560.patch


 In NativeS3FileSystem, if you do a copy of a directory it will copy all the 
 files to the new location, but it will do this with one thread. The code is 
 below. This JIRA will allow a configurable number of threads to be used to 
 issue the copy commands to S3.
 do {
   PartialListing listing = store.list(srcKey, S3_MAX_LISTING_LENGTH,
       priorLastKey, true);
   for (FileMetadata file : listing.getFiles()) {
     keysToDelete.add(file.getKey());
     store.copy(file.getKey(), dstKey + file.getKey().substring(srcKey.length()));
   }
   priorLastKey = listing.getPriorLastKey();
 } while (priorLastKey != null);
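 A hedged sketch of the multi-threaded variant the JIRA proposes, reusing the
 names (store, conf, srcKey, dstKey, keysToDelete) from the snippet above; the
 pool-size key and Java 8 syntax are illustrative assumptions, not the
 attached patch:
 {code}
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;

 // Issue the per-file copy calls from a fixed-size pool instead of the
 // calling thread, then wait for all of them before declaring success.
 // (Assumes a surrounding method that declares the checked exceptions.)
 ExecutorService pool = Executors.newFixedThreadPool(
     conf.getInt("fs.s3n.copy.threads", 10)); // hypothetical config key
 List<Future<Void>> pending = new ArrayList<>();
 do {
   PartialListing listing = store.list(srcKey, S3_MAX_LISTING_LENGTH,
       priorLastKey, true);
   for (FileMetadata file : listing.getFiles()) {
     keysToDelete.add(file.getKey());
     final String key = file.getKey();
     pending.add(pool.submit(() -> {
       store.copy(key, dstKey + key.substring(srcKey.length()));
       return null; // Callable<Void>, so checked IOExceptions surface via get()
     }));
   }
   priorLastKey = listing.getPriorLastKey();
 } while (priorLastKey != null);
 for (Future<Void> f : pending) {
   f.get(); // propagate any copy failure before finishing
 }
 pool.shutdown();
 {code}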



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11086) Upgrade jets3t to 0.9.2

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11086:

Affects Version/s: 2.6.0
   Status: Open  (was: Patch Available)

 Upgrade jets3t to 0.9.2
 ---

 Key: HADOOP-11086
 URL: https://issues.apache.org/jira/browse/HADOOP-11086
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Matteo Bertozzi
Priority: Minor
 Attachments: HADOOP-11086-v0.patch


 jets3t 0.9.2 contains a fix for a bug that caused multi-part uploads with 
 server-side encryption to fail.
 http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
 (it also removes an exception thrown from the RestS3Service constructor, which 
 requires removing the try/catch around that code)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11086) Upgrade jets3t to 0.9.2

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11086:

Status: Patch Available  (was: Open)

 Upgrade jets3t to 0.9.2
 ---

 Key: HADOOP-11086
 URL: https://issues.apache.org/jira/browse/HADOOP-11086
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Matteo Bertozzi
Priority: Minor
 Attachments: HADOOP-11086-v0.patch


 jets3t 0.9.2 contains a fix for a bug that caused multi-part uploads with 
 server-side encryption to fail.
 http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
 (it also removes an exception thrown from the RestS3Service constructor, which 
 requires removing the try/catch around that code)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11335) KMS ACL in meta data or database

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293309#comment-14293309
 ] 

Hadoop QA commented on HADOOP-11335:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12694730/HADOOP-11335.004.patch
  against trunk revision 6f9fe76.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1188 javac 
compiler warnings (more than the trunk's current 1187 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.crypto.key.TestKeyProviderCryptoExtension

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5488//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5488//artifact/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5488//console

This message is automatically generated.

 KMS ACL in meta data or database
 

 Key: HADOOP-11335
 URL: https://issues.apache.org/jira/browse/HADOOP-11335
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.6.0
Reporter: Jerry Chen
Assignee: Dian Fu
  Labels: Security
 Attachments: HADOOP-11335.001.patch, HADOOP-11335.002.patch, 
 HADOOP-11335.003.patch, HADOOP-11335.004.patch, KMS ACL in metadata or 
 database.pdf

   Original Estimate: 504h
  Remaining Estimate: 504h

 Currently Hadoop KMS implements ACLs for keys, and the per-key ACLs are 
 stored in the configuration file kms-acls.xml.
 Managing ACLs in a configuration file is not easy in enterprise usage, and it 
 puts difficulties on backup and recovery.
 Ideally the ACLs for keys would be stored in the key metadata, similar to what 
 file system ACLs do. In this way, backup and recovery that works on keys 
 would work for the keys' ACLs too.
 On the other hand, with the ACLs in metadata, the ACL of each key can be 
 easily manipulated with an API or command-line tool and take effect instantly.  
 This is very important for enterprise-level access control management; that 
 feature can be addressed by a separate JIRA. With the configuration file, 
 these would be hard to provide.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10197) Disable additional m2eclipse plugin execution

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293361#comment-14293361
 ] 

Hadoop QA commented on HADOOP-10197:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621072/HADOOP-10197-2.patch
  against trunk revision 6f9fe76.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5502//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5502//console

This message is automatically generated.

 Disable additional m2eclipse plugin execution
 -

 Key: HADOOP-10197
 URL: https://issues.apache.org/jira/browse/HADOOP-10197
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Eric Charles
 Attachments: HADOOP-10197-2.patch, HADOOP-10197.patch


 M2Eclipse complains when importing the maven modules into Eclipse.
 We should add more filters in the org.eclipse.m2e.lifecycle-mapping plugin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293363#comment-14293363
 ] 

Steve Loughran commented on HADOOP-10846:
-

also, {{byteswap.h}} isn't on windows ... which doesn't matter, as it can use 
the no-op feature. The #ifdef logic is going to have to include the file only 
if it exists.
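Something along these lines (a sketch; HAVE_BYTESWAP_H is the conventional 
macro a CMake/autoconf header probe would define, assumed here rather than 
taken from the patch):
{code}
/* Include byteswap.h only where the build probe found it; otherwise fall
 * back to a portable definition, which is also what a no-op/Windows build
 * would compile against. */
#ifdef HAVE_BYTESWAP_H
#include <byteswap.h>
#else
#define bswap_32(x) ((((x) & 0xff000000u) >> 24) | (((x) & 0x00ff0000u) >>  8) | \
                     (((x) & 0x0000ff00u) <<  8) | (((x) & 0x000000ffu) << 24))
#endif
{code}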



 DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
 by array
 --

 Key: HADOOP-10846
 URL: https://issues.apache.org/jira/browse/HADOOP-10846
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.4.1, 2.5.2
 Environment: PowerPC platform
Reporter: Jinghui Wang
Assignee: Jinghui Wang
 Attachments: HADOOP-10846-v1.patch, HADOOP-10846-v2.patch, 
 HADOOP-10846-v3.patch, HADOOP-10846.patch


 Got the following exception when running Hadoop on PowerPC. The 
 implementation for computing checksums fails when the data buffer and checksum 
 buffer are not backed by arrays.
 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
 PriviledgedActionException as:biadmin (auth:SIMPLE) 
 cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
 org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10744) LZ4 Compression fails to recognize PowerPC Little Endian Architecture

2015-01-27 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293394#comment-14293394
 ] 

Ayappan commented on HADOOP-10744:
--

The patch will apply successfully with the -p1 option. Can any maintainer look 
into this issue?

 LZ4 Compression fails to recognize PowerPC Little Endian Architecture
 -

 Key: HADOOP-10744
 URL: https://issues.apache.org/jira/browse/HADOOP-10744
 Project: Hadoop Common
  Issue Type: Bug
  Components: io, native
Affects Versions: 2.4.1, 2.5.2
 Environment: PowerPC Little Endian (ppc64le)
Reporter: Ayappan
Assignee: Bert Sanders
 Attachments: HADOOP-10744-v1.patch, HADOOP-10744-v2.patch, 
 HADOOP-10744.patch


 Lz4 compression fails to identify the PowerPC Little Endian architecture. It 
 recognizes it as Big Endian, and several test cases (TestCompressorDecompressor, 
 TestCodec, TestLz4CompressorDecompressor) fail 
 due to this.
 Running org.apache.hadoop.io.compress.TestCompressorDecompressor
 Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.435 sec  
 FAILURE! - in org.apache.hadoop.io.compress.TestCompressorDecompressor
 testCompressorDecompressor(org.apache.hadoop.io.compress.TestCompressorDecompressor)
   Time elapsed: 0.308 sec   FAILURE!
 org.junit.internal.ArrayComparisonFailure: 
 org.apache.hadoop.io.compress.lz4.Lz4Compressor_org.apache.hadoop.io.compress.lz4.Lz4Decompressor-
   byte arrays not equals error !!!: arrays first differed at element [1428]; 
 expected:4 but was:10
 at 
 org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
 at org.junit.Assert.internalArrayEquals(Assert.java:473)
 at org.junit.Assert.assertArrayEquals(Assert.java:294)
 at 
 org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:325)
 at 
 org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
 at 
 org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor(TestCompressorDecompressor.java:58)
 ...
 ...
 .



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9650) Update jetty dependencies

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293297#comment-14293297
 ] 

Hadoop QA commented on HADOOP-9650:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588183/HADOOP-trunk-9650.patch
  against trunk revision 6f9fe76.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5499//console

This message is automatically generated.

 Update jetty dependencies 
 --

 Key: HADOOP-9650
 URL: https://issues.apache.org/jira/browse/HADOOP-9650
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.6.0
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: build, maven
 Attachments: HADOOP-9650.patch, HADOOP-trunk-9650.patch


 Update deprecated jetty 6 dependencies, moving forward to jetty 8. This 
 enables mvn-rpmbuild on Fedora 18 platforms. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10948) SwiftNativeFileSystem's directory is incompatible with Swift and Horizon

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10948:

Assignee: Kazuki OIKAWA
  Status: Open  (was: Patch Available)

 SwiftNativeFileSystem's directory is incompatible with Swift and Horizon
 

 Key: HADOOP-10948
 URL: https://issues.apache.org/jira/browse/HADOOP-10948
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 3.0.0
Reporter: Kazuki OIKAWA
Assignee: Kazuki OIKAWA
 Attachments: HADOOP-10948-2.patch, HADOOP-10948.patch


 SwiftNativeFileSystem's directory representation is a zero-byte file,
 but in Swift / Horizon the directory representation is a name with a trailing slash.
 This incompatibility causes the following issues:
 * SwiftNativeFileSystem can't see pseudo-directories made by OpenStack Horizon.
 * Swift/Horizon can't see pseudo-directories made by SwiftNativeFileSystem; it 
 sees a zero-byte file instead of the pseudo-directory.
 * SwiftNativeFileSystem can't see a file if there is no intermediate 
 pseudo-directory object.
 * SwiftNativeFileSystem makes two objects when making a single directory
 (e.g. hadoop fs -mkdir swift://test.test/dir/ creates both dir and dir/).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10560) Update NativeS3FileSystem to issue copy commands for files within a directory with a configurable number of threads

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293303#comment-14293303
 ] 

Steve Loughran commented on HADOOP-10560:
-

the s3a FS is doing some form of thread pooling in 2.7+. Is this sufficient, or 
do people think we should have something similar in s3n?

Given s3a is a replacement for s3n, I'd be tempted to say leave s3n alone.

 Update NativeS3FileSystem to issue copy commands for files within a 
 directory with a configurable number of threads
 

 Key: HADOOP-10560
 URL: https://issues.apache.org/jira/browse/HADOOP-10560
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Ted Malaska
Assignee: Ted Malaska
Priority: Minor
  Labels: performance
 Attachments: HADOOP-10560-1.patch, HADOOP-10560.patch


 In NativeS3FileSystem, if you do a copy of a directory it will copy all the 
 files to the new location, but it will do this with one thread. The code is 
 below. This JIRA will allow a configurable number of threads to be used to 
 issue the copy commands to S3.
 do {
   PartialListing listing = store.list(srcKey, S3_MAX_LISTING_LENGTH,
       priorLastKey, true);
   for (FileMetadata file : listing.getFiles()) {
     keysToDelete.add(file.getKey());
     store.copy(file.getKey(), dstKey + file.getKey().substring(srcKey.length()));
   }
   priorLastKey = listing.getPriorLastKey();
 } while (priorLastKey != null);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-4297) Enable Java assertions when running tests

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293323#comment-14293323
 ] 

Hadoop QA commented on HADOOP-4297:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12656708/c4297_20140719.patch
  against trunk revision 6f9fe76.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
48 warning messages.
See 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5496//artifact/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-openstack.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5496//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5496//console

This message is automatically generated.

 Enable Java assertions when running tests
 -

 Key: HADOOP-4297
 URL: https://issues.apache.org/jira/browse/HADOOP-4297
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 0.19.0, 0.20.0
Reporter: Yoram Kulbak
Assignee: Tsz Wo Nicholas Sze
 Attachments: HADOOP-4297.patch, HADOOP-4297.patch, 
 c4297_20140719.patch


 A suggestion to enable Java assertions in the project's build xml when 
 running tests. I think this would improve the build quality.
 To enable assertions, add the following snippet to the JUnit tasks in 
 build.xml:
 {code}
 <assertions>
   <enable/>
 </assertions>
 {code}
 For example:
 {code}
 <junit ...>
   ...
   <assertions>
     <enable/>
   </assertions>
 </junit>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9613) Updated jersey pom dependencies

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9613:
---
Assignee: Timothy St. Clair
  Status: Patch Available  (was: Open)

resubmitting, though I think jersey versions have moved on further since. 

 Updated jersey pom dependencies
 ---

 Key: HADOOP-9613
 URL: https://issues.apache.org/jira/browse/HADOOP-9613
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.4.0, 3.0.0
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: maven
 Attachments: HADOOP-2.2.0-9613.patch, HADOOP-9613.patch


 Update pom.xml dependencies exposed when running mvn-rpmbuild against 
 system dependencies on Fedora 18.
 The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9613) Updated jersey pom dependencies

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9613:
---
Status: Open  (was: Patch Available)

 Updated jersey pom dependencies
 ---

 Key: HADOOP-9613
 URL: https://issues.apache.org/jira/browse/HADOOP-9613
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.4.0, 3.0.0
Reporter: Timothy St. Clair
  Labels: maven
 Attachments: HADOOP-2.2.0-9613.patch, HADOOP-9613.patch


 Update pom.xml dependencies exposed when running mvn-rpmbuild against 
 system dependencies on Fedora 18.
 The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11499) Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock acquisition

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293317#comment-14293317
 ] 

Hudson commented on HADOOP-11499:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #86 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/86/])
HADOOP-11499. Check of executorThreadsStarted in ValueQueue#submitRefillTask() 
evades lock acquisition. Contributed by Ted Yu (jlowe: rev 
7574df1bba33919348d3009f2578d6a81b5818e6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java


 Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock 
 acquisition
 

 Key: HADOOP-11499
 URL: https://issues.apache.org/jira/browse/HADOOP-11499
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 2.7.0

 Attachments: hadoop-11499-001.patch


 {code}
 if (!executorThreadsStarted) {
   synchronized (this) {
     // To ensure all requests are first queued, make coreThreads = maxThreads
     // and pre-start all the Core Threads.
     executor.prestartAllCoreThreads();
     executorThreadsStarted = true;
   }
 }
 {code}
 It is possible for two threads executing the above code to both see 
 executorThreadsStarted as false, leading to 
 executor.prestartAllCoreThreads() being called twice.
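 A standard remedy (a sketch only; the committed patch may differ in detail) is
 to re-check the flag while holding the lock, so the pre-start runs at most once:
 {code}
 // Double-checked variant: the second, locked check closes the race window.
 // (For full safety the flag should be volatile, so the unlocked read is not stale.)
 if (!executorThreadsStarted) {
   synchronized (this) {
     if (!executorThreadsStarted) {
       executor.prestartAllCoreThreads();
       executorThreadsStarted = true;
     }
   }
 }
 {code}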



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6221) RPC Client operations cannot be interrupted

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293319#comment-14293319
 ] 

Hudson commented on HADOOP-6221:


FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #86 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/86/])
HADOOP-6221 RPC Client operations cannot be interrupted (stevel) (stevel: rev 
1f2b6956c2012a7d6ea7e7ba5116d3ad71c23d7e)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketIOWithTimeout.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCWaitForProxy.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 RPC Client operations cannot be interrupted
 ---

 Key: HADOOP-6221
 URL: https://issues.apache.org/jira/browse/HADOOP-6221
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-6221-007.patch, HADOOP-6221-008.patch, 
 HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, 
 HADOOP-6221.patch, HADOOP-6221.patch


 RPC.waitForProxy swallows any attempts to interrupt it while waiting for a 
 proxy; this makes it hard to shut down a service that you are starting; you 
 have to wait for the timeouts. 
 There are only 4-5 places in the code that use either of the two overloaded 
 methods, so removing the catch and changing the signature should not be too 
 painful, unless anyone is using the method outside the hadoop codebase. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11466) FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture because it is slower there

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293315#comment-14293315
 ] 

Hudson commented on HADOOP-11466:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #86 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/86/])
HADOOP-11466: move to 2.6.1 (cmccabe: rev 
21d5599067adf14d589732a586c3b10aeb0936e9)
* hadoop-common-project/hadoop-common/CHANGES.txt


 FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture 
 because it is slower there
 

 Key: HADOOP-11466
 URL: https://issues.apache.org/jira/browse/HADOOP-11466
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io, performance, util
 Environment: Linux X86 and Solaris SPARC
Reporter: Suman Somasundar
Assignee: Suman Somasundar
Priority: Minor
  Labels: patch
 Fix For: 2.6.1

 Attachments: HADOOP-11466.003.patch


 One difference between Hadoop 2.x and Hadoop 1.x is a utility to compare two 
 byte arrays at a coarser 8-byte granularity instead of at the byte level. The 
 discussion at HADOOP-7761 says this fast byte comparison is somewhat faster 
 for longer arrays and somewhat slower for smaller arrays (AVRO-939). In 
 order to do 8-byte reads on addresses not aligned to 8-byte boundaries, the 
 patch uses Unsafe.getLong. The problem is that this call is incredibly 
 expensive on SPARC: the Studio compiler detects an 
 unaligned pointer read and handles the read in software. x86 supports 
 unaligned reads, so there is no penalty for this call on x86. 
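 A sketch of the kind of guard this implies (the os.arch property check is
 illustrative; the patch's actual predicate may differ):
 {code}
 // Fall back to the pure-Java byte-at-a-time comparer on SPARC, where
 // unaligned Unsafe.getLong reads are trapped and emulated in software.
 static boolean useUnsafeComparer() {
   String arch = System.getProperty("os.arch", "");
   return !arch.toLowerCase(java.util.Locale.ROOT).startsWith("sparc");
 }
 {code}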



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11509) change parsing sequence in GenericOptionsParser to parse -D parameters first

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293320#comment-14293320
 ] 

Hudson commented on HADOOP-11509:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #86 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/86/])
HADOOP-11509. Change parsing sequence in GenericOptionsParser to parse (xgong: 
rev 0bf333911c950f22ec0f784bf465306e20b0d507)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java


 change parsing sequence in GenericOptionsParser to parse -D parameters first
 

 Key: HADOOP-11509
 URL: https://issues.apache.org/jira/browse/HADOOP-11509
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong
 Fix For: 2.7.0

 Attachments: HADOOP-11509.1.patch, HADOOP-11509.2.patch


 In GenericOptionsParser, we need to parse the -D parameters first. That way, 
 the user-supplied parameters (set through -D) land in the configuration 
 object earlier and can be used when processing the other parameters.
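 A sketch of the two-pass idea (the helper name and tokenization are
 illustrative, not the patch itself):
 {code}
 import org.apache.hadoop.conf.Configuration;

 // Pass 1: apply every "-D key=value" pair to the Configuration before any
 // other generic option is interpreted, so later options can read those values.
 static void applyDefinesFirst(Configuration conf, String[] args) {
   for (int i = 0; i < args.length - 1; i++) {
     if ("-D".equals(args[i])) {
       String[] kv = args[i + 1].split("=", 2);
       if (kv.length == 2) {
         conf.set(kv[0], kv[1]);
       }
     }
   }
   // Pass 2 (not shown): run the normal option parsing over args.
 }
 {code}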



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10197) Disable additional m2eclipse plugin execution

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293293#comment-14293293
 ] 

Steve Loughran commented on HADOOP-10197:
-

HADOOP-10574 is going to move all plugin versions to properties. Once it's in, 
can this patch be reworked to update/extend the properties?

 Disable additional m2eclipse plugin execution
 -

 Key: HADOOP-10197
 URL: https://issues.apache.org/jira/browse/HADOOP-10197
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Eric Charles
 Attachments: HADOOP-10197-2.patch, HADOOP-10197.patch


 M2Eclipse complains when importing the maven modules into Eclipse.
 We should add more filters in the org.eclipse.m2e.lifecycle-mapping plugin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9613) Updated jersey pom dependencies

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293294#comment-14293294
 ] 

Hadoop QA commented on HADOOP-9613:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12610134/HADOOP-2.2.0-9613.patch
  against trunk revision 6f9fe76.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5498//console

This message is automatically generated.

 Updated jersey pom dependencies
 ---

 Key: HADOOP-9613
 URL: https://issues.apache.org/jira/browse/HADOOP-9613
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.4.0
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: maven
 Attachments: HADOOP-2.2.0-9613.patch, HADOOP-9613.patch


 Update pom.xml dependencies exposed when running mvn-rpmbuild against 
 system dependencies on Fedora 18.
 The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8307) The task-controller is not packaged in the tarball

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293295#comment-14293295
 ] 

Hadoop QA commented on HADOOP-8307:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12523932/hadoop-8307.patch
  against trunk revision 6f9fe76.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5500//console

This message is automatically generated.

 The task-controller is not packaged in the tarball
 --

 Key: HADOOP-8307
 URL: https://issues.apache.org/jira/browse/HADOOP-8307
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.3
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: hadoop-8307.patch


 Ant in some situations, puts artifacts such as task-controller into the 
 build/hadoop-*/ directory before the package target deletes it to start 
 over.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10309) S3 block filesystem should more aggressively delete temporary files

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10309:

Affects Version/s: 2.6.0
   Status: Open  (was: Patch Available)

 S3 block filesystem should more aggressively delete temporary files
 ---

 Key: HADOOP-10309
 URL: https://issues.apache.org/jira/browse/HADOOP-10309
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Joe Kelley
Priority: Minor
 Attachments: HADOOP-10309.patch


 The S3 FileSystem reading implementation downloads block files into a 
 configurable temporary directory. deleteOnExit() is called on these files, so 
 they are deleted when the JVM exits.
 However, JVM reuse can lead to JVMs that stick around for a very long time. 
 This can cause these temporary files to build up indefinitely and, in the 
 worst case, fill up the local directory.
 After a block file has been read, there is no reason to keep it around. It 
 should be deleted.
 Writing to the S3 FileSystem already has this behavior; after a temporary 
 block file is written and uploaded to S3, it is deleted immediately; there is 
 no need to wait for the JVM to exit.
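 One way to get that behavior (a sketch, not the attached patch: tie the temp
 file's lifetime to its stream):
 {code}
 import java.io.File;
 import java.io.FileInputStream;
 import java.io.IOException;

 // Sketch: a block-file stream that removes its backing file on close,
 // so the cache entry dies with the read instead of with the JVM.
 class SelfDeletingBlockStream extends FileInputStream {
   private final File blockFile;

   SelfDeletingBlockStream(File blockFile) throws IOException {
     super(blockFile);
     this.blockFile = blockFile;
   }

   @Override
   public void close() throws IOException {
     try {
       super.close();
     } finally {
       blockFile.delete(); // best-effort; the file is pure cache at this point
     }
   }
 }
 {code}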



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10309) S3 block filesystem should more aggressively delete temporary files

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10309:

Status: Patch Available  (was: Open)

 S3 block filesystem should more aggressively delete temporary files
 ---

 Key: HADOOP-10309
 URL: https://issues.apache.org/jira/browse/HADOOP-10309
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Joe Kelley
Priority: Minor
 Attachments: HADOOP-10309.patch


 The S3 FileSystem reading implementation downloads block files into a 
 configurable temporary directory. deleteOnExit() is called on these files, so 
 they are deleted when the JVM exits.
 However, JVM reuse can lead to JVMs that stick around for a very long time. 
 This can cause these temporary files to build up indefinitely and, in the 
 worst case, fill up the local directory.
 After a block file has been read, there is no reason to keep it around. It 
 should be deleted.
 Writing to the S3 FileSystem already has this behavior; after a temporary 
 block file is written and uploaded to S3, it is deleted immediately; there is 
 no need to wait for the JVM to exit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11488) Difference in default connection timeout for S3A FS

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1429#comment-1429
 ] 

Steve Loughran commented on HADOOP-11488:
-

although option #2 is purest, I'm going to recommend #1. Why? If creating the 
test FS fails, then trying to do it again in the teardown is simply going to 
create more problems.

-steve

 Difference in default connection timeout for S3A FS
 ---

 Key: HADOOP-11488
 URL: https://issues.apache.org/jira/browse/HADOOP-11488
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Harsh J
Assignee: Daisuke Kobayashi
Priority: Minor
 Attachments: HADOOP-11488.patch, HADOOP-11488.patch


 The core-default.xml defines fs.s3a.connection.timeout as 5000, and the code 
 under hadoop-tools/hadoop-aws defines it as 50000.
 We should update the former to 50s so it gets taken properly, as we're also 
 noticing that 5s is often too low, especially in cases such as large DistCp 
 operations (which fail with {{Read timed out}} errors from the S3 service).
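 Until the defaults agree, an explicit override in core-site.xml pins the
 intended value (the property name comes from this issue; the value is in
 milliseconds):
 {code}
 <property>
   <name>fs.s3a.connection.timeout</name>
   <value>50000</value> <!-- 50s, matching the intended default -->
 </property>
 {code}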



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11466) FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture because it is slower there

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293327#comment-14293327
 ] 

Hudson commented on HADOOP-11466:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #820 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/820/])
HADOOP-11466: move to 2.6.1 (cmccabe: rev 
21d5599067adf14d589732a586c3b10aeb0936e9)
* hadoop-common-project/hadoop-common/CHANGES.txt


 FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture 
 because it is slower there
 

 Key: HADOOP-11466
 URL: https://issues.apache.org/jira/browse/HADOOP-11466
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io, performance, util
 Environment: Linux X86 and Solaris SPARC
Reporter: Suman Somasundar
Assignee: Suman Somasundar
Priority: Minor
  Labels: patch
 Fix For: 2.6.1

 Attachments: HADOOP-11466.003.patch


 One difference between Hadoop 2.x and Hadoop 1.x is a utility to compare two 
 byte arrays at a coarser 8-byte granularity instead of at the byte level. The 
 discussion at HADOOP-7761 says this fast byte comparison is somewhat faster 
 for longer arrays and somewhat slower for smaller arrays (AVRO-939). In 
 order to do 8-byte reads on addresses not aligned to 8-byte boundaries, the 
 patch uses Unsafe.getLong. The problem is that this call is incredibly 
 expensive on SPARC: the Studio compiler detects an 
 unaligned pointer read and handles the read in software. x86 supports 
 unaligned reads, so there is no penalty for this call on x86. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10948) SwiftNativeFileSystem's directory is incompatible with Swift and Horizon

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293334#comment-14293334
 ] 

Hadoop QA commented on HADOOP-10948:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662455/HADOOP-10948-2.patch
  against trunk revision 6f9fe76.

{color:red}-1 @author{color}.  The patch appears to contain  @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include  new 
or modified test files.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5504//console

This message is automatically generated.

 SwiftNativeFileSystem's directory is incompatible with Swift and Horizon
 

 Key: HADOOP-10948
 URL: https://issues.apache.org/jira/browse/HADOOP-10948
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 3.0.0
Reporter: Kazuki OIKAWA
Assignee: Kazuki OIKAWA
 Attachments: HADOOP-10948-2.patch, HADOOP-10948.patch


 SwiftNativeFileSystem's directory representation is a zero-byte file,
 but in Swift / Horizon the directory representation is a name with a trailing slash.
 This incompatibility causes the following issues:
 * SwiftNativeFileSystem can't see pseudo-directories made by OpenStack Horizon.
 * Swift/Horizon can't see pseudo-directories made by SwiftNativeFileSystem; it 
 sees a zero-byte file instead of the pseudo-directory.
 * SwiftNativeFileSystem can't see a file if there is no intermediate 
 pseudo-directory object.
 * SwiftNativeFileSystem makes two objects when making a single directory
 (e.g. hadoop fs -mkdir swift://test.test/dir/ creates both dir and dir/).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6221) RPC Client operations cannot be interrupted

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293331#comment-14293331
 ] 

Hudson commented on HADOOP-6221:


FAILURE: Integrated in Hadoop-Yarn-trunk #820 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/820/])
HADOOP-6221 RPC Client operations cannot be interrupted (stevel) (stevel: rev 
1f2b6956c2012a7d6ea7e7ba5116d3ad71c23d7e)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCWaitForProxy.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketIOWithTimeout.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


 RPC Client operations cannot be interrupted
 ---

 Key: HADOOP-6221
 URL: https://issues.apache.org/jira/browse/HADOOP-6221
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-6221-007.patch, HADOOP-6221-008.patch, 
 HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, 
 HADOOP-6221.patch, HADOOP-6221.patch


 RPC.waitForProxy swallows any attempts to interrupt it while waiting for a 
 proxy; this makes it hard to shut down a service that you are starting; you 
 have to wait for the timeouts. 
 There are only 4-5 places in the code that use either of the two overloaded 
 methods, so removing the catch and changing the signature should not be too 
 painful, unless anyone is using the method outside the hadoop codebase. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11499) Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock acquisition

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293329#comment-14293329
 ] 

Hudson commented on HADOOP-11499:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #820 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/820/])
HADOOP-11499. Check of executorThreadsStarted in ValueQueue#submitRefillTask() 
evades lock acquisition. Contributed by Ted Yu (jlowe: rev 
7574df1bba33919348d3009f2578d6a81b5818e6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java


 Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock 
 acquisition
 

 Key: HADOOP-11499
 URL: https://issues.apache.org/jira/browse/HADOOP-11499
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 2.7.0

 Attachments: hadoop-11499-001.patch


 {code}
 if (!executorThreadsStarted) {
   synchronized (this) {
     // To ensure all requests are first queued, make coreThreads = maxThreads
     // and pre-start all the Core Threads.
     executor.prestartAllCoreThreads();
     executorThreadsStarted = true;
   }
 }
 {code}
 It is possible for two threads executing the above code to both see 
 executorThreadsStarted as false, leading to 
 executor.prestartAllCoreThreads() being called twice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11086) Upgrade jets3t to 0.9.2

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293359#comment-14293359
 ] 

Hadoop QA commented on HADOOP-11086:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668147/HADOOP-11086-v0.patch
  against trunk revision 6f9fe76.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1188 javac 
compiler warnings (more than the trunk's current 1187 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-tools/hadoop-aws 

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5501//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5501//artifact/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5501//console

This message is automatically generated.

 Upgrade jets3t to 0.9.2
 ---

 Key: HADOOP-11086
 URL: https://issues.apache.org/jira/browse/HADOOP-11086
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Matteo Bertozzi
Priority: Minor
 Attachments: HADOOP-11086-v0.patch


 jets3t 0.9.2 contains a fix for a bug that caused multi-part uploads with 
 server-side encryption to fail.
 http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
 (it also removes an exception thrown from the RestS3Service constructor, which 
 requires removing the try/catch around that code)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9650) Update jetty dependencies

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9650:
---
 Assignee: Timothy St. Clair
 Target Version/s: 3.0.0  (was: 3.0.0, 2.1.0-beta)
Affects Version/s: (was: 2.1.0-beta)
   2.6.0
   Status: Patch Available  (was: Open)

resubmitting to see what happens. I expect an apply failure.

Now that hadoop is on Java 7+, we can look at this. I do expect it to break YARN 
apps, though.

 Update jetty dependencies 
 --

 Key: HADOOP-9650
 URL: https://issues.apache.org/jira/browse/HADOOP-9650
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.6.0
Reporter: Timothy St. Clair
Assignee: Timothy St. Clair
  Labels: build, maven
 Attachments: HADOOP-9650.patch, HADOOP-trunk-9650.patch


 Update deprecated jetty 6 dependencies, moving forward to jetty 8. This 
 enables mvn-rpmbuild on Fedora 18 platforms. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10948) SwiftNativeFileSystem's directory is incompatible with Swift and Horizon

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10948:

Status: Patch Available  (was: Open)

resubmitting. 

What are we to do here? The 0-byte pseudo-dir came from copying s3n; this swift 
pseudo-dir thing seems a swift-specific feature. It's obviously better, but it 
has implications for backwards compatibility.

 SwiftNativeFileSystem's directory is incompatible with Swift and Horizon
 

 Key: HADOOP-10948
 URL: https://issues.apache.org/jira/browse/HADOOP-10948
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 3.0.0
Reporter: Kazuki OIKAWA
Assignee: Kazuki OIKAWA
 Attachments: HADOOP-10948-2.patch, HADOOP-10948.patch


 SwiftNativeFileSystem's directory representation is a zero-byte file,
 but in Swift / Horizon the directory representation is a name with a trailing slash.
 This incompatibility causes the following issues:
 * SwiftNativeFileSystem can't see pseudo-directories made by OpenStack Horizon.
 * Swift/Horizon can't see pseudo-directories made by SwiftNativeFileSystem; it 
 sees a zero-byte file instead of the pseudo-directory.
 * SwiftNativeFileSystem can't see a file if there is no intermediate 
 pseudo-directory object.
 * SwiftNativeFileSystem makes two objects when making a single directory
 (e.g. hadoop fs -mkdir swift://test.test/dir/ creates both dir and dir/).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10290) Surefire steals focus on MacOS

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293306#comment-14293306
 ] 

Hadoop QA commented on HADOOP-10290:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12625365/hadoop-10290.patch
  against trunk revision 6f9fe76.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5494//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5494//console

This message is automatically generated.

 Surefire steals focus on MacOS
 --

 Key: HADOOP-10290
 URL: https://issues.apache.org/jira/browse/HADOOP-10290
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Laurent Goujon
 Attachments: hadoop-10290.patch, hadoop-10290.patch


 When running tests on MacOS X, the surefire plugin keeps stealing focus from 
 the current application.
 This can be avoided by adding {noformat}-Djava.awt.headless=true{noformat} to 
 the surefire command line.
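 One common way to wire that in (a sketch against the standard
 maven-surefire-plugin configuration; the exact placement in Hadoop's poms is
 not taken from any attached patch):
 {code}
 <plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-surefire-plugin</artifactId>
   <configuration>
     <!-- keep forked test JVMs headless so they cannot grab window focus -->
     <argLine>-Djava.awt.headless=true</argLine>
   </configuration>
 </plugin>
 {code}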



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10309) S3 block filesystem should more aggressively delete temporary files

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293307#comment-14293307
 ] 

Hadoop QA commented on HADOOP-10309:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12625989/HADOOP-10309.patch
  against trunk revision 6f9fe76.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5505//console

This message is automatically generated.

 S3 block filesystem should more aggressively delete temporary files
 ---

 Key: HADOOP-10309
 URL: https://issues.apache.org/jira/browse/HADOOP-10309
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Joe Kelley
Priority: Minor
 Attachments: HADOOP-10309.patch


 The S3 FileSystem reading implementation downloads block files into a 
 configurable temporary directory. deleteOnExit() is called on these files, so 
 they are deleted when the JVM exits.
 However, JVM reuse can lead to JVMs that stick around for a very long time. 
 This can cause these temporary files to build up indefinitely and, in the 
 worst case, fill up the local directory.
 After a block file has been read, there is no reason to keep it around. It 
 should be deleted.
 Writing to the S3 FileSystem already has this behavior; after a temporary 
 block file is written and uploaded to S3, it is deleted immediately; there is 
 no need to wait for the JVM to exit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11509) change parsing sequence in GenericOptionsParser to parse -D parameters first

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293332#comment-14293332
 ] 

Hudson commented on HADOOP-11509:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #820 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/820/])
HADOOP-11509. Change parsing sequence in GenericOptionsParser to parse (xgong: 
rev 0bf333911c950f22ec0f784bf465306e20b0d507)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java


 change parsing sequence in GenericOptionsParser to parse -D parameters first
 

 Key: HADOOP-11509
 URL: https://issues.apache.org/jira/browse/HADOOP-11509
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong
 Fix For: 2.7.0

 Attachments: HADOOP-11509.1.patch, HADOOP-11509.2.patch


 In GenericOptionsParser, we need to parse the -D parameters first. That way, 
 the user-supplied parameters (set through -D) land in the configuration 
 object earlier and can be used when processing the other parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11510) Expose truncate API via FileContext

2015-01-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11510:

Attachment: HADOOP-11510.002.patch

Fix the build failure.

 Expose truncate API via FileContext
 ---

 Key: HADOOP-11510
 URL: https://issues.apache.org/jira/browse/HADOOP-11510
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-11510.001.patch, HADOOP-11510.002.patch


 We also need to expose the truncate API via {{org.apache.hadoop.fs.FileContext}}.
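 A minimal usage sketch, assuming the new method mirrors 
 {{FileSystem#truncate(Path, long)}} (the exact signature is whatever the 
 attached patch defines):
 {code}
 import org.apache.hadoop.fs.FileContext;
 import org.apache.hadoop.fs.Path;

 class TruncateViaFileContext {
   static void truncateTo(Path file, long newLength) throws Exception {
     FileContext fc = FileContext.getFileContext();
     // true if the file is at the requested length immediately;
     // false if truncation continues in the background (last-block recovery)
     boolean done = fc.truncate(file, newLength);
     System.out.println("truncate complete immediately? " + done);
   }
 }
 {code}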



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11316) mvn package -Pdist,docs -DskipTests -Dtar fails because of non-ascii characters

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294729#comment-14294729
 ] 

Hadoop QA commented on HADOOP-11316:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12694918/HADOOP-11316.1.patch
  against trunk revision ee1e06a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5512//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5512//console

This message is automatically generated.

 mvn package -Pdist,docs -DskipTests -Dtar fails because of non-ascii 
 characters
 -

 Key: HADOOP-11316
 URL: https://issues.apache.org/jira/browse/HADOOP-11316
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Blocker
 Attachments: HADOOP-11316.1.patch


 The command fails because the following files include non-ascii characters.
 * ComparableVersion.java
 * CommonConfigurationKeysPublic.java
 * ComparableVersion.java
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
 {code}
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc]   //  <!--- KMSClientProvider configurations ???
   [javadoc]  ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc]   //  <!--- KMSClientProvider configurations ???
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc] Loading source files for package org.apache.hadoop.fs.crypto...
   [javadoc]   //  <!--- KMSClientProvider configurations ???
 {code}
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11335) KMS ACL in meta data or database

2015-01-27 Thread Dian Fu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated HADOOP-11335:
-
Attachment: HADOOP-11335.005.patch

Update the patch to fix the unit test failure.

 KMS ACL in meta data or database
 

 Key: HADOOP-11335
 URL: https://issues.apache.org/jira/browse/HADOOP-11335
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.6.0
Reporter: Jerry Chen
Assignee: Dian Fu
  Labels: Security
 Attachments: HADOOP-11335.001.patch, HADOOP-11335.002.patch, 
 HADOOP-11335.003.patch, HADOOP-11335.004.patch, HADOOP-11335.005.patch, KMS 
 ACL in metadata or database.pdf

   Original Estimate: 504h
  Remaining Estimate: 504h

 Currently Hadoop KMS has implemented ACLs for keys, and the per-key ACLs are 
 stored in the configuration file kms-acls.xml.
 Managing ACLs in a configuration file is not easy in enterprise usage, and it 
 creates difficulties for backup and recovery.
 It is ideal to store the ACLs for keys in the key metadata, similar to what 
 the file system ACL does.  In this way, the backup and recovery that works on 
 keys should work for the key ACLs too.
 On the other hand, with the ACL in the metadata, the ACL of each key can be 
 easily manipulated with an API or command-line tool and take effect instantly.  
 This is very important for enterprise-level access control management.  That 
 feature can be addressed by a separate JIRA. With the configuration file, 
 these capabilities would be hard to provide.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11045) Introducing a tool to detect flaky tests of hadoop jenkins test job

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293811#comment-14293811
 ] 

Tsuyoshi OZAWA commented on HADOOP-11045:
-

[~yzhangal] Great work! One point: I prefer to use sys.version_info instead of 
sys.hexversion. We can use the tuple comparison feature like this: 

{code}
>>> sys.version_info
sys.version_info(major=2, minor=7, micro=6, releaselevel='final', serial=0)
>>> sys.version_info > (2, 6, 0)
True
>>> sys.version_info > (3, 0, 0)
False
>>> sys.version_info < (3, 0, 0)
True
{code}

 Introducing a tool to detect flaky tests of hadoop jenkins test job
 ---

 Key: HADOOP-11045
 URL: https://issues.apache.org/jira/browse/HADOOP-11045
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, tools
Affects Versions: 2.5.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11045.001.patch, HADOOP-11045.002.patch, 
 HADOOP-11045.003.patch, HADOOP-11045.004.patch, HADOOP-11045.005.patch, 
 HADOOP-11045.006.patch, HADOOP-11045.007.patch


 File this jira to introduce a tool to detect flaky tests of hadoop jenkins 
 test jobs. Certainly it can be adapted to projects other than hadoop.
 I developed the tool on top of some initial work [~tlipcon] did. We find it 
 quite useful. With Todd's agreement, I'd like to push it to upstream so all 
 of us can share (thanks Todd for the initial work and support). I hope you 
 find the tool useful too.
 The idea is, when one has the need to see if the test failure s/he is seeing 
 in a pre-build jenkins run is flaky or not, s/he could run this tool to get a 
 good idea. Also, if one wants to look at the failure trend of a testcase in a 
 given jenkins job, the tool can be used too. I hope people find it useful.
 This tool is for hadoop contributors rather than hadoop users. Thanks 
 [~tedyu] for the advice to put to dev-support dir.
 Description of the tool:
 {code}
 #
 # Given a jenkins test job, this script examines all runs of the job done
 # within specified period of time (number of days prior to the execution
 # time of this script), and reports all failed tests.
 #
 # The output of this script includes a section for each run that has failed
 # tests, with each failed test name listed.
 #
 # More importantly, at the end, it outputs a summary section to list all 
 failed
 # tests within all examined runs, and indicate how many runs a same test
 # failed, and sorted all failed tests by how many runs each test failed in.
 #
 # This way, when we see failed tests in PreCommit build, we can quickly tell 
 # whether a failed test is a new failure or it failed before, and it may just 
 # be a flaky test.
 #
 # Of course, to be 100% sure about the reason of a failed test, closer look 
 # at the failed test for the specific run is necessary.
 #
 {code}
 How to use the tool:
 {code}
 Usage: determine-flaky-tests-hadoop.py [options]
 Options:
   -h, --help            show this help message and exit
   -J JENKINS_URL, --jenkins-url=JENKINS_URL
 Jenkins URL
   -j JOB_NAME, --job-name=JOB_NAME
 Job name to look at
   -n NUM_PREV_DAYS, --num-days=NUM_PREV_DAYS
 Number of days to examine
 {code}
 Example command line:
 {code}
 ./determine-flaky-tests-hadoop.py -J https://builds.apache.org -j 
 PreCommit-HDFS-Build -n 2 
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11045) Introducing a tool to detect flaky tests of hadoop jenkins test job

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293812#comment-14293812
 ] 

Tsuyoshi OZAWA commented on HADOOP-11045:
-

Please let me know if you have a reason to use sys.hexversion.

 Introducing a tool to detect flaky tests of hadoop jenkins test job
 ---

 Key: HADOOP-11045
 URL: https://issues.apache.org/jira/browse/HADOOP-11045
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, tools
Affects Versions: 2.5.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11045.001.patch, HADOOP-11045.002.patch, 
 HADOOP-11045.003.patch, HADOOP-11045.004.patch, HADOOP-11045.005.patch, 
 HADOOP-11045.006.patch, HADOOP-11045.007.patch


 File this jira to introduce a tool to detect flaky tests of hadoop jenkins 
 test jobs. Certainly it can be adapted to projects other than hadoop.
 I developed the tool on top of some initial work [~tlipcon] did. We find it 
 quite useful. With Todd's agreement, I'd like to push it to upstream so all 
 of us can share (thanks Todd for the initial work and support). I hope you 
 find the tool useful too.
 The idea is, when one has the need to see if the test failure s/he is seeing 
 in a pre-build jenkins run is flaky or not, s/he could run this tool to get a 
 good idea. Also, if one wants to look at the failure trend of a testcase in a 
 given jenkins job, the tool can be used too. I hope people find it useful.
 This tool is for hadoop contributors rather than hadoop users. Thanks 
 [~tedyu] for the advice to put to dev-support dir.
 Description of the tool:
 {code}
 #
 # Given a jenkins test job, this script examines all runs of the job done
 # within specified period of time (number of days prior to the execution
 # time of this script), and reports all failed tests.
 #
 # The output of this script includes a section for each run that has failed
 # tests, with each failed test name listed.
 #
 # More importantly, at the end, it outputs a summary section to list all 
 failed
 # tests within all examined runs, and indicate how many runs a same test
 # failed, and sorted all failed tests by how many runs each test failed in.
 #
 # This way, when we see failed tests in PreCommit build, we can quickly tell 
 # whether a failed test is a new failure or it failed before, and it may just 
 # be a flaky test.
 #
 # Of course, to be 100% sure about the reason of a failed test, closer look 
 # at the failed test for the specific run is necessary.
 #
 {code}
 How to use the tool:
 {code}
 Usage: determine-flaky-tests-hadoop.py [options]
 Options:
   -h, --help            show this help message and exit
   -J JENKINS_URL, --jenkins-url=JENKINS_URL
 Jenkins URL
   -j JOB_NAME, --job-name=JOB_NAME
 Job name to look at
   -n NUM_PREV_DAYS, --num-days=NUM_PREV_DAYS
 Number of days to examine
 {code}
 Example command line:
 {code}
 ./determine-flaky-tests-hadoop.py -J https://builds.apache.org -j 
 PreCommit-HDFS-Build -n 2 
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11509) change parsing sequence in GenericOptionsParser to parse -D parameters first

2015-01-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293817#comment-14293817
 ] 

Chris Nauroth commented on HADOOP-11509:


I took a look at how the generic command line options are documented, currently 
and going back to 1.0.4:

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options

http://hadoop.apache.org/docs/r1.0.4/commands_manual.html#Generic+Options

Also relevant is the streaming documentation, which basically repeats the 
information:

http://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/HadoopStreaming.html#Generic_Command_Options

http://hadoop.apache.org/docs/r1.0.4/streaming.html#Generic+Command+Options

There is no way for a user to interpret this documentation to get a complete 
and correct understanding of these precedence rules (regardless of this patch). 
 It sounds like you're suggesting we file a follow-on jira to improve the 
documentation.  Do I understand correctly?

I still can't see a way that this patch would cause calling code to break.  The 
argument handling at this layer is not strictly positional, due to the way 
commons-cli works.  I don't expect anyone will need to swap the order of 
arguments in their script or anything like that.  Do you have an example of 
something that would break after this patch?

 change parsing sequence in GenericOptionsParser to parse -D parameters first
 

 Key: HADOOP-11509
 URL: https://issues.apache.org/jira/browse/HADOOP-11509
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong
 Fix For: 2.7.0

 Attachments: HADOOP-11509.1.patch, HADOOP-11509.2.patch


 In GenericOptionsParser, we need to parse the -D parameters first. That way, 
 the user-supplied parameters (through -D) are set on the configuration object 
 earlier and can be used when processing the other parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11086) Upgrade jets3t to 0.9.2

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293453#comment-14293453
 ] 

Steve Loughran commented on HADOOP-11086:
-

javac warning is spurious
{code}
73a74
 [WARNING] 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:[55,28]
  [deprecation] X509V1CertificateGenerator in org.bouncycastle.x509 has been 
 deprecated
{code}

 Upgrade jets3t to 0.9.2
 ---

 Key: HADOOP-11086
 URL: https://issues.apache.org/jira/browse/HADOOP-11086
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Matteo Bertozzi
Priority: Minor
 Attachments: HADOOP-11086-v0.patch


 jets3t 0.9.2 contains a fix for a bug that caused multi-part uploads with 
 server-side encryption to fail.
 http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
 (it also removes an exception thrown from the RestS3Service constructor, which 
 requires removing the try/catch around that code)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10309) S3 block filesystem should more aggressively delete temporary files

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293471#comment-14293471
 ] 

Steve Loughran commented on HADOOP-10309:
-

patch isn't applying because the code is now in the {{hadoop-aws}} module. 
Otherwise: looks straightforward to apply

 S3 block filesystem should more aggressively delete temporary files
 ---

 Key: HADOOP-10309
 URL: https://issues.apache.org/jira/browse/HADOOP-10309
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Joe Kelley
Priority: Minor
 Attachments: HADOOP-10309.patch


 The S3 FileSystem reading implementation downloads block files into a 
 configurable temporary directory. deleteOnExit() is called on these files, so 
 they are deleted when the JVM exits.
 However, JVM reuse can lead to JVMs that stick around for a very long time. 
 This can cause these temporary files to build up indefinitely and, in the 
 worst case, fill up the local directory.
 After a block file has been read, there is no reason to keep it around. It 
 should be deleted.
 Writing to the S3 FileSystem already has this behavior; after a temporary 
 block file is written and uploaded to S3, it is deleted immediately; there is 
 no need to wait for the JVM to exit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11082) Resolve findbugs warnings in hadoop-aws module

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11082:

 Target Version/s: 2.7.0  (was: 2.6.0)
Affects Version/s: (was: 3.0.0)
   2.6.0
   Status: Patch Available  (was: Open)

resubmitting. 

Colin, that FS contract base test is probably forever stuck in junit3-land, as 
we don't know which external filesystems have subclassed it for testing. Pity. 

 Resolve findbugs warnings in hadoop-aws module
 --

 Key: HADOOP-11082
 URL: https://issues.apache.org/jira/browse/HADOOP-11082
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: David S. Wang
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-11082.patch, findbugs.xml


 Currently the hadoop-aws module has the findbugs exclude file from 
 hadoop-common.  It would be nice to address the findbugs warnings eventually.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11082) Resolve findbugs warnings in hadoop-aws module

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293479#comment-14293479
 ] 

Hadoop QA commented on HADOOP-11082:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12669951/HADOOP-11082.patch
  against trunk revision 0da53a3.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5506//console

This message is automatically generated.

 Resolve findbugs warnings in hadoop-aws module
 --

 Key: HADOOP-11082
 URL: https://issues.apache.org/jira/browse/HADOOP-11082
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: David S. Wang
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-11082.patch, findbugs.xml


 Currently the hadoop-aws module has the findbugs exclude file from 
 hadoop-common.  It would be nice to address the findbugs warnings eventually.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11417) review filesystem seek logic, clarify/confirm spec, test & fix compliance

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293487#comment-14293487
 ] 

Steve Loughran commented on HADOOP-11417:
-

Looking at the HDFS code, the logic is
{code}
if (targetPos > getFileLength()) {
  throw new EOFException("Cannot seek after EOF");
}
if (targetPos < 0) {
  throw new EOFException("Cannot seek to negative offset");
}
if (closed) {
  throw new IOException("Stream is closed!");
}
{code}

That is: it is not an error to {{seek(len(file))}}.

Instead, on the {{read()}} operation, it goes
{code}
if (pos < getFileLength()) {
  ... the read logic, which appears to either return success or throw 
something
}
return -1;
{code}

That is: you can seek to the length of a file; the read() operation then returns -1.

h3. Conclusions

The FS spec is wrong, as it says filesystems MAY throw an exception for any seek 
>= len(file). Where it currently says

{code}
s > 0 and ((s==0) or ((s < len(data)))) else raise [EOFException, 
IOException]

Some FileSystems do not raise an exception if this condition is not met. They
instead return -1 on any `read()` operation where, at the time of the read,
`len(data(FSDIS)) < pos(FSDIS)`.
{code}

it should instead have the condition
{code}
s >= 0 and s <= len(data) else raise [EOFException, IOException]
{code}

This matches hdfs and handles what was considered the special case: seek(0) is 
always valid.

As HADOOP-11270 notes, at least one of the object stores does not follow HDFS 
behaviour. Apart from a special test for seek(0), {{AbstractContractSeekTest}} 
does not test the case {{seek(len(file))}}. It does test {{seek(len(file)+2)}}, 
going far enough past the end to resolve any ambiguity.

Proposed:

# correct the spec to match HDFS
# add a new test in {{AbstractContractSeekTest}} which declares that all 
filesystem clients must support {{seek(len(file))}} (a sketch follows below). 
# see what fails. 
# fix them.
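
A sketch of such a test; the helper names ({{dataset}}, {{createFile}}) are 
assumed from {{ContractTestUtils}}, and this is illustrative only, not the 
final test:
{code}
// would live in AbstractContractSeekTest
@Test
public void testSeekToEndOfFile() throws Throwable {
  Path path = path("seektoend.txt");
  byte[] data = dataset(256, 0, 255);
  createFile(getFileSystem(), path, false, data);
  FSDataInputStream in = getFileSystem().open(path);
  try {
    in.seek(data.length);                       // seek(len(file)) must not throw
    assertEquals("read() at EOF", -1, in.read());
  } finally {
    in.close();
  }
}
{code}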





 review filesystem seek logic, clarify/confirm spec, test & fix compliance
 -

 Key: HADOOP-11417
 URL: https://issues.apache.org/jira/browse/HADOOP-11417
 Project: Hadoop Common
  Issue Type: Task
  Components: fs, fs/s3, fs/swift
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran

 HADOOP-11270 implies there's a diff in the way HDFS seeks and the object 
 stores on the action {{seek(len(file))}}
 # review what HDFS does, add contract test to exactly demonstrate HDFS 
 behaviour.
 # ensure FS spec is consistent with this
 # test/audit all supported filesystems to verify consistent behaviour
 # fix where appropriate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11417) review filesystem seek logic, clarify/confirm spec, test & fix compliance

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293495#comment-14293495
 ] 

Steve Loughran commented on HADOOP-11417:
-

looks like s3n, s3a and swift all fail here: either they check and reject, or 
they hand off to HTTP to open at the offset, which then fails.

It would be nice to have a common solution. To complicate things, they will all 
need to close their http input streams so that future seeks don't get confused 
about where they are. Other bits of the code will then have to check for the 
now-special state "stream closed at end of file", differentiating it from 
"stream closed after close()".
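
A hedged sketch of that state handling; the class and field names are 
illustrative, not actual s3n/s3a/swift code:
{code}
import java.io.IOException;
import java.io.InputStream;

abstract class ObjectStoreInputStreamSketch extends InputStream {
  private boolean closed;      // set by close(); further reads must fail
  private boolean reachedEof;  // http stream released at end of file

  protected abstract int readFromHttpStream() throws IOException;

  @Override
  public synchronized int read() throws IOException {
    if (closed) {
      throw new IOException("Stream is closed!");
    }
    if (reachedEof) {
      return -1;               // a valid state, not an error
    }
    int b = readFromHttpStream();
    if (b < 0) {
      reachedEof = true;       // a real stream would release the http connection here
    }
    return b;
  }

  @Override
  public synchronized void close() {
    closed = true;
  }
}
{code}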

 review filesystem seek logic, clarify/confirm spec, test & fix compliance
 -

 Key: HADOOP-11417
 URL: https://issues.apache.org/jira/browse/HADOOP-11417
 Project: Hadoop Common
  Issue Type: Task
  Components: fs, fs/s3, fs/swift
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran

 HADOOP-11270 implies there's a diff in the way HDFS seeks and the object 
 stores on the action {{seek(len(file))}}
 # review what HDFS does, add contract test to exactly demonstrate HDFS 
 behaviour.
 # ensure FS spec is consistent with this
 # test/audit all supported filesystems to verify consistent behaviour
 # fix where appropriate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11086) Upgrade jets3t to 0.9.2

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293461#comment-14293461
 ] 

Steve Loughran commented on HADOOP-11086:
-

aws failure is dependency convergence; jets3t uses v 1.1.1 of javax.activation
{code}
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-aws ---
[WARNING] 
Dependency convergence error for javax.activation:activation:1.1 paths to 
dependency are:
+-org.apache.hadoop:hadoop-aws:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-com.sun.jersey:jersey-json:1.9
  +-com.sun.xml.bind:jaxb-impl:2.2.3-1
+-javax.xml.bind:jaxb-api:2.2.2
  +-javax.activation:activation:1.1
and
+-org.apache.hadoop:hadoop-aws:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-net.java.dev.jets3t:jets3t:0.9.2
  +-javax.activation:activation:1.1.1
and
+-org.apache.hadoop:hadoop-aws:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-net.java.dev.jets3t:jets3t:0.9.2
  +-javax.mail:mail:1.4.7
+-javax.activation:activation:1.1

[WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
failed with message:
Failed while enforcing releasability the error(s) are [
Dependency convergence error for javax.activation:activation:1.1 paths to 
dependency are:
+-org.apache.hadoop:hadoop-aws:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-com.sun.jersey:jersey-json:1.9
  +-com.sun.xml.bind:jaxb-impl:2.2.3-1
+-javax.xml.bind:jaxb-api:2.2.2
  +-javax.activation:activation:1.1
and
+-org.apache.hadoop:hadoop-aws:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-net.java.dev.jets3t:jets3t:0.9.2
  +-javax.activation:activation:1.1.1
and
+-org.apache.hadoop:hadoop-aws:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-net.java.dev.jets3t:jets3t:0.9.2
  +-javax.mail:mail:1.4.7
+-javax.activation:activation:1.1
{code}

 Upgrade jets3t to 0.9.2
 ---

 Key: HADOOP-11086
 URL: https://issues.apache.org/jira/browse/HADOOP-11086
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Matteo Bertozzi
Priority: Minor
 Attachments: HADOOP-11086-v0.patch


 jets3t 0.9.2 contains a fix for a bug that caused multi-part uploads with 
 server-side encryption to fail.
 http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
 (it also removes an exception thrown from the RestS3Service constructor, which 
 requires removing the try/catch around that code)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11086) Upgrade jets3t to 0.9.2

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293468#comment-14293468
 ] 

Steve Loughran commented on HADOOP-11086:
-

note that jets3t is inconsistent itself, with both transitive 1.1 and 1.1.1 
dependencies

# why does it need javax.mail?
# if the activation dependency is excluded from under jets3t then everything 
else will stay as is, for now. This will isolate the jets3t patch from having 
impact across the entire codebase (a sketch of the exclusion follows below)
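
For illustration only, the exclusion would look roughly like this; the exact 
location (e.g. {{hadoop-project/pom.xml}}) is an assumption, not part of the 
attached patch:
{code}
<dependency>
  <groupId>net.java.dev.jets3t</groupId>
  <artifactId>jets3t</artifactId>
  <version>0.9.2</version>
  <exclusions>
    <exclusion>
      <groupId>javax.activation</groupId>
      <artifactId>activation</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}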

 Upgrade jets3t to 0.9.2
 ---

 Key: HADOOP-11086
 URL: https://issues.apache.org/jira/browse/HADOOP-11086
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Matteo Bertozzi
Priority: Minor
 Attachments: HADOOP-11086-v0.patch


 jets3t 0.9.2 contains a fix for a bug that caused multi-part uploads with 
 server-side encryption to fail.
 http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
 (it also removes an exception thrown from the RestS3Service constructor, which 
 requires removing the try/catch around that code)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10181) GangliaContext does not work with multicast ganglia setup

2015-01-27 Thread Andrew Johnson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Johnson updated HADOOP-10181:

Status: Open  (was: Patch Available)

 GangliaContext does not work with multicast ganglia setup
 -

 Key: HADOOP-10181
 URL: https://issues.apache.org/jira/browse/HADOOP-10181
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andrew Otto
Assignee: Andrew Johnson
Priority: Minor
  Labels: ganglia, hadoop, metrics, multicast
 Attachments: HADOOP-10181.001.patch, HADOOP-10181.002.patch, 
 HADOOP-10181.003.patch


 The GangliaContext class, which is used to send Hadoop metrics to Ganglia, 
 uses a DatagramSocket to send these metrics.  This works fine for Ganglia 
 multicast setups that are all on the same VLAN.  However, when working with 
 multiple VLANs, a packet sent via DatagramSocket to a multicast address will 
 end up with a TTL of 1.  Multicast TTL indicates the number of network hops 
 for which a particular multicast packet is valid.  The packets sent by 
 GangliaContext do not make it to ganglia aggregators on the same multicast 
 group, but in different VLANs.
 To fix, we'd need a configuration property that specifies that multicast is 
 to be used, and another that allows setting of the multicast packet TTL.  
 With these set, we could then use MulticastSocket's setTimeToLive() instead 
 of just plain ol' DatagramSocket.
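 A minimal sketch of that approach, as a hypothetical factory (the 
 configuration plumbing and property names are omitted):
 {code}
 import java.io.IOException;
 import java.net.DatagramSocket;
 import java.net.MulticastSocket;

 class GangliaSocketSketch {
   /** Create a TTL-aware socket when multicast is configured. */
   static DatagramSocket createSocket(boolean multicastEnabled, int ttl)
       throws IOException {
     if (!multicastEnabled) {
       return new DatagramSocket();      // current behaviour
     }
     MulticastSocket socket = new MulticastSocket();
     socket.setTimeToLive(ttl);          // let packets cross VLAN hops
     return socket;
   }
 }
 {code}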



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10181) GangliaContext does not work with multicast ganglia setup

2015-01-27 Thread Andrew Johnson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Johnson updated HADOOP-10181:

Status: Patch Available  (was: Open)

 GangliaContext does not work with multicast ganglia setup
 -

 Key: HADOOP-10181
 URL: https://issues.apache.org/jira/browse/HADOOP-10181
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andrew Otto
Assignee: Andrew Johnson
Priority: Minor
  Labels: ganglia, hadoop, metrics, multicast
 Attachments: HADOOP-10181.001.patch, HADOOP-10181.002.patch, 
 HADOOP-10181.003.patch


 The GangliaContext class, which is used to send Hadoop metrics to Ganglia, 
 uses a DatagramSocket to send these metrics.  This works fine for Ganglia 
 multicast setups that are all on the same VLAN.  However, when working with 
 multiple VLANs, a packet sent via DatagramSocket to a multicast address will 
 end up with a TTL of 1.  Multicast TTL indicates the number of network hops 
 for which a particular multicast packet is valid.  The packets sent by 
 GangliaContext do not make it to ganglia aggregators on the same multicast 
 group, but in different VLANs.
 To fix, we'd need a configuration property that specifies that multicast is 
 to be used, and another that allows setting of the multicast packet TTL.  
 With these set, we could then use MulticastSocket's setTimeToLive() instead 
 of just plain ol' DatagramSocket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9954) Hadoop 2.0.5 doc build failure - OutOfMemoryError exception

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293777#comment-14293777
 ] 

Tsuyoshi OZAWA commented on HADOOP-9954:


[~ste...@apache.org] I think this problem was fixed by HADOOP-10910. Can we 
close this as resolved? BTW, I faced the problem reported as HADOOP-11316 when 
I checked the command.

 Hadoop 2.0.5 doc build failure - OutOfMemoryError exception
 ---

 Key: HADOOP-9954
 URL: https://issues.apache.org/jira/browse/HADOOP-9954
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.5-alpha
 Environment: CentOS 5, Sun JDK 1.6 (but not on CentOS 6 + OpenJDK 7).
Reporter: Paul Han
 Fix For: 2.0.5-alpha

 Attachments: HADOOP-9954.patch


 When running the hadoop build with the command line options:
 {code}
 mvn package -Pdist,native,docs -DskipTests -Dtar 
 {code}
 the build fails and an OutOfMemoryError exception is thrown:
 {code}
 [INFO] --- maven-source-plugin:2.1.2:test-jar (default) @ hadoop-hdfs ---
 [INFO] 
 [INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default) @ hadoop-hdfs ---
 [INFO] ** FindBugsMojo execute ***
 [INFO] canGenerate is true
 [INFO] ** FindBugsMojo executeFindbugs ***
 [INFO] Temp File is 
 /var/lib/jenkins/workspace/Hadoop-Client-2.0.5-T-RPM/rpms/hadoop-devel.x86_64/BUILD/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/findbugsTemp.xml
 [INFO] Fork Value is true
  [java] Out of memory
  [java] Total memory: 477M
  [java]  free memory: 68M
  [java] Analyzed: 
 /var/lib/jenkins/workspace/Hadoop-Client-2.0.5-T-RPM/rpms/hadoop-devel.x86_64/BUILD/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/classes
  [java]  Aux: 
 /home/henkins-service/.m2/repository/org/codehaus/mojo/findbugs-maven-plugin/2.3.2/findbugs-maven-plugin-2.3.2.jar
  [java]  Aux: 
 /home/henkins-service/.m2/repository/com/google/code/findbugs/bcel/1.3.9/bcel-1.3.9.jar
  ...
  [java]  Aux: 
 /home/henkins-service/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar
  [java] Exception in thread main java.lang.OutOfMemoryError: GC 
 overhead limit exceeded
  [java]   at java.util.HashMap.<init>(HashMap.java:226)
  [java]   at 
 edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefSet.<init>(UnconditionalValueDerefSet.java:68)
  [java]   at 
 edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefAnalysis.createFact(UnconditionalValueDerefAnalysis.java:650)
  [java]   at 
 edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefAnalysis.createFact(UnconditionalValueDerefAnalysis.java:82)
  [java]   at 
 edu.umd.cs.findbugs.ba.BasicAbstractDataflowAnalysis.getFactOnEdge(BasicAbstractDataflowAnalysis.java:119)
  [java]   at 
 edu.umd.cs.findbugs.ba.AbstractDataflow.getFactOnEdge(AbstractDataflow.java:54)
  [java]   at 
 edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.examineNullValues(NullDerefAndRedundantComparisonFinder.java:297)
  [java]   at 
 edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.execute(NullDerefAndRedundantComparisonFinder.java:150)
  [java]   at 
 edu.umd.cs.findbugs.detect.FindNullDeref.analyzeMethod(FindNullDeref.java:278)
  [java]   at 
 edu.umd.cs.findbugs.detect.FindNullDeref.visitClassContext(FindNullDeref.java:205)
  [java]   at 
 edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:68)
  [java]   at 
 edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:979)
  [java]   at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:230)
  [java]   at edu.umd.cs.findbugs.FindBugs.runMain(FindBugs.java:348)
  [java]   at edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1057)
  [java] Java Result: 1
 [INFO] No bugs found
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11045) Introducing a tool to detect flaky tests of hadoop jenkins test job

2015-01-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293875#comment-14293875
 ] 

Yongjun Zhang commented on HADOOP-11045:


Hi [~ozawa],

Thanks a lot for your feedback! I did do some study before deciding to use 
hexversion. Below is what I found:

* hexversion exists in versions as early as Python 1.5.2, whereas version_info 
exists only from 2.0 on. 
* hexversion is described as "The version number encoded as a single integer. 
This is guaranteed to increase with each version, including proper support for 
non-production releases"; however, per 
http://stackoverflow.com/questions/1093322/how-do-i-check-what-version-of-python-is-running-my-script,
 version_info may not: "As long you do not endup comparing 
(3,3,0,'rc1','0') and (3,3,0,'beta','0')" –  sorin Jun 5 '13 at 9:51 

Based on this information, I chose to use hexversion. It's a bit harder to 
read, but not too bad. There is a detailed description of the format here: 
https://docs.python.org/2/library/sys.html#sys.hexversion. Note that when I 
print out error messages, I do print more readable version info.

What do you think?

Thanks.

--Yongjun




 Introducing a tool to detect flaky tests of hadoop jenkins test job
 ---

 Key: HADOOP-11045
 URL: https://issues.apache.org/jira/browse/HADOOP-11045
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, tools
Affects Versions: 2.5.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11045.001.patch, HADOOP-11045.002.patch, 
 HADOOP-11045.003.patch, HADOOP-11045.004.patch, HADOOP-11045.005.patch, 
 HADOOP-11045.006.patch, HADOOP-11045.007.patch


 File this jira to introduce a tool to detect flaky tests of hadoop jenkins 
 test jobs. Certainly it can be adapted to projects other than hadoop.
 I developed the tool on top of some initial work [~tlipcon] did. We find it 
 quite useful. With Todd's agreement, I'd like to push it to upstream so all 
 of us can share (thanks Todd for the initial work and support). I hope you 
 find the tool useful too.
 The idea is, when one has the need to see if the test failure s/he is seeing 
 in a pre-build jenkins run is flaky or not, s/he could run this tool to get a 
 good idea. Also, if one wants to look at the failure trend of a testcase in a 
 given jenkins job, the tool can be used too. I hope people find it useful.
 This tool is for hadoop contributors rather than hadoop users. Thanks 
 [~tedyu] for the advice to put to dev-support dir.
 Description of the tool:
 {code}
 #
 # Given a jenkins test job, this script examines all runs of the job done
 # within specified period of time (number of days prior to the execution
 # time of this script), and reports all failed tests.
 #
 # The output of this script includes a section for each run that has failed
 # tests, with each failed test name listed.
 #
 # More importantly, at the end, it outputs a summary section to list all 
 failed
 # tests within all examined runs, and indicate how many runs a same test
 # failed, and sorted all failed tests by how many runs each test failed in.
 #
 # This way, when we see failed tests in PreCommit build, we can quickly tell 
 # whether a failed test is a new failure or it failed before, and it may just 
 # be a flaky test.
 #
 # Of course, to be 100% sure about the reason of a failed test, closer look 
 # at the failed test for the specific run is necessary.
 #
 {code}
 How to use the tool:
 {code}
 Usage: determine-flaky-tests-hadoop.py [options]
 Options:
   -h, --help            show this help message and exit
   -J JENKINS_URL, --jenkins-url=JENKINS_URL
 Jenkins URL
   -j JOB_NAME, --job-name=JOB_NAME
 Job name to look at
   -n NUM_PREV_DAYS, --num-days=NUM_PREV_DAYS
 Number of days to examine
 {code}
 Example command line:
 {code}
 ./determine-flaky-tests-hadoop.py -J https://builds.apache.org -j 
 PreCommit-HDFS-Build -n 2 
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11460) Deprecate shell vars

2015-01-27 Thread John Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293879#comment-14293879
 ] 

John Smith commented on HADOOP-11460:
-

Should the KMS shell vars get deprecated as well?

 Deprecate shell vars
 

 Key: HADOOP-11460
 URL: https://issues.apache.org/jira/browse/HADOOP-11460
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
  Labels: scripts, shell
 Attachments: HADOOP-11460-00.patch, HADOOP-11460-01.patch, 
 HADOOP-11460-02.patch


 It is a very common shell pattern in 3.x to effectively replace sub-project 
 specific vars with generics.  We should have a function that does this 
 replacement and provides a warning to the end user that the old shell var is 
 deprecated.  Additionally, we should use this shell function to deprecate the 
 shell vars that are holdovers already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11485) Pluggable shell integration

2015-01-27 Thread John Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293884#comment-14293884
 ] 

John Smith commented on HADOOP-11485:
-

+1 (non-binding)

 Pluggable shell integration
 ---

 Key: HADOOP-11485
 URL: https://issues.apache.org/jira/browse/HADOOP-11485
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts, shell
 Attachments: HADOOP-11485-00.patch, HADOOP-11485-01.patch, 
 HADOOP-11485-02.patch


 It would be useful to provide a way for core and non-core Hadoop components 
 to plug into the shell infrastructure.  This would allow us to pull the HDFS, 
 MapReduce, and YARN shell functions out of hadoop-functions.sh.  
 Additionally, it should let 3rd parties such as HBase influence things like 
 classpaths at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10181) GangliaContext does not work with multicast ganglia setup

2015-01-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10181:
---
  Component/s: metrics
 Target Version/s: 2.7.0
Affects Version/s: 2.6.0
 Hadoop Flags: Reviewed

+1 for patch v003.  Andrew, thank you for addressing the feedback.

I plan to wait until Monday, 2/2, to commit this, in case any committer who has 
prior experience with this code also wants to review.

 GangliaContext does not work with multicast ganglia setup
 -

 Key: HADOOP-10181
 URL: https://issues.apache.org/jira/browse/HADOOP-10181
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.6.0
Reporter: Andrew Otto
Assignee: Andrew Johnson
Priority: Minor
  Labels: ganglia, hadoop, metrics, multicast
 Attachments: HADOOP-10181.001.patch, HADOOP-10181.002.patch, 
 HADOOP-10181.003.patch


 The GangliaContext class, which is used to send Hadoop metrics to Ganglia, 
 uses a DatagramSocket to send these metrics.  This works fine for Ganglia 
 multicast setups that are all on the same VLAN.  However, when working with 
 multiple VLANs, a packet sent via DatagramSocket to a multicast address will 
 end up with a TTL of 1.  Multicast TTL indicates the number of network hops 
 for which a particular multicast packet is valid.  The packets sent by 
 GangliaContext do not make it to ganglia aggregators on the same multicast 
 group, but in different VLANs.
 To fix, we'd need a configuration property that specifies that multicast is 
 to be used, and another that allows setting of the multicast packet TTL.  
 With these set, we could then use MulticastSocket's setTimeToLive() instead 
 of just plain ol' DatagramSocket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11316) mvn package -Pdist,docs -DskipTests -Dtar fails because of non-ascii characters

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-11316:

Status: Patch Available  (was: Open)

 mvn package -Pdist,docs -DskipTests -Dtar fails because of non-ascii 
 characters
 -

 Key: HADOOP-11316
 URL: https://issues.apache.org/jira/browse/HADOOP-11316
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Blocker
 Attachments: HADOOP-11316.1.patch


 The command fails because the following files include non-ascii characters.
 * ComparableVersion.java
 * CommonConfigurationKeysPublic.java
 * ComparableVersion.java
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
 {code}
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc]   //  <!--- KMSClientProvider configurations ???
   [javadoc]  ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc]   //  <!--- KMSClientProvider configurations ???
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc] Loading source files for package org.apache.hadoop.fs.crypto...
   [javadoc]   //  <!--- KMSClientProvider configurations ???
 {code}
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11316) mvn package -Pdist,docs -DskipTests -Dtar fails because of non-ascii characters

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-11316:

Attachment: HADOOP-11316.1.patch

Attaching first patch.

After fixing this problem, I faced HADOOP-11377. Should I fix HADOOP-11377 
here? I cannot find any workarounds to avoid the problem yet.

 mvn package -Pdist,docs -DskipTests -Dtar fails because of non-ascii 
 characters
 -

 Key: HADOOP-11316
 URL: https://issues.apache.org/jira/browse/HADOOP-11316
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Blocker
 Attachments: HADOOP-11316.1.patch


 The command fails because the following files include non-ascii characters.
 * ComparableVersion.java
 * CommonConfigurationKeysPublic.java
 * ComparableVersion.java
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
 {code}
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc]   //  <!--- KMSClientProvider configurations ???
   [javadoc]  ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc]   //  <!--- KMSClientProvider configurations ???
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
  error: unmappable character for encoding ASCII
   [javadoc] Loading source files for package org.apache.hadoop.fs.crypto...
   [javadoc]   //  <!--- KMSClientProvider configurations ???
 {code}
 {code}
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
   [javadoc]   ^
   [javadoc] 
 /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
  error: unmappable character for encoding ASCII
   [javadoc] //author <a href="mailto:hbout...@apache.org">Herv?? 
 Boutemy</a>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11377) jdiff failing on java 8, Null.java not found

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294692#comment-14294692
 ] 

Tsuyoshi OZAWA commented on HADOOP-11377:
-

This problem seems to reproduce with JDK 7 too.

 jdiff failing on java 8, Null.java not found
 --

 Key: HADOOP-11377
 URL: https://issues.apache.org/jira/browse/HADOOP-11377
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: Java8 jenkins
Reporter: Steve Loughran

 Jdiff is having problems on Java 8, as it cannot find a javadoc for the new 
 {{Null}} datatype
 {code}
 'https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-common/dev-support/jdiff/Null.java'
 The ' characters around the executable and arguments are
 not part of the command.
   [javadoc] javadoc: error - Illegal package name: ""
   [javadoc] javadoc: error - File not found: 
 "https://builds.apache.org/job/Hadoop-common-trunk-Java8/ws/hadoop-common-project/hadoop-common/dev-support/jdiff/Null.java"
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

