[jira] [Commented] (HADOOP-12682) Test cases in TestKMS are failing

2015-12-28 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073162#comment-15073162
 ] 

Xiaoyu Yao commented on HADOOP-12682:
-

Good catch, thanks for reporting this, [~jojochuang]! Somehow Jenkins passed 
with only the client-side KMS unit tests run for HADOOP-12559. 

> Test cases in TestKMS are failing
> -
>
> Key: HADOOP-12682
> URL: https://issues.apache.org/jira/browse/HADOOP-12682
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2157/testReport/org.apache.hadoop.crypto.key.kms.server/TestKMS/testKMSRestartSimpleAuth/
> {noformat}
> Error Message
> loginUserFromKeyTab must be done first
> Stacktrace
> java.io.IOException: loginUserFromKeyTab must be done first
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1029)
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:994)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:679)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:697)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:259)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.createKey(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1003)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1000)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:266)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)
> {noformat}
> Seems to be introduced by HADOOP-12559



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073340#comment-15073340
 ] 

Kai Zheng commented on HADOOP-12662:


A running instance bundling ISA-L library:
{noformat}
[INFO] --- maven-antrun-plugin:1.7:run (pre-dist) @ hadoop-common ---
[INFO] Executing tasks

main:
 [exec] check_bundle_lib false snappy.lib snappy 
 [exec] Checking to bundle with:
 [exec] bundleOption=false, libOption=snappy.lib, libDir=, pattern=snappy
 [exec] check_bundle_lib false openssl.lib crypto 
 [exec] Checking to bundle with:
 [exec] bundleOption=false, libOption=openssl.lib, libDir=, pattern=crypto
 [exec] check_bundle_lib true isal.lib isa /usr/lib
 [exec] Checking to bundle with:
 [exec] bundleOption=true, libOption=isal.lib, libDir=/usr/lib, pattern=isa
[INFO] Executed tasks
{noformat}
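From the log above, the {{check_bundle_lib}} helper appears to take four arguments: the bundle option, the lib option name, a library-name pattern, and the lib directory. A minimal bash sketch of what such a check might look like (the function name and argument order are inferred from the log output, not taken from the actual patch):

```shell
#!/usr/bin/env bash
# Sketch of a bundle check modeled on the log output above; names and
# argument order are assumptions, not the actual patch.
check_bundle_lib() {
  local bundleOption=$1 libOption=$2 pattern=$3 libDir=$4
  echo "Checking to bundle with:"
  echo "bundleOption=${bundleOption}, libOption=${libOption}, libDir=${libDir}, pattern=${pattern}"
  # Nothing to verify unless bundling was requested.
  if [[ "${bundleOption}" != "true" ]]; then
    return 0
  fi
  # Bundling requested: the lib dir must be supplied and must exist,
  # otherwise report an error so the build can fail explicitly.
  if [[ -z "${libDir}" ]] || [[ ! -d "${libDir}" ]]; then
    echo "ERROR: -D${libOption} must point to an existing directory" >&2
    return 1
  fi
  return 0
}

check_bundle_lib false snappy.lib snappy ""    # skipped: bundling not requested
check_bundle_lib true  isal.lib   isa    /tmp  # passes: directory exists
```

If the directory check fails for a requested bundle, the caller (the antrun exec step) can see the non-zero exit status and fail the build explicitly, which is the behavior this JIRA asks for.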

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch, 
> HADOOP-12662-v3.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine the 
> behaviors of bundling native libraries when building the dist package, and 
> make them consistent.
> For every native library to bundle: if a bundling option like 
> {{-Dbundle.snappy}} is specified, the corresponding lib option like 
> {{-Dsnappy.lib}} will be checked and must be present; if it is not, the 
> build will report an error and fail explicitly.
> {{BUILDING.txt}} will also be updated to explicitly state this behavior.





[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073343#comment-15073343
 ] 

Kai Zheng commented on HADOOP-12662:


I hit this when getting rid of the {{X}}, and I wonder: is there anything 
wrong in my setup, or do we still need it?
{noformat}
main:
 [exec] ./dist-copynativelibs.sh: line 16: unexpected argument `]]' to 
conditional binary operator
 [exec] ./dist-copynativelibs.sh: line 16: syntax error near `]]'
 [exec] ./dist-copynativelibs.sh: line 16: `if [[ 
"${libDir}" =  ]] || [[ ! -d ${libDir} ]]; then'
{noformat}
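A plausible explanation for the error above: when the right-hand operand of {{=}} inside a bash conditional expands to nothing at script-generation time, removing the {{X}} guard leaves the script with {{[[ "${libDir}" = ]]}}, which bash rejects as a syntax error rather than treating as a false comparison. A small illustrative sketch of the failure and two empty-safe rewrites:

```shell
#!/usr/bin/env bash
# Reproduce the failure: a missing right-hand operand after `=` inside
# [[ ... ]] is a parse error ("unexpected argument `]]' to conditional
# binary operator"), exactly as in the log above.
if bash -c '[[ "$libDir" = ]]' 2>/dev/null; then
  echo "unexpectedly parsed"
else
  echo "syntax error, as in the log above"
fi

# Empty-safe rewrites: quote an explicit empty string, or use -z.
libDir=""
if [[ "${libDir}" = "" ]] || [[ ! -d "${libDir}" ]]; then
  echo "libDir is empty or not a directory"
fi
if [[ -z "${libDir}" ]] || [[ ! -d "${libDir}" ]]; then
  echo "same check, written with -z"
fi
```

So the {{X}} placeholder was likely guarding against exactly this empty expansion; quoting the operand (or using {{-z}}) would remove the need for it.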

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch, 
> HADOOP-12662-v3.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine the 
> behaviors of bundling native libraries when building the dist package, and 
> make them consistent.
> For every native library to bundle: if a bundling option like 
> {{-Dbundle.snappy}} is specified, the corresponding lib option like 
> {{-Dsnappy.lib}} will be checked and must be present; if it is not, the 
> build will report an error and fail explicitly.
> {{BUILDING.txt}} will also be updated to explicitly state this behavior.





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2015-12-28 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12551:
---
Summary: Introduce FileNotFoundException for WASB FileSystem API  (was: 
Introduce FileNotFoundException for open and getFileStatus API's in WASB)

> Introduce FileNotFoundException for WASB FileSystem API
> ---
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek APIs for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time the API is called, but do 
> not throw the same exception if another thread/process deletes the file 
> during execution. This Jira fixes that behavior.
> This Jira also re-examines other Azure storage store calls to check for 
> BlobNotFoundException in the setPermission(), setOwner(), rename(), 
> delete(), open(), and listStatus() APIs.





[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2015-12-28 Thread jack liuquan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073356#comment-15073356
 ] 

jack liuquan commented on HADOOP-11828:
---

Hi Kai, 
Thank you for your reply. I will keep the code of the HH layer the same.

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction while retaining the same storage capacity and 
> failure tolerance capability as RS codes. This JIRA aims to introduce 
> Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 





[jira] [Commented] (HADOOP-10542) Potential null pointer dereference in Jets3tFileSystemStore#retrieveBlock()

2015-12-28 Thread Matthew Paduano (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073363#comment-15073363
 ] 

Matthew Paduano commented on HADOOP-10542:
--

This change seems to break some things. In particular, have a closer look at:

S3FileSystem.getFileStatus()  (which no longer raises FileNotFoundException 
but instead IOException)
FileSystem.exists()  (which no longer returns false but instead raises 
IOException)
S3FileSystem.create()  (which no longer succeeds but instead raises 
IOException)

While the javadoc suggests that the API permits raising IOException, most of 
the code I encountered while debugging a command like "hadoop distcp 
hdfs://localhost:9000/test s3://xxx:y...@com.bar.foo/" seems to expect (1) 
retrieveINode() to return null and (2) FileNotFoundException to be raised 
when a file is not found (i.e. when get() fails!).

2015-12-11 10:04:34,030 FATAL [IPC Server handler 6 on 44861] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
attempt_1449826461866_0005_m_06_0 - exited : java.io.IOException: /test 
doesn't exist
at 
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:170)
at 
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy17.retrieveINode(Unknown Source)
at 
org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:340)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:230)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Changing the "raise IOE..." to "return null" fixes all of the above code 
sites and allows distcp to succeed.


> Potential null pointer dereference in Jets3tFileSystemStore#retrieveBlock()
> ---
>
> Key: HADOOP-10542
> URL: https://issues.apache.org/jira/browse/HADOOP-10542
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: hadoop-10542-001.patch
>
>
> {code}
>   in = get(blockToKey(block), byteRangeStart);
>   out = new BufferedOutputStream(new FileOutputStream(fileBlock));
>   byte[] buf = new byte[bufferSize];
>   int numRead;
>   while ((numRead = in.read(buf)) >= 0) {
> {code}
> get() may return null.
> The while loop dereferences in without null check.





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2015-12-28 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12551:
---
Attachment: HADOOP-12551.001.patch

Attaching first iteration for the JIRA.

The patch wraps the calls made to the Azure Storage layer in a try/catch 
block to handle BlobNotFound exceptions. It covers the open(), rename(), 
delete(), listStatus(), setOwner(), and setPermission() APIs.

Testing: The patch contains new tests that verify the changes. The changes 
have also been tested against the FileSystemContractLive tests for both 
Block Blobs and Page Blobs.

> Introduce FileNotFoundException for WASB FileSystem API
> ---
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12551.001.patch
>
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek APIs for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time the API is called, but do 
> not throw the same exception if another thread/process deletes the file 
> during execution. This Jira fixes that behavior.
> This Jira also re-examines other Azure storage store calls to check for 
> BlobNotFoundException in the setPermission(), setOwner(), rename(), 
> delete(), open(), and listStatus() APIs.





[jira] [Commented] (HADOOP-12608) Fix error message in WASB in connecting through Anonymous Credential codepath

2015-12-28 Thread Dushyanth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073367#comment-15073367
 ] 

Dushyanth commented on HADOOP-12608:


[~xyao] Thanks for the review.

Testing: The patch contains new tests that verify the changes. The changes 
have also been tested against the FileSystemContractLive tests for both 
Block Blobs and Page Blobs.

> Fix error message in WASB in connecting through Anonymous Credential codepath
> -
>
> Key: HADOOP-12608
> URL: https://issues.apache.org/jira/browse/HADOOP-12608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12608.001.patch, HADOOP-12608.002.patch, 
> HADOOP-12608.003.patch, HADOOP-12608.004.patch, HADOOP-12608.005.patch
>
>
> Users of WASB have complained about the error message returned by WASB when 
> they try to connect to Azure storage with anonymous credentials. The current 
> implementation returns the correct message when we encounter a 
> StorageException. However, a scenario such as querying whether a container 
> exists does not throw a StorageException but simply returns false when the 
> URI is directly specified (anonymous access), and in that case the error 
> message does not clearly state that credentials for the storage account were 
> not provided. This JIRA tracks fixing the error message to match what is 
> returned when a storage exception is hit, and also corrects spelling 
> mistakes in the error message.





[jira] [Commented] (HADOOP-12682) Test cases in TestKMS are failing

2015-12-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15073373#comment-15073373
 ] 

Wei-Chiu Chuang commented on HADOOP-12682:
--

[~xyao] I have limited knowledge of KMS. Even though the rev01 patch passed 
TestKMS tests, I am unsure if this is the right approach. Also, I am not sure 
if other test cases in TestKMS should invoke 
UserGroupInformation.loginUserFromKeytab() as well.

> Test cases in TestKMS are failing
> -
>
> Key: HADOOP-12682
> URL: https://issues.apache.org/jira/browse/HADOOP-12682
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12682.001.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2157/testReport/org.apache.hadoop.crypto.key.kms.server/TestKMS/testKMSRestartSimpleAuth/
> {noformat}
> Error Message
> loginUserFromKeyTab must be done first
> Stacktrace
> java.io.IOException: loginUserFromKeyTab must be done first
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1029)
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:994)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:679)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:697)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:259)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.createKey(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1003)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1000)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:266)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)
> {noformat}
> Seems to be introduced by HADOOP-12559





[jira] [Updated] (HADOOP-12682) Test cases in TestKMS are failing

2015-12-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12682:
-
Status: Patch Available  (was: Open)

> Test cases in TestKMS are failing
> -
>
> Key: HADOOP-12682
> URL: https://issues.apache.org/jira/browse/HADOOP-12682
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12682.001.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2157/testReport/org.apache.hadoop.crypto.key.kms.server/TestKMS/testKMSRestartSimpleAuth/
> {noformat}
> Error Message
> loginUserFromKeyTab must be done first
> Stacktrace
> java.io.IOException: loginUserFromKeyTab must be done first
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1029)
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:994)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:679)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:697)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:259)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.createKey(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1003)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1000)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:266)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)
> {noformat}
> Seems to be introduced by HADOOP-12559





[jira] [Updated] (HADOOP-12682) Test cases in TestKMS are failing

2015-12-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12682:
-
Attachment: HADOOP-12682.001.patch

Rev01: create a helper method doAsFromKeytab() which invokes 
UserGroupInformation.loginUserFromKeytab(), and use this helper method instead 
of doAs() in doKMSRestart().

The tests in TestKMS passed. 

> Test cases in TestKMS are failing
> -
>
> Key: HADOOP-12682
> URL: https://issues.apache.org/jira/browse/HADOOP-12682
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12682.001.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2157/testReport/org.apache.hadoop.crypto.key.kms.server/TestKMS/testKMSRestartSimpleAuth/
> {noformat}
> Error Message
> loginUserFromKeyTab must be done first
> Stacktrace
> java.io.IOException: loginUserFromKeyTab must be done first
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1029)
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:994)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:679)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:697)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:259)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.createKey(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1003)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1000)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:266)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)
> {noformat}
> Seems to be introduced by HADOOP-12559





[jira] [Updated] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-28 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12662:
---
Attachment: HADOOP-12662-v3.patch

Updated the patch to:
* Use bash and {{set -o pipefail}} to check for errors, as [~cmccabe] 
suggested;
* Refactor the script, adding a function that bundles a native library;
* Fix some minor issues related to bundling ISA-L found during this work.

As for splitting the script out of {{pom.xml}}: there doesn't seem to be an 
obvious way to put the script in a separate file and reference it from the 
pom file, so I suggest we consider that change separately. We may reconsider 
the basic approach according to best practice in *Maven* projects. After some 
experimenting, putting the script in the {{dev-support}} folder does not work 
elegantly.
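The {{set -o pipefail}} point can be seen in a generic two-line example (illustrative only, not the actual dist script): by default a pipeline's exit status is that of its last command, so an earlier failure in the pipe is silently swallowed; with pipefail the pipeline fails if any component fails, which lets the build abort.

```shell
#!/usr/bin/env bash
# Default behavior: the pipeline's status is cat's status, so the
# failure of `false` is hidden and the step would look successful.
false | cat
echo "without pipefail: status=$?"   # status=0

# With pipefail, the same pipeline reports failure, which a dist script
# (run with bash, not sh) can use to fail the build explicitly.
set -o pipefail
false | cat
echo "with pipefail: status=$?"      # status=1
```

Note that {{pipefail}} is a bash feature, which is presumably why the patch also switches the script to bash.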

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch, 
> HADOOP-12662-v3.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine and make it 
> consistent the behaviors in bundling native libraries when building dist 
> package.
> For all native libraries to bundle, if the bundling option like 
> {{-Dbundle.snappy}} is specified, then the lib option like {{-Dsnappy.lib}} 
> will be checked and ensured to be there, but if not, it will then report 
> error and fail the building explicitly.
> {{BUILDING.txt}} would also be updated to explicitly state this behavior.





[jira] [Updated] (HADOOP-12658) Clear javadoc and check style issues around DomainSocket

2015-12-28 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12658:
---
Attachment: HADOOP-12658-v4.patch

Updated the patch to address the reported checkstyle issues.

> Clear javadoc and check style issues around DomainSocket
> 
>
> Key: HADOOP-12658
> URL: https://issues.apache.org/jira/browse/HADOOP-12658
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Trivial
> Attachments: HADOOP-12658-v1.patch, HADOOP-12658-v2.patch, 
> HADOOP-12658-v3.patch, HADOOP-12658-v4.patch
>
>
> It was noticed that the Javadoc in {{DomainSocket}} needs a minor update. 
> There are also some other checkstyle issues nearby to clean up, found while 
> working on HDFS-8562.





[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Patch Available  (was: Open)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in two steps:
> 1. Create an empty blob corresponding to the .json file in its real 
> location.
> 2. Create a scratch file, write the contents of the rename pending to it, 
> and then copy it over into the blob created in step 1.
> If a process crash occurs after step 1 and before step 2 completes, we are 
> left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This scenario can happen in the /hbase/.tmp folder because it is considered 
> a candidate folder for atomic rename. When HMaster starts up, it executes 
> listStatus on the .tmp folder to clean up pending data. At this stage, due 
> to the lazy pending-rename completion process, we look for these json 
> files; on seeing an empty file, the process simply throws a fatal 
> exception, assuming something went wrong.





[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Attachment: HADOOP-12678.001.patch

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in two steps:
> 1. Create an empty blob corresponding to the .json file in its real 
> location.
> 2. Create a scratch file, write the contents of the rename pending to it, 
> and then copy it over into the blob created in step 1.
> If a process crash occurs after step 1 and before step 2 completes, we are 
> left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This scenario can happen in the /hbase/.tmp folder because it is considered 
> a candidate folder for atomic rename. When HMaster starts up, it executes 
> listStatus on the .tmp folder to clean up pending data. At this stage, due 
> to the lazy pending-rename completion process, we look for these json 
> files; on seeing an empty file, the process simply throws a fatal 
> exception, assuming something went wrong.





[jira] [Commented] (HADOOP-12658) Clear javadoc and check style issues around DomainSocket

2015-12-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15072756#comment-15072756
 ] 

Hadoop QA commented on HADOOP-12658:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
35s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 59s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
14s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 28s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 1s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 5s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 8s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 21s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |

[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15072813#comment-15072813
 ] 

Hadoop QA commented on HADOOP-12678:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} 
| {color:red} HADOOP-12678 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779656/HADOOP-12678.001.patch
 |
| JIRA Issue | HADOOP-12678 |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8316/console |


This message was automatically generated.



> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During atomic rename we create metadata file for rename(-renamePending.json). 
> We create that in 2 steps
> 1. We create an empty blob corresponding to the .json file in its real 
> location
> 2. We create a scratch file to which we write the contents of the rename 
> pending which is then copied over into the blob described in 1
> If process crash occurs after step 1 and before step 2 is complete - we will 
> be left with a zero size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. Now when HMaster starts up 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage due to the lazy pending rename complete process we look for these json 
> files. On seeing an empty file the process simply throws a fatal exception 
> assuming something went wrong.
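A minimal sketch of the defensive redo-path handling implied by the description above: since a zero-length -renamePending.json can only be the leftover of a crash between step 1 and step 2, it can be discarded instead of triggering a fatal exception. Class and method names here are illustrative, not the real WASB code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical guard for the redo path: an empty pending-rename metadata
// file means step 2 never ran, so there is nothing to replay.
public class RenamePendingRedo {

    /** Returns true if a rename was replayed, false if an empty crash
     *  artifact was cleaned up instead. */
    static boolean redoOrCleanup(Path pendingJson) throws IOException {
        if (Files.size(pendingJson) == 0) {
            Files.delete(pendingJson); // nothing was ever written: clean up
            return false;
        }
        // ...real code would parse the JSON and replay the pending rename...
        return true;
    }

    public static void main(String[] args) throws IOException {
        // A freshly created temp file is empty, mimicking the crash scenario.
        Path p = Files.createTempFile("redo", "-renamePending.json");
        System.out.println(redoOrCleanup(p)); // prints "false"
    }
}
```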



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12677) DecompressorStream throws IndexOutOfBoundsException when calling skip(long)

2015-12-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15072928#comment-15072928
 ] 

Hadoop QA commented on HADOOP-12677:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 15s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 14s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779660/HADOOP-12677.002.patch
 |
| JIRA Issue | HADOOP-12677 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 734aa7180e1a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a0249da |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12677) DecompressorStream throws IndexOutOfBoundsException when calling skip(long)

2015-12-28 Thread Laurent Goujon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15072867#comment-15072867
 ] 

Laurent Goujon commented on HADOOP-12677:
-

It's not about invoking InputStream.skip() but about following the same 
behavior as part of the API contract. Before your patch, DecompressorStream 
would also return the number of bytes skipped until EOF; changing that would 
introduce an incompatible change.

> DecompressorStream throws IndexOutOfBoundsException when calling skip(long)
> ---
>
> Key: HADOOP-12677
> URL: https://issues.apache.org/jira/browse/HADOOP-12677
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.4.0, 2.6.0, 3.0.0
>Reporter: Laurent Goujon
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12677.001.patch, HADOOP-12677.002.patch
>
>
> DecompressorStream.skip(long) throws an IndexOutOfBoundException when using a 
> long bigger than Integer.MAX_VALUE
> This is because of this cast from long to int: 
> https://github.com/apache/hadoop-common/blob/HADOOP-3628/src/core/org/apache/hadoop/io/compress/DecompressorStream.java#L125
> The fix is probably to do the cast after applying Math.min: in that case, it 
> should not be an issue since it should not be bigger than the buffer size 
> (512)
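The cast-ordering bug described above can be reduced to a self-contained sketch: casting the long to int before applying Math.min overflows to a negative value for arguments above Integer.MAX_VALUE, while clamping first in long arithmetic makes the cast safe. Method and class names are illustrative; the real code lives in DecompressorStream.skip().

```java
// Demonstrates the fix direction discussed above: clamp before casting.
public class SkipCastDemo {
    // Matches the 512-byte buffer size mentioned in the description.
    private static final int MAX_SKIP_BUFFER = 512;

    // Buggy order: (int) n overflows when n > Integer.MAX_VALUE.
    static int buggyChunk(long n) {
        return Math.min((int) n, MAX_SKIP_BUFFER); // (int) n may be negative
    }

    // Fixed order: Math.min in long arithmetic, then a safe narrowing cast.
    static int fixedChunk(long n) {
        return (int) Math.min(n, (long) MAX_SKIP_BUFFER);
    }

    public static void main(String[] args) {
        long big = Integer.MAX_VALUE + 1L;
        System.out.println(buggyChunk(big)); // negative: would blow up read()
        System.out.println(fixedChunk(big)); // prints "512"
    }
}
```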



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15072833#comment-15072833
 ] 

Hadoop QA commented on HADOOP-12662:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
31s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 57s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
31s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 9m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
49s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 105m 6s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 27s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 309m 8s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache 

[jira] [Updated] (HADOOP-12677) DecompressorStream throws IndexOutOfBoundsException when calling skip(long)

2015-12-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12677:
-
Attachment: HADOOP-12677.002.patch

[~laurentgo]
Thank you for the comments and reviews!

I have posted patch #2 to address comment #2.

Regarding comment #1, even though DecompressorStream inherits from 
InputStream, its skip() implementation does not call InputStream.skip(). 
Instead, it is built on top of InputStream.read(), which returns -1 at end of 
stream; when read() returns -1, an EOFException is thrown.

> DecompressorStream throws IndexOutOfBoundsException when calling skip(long)
> ---
>
> Key: HADOOP-12677
> URL: https://issues.apache.org/jira/browse/HADOOP-12677
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.4.0, 2.6.0, 3.0.0
>Reporter: Laurent Goujon
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12677.001.patch, HADOOP-12677.002.patch
>
>
> DecompressorStream.skip(long) throws an IndexOutOfBoundException when using a 
> long bigger than Integer.MAX_VALUE
> This is because of this cast from long to int: 
> https://github.com/apache/hadoop-common/blob/HADOOP-3628/src/core/org/apache/hadoop/io/compress/DecompressorStream.java#L125
> The fix is probably to do the cast after applying Math.min: in that case, it 
> should not be an issue since it should not be bigger than the buffer size 
> (512)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Patch Available  (was: Open)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During atomic rename we create metadata file for rename(-renamePending.json). 
> We create that in 2 steps
> 1. We create an empty blob corresponding to the .json file in its real 
> location
> 2. We create a scratch file to which we write the contents of the rename 
> pending which is then copied over into the blob described in 1
> If process crash occurs after step 1 and before step 2 is complete - we will 
> be left with a zero size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. Now when HMaster starts up 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage due to the lazy pending rename complete process we look for these json 
> files. On seeing an empty file the process simply throws a fatal exception 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Open  (was: Patch Available)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During atomic rename we create metadata file for rename(-renamePending.json). 
> We create that in 2 steps
> 1. We create an empty blob corresponding to the .json file in its real 
> location
> 2. We create a scratch file to which we write the contents of the rename 
> pending which is then copied over into the blob described in 1
> If process crash occurs after step 1 and before step 2 is complete - we will 
> be left with a zero size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. Now when HMaster starts up 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage due to the lazy pending rename complete process we look for these json 
> files. On seeing an empty file the process simply throws a fatal exception 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Attachment: HADOOP-12678.002.patch

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During atomic rename we create metadata file for rename(-renamePending.json). 
> We create that in 2 steps
> 1. We create an empty blob corresponding to the .json file in its real 
> location
> 2. We create a scratch file to which we write the contents of the rename 
> pending which is then copied over into the blob described in 1
> If process crash occurs after step 1 and before step 2 is complete - we will 
> be left with a zero size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. Now when HMaster starts up 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage due to the lazy pending rename complete process we look for these json 
> files. On seeing an empty file the process simply throws a fatal exception 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12559) KMS connection failures should trigger TGT renewal

2015-12-28 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15072967#comment-15072967
 ] 

Xiaoyu Yao commented on HADOOP-12559:
-

Thanks [~zhz] for updating the patch with comments on the testing details. +1 
for V05 patch and I will commit it shortly. 

> KMS connection failures should trigger TGT renewal
> --
>
> Key: HADOOP-12559
> URL: https://issues.apache.org/jira/browse/HADOOP-12559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-12559.00.patch, HADOOP-12559.01.patch, 
> HADOOP-12559.02.patch, HADOOP-12559.03.patch, HADOOP-12559.04.patch, 
> HADOOP-12559.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12559) KMS connection failures should trigger TGT renewal

2015-12-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073014#comment-15073014
 ] 

Hudson commented on HADOOP-12559:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9028 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9028/])
HADOOP-12559. KMS connection failures should trigger TGT renewal. (xyao: rev 
993311e547e6dd7757025d5ffc285019bd4fc1f6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java


> KMS connection failures should trigger TGT renewal
> --
>
> Key: HADOOP-12559
> URL: https://issues.apache.org/jira/browse/HADOOP-12559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HADOOP-12559.00.patch, HADOOP-12559.01.patch, 
> HADOOP-12559.02.patch, HADOOP-12559.03.patch, HADOOP-12559.04.patch, 
> HADOOP-12559.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12608) Fix error message in WASB in connecting through Anonymous Credential codepath

2015-12-28 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073037#comment-15073037
 ] 

Xiaoyu Yao commented on HADOOP-12608:
-

[~dchickabasapa], thanks for working on this and the patch looks good to me 
overall. Can you add comments to this JIRA on tests that have been done to the 
latest patch (e.g., the new unit test added has passed with Azure) as suggested 
by [~steve_l] below. Thanks!

"patch requirements for anything against an object store primarily consist of 
confirming you've run all the existing tests + your patch: 
https://wiki.apache.org/hadoop/HowToContribute#Submitting_patches_against_object_stores_such_as_Amazon_S3.2C_OpenStack_Swift_and_Microsoft_Azure"


> Fix error message in WASB in connecting through Anonymous Credential codepath
> -
>
> Key: HADOOP-12608
> URL: https://issues.apache.org/jira/browse/HADOOP-12608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12608.001.patch, HADOOP-12608.002.patch, 
> HADOOP-12608.003.patch, HADOOP-12608.004.patch, HADOOP-12608.005.patch
>
>
> Users of WASB have raised complaints about the error message returned by 
> WASB when they try to connect to Azure storage with anonymous credentials. 
> The current implementation returns the correct message when we encounter a 
> StorageException. However, a query such as checking whether a container 
> exists does not throw a StorageException but simply returns false when the 
> URI is specified directly (anonymous access), and in that case the error 
> message does not clearly state that credentials for the storage account were 
> not provided. This JIRA tracks fixing the error message to match what is 
> returned when a StorageException is hit, and also corrects spelling mistakes 
> in the error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073010#comment-15073010
 ] 

Allen Wittenauer commented on HADOOP-12662:
---

* If this is being switched to bash, then it should be using double brackets 
([[) instead of single brackets ([) for tests.  It will also make constructions 
like this:

{code}
+if [ X"${bundle.snappy.in.bin}" = X"true" ] ; then
{code}

significantly less painful since the X is no longer needed.

* This is not how bash works.

{code}
+set -o pipefail; $$TAR * | (cd $${TARGET_BIN_DIR}/; 
$$UNTAR)
...
+set -o pipefail; $$TAR *snappy* | (cd 
$${TARGET_BIN_DIR}/; $$UNTAR)
...
+set -o pipefail; $$TAR * | (cd $${TARGET_BIN_DIR}/; 
$$UNTAR)
{code}

Once you set a bash option, it stays set for the rest of the execution. 
Setting it in the middle like this is a debugging nightmare, since the code 
has to be traced to determine when pipefail was enabled. Either don't set it 
at all, set it at the beginning, or unset it immediately once it is no longer 
needed.
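The two review points above can be sketched in a few lines of bash (illustrative variable names, not the real build script): double brackets make the X-prefix idiom unnecessary, and pipefail can be scoped by unsetting it right after the pipeline that needs it.

```shell
#!/usr/bin/env bash
# Illustrative sketch of the review comments above.

# 1. [[ ]] handles empty/unset values safely, so no X-prefix trick is needed.
bundle_snappy="${bundle_snappy:-}"
if [[ "${bundle_snappy}" == "true" ]]; then
  echo "bundling snappy"
fi

# 2. pipefail stays set for the rest of the script once enabled; scope it.
set -o pipefail
false | true                      # pipeline fails: first command's status wins
echo "with pipefail: $?"          # prints "with pipefail: 1"
set +o pipefail                   # restore the default immediately
false | true
echo "without pipefail: $?"       # prints "without pipefail: 0"
```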



> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch, 
> HADOOP-12662-v3.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine and make it 
> consistent the behaviors in bundling native libraries when building dist 
> package.
> For all native libraries to bundle, if the bundling option like 
> {{-Dbundle.snappy}} is specified, then the lib option like {{-Dsnappy.lib}} 
> will be checked and ensured to be there, but if not, it will then report 
> error and fail the building explicitly.
> {{BUILDING.txt}} would also be updated to explicitly state this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15072971#comment-15072971
 ] 

Hadoop QA commented on HADOOP-12678:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s 
{color} | {color:red} Patch generated 2 new checkstyle issues in 
hadoop-tools/hadoop-azure (total was 25, now 27). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 38s 
{color} | {color:red} hadoop-tools/hadoop-azure introduced 1 new FindBugs 
issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 3s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.8.0_66 with JDK v1.8.0_66 
generated 3 new issues (was 26, now 29). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 20s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-azure |
|  |  The method name 
org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.DeleteRenamePendingFile(FileSystem,
 Path) doesn't start with a lower case letter  At 
NativeAzureFileSystem.java:doesn't start with a lower case letter  At 
NativeAzureFileSystem.java:[lines 228-247] |
\\

[jira] [Updated] (HADOOP-12559) KMS connection failures should trigger TGT renewal

2015-12-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12559:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~zhz] for the contribution and all for the reviews. I've committed the 
fix to trunk, branch-2 and branch-2.8.


> KMS connection failures should trigger TGT renewal
> --
>
> Key: HADOOP-12559
> URL: https://issues.apache.org/jira/browse/HADOOP-12559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HADOOP-12559.00.patch, HADOOP-12559.01.patch, 
> HADOOP-12559.02.patch, HADOOP-12559.03.patch, HADOOP-12559.04.patch, 
> HADOOP-12559.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12682) Test cases in TestKMS are failing

2015-12-28 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12682:


 Summary: Test cases in TestKMS are failing
 Key: HADOOP-12682
 URL: https://issues.apache.org/jira/browse/HADOOP-12682
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Jenkins
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


https://builds.apache.org/job/Hadoop-Common-trunk/2157/testReport/org.apache.hadoop.crypto.key.kms.server/TestKMS/testKMSRestartSimpleAuth/
{noformat}
Error Message

loginUserFromKeyTab must be done first

Stacktrace

java.io.IOException: loginUserFromKeyTab must be done first
at 
org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1029)
at 
org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:994)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:679)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:697)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:259)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:256)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.createKey(LoadBalancingKMSClientProvider.java:256)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1003)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1000)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:266)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)

{noformat}
Seems to be introduced by HADOOP-12559



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12682) Test cases in TestKMS are failing

2015-12-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073396#comment-15073396
 ] 

Wei-Chiu Chuang commented on HADOOP-12682:
--

[~xyao] We should probably file a YETUS jira to have Jenkins run the 
server-side KMS tests as well.

> Test cases in TestKMS are failing
> -
>
> Key: HADOOP-12682
> URL: https://issues.apache.org/jira/browse/HADOOP-12682
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12682.001.patch
>
>
> https://builds.apache.org/job/Hadoop-Common-trunk/2157/testReport/org.apache.hadoop.crypto.key.kms.server/TestKMS/testKMSRestartSimpleAuth/
> {noformat}
> Error Message
> loginUserFromKeyTab must be done first
> Stacktrace
> java.io.IOException: loginUserFromKeyTab must be done first
>   at 
> org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1029)
>   at 
> org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:994)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:679)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:697)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:259)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$10.call(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.createKey(LoadBalancingKMSClientProvider.java:256)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1003)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$6$1.run(TestKMS.java:1000)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:266)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)
> {noformat}
> Seems to be introduced by HADOOP-12559



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread madhumita chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073513#comment-15073513
 ] 

madhumita chakraborty commented on HADOOP-12678:


[~cnauroth],[~gouravk],[~pravinmittal],[~dchickabasapa] could you guys please 
take a look at the patch?

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in step 1
> If a process crash occurs after step 1 and before step 2 completes, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these json 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.
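The crash window described in the report (blob created in step 1, contents 
never written in step 2) suggests a simple redo-path guard. The sketch below is 
purely illustrative shell, not the actual NativeAzureFileSystem Java code:

```shell
# Hypothetical sketch: on redo, a zero-byte -renamePending.json means the
# writer crashed between creating the blob and writing its contents.
# Treat it as leftover garbage to discard, not as a fatal error.
redo_rename_pending() {
  pending=$1
  if [ ! -s "$pending" ]; then      # -s: file exists and has size > 0
    echo "discarding empty rename-pending file: $pending"
    rm -f "$pending"
    return 0
  fi
  echo "replaying rename described by $pending"
}
```

With a guard like this, HMaster's cleanup pass would skip over the empty file 
instead of aborting with a fatal exception.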



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12683) Add number of samples in last interval in snapshot of MutableStat

2015-12-28 Thread Vikram Srivastava (JIRA)
Vikram Srivastava created HADOOP-12683:
--

 Summary: Add number of samples in last interval in snapshot of 
MutableStat
 Key: HADOOP-12683
 URL: https://issues.apache.org/jira/browse/HADOOP-12683
 Project: Hadoop Common
  Issue Type: Task
  Components: metrics
Affects Versions: 2.7.1
Reporter: Vikram Srivastava
Assignee: Vikram Srivastava
Priority: Minor


Besides the total number of samples, it is also useful to know the number of 
samples in the last snapshot of MutableStat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12683) Add number of samples in last interval in snapshot of MutableStat

2015-12-28 Thread Vikram Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Srivastava updated HADOOP-12683:
---
Status: Patch Available  (was: Open)

> Add number of samples in last interval in snapshot of MutableStat
> -
>
> Key: HADOOP-12683
> URL: https://issues.apache.org/jira/browse/HADOOP-12683
> Project: Hadoop Common
>  Issue Type: Task
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Vikram Srivastava
>Assignee: Vikram Srivastava
>Priority: Minor
> Attachments: HADOOP-12863.001.patch
>
>
> Besides the total number of samples, it is also useful to know the number of 
> samples in the last snapshot of MutableStat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073521#comment-15073521
 ] 

Hadoop QA commented on HADOOP-12678:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 0s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.8.0_66 with JDK v1.8.0_66 
generated 18 new issues (was 26, now 26). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 28s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.7.0_91 with JDK v1.7.0_91 
generated 1 new issues (was 1, now 1). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 20s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 19s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779754/HADOOP-12678.003.patch
 |
| JIRA Issue | HADOOP-12678 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (HADOOP-12682) Test cases in TestKMS are failing

2015-12-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073413#comment-15073413
 ] 

Hadoop QA commented on HADOOP-12682:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
47s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
51s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
28s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
41s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 43s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 42s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 54s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779737/HADOOP-12682.001.patch
 |
| JIRA Issue | HADOOP-12682 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 290e43182626 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d0a22ba |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  

[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Open  (was: Patch Available)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in step 1
> If a process crash occurs after step 1 and before step 2 completes, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these json 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Patch Available  (was: Open)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in step 1
> If a process crash occurs after step 1 and before step 2 completes, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these json 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-28 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Attachment: HADOOP-12678.003.patch

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in step 1
> If a process crash occurs after step 1 and before step 2 completes, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these json 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073522#comment-15073522
 ] 

Kai Zheng commented on HADOOP-12662:


A run where bundling openssl failed:
{noformat}
[INFO] --- maven-antrun-plugin:1.7:run (pre-dist) @ hadoop-common ---
[INFO] Executing tasks

main:
 [exec] check_bundle_lib true snappy.lib snappy /tmp/libsnappy
 [exec] Checking to bundle with:
 [exec] bundleOption=true, libOption=snappy.lib, libDir=/tmp/libsnappy, 
pattern=snappy
 [exec] check_bundle_lib true openssl.lib crypto /usr/lib64/openssl/engines
 [exec] Checking to bundle with:
 [exec] bundleOption=true, libOption=openssl.lib, 
libDir=/usr/lib64/openssl/engines, pattern=crypto
 [exec] Bundling library with openssl.lib failed tar: *crypto*: Cannot 
stat: No such file or directory
 [exec] tar: Exiting with failure status due to previous errors
 [exec] 
[INFO] 
{noformat}
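The guard that produces the log above could look roughly like the following. 
This is an illustrative sketch, not the patch itself; the function name and 
argument order simply mirror the log output (bundleOption, libOption, pattern, 
libDir):

```shell
# Hypothetical sketch: when the bundle option is set, the lib dir must
# contain at least one file matching the pattern, else fail the build.
check_bundle_lib() {
  bundleOption=$1; libOption=$2; pattern=$3; libDir=$4
  [ "$bundleOption" = "true" ] || return 0   # bundling not requested
  if ! ls "$libDir"/*"$pattern"* >/dev/null 2>&1; then
    echo "Bundling library with $libOption failed: no *$pattern* under $libDir" >&2
    return 1                                 # explicit, checkable failure
  fi
  return 0
}
```

In the failing log above, /usr/lib64/openssl/engines contains no *crypto* 
library, so tar exits non-zero and the build should stop rather than silently 
producing a dist package without the library.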

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch, 
> HADOOP-12662-v3.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine the 
> behaviors of bundling native libraries when building the dist package, and 
> make them consistent.
> For each native library to bundle, if the bundling option like 
> {{-Dbundle.snappy}} is specified, then the lib option like {{-Dsnappy.lib}} 
> will be checked and ensured to be there; if it is not, the build will report 
> an error and fail explicitly.
> {{BUILDING.txt}} would also be updated to explicitly state this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073531#comment-15073531
 ] 

Kai Zheng commented on HADOOP-12662:


A successful run bundling openssl, snappy and isal:
{noformat}
[INFO] --- maven-antrun-plugin:1.7:run (pre-dist) @ hadoop-common ---
[INFO] Executing tasks

main:
 [exec] check_bundle_lib true snappy.lib snappy /tmp/libsnappy
 [exec] Checking to bundle with:
 [exec] bundleOption=true, libOption=snappy.lib, libDir=/tmp/libsnappy, pattern=snappy
 [exec] check_bundle_lib true openssl.lib crypto /usr/lib64
 [exec] Checking to bundle with:
 [exec] bundleOption=true, libOption=openssl.lib, libDir=/usr/lib64, pattern=crypto
 [exec] check_bundle_lib true isal.lib isa /usr/lib
 [exec] Checking to bundle with:
 [exec] bundleOption=true, libOption=isal.lib, libDir=/usr/lib, pattern=isa
[INFO] Executed tasks
[INFO] 
{noformat}

And the bundle results:
{noformat}
[root@zkdesk hadoop]# find hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/ -name *.so
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libcrypto.so
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libk5crypto.so
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libisal.so
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libhadoop.so
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libsnappy.so
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libnativetask.so
{noformat}

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch, 
> HADOOP-12662-v3.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine the behavior 
> of bundling native libraries when building the dist package and make it 
> consistent.
> For each native library to bundle, if a bundling option like 
> {{-Dbundle.snappy}} is specified, the corresponding lib option like 
> {{-Dsnappy.lib}} is checked; if the library cannot be found, the build 
> reports an error and fails explicitly.
> {{BUILDING.txt}} would also be updated to explicitly state this behavior.





[jira] [Updated] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-28 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12662:
---
Attachment: HADOOP-12662-v4.patch

Updated the patch according to the discussion above, and tested it as above.

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch, 
> HADOOP-12662-v3.patch, HADOOP-12662-v4.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine the behavior 
> of bundling native libraries when building the dist package and make it 
> consistent.
> For each native library to bundle, if a bundling option like 
> {{-Dbundle.snappy}} is specified, the corresponding lib option like 
> {{-Dsnappy.lib}} is checked; if the library cannot be found, the build 
> reports an error and fails explicitly.
> {{BUILDING.txt}} would also be updated to explicitly state this behavior.





[jira] [Updated] (HADOOP-12683) Add number of samples in last interval in snapshot of MutableStat

2015-12-28 Thread Vikram Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Srivastava updated HADOOP-12683:
---
Attachment: HADOOP-12863.001.patch

> Add number of samples in last interval in snapshot of MutableStat
> -
>
> Key: HADOOP-12683
> URL: https://issues.apache.org/jira/browse/HADOOP-12683
> Project: Hadoop Common
>  Issue Type: Task
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Vikram Srivastava
>Assignee: Vikram Srivastava
>Priority: Minor
> Attachments: HADOOP-12863.001.patch
>
>
> Besides the total number of samples, it is also useful to know the number of 
> samples in the last snapshot of MutableStat.
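The idea can be sketched with a minimal stand-alone class. All names below are illustrative stand-ins, not Hadoop's actual MutableStat API (which lives in org.apache.hadoop.metrics2.lib): keep a running total of samples plus a counter that is reset on each snapshot, so every snapshot can also report how many samples arrived in the last interval.

```java
// Illustrative sketch only; not the actual HADOOP-12683 patch or the real
// MutableStat API. A snapshot reports both the lifetime sample count and
// the count since the previous snapshot, then starts a new interval.
class IntervalStat {
    private long totalSamples = 0;
    private long intervalSamples = 0;   // samples since the last snapshot
    private double sum = 0.0;

    synchronized void add(double value) {
        totalSamples++;
        intervalSamples++;
        sum += value;
    }

    // Returns {totalSamples, samplesInLastInterval} and resets the interval.
    synchronized long[] snapshot() {
        long[] out = {totalSamples, intervalSamples};
        intervalSamples = 0;
        return out;
    }
}

public class IntervalStatDemo {
    public static void main(String[] args) {
        IntervalStat stat = new IntervalStat();
        stat.add(1.0);
        stat.add(2.0);
        long[] s1 = stat.snapshot();   // total=2, last interval=2
        stat.add(3.0);
        long[] s2 = stat.snapshot();   // total=3, last interval=1
        System.out.println(s1[0] + " " + s1[1]);
        System.out.println(s2[0] + " " + s2[1]);
    }
}
```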





[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073311#comment-15073311
 ] 

Kai Zheng commented on HADOOP-12662:


Thanks [~aw] for the great comments!
bq. significantly less painful since the X is no longer needed.
I see. Maybe we could keep it, since someone like me is used to that style?
bq. Once you set a bash option, ..., 
It sounds good to me to set it at the beginning; that will be easier for this 
simple script.

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch, 
> HADOOP-12662-v3.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine the behavior 
> of bundling native libraries when building the dist package and make it 
> consistent.
> For each native library to bundle, if a bundling option like 
> {{-Dbundle.snappy}} is specified, the corresponding lib option like 
> {{-Dsnappy.lib}} is checked; if the library cannot be found, the build 
> reports an error and fails explicitly.
> {{BUILDING.txt}} would also be updated to explicitly state this behavior.





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for open and getFileStatus API's in WASB

2015-12-28 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12551:
---
Description: 
HADOOP-12533 introduced FileNotFoundException for the read and seek APIs in 
WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
correctly when the file does not exist at the time of the call, but do not 
throw the same exception if another thread/process deletes the file during 
execution. This Jira fixes that behavior.

This Jira also re-examines other Azure storage store calls to check for 
BlobNotFoundException in the setPermission(), setOwner(), rename(), delete(), 
open(), and listStatus() APIs.

  was:
HADOOP-12533 introduced FileNotFoundException to the read and seek API for 
WASB. The open and getFileStatus api currently throws FileNotFoundException 
correctly when the file does not exists when the API is called but does not 
throw the same exception if there is another thread/process deletes the file 
during its execution. This Jira fixes that behavior.

This jira also fixes the store calls in 


> Introduce FileNotFoundException for open and getFileStatus API's in WASB
> 
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
>
> HADOOP-12533 introduced FileNotFoundException for the read and seek APIs in 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time of the call, but do not 
> throw the same exception if another thread/process deletes the file during 
> execution. This Jira fixes that behavior.
> This Jira also re-examines other Azure storage store calls to check for 
> BlobNotFoundException in the setPermission(), setOwner(), rename(), delete(), 
> open(), and listStatus() APIs.
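The pattern being asked for can be illustrated in isolation. All class names below are hypothetical stand-ins (the real WASB code sees the Azure SDK's storage exception for a missing blob): catch the storage-layer "blob not found" error raised mid-operation and rethrow it as the FileNotFoundException that the FileSystem contract expects, even when the file existed at the start of the call.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Hypothetical stand-ins; NOT the actual WASB/Azure SDK API. This only
// illustrates the translation pattern described in the Jira: a concurrent
// delete during an operation surfaces as FileNotFoundException.
class BlobNotFoundException extends IOException {
    BlobNotFoundException(String msg) { super(msg); }
}

class BlobReader {
    private final String path;
    private boolean deleted = false;   // simulates a concurrent delete

    BlobReader(String path) { this.path = path; }

    void simulateConcurrentDelete() { deleted = true; }

    int read() throws IOException {
        try {
            if (deleted) {
                throw new BlobNotFoundException(path);
            }
            return 42;  // pretend payload byte
        } catch (BlobNotFoundException e) {
            // Translate the storage-layer error into the FileSystem contract.
            throw new FileNotFoundException(path + ": blob deleted during read");
        }
    }
}

public class WasbFnfDemo {
    public static void main(String[] args) throws IOException {
        BlobReader r = new BlobReader("/container/file");
        System.out.println(r.read());
        r.simulateConcurrentDelete();
        try {
            r.read();
        } catch (FileNotFoundException e) {
            System.out.println("FileNotFoundException: " + e.getMessage());
        }
    }
}
```

Callers that already handle FileNotFoundException for a file missing at open time then get consistent behavior for a file deleted mid-operation.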





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for open and getFileStatus API's in WASB

2015-12-28 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12551:
---
Target Version/s:   (was: 2.8.0)

> Introduce FileNotFoundException for open and getFileStatus API's in WASB
> 
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
>
> HADOOP-12533 introduced FileNotFoundException for the read and seek APIs in 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time of the call, but do not 
> throw the same exception if another thread/process deletes the file during 
> execution. This Jira fixes that behavior.
> This jira also fixes the store calls in 





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for open and getFileStatus API's in WASB

2015-12-28 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12551:
---
Description: 
HADOOP-12533 introduced FileNotFoundException for the read and seek APIs in 
WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
correctly when the file does not exist at the time of the call, but do not 
throw the same exception if another thread/process deletes the file during 
execution. This Jira fixes that behavior.

This jira also fixes the store calls in 

  was:HADOOP-12533 introduced FileNotFoundException to the read and seek API 
for WASB. The open and getFileStatus api currently throws FileNotFoundException 
correctly when the file does not exists when the API is called but does not 
throw the same exception if there is another thread/process deletes the file 
during its execution. This Jira fixes that behavior.


> Introduce FileNotFoundException for open and getFileStatus API's in WASB
> 
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
>
> HADOOP-12533 introduced FileNotFoundException for the read and seek APIs in 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time of the call, but do not 
> throw the same exception if another thread/process deletes the file during 
> execution. This Jira fixes that behavior.
> This jira also fixes the store calls in 





[jira] [Updated] (HADOOP-12683) Add number of samples in last interval in snapshot of MutableStat

2015-12-28 Thread Vikram Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Srivastava updated HADOOP-12683:
---
Attachment: HADOOP-12683.001.patch

> Add number of samples in last interval in snapshot of MutableStat
> -
>
> Key: HADOOP-12683
> URL: https://issues.apache.org/jira/browse/HADOOP-12683
> Project: Hadoop Common
>  Issue Type: Task
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Vikram Srivastava
>Assignee: Vikram Srivastava
>Priority: Minor
> Attachments: HADOOP-12683.001.patch
>
>
> Besides the total number of samples, it is also useful to know the number of 
> samples in the last snapshot of MutableStat.





[jira] [Updated] (HADOOP-12683) Add number of samples in last interval in snapshot of MutableStat

2015-12-28 Thread Vikram Srivastava (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Srivastava updated HADOOP-12683:
---
Attachment: (was: HADOOP-12863.001.patch)

> Add number of samples in last interval in snapshot of MutableStat
> -
>
> Key: HADOOP-12683
> URL: https://issues.apache.org/jira/browse/HADOOP-12683
> Project: Hadoop Common
>  Issue Type: Task
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Vikram Srivastava
>Assignee: Vikram Srivastava
>Priority: Minor
> Attachments: HADOOP-12683.001.patch
>
>
> Besides the total number of samples, it is also useful to know the number of 
> samples in the last snapshot of MutableStat.





[jira] [Commented] (HADOOP-12683) Add number of samples in last interval in snapshot of MutableStat

2015-12-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073608#comment-15073608
 ] 

Hadoop QA commented on HADOOP-12683:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 12s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 2s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_91 Failed junit tests | hadoop.fs.TestSymlinkLocalFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779762/HADOOP-12683.001.patch
 |
| JIRA Issue | HADOOP-12683 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 82e80accc4c5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
|