[jira] [Created] (HADOOP-12720) Misuse of sun.misc.Unsafe by org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo causes misaligned memory access coredumps

2016-01-18 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-12720:
--

 Summary: Misuse of sun.misc.Unsafe by 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo
 causes misaligned memory access coredumps
 Key: HADOOP-12720
 URL: https://issues.apache.org/jira/browse/HADOOP-12720
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Affects Versions: 2.7.1
 Environment: Solaris SPARC
Reporter: Alan Burlison


Core dump details below:

{noformat}
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.85-b07 mixed mode 
solaris-sparc compressed oops)
# Problematic frame:
# J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]

Stack: [0x7e20,0x7e30],  sp=0x7e2fce50,  free 
space=1011k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
j  
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
j  org.apache.hadoop.io.FastByteComparisons.compareTo([BII[BII)I+11
j  org.apache.hadoop.io.WritableComparator.compareBytes([BII[BII)I+8
j  org.apache.hadoop.io.Text$Comparator.compare([BII[BII)I+39
j  org.apache.hadoop.io.TestText.testCompare()V+167
v  ~StubRoutines::call_stub
{noformat}
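
The failure mode is straightforward to demonstrate: {{sun.misc.Unsafe#getLong}} performs an 8-byte load at an arbitrary offset into a byte[], and SPARC requires natural alignment for such loads. A minimal sketch of the crashing pattern (illustrative, not the Hadoop code):

{code}
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnalignedRead {
  public static void main(String[] args) throws Exception {
    // Standard reflection trick to obtain the Unsafe singleton.
    Field f = Unsafe.class.getDeclaredField("theUnsafe");
    f.setAccessible(true);
    Unsafe unsafe = (Unsafe) f.get(null);

    byte[] buf = new byte[16];
    long base = unsafe.arrayBaseOffset(byte[].class);
    // base + 1 is not 8-byte aligned: x86 tolerates this, but on SPARC
    // the load faults and the JVM dies with SIGBUS, as in the dump above.
    long v = unsafe.getLong(buf, base + 1);
    System.out.println(v);
  }
}
{code}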



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12630) Misuse of sun.misc.Unsafe by org.apache.hadoop.io.FastByteComparisons causes misaligned memory access coredumps

2016-01-18 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12630:
---
Summary: Misuse of sun.misc.Unsafe by 
org.apache.hadoop.io.FastByteComparisons causes misaligned memory access 
coredumps  (was: Misuse of sun.misc.Unsafe causes misaligned memory access 
coredumps)

> Misuse of sun.misc.Unsafe by org.apache.hadoop.io.FastByteComparisons causes 
> misaligned memory access coredumps
> ---
>
> Key: HADOOP-12630
> URL: https://issues.apache.org/jira/browse/HADOOP-12630
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 2.6.0, 2.7.0, 3.0.0
> Environment: Solaris SPARC
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>
> Misuse of sun.misc.Unsafe by {{org.apache.hadoop.io.FastByteComparisons}} 
> causes misaligned memory accesses and results in coredumps. Stack traces 
> below:
> {code}
> hadoop-tools/hadoop-gridmix/core
>  --- called from signal handler with signal 10 (SIGBUS) ---
>  7717fa40 Unsafe_GetLong (18c000, 7e2fd6d8, 0, 19, 
> 775d4be0, 10018c000) + 158
>  70810dcc * sun/misc/Unsafe.getLong(Ljava/lang/Object;J)J+-30004
>  70810d70 * sun/misc/Unsafe.getLong(Ljava/lang/Object;J)J+0
>  70806d58 * 
> org/apache/hadoop/io/FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I+91
>  (line 405)
>  70806cb4 * 
> org/apache/hadoop/io/FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
>  (line 264)
>  7080783c * 
> org/apache/hadoop/io/FastByteComparisons.compareTo([BII[BII)I+11 (line 92)
>  70806cb4 * 
> org/apache/hadoop/io/WritableComparator.compareBytes([BII[BII)I+8 (line 376)
>  70806cb4 * 
> org/apache/hadoop/mapred/gridmix/GridmixRecord$Comparator.compare([BII[BII)I+61
>  (line 522)
>  70806cb4 * 
> org/apache/hadoop/mapred/gridmix/TestGridmixRecord.binSortTest(Lorg/apache/hadoop/mapred/gridmix/GridmixRecord;Lorg/apache/hadoop/mapred/gridmix/GridmixRecord;IILorg/apache/hadoop/io/WritableComparator;)V+280
>  (line 268)
>  70806f44 * 
> org/apache/hadoop/mapred/gridmix/TestGridmixRecord.testBaseRecord()V+57 (line 
> 482)
> {code}
> This also causes {{hadoop-mapreduce-project/hadoop-mapreduce-examples/core}}
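
A fallback that avoids Unsafe entirely is a plain byte-wise lexicographic loop, along the lines of the pure-Java comparer in the same class. A sketch (not the exact Hadoop code):

{code}
class PureCompareSketch {
  /** Byte-wise lexicographic comparison; alignment-safe on any architecture. */
  static int compareBytes(byte[] a, int aOff, int aLen,
                          byte[] b, int bOff, int bLen) {
    int n = Math.min(aLen, bLen);
    for (int i = 0; i < n; i++) {
      // Mask to 0..255 so bytes compare as unsigned values.
      int x = a[aOff + i] & 0xff;
      int y = b[bOff + i] & 0xff;
      if (x != y) {
        return x - y;
      }
    }
    return aLen - bLen;   // on a common prefix, the shorter array sorts first
  }
}
{code}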



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15104962#comment-15104962
 ] 

Hadoop QA commented on HADOOP-12426:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 34s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 71 
new + 0 unchanged - 0 fixed = 71 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 18s 
{color} | {color:red} hadoop-common-project/hadoop-common generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 48s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 19s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Boxing/unboxing to parse a primitive 
org.apache.hadoop.security.KDiag.run(String[])  At 
KDiag.java:org.apache.hadoop.security.KDiag.run(String[])  At KDiag.java:[line 
146] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.security.KDiag.run(String[]):in 
org.apache.hadoop.security.KDiag.run(String[]): new 
java.io.PrintWriter(OutputStream)  At KDiag.java:[line 142] |
| JDK v1.8.0_66 Failed junit tests | hadoop.ipc.TestIPC |
\\
\\
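
For context, the two FindBugs items above flag well-known patterns. A sketch of the usual remedies (illustrative, not the actual KDiag code):

{code}
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

class FindBugsFixSketch {
  // "Boxing/unboxing to parse a primitive": parse directly rather than
  // boxing via Integer.valueOf(...).intValue().
  static int parseCount(String s) {
    return Integer.parseInt(s);   // no intermediate Integer object
  }

  // "Reliance on default encoding": name the charset explicitly instead
  // of calling new PrintWriter(OutputStream).
  static PrintWriter writerFor(OutputStream out) {
    return new PrintWriter(
        new OutputStreamWriter(out, StandardCharsets.UTF_8), true);
  }
}
{code}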

[jira] [Created] (HADOOP-12719) Implement lease recover in wasb file system

2016-01-18 Thread Liu Shaohui (JIRA)
Liu Shaohui created HADOOP-12719:


 Summary: Implement lease recover in wasb file system
 Key: HADOOP-12719
 URL: https://issues.apache.org/jira/browse/HADOOP-12719
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Liu Shaohui


HBase depends on the file system's lease recovery to fence off write 
operations from "dead" regionservers. 

After HADOOP-9629, Hadoop supports Azure Storage as an alternative Hadoop 
Compatible File System. If we want to deploy HBase on Azure Storage and keep 
data safe, we need to implement lease recovery in the wasb file system.

[~cnauroth] [~chuanliu]
I don't know much about wasb. Any suggestions about how to implement this? 

E.g., changing the permission of the hlog to read-only, as FSMapRUtils does?
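
For concreteness, that fencing idea would look roughly like this (a sketch only; the account, container, and WAL path are hypothetical):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class FenceWalSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical WAL of a regionserver that is presumed dead.
    Path hlog = new Path(
        "wasb://demo@example.blob.core.windows.net/hbase/WALs/rs1/wal.0");
    FileSystem fs = FileSystem.get(hlog.toUri(), conf);
    // Drop write permission everywhere so the old writer can no longer
    // append; log splitting can then proceed safely.
    fs.setPermission(hlog,
        new FsPermission(FsAction.READ, FsAction.READ, FsAction.READ));
  }
}
{code}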

Thanks~





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-01-18 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105019#comment-15105019
 ] 

Kai Zheng commented on HADOOP-12579:


While working on getting rid of the engine completely, I found that the 
{{RPC#waitForProxy}} method family could also be deprecated, considering:
* We probably never need to wait, given the currently available engines. The 
proxy can be created and returned on demand immediately; no network connection 
is incurred.
* The implementation of waitForProtocolProxy uses a while loop that retries 
until the passed timeout value is consumed. I guess that logic dates from the 
early days of the project, because no connection is made while the proxy is 
created and initialized; the real network connection is only established when 
the invoker fires an actual RPC call.
* Most call sites already use {{RPC#getProxy}}.
* I'm not sure we should remove these methods, since code outside Hadoop might 
call them, but deprecating them should be fine, along with changing the 
implementation to drop the while loop and the timeout handling.

Please help confirm; if this sounds good I'll handle it here or separately. Thanks.
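
For reference, the pattern in question looks roughly like this (a simplified sketch, not the actual Hadoop code; {{getProxy}} here is a hypothetical stand-in for {{RPC#getProxy}}):

{code}
import java.io.IOException;

class WaitForProxySketch {
  /** Retry proxy creation until a deadline -- yet creation never blocks. */
  static <T> T waitForProxy(Class<T> protocol, long timeoutMillis)
      throws IOException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (true) {
      try {
        // Returns immediately; no network connection is made here.
        return getProxy(protocol);
      } catch (IOException e) {
        if (System.currentTimeMillis() >= deadline) {
          throw e;   // timeout consumed, give up
        }
        try {
          Thread.sleep(1000);   // back off and retry
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException("interrupted waiting for proxy", ie);
        }
      }
    }
  }

  // Hypothetical stand-in for RPC.getProxy(); creation is purely local.
  static <T> T getProxy(Class<T> protocol) throws IOException {
    throw new UnsupportedOperationException("stand-in for RPC.getProxy()");
  }
}
{code}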

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
> Attachments: HADOOP-12579-v1.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The current implementation has already migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2016-01-18 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105026#comment-15105026
 ] 

Kai Zheng commented on HADOOP-12662:


Thanks [~cmccabe], [~ste...@apache.org], [~aw] and [~vinayrpet] for the review, 
comments and commit!

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch, 
> HADOOP-12662-v3.patch, HADOOP-12662-v4.patch, HADOOP-12662-v5.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine the behavior 
> of bundling native libraries when building the dist package and make it 
> consistent.
> For every native library to bundle, if the bundling option like 
> {{-Dbundle.snappy}} is specified, then the corresponding lib option like 
> {{-Dsnappy.lib}} will be checked and must be present; if it is not, the build 
> will report an error and fail explicitly.
> {{BUILDING.txt}} would also be updated to explicitly state this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12635) Adding Append API support for WASB

2016-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105559#comment-15105559
 ] 

Hudson commented on HADOOP-12635:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9132 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9132/])
HADOOP-12635. Adding Append API support for WASB. Contributed by Dushyanth. 
(cnauroth: rev 8bc93db2e7c64830b6a662f28c8917a9eef4e7c9)
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemAppend.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterface.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobAppendStream.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockStorageInterface.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PageBlobOutputStream.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystemHelper.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeFileSystemStore.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-azure/src/site/markdown/index.md
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterfaceImpl.java


> Adding Append API support for WASB
> --
>
> Key: HADOOP-12635
> URL: https://issues.apache.org/jira/browse/HADOOP-12635
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
> Attachments: Append API.docx, HADOOP-12635-004.patch, 
> HADOOP-12635.001.patch, HADOOP-12635.002.patch, HADOOP-12635.003.patch, 
> HADOOP-12635.005.patch, HADOOP-12635.006.patch, HADOOP-12635.007.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support 
> Append API. This JIRA is added to design and implement the Append API support 
> to WASB. The intended support for Append would only support a single writer.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11505) Various native parts use bswap incorrectly and unportably

2016-01-18 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105569#comment-15105569
 ] 

Alan Burlison commented on HADOOP-11505:


Why would hadoop-common have to be rebuilt each time? Is the problem that the 
current build infrastructure rebuilds it even when nothing has changed? I've 
noticed that happening with other subcomponents. If that is the case then 
generating multiple, duplicate headers probably is both easier and quicker, 
although it sets my teeth slightly on edge.

I can't comment about Yetus as I don't know anything about it.

> Various native parts use bswap incorrectly and unportably
> -
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Alan Burlison
> Fix For: 3.0.0
>
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
> HADOOP-11505.004.patch, HADOOP-11505.005.patch, HADOOP-11505.006.patch, 
> HADOOP-11505.007.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12635) Adding Append API support for WASB

2016-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12635:
---
Issue Type: New Feature  (was: Improvement)

> Adding Append API support for WASB
> --
>
> Key: HADOOP-12635
> URL: https://issues.apache.org/jira/browse/HADOOP-12635
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: Append API.docx, HADOOP-12635-004.patch, 
> HADOOP-12635.001.patch, HADOOP-12635.002.patch, HADOOP-12635.003.patch, 
> HADOOP-12635.005.patch, HADOOP-12635.006.patch, HADOOP-12635.007.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support 
> Append API. This JIRA is added to design and implement the Append API support 
> to WASB. The intended support for Append would only support a single writer.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12691) Add CSRF Filter for REST APIs to Hadoop Common

2016-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12691:
---
Fix Version/s: (was: 2.9.0)
   2.8.0

I also cherry-picked to branch-2.8.

> Add CSRF Filter for REST APIs to Hadoop Common
> --
>
> Key: HADOOP-12691
> URL: https://issues.apache.org/jira/browse/HADOOP-12691
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: CSRFProtectionforRESTAPIs.pdf, HADOOP-12691-001.patch, 
> HADOOP-12691-002.patch, HADOOP-12691-003.patch
>
>
> CSRF prevention for REST APIs can be provided through a common servlet 
> filter. This filter would check for the existence of an expected 
> (configurable) HTTP header - such as X-XSRF-Header.
> The fact that CSRF attacks are entirely browser based means that the above 
> approach can ensure that requests are coming from either: applications served 
> by the same origin as the REST API or that there is explicit policy 
> configuration that allows the setting of a header on XmlHttpRequest from 
> another origin.
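
For illustration, the check described above amounts to something like the following servlet filter sketch (the header name and wiring are illustrative, not the attached patch):

{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RestCsrfFilterSketch implements Filter {
  private String headerName = "X-XSRF-Header";   // default; configurable

  @Override
  public void init(FilterConfig conf) {
    String configured = conf.getInitParameter("custom-header");
    if (configured != null) {
      headerName = configured;
    }
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse res,
      FilterChain chain) throws IOException, ServletException {
    // Browsers cannot attach custom headers cross-origin without explicit
    // CORS consent, so requiring any custom header defeats classic CSRF.
    if (((HttpServletRequest) req).getHeader(headerName) == null) {
      ((HttpServletResponse) res).sendError(
          HttpServletResponse.SC_BAD_REQUEST, "Missing required CSRF header");
      return;
    }
    chain.doFilter(req, res);
  }

  @Override
  public void destroy() {
  }
}
{code}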



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12635) Adding Append API support for WASB

2016-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12635:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2 and branch-2.8.  [~dchickabasapa], 
thank you for the contribution.

> Adding Append API support for WASB
> --
>
> Key: HADOOP-12635
> URL: https://issues.apache.org/jira/browse/HADOOP-12635
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
> Attachments: Append API.docx, HADOOP-12635-004.patch, 
> HADOOP-12635.001.patch, HADOOP-12635.002.patch, HADOOP-12635.003.patch, 
> HADOOP-12635.005.patch, HADOOP-12635.006.patch, HADOOP-12635.007.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support 
> Append API. This JIRA is added to design and implement the Append API support 
> to WASB. The intended support for Append would only support a single writer.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12356) CPU usage statistics on Windows

2016-01-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105547#comment-15105547
 ] 

Chris Nauroth commented on HADOOP-12356:


Good catch.  I didn't realize {{SysInfo}} was new in branch-2.8, so no need to 
preserve interface compatibility with something that didn't exist in any prior 
release.

Once again, I am +1 for patch v8.

> CPU usage statistics on Windows
> ---
>
> Key: HADOOP-12356
> URL: https://issues.apache.org/jira/browse/HADOOP-12356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
> Environment: CPU: Intel Xeon
> OS: Windows server
>Reporter: Yunqi Zhang
>Assignee: Inigo Goiri
>  Labels: easyfix, newbie, patch
> Attachments: 0001-Correct-the-CPU-usage-calcualtion.patch, 
> 0001-Correct-the-CPU-usage-calcualtion.patch, HADOOP-12356-v3.patch, 
> HADOOP-12356-v4.patch, HADOOP-12356-v5.patch, HADOOP-12356-v6.patch, 
> HADOOP-12356-v7.patch, HADOOP-12356-v8.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The CPU usage information on Windows is computed incorrectly. The proposed 
> patch fixes the issue and unifies the interface with Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12635) Adding Append API support for WASB

2016-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12635:
---
Release Note: The Azure Blob Storage file system (WASB) now includes 
optional support for use of the append API by a single writer on a path.  
Please note that the implementation differs from the semantics of HDFS append.  
HDFS append internally guarantees that only a single writer may append to a 
path at a given time.  WASB does not enforce this guarantee internally.  
Instead, the application must enforce access by a single writer, such as by 
running single-threaded or relying on some external locking mechanism to 
coordinate concurrent processes.
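
For illustration, a minimal usage sketch (the configuration key is the one this feature documents; the account, container, and path are hypothetical):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WasbAppendSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Append support is optional and off by default.
    conf.setBoolean("fs.azure.enable.append.support", true);

    // Hypothetical WASB URI; replace with a real account and container.
    Path file = new Path(
        "wasb://demo@example.blob.core.windows.net/logs/app.log");
    FileSystem fs = FileSystem.get(file.toUri(), conf);

    // The application must guarantee a single writer; WASB does not.
    try (FSDataOutputStream out = fs.append(file)) {
      out.writeBytes("one more record\n");
    }
  }
}
{code}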

> Adding Append API support for WASB
> --
>
> Key: HADOOP-12635
> URL: https://issues.apache.org/jira/browse/HADOOP-12635
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
> Attachments: Append API.docx, HADOOP-12635-004.patch, 
> HADOOP-12635.001.patch, HADOOP-12635.002.patch, HADOOP-12635.003.patch, 
> HADOOP-12635.005.patch, HADOOP-12635.006.patch, HADOOP-12635.007.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support 
> Append API. This JIRA is added to design and implement the Append API support 
> to WASB. The intended support for Append would only support a single writer.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12635) Adding Append API support for WASB

2016-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12635:
---
Release Note: The Azure Blob Storage file system (WASB) now includes 
optional support for use of the append API by a single writer on a path.  
Please note that the implementation differs from the semantics of HDFS append.  
HDFS append internally guarantees that only a single writer may append to a 
path at a given time.  WASB does not enforce this guarantee internally.  
Instead, the application must enforce access by a single writer, such as by 
running single-threaded or relying on some external locking mechanism to 
coordinate concurrent processes.  Refer to the Azure Blob Storage documentation 
page for more details on enabling append in configuration.  (was: The Azure 
Blob Storage file system (WASB) now includes optional support for use of the 
append API by a single writer on a path.  Please note that the implementation 
differs from the semantics of HDFS append.  HDFS append internally guarantees 
that only a single writer may append to a path at a given time.  WASB does not 
enforce this guarantee internally.  Instead, the application must enforce 
access by a single writer, such as by running single-threaded or relying on 
some external locking mechanism to coordinate concurrent processes.)

> Adding Append API support for WASB
> --
>
> Key: HADOOP-12635
> URL: https://issues.apache.org/jira/browse/HADOOP-12635
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
> Attachments: Append API.docx, HADOOP-12635-004.patch, 
> HADOOP-12635.001.patch, HADOOP-12635.002.patch, HADOOP-12635.003.patch, 
> HADOOP-12635.005.patch, HADOOP-12635.006.patch, HADOOP-12635.007.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support 
> Append API. This JIRA is added to design and implement the Append API support 
> to WASB. The intended support for Append would only support a single writer.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12720) Misuse of sun.misc.Unsafe by org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo causes misaligned memory access coredumps

2016-01-18 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12720:
---
Description: 
Core dump details below:

{noformat}
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.85-b07 mixed mode 
solaris-sparc compressed oops)
# Problematic frame:
# J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]

Stack: [0x7e20,0x7e30],  sp=0x7e2fce50,  free 
space=1011k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
j  
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
j  org.apache.hadoop.io.FastByteComparisons.compareTo([BII[BII)I+11
j  org.apache.hadoop.io.WritableComparator.compareBytes([BII[BII)I+8
j  org.apache.hadoop.io.Text$Comparator.compare([BII[BII)I+39
j  org.apache.hadoop.io.TestText.testCompare()V+167
v  ~StubRoutines::call_stub
{noformat}

{noformat}

{noformat}


  was:
Core dump details below:

{noformat}
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.85-b07 mixed mode 
solaris-sparc compressed oops)
# Problematic frame:
# J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]

Stack: [0x7e20,0x7e30],  sp=0x7e2fce50,  free 
space=1011k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
j  
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
j  org.apache.hadoop.io.FastByteComparisons.compareTo([BII[BII)I+11
j  org.apache.hadoop.io.WritableComparator.compareBytes([BII[BII)I+8
j  org.apache.hadoop.io.Text$Comparator.compare([BII[BII)I+39
j  org.apache.hadoop.io.TestText.testCompare()V+167
v  ~StubRoutines::call_stub
{noformat}


> Misuse of sun.misc.Unsafe by 
> org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo
>  causes misaligned memory access coredumps
> --
>
> Key: HADOOP-12720
> URL: https://issues.apache.org/jira/browse/HADOOP-12720
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Affects Versions: 2.7.1
> Environment: Solaris SPARC
>Reporter: Alan Burlison
>
> Core dump details below:
> {noformat}
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.85-b07 mixed mode 
> solaris-sparc compressed oops)
> # Problematic frame:
> # J 86 C2 
> org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
>  (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
> Stack: [0x7e20,0x7e30],  sp=0x7e2fce50,  free 
> space=1011k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> J 86 C2 
> org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
>  (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
> j  
> org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
> j  org.apache.hadoop.io.FastByteComparisons.compareTo([BII[BII)I+11
> j  org.apache.hadoop.io.WritableComparator.compareBytes([BII[BII)I+8
> j  org.apache.hadoop.io.Text$Comparator.compare([BII[BII)I+39
> j  org.apache.hadoop.io.TestText.testCompare()V+167
> v  ~StubRoutines::call_stub
> {noformat}
> {noformat}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12720) Misuse of sun.misc.Unsafe by org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo causes misaligned memory access coredumps

2016-01-18 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12720:
---
Description: 
Core dump details below:

{noformat}
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.85-b07 mixed mode 
solaris-sparc compressed oops)
# Problematic frame:
# J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]

Stack: [0x7e20,0x7e30],  sp=0x7e2fce50,  free 
space=1011k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
j  
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
j  org.apache.hadoop.io.FastByteComparisons.compareTo([BII[BII)I+11
j  org.apache.hadoop.io.WritableComparator.compareBytes([BII[BII)I+8
j  org.apache.hadoop.io.Text$Comparator.compare([BII[BII)I+39
j  org.apache.hadoop.io.TestText.testCompare()V+167
v  ~StubRoutines::call_stub
{noformat}

{noformat}
# Problematic frame:
# V  [libjvm.so+0xc7fa40]  Unsafe_GetLong+0x158


{noformat}


  was:
Core dump details below:

{noformat}
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.85-b07 mixed mode 
solaris-sparc compressed oops)
# Problematic frame:
# J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]

Stack: [0x7e20,0x7e30],  sp=0x7e2fce50,  free 
space=1011k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
j  
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
j  org.apache.hadoop.io.FastByteComparisons.compareTo([BII[BII)I+11
j  org.apache.hadoop.io.WritableComparator.compareBytes([BII[BII)I+8
j  org.apache.hadoop.io.Text$Comparator.compare([BII[BII)I+39
j  org.apache.hadoop.io.TestText.testCompare()V+167
v  ~StubRoutines::call_stub
{noformat}

{noformat}

{noformat}



> Misuse of sun.misc.Unsafe by 
> org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo
>  causes misaligned memory access coredumps
> --
>
> Key: HADOOP-12720
> URL: https://issues.apache.org/jira/browse/HADOOP-12720
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Affects Versions: 2.7.1
> Environment: Solaris SPARC
>Reporter: Alan Burlison
>
> Core dump details below:
> {noformat}
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.85-b07 mixed mode 
> solaris-sparc compressed oops)
> # Problematic frame:
> # J 86 C2 
> org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
>  (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
> Stack: [0x7e20,0x7e30],  sp=0x7e2fce50,  free 
> space=1011k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> J 86 C2 
> org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
>  (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
> j  
> org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
> j  org.apache.hadoop.io.FastByteComparisons.compareTo([BII[BII)I+11
> j  org.apache.hadoop.io.WritableComparator.compareBytes([BII[BII)I+8
> j  org.apache.hadoop.io.Text$Comparator.compare([BII[BII)I+39
> j  org.apache.hadoop.io.TestText.testCompare()V+167
> v  ~StubRoutines::call_stub
> {noformat}
> {noformat}
> # Problematic frame:
> # V  [libjvm.so+0xc7fa40]  Unsafe_GetLong+0x158
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12720) Misuse of sun.misc.Unsafe by org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo causes misaligned memory access coredumps

2016-01-18 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12720:
---
Description: 
Core dump details below:

{noformat}
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.85-b07 mixed mode 
solaris-sparc compressed oops)
# Problematic frame:
# J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]

Stack: [0x7e20,0x7e30],  sp=0x7e2fce50,  free 
space=1011k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
j  
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
j  org.apache.hadoop.io.FastByteComparisons.compareTo([BII[BII)I+11
j  org.apache.hadoop.io.WritableComparator.compareBytes([BII[BII)I+8
j  org.apache.hadoop.io.Text$Comparator.compare([BII[BII)I+39
j  org.apache.hadoop.io.TestText.testCompare()V+167
v  ~StubRoutines::call_stub
{noformat}

{noformat}
# Problematic frame:
# V  [libjvm.so+0xc7fa40]  Unsafe_GetLong+0x158

Stack: [0x7e20,0x7e30],  sp=0x7e2fc9b0,  free 
space=1010k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0xc7fa40]  Unsafe_GetLong+0x158
j  sun.misc.Unsafe.getLong(Ljava/lang/Object;J)J+-292148
j  sun.misc.Unsafe.getLong(Ljava/lang/Object;J)J+0
j  
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I+91
j  
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
j  org.apache.hadoop.io.FastByteComparisons.compareTo([BII[BII)I+11
j  org.apache.hadoop.io.WritableComparator.compareBytes([BII[BII)I+8
j  
org.apache.hadoop.mapred.gridmix.GridmixRecord$Comparator.compare([BII[BII)I+61
j  
org.apache.hadoop.mapred.gridmix.TestGridmixRecord.binSortTest(Lorg/apache/hadoop/mapred/gridmix/GridmixRecord;Lorg/apache/hadoop/mapred/gridmix/GridmixRecord;IILorg/apache/hadoop/io/WritableComparator;)V+280
j  org.apache.hadoop.mapred.gridmix.TestGridmixRecord.testBaseRecord()V+57
v  ~StubRoutines::call_stub
{noformat}


  was:
Core dump details below:

{noformat}
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.85-b07 mixed mode 
solaris-sparc compressed oops)
# Problematic frame:
# J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]

Stack: [0x7e20,0x7e30],  sp=0x7e2fce50,  free 
space=1011k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 86 C2 
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
 (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
j  
org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
j  org.apache.hadoop.io.FastByteComparisons.compareTo([BII[BII)I+11
j  org.apache.hadoop.io.WritableComparator.compareBytes([BII[BII)I+8
j  org.apache.hadoop.io.Text$Comparator.compare([BII[BII)I+39
j  org.apache.hadoop.io.TestText.testCompare()V+167
v  ~StubRoutines::call_stub
{noformat}

{noformat}
# Problematic frame:
# V  [libjvm.so+0xc7fa40]  Unsafe_GetLong+0x158


{noformat}



> Misuse of sun.misc.Unsafe by 
> org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo
>  causes misaligned memory access coredumps
> --
>
> Key: HADOOP-12720
> URL: https://issues.apache.org/jira/browse/HADOOP-12720
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Affects Versions: 2.7.1
> Environment: Solaris SPARC
>Reporter: Alan Burlison
>
> Core dump details below:
> {noformat}
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.85-b07 mixed mode 
> solaris-sparc compressed oops)
> # Problematic frame:
> # J 86 C2 
> org.apache.hadoop.io.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I
>  (273 bytes) @ 0x6fc9b150 [0x6fc9b0e0+0x70]
> Stack: [0x7e20,0x7e30],  sp=0x7e2fce50,  free 
> space=1011k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> J 86 C2 
> 

[jira] [Commented] (HADOOP-12696) Add Tests for S3FileSystem Contract

2016-01-18 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105667#comment-15105667
 ] 

Ravi Prakash commented on HADOOP-12696:
---

Also for s3n
{code}
---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 76.296 sec - in 
org.apache.hadoop.fs.contract.s3n.TestS3NContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.855 sec - in 
org.apache.hadoop.fs.contract.s3n.TestS3NContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.537 sec - 
in org.apache.hadoop.fs.contract.s3n.TestS3NContractRootDir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.402 sec - in 
org.apache.hadoop.fs.contract.s3n.TestS3NContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 43.637 sec - in 
org.apache.hadoop.fs.contract.s3n.TestS3NContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.472 sec - 
in org.apache.hadoop.fs.contract.s3n.TestS3NContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3n.TestS3NContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.469 sec - in 
org.apache.hadoop.fs.contract.s3n.TestS3NContractMkdir

Results :

Tests run: 47, Failures: 0, Errors: 0, Skipped: 3
{code}
The skipped tests
{code}
2016-01-15 16:29:49,262 INFO  contract.ContractTestUtils 
(ContractTestUtils.java:skip(424)) - Skipping: Object store allows a file to 
overwrite a directory
2016-01-15 16:29:55,553 INFO  contract.ContractTestUtils 
(ContractTestUtils.java:skip(424)) - Skipping: Filesystem is an object store 
and newly created files are not immediately visible
2016-01-15 16:29:58,982 INFO  contract.ContractTestUtils 
(ContractTestUtils.java:skip(424)) - Skipping: blobstores can't distinguish 
empty directories from files
{code}

For s3a
{code}
---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.099 sec - in 
org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.163 sec - in 
org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.361 sec - in 
org.apache.hadoop.fs.contract.s3a.TestS3AContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.973 sec - in 
org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.859 sec - in 
org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractSeek
Tests run: 10, Failures: 0, 

[jira] [Updated] (HADOOP-12721) Hadoop-tools jars should be included in the classpath of hadoop command

2016-01-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12721:
--
Hadoop Flags: Incompatible change

I'm marking this as an incompatible change.

> Hadoop-tools jars should be included in the classpath of hadoop command
> ---
>
> Key: HADOOP-12721
> URL: https://issues.apache.org/jira/browse/HADOOP-12721
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Attachments: HDFS-9656-v1.patch
>
>
> Currently, jars under the hadoop-tools dir are not included in the classpath 
> of the hadoop command, so we fail to execute commands against the wasb or s3 
> file systems.
> {quote}
> $ ./hdfs dfs -ls wasb://d...@demo.blob.core.windows.net/
> ls: No FileSystem for scheme: wasb
> {quote}
> A simple solution is to add those jars to the classpath of the commands. 
> Suggestions are welcome~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-12721) Hadoop-tools jars should be included in the classpath of hadoop command

2016-01-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved HDFS-9656 to HADOOP-12721:
-

Issue Type: Wish  (was: Bug)
   Key: HADOOP-12721  (was: HDFS-9656)
   Project: Hadoop Common  (was: Hadoop HDFS)

> Hadoop-tools jars should be included in the classpath of hadoop command
> ---
>
> Key: HADOOP-12721
> URL: https://issues.apache.org/jira/browse/HADOOP-12721
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Attachments: HDFS-9656-v1.patch
>
>
> Currently, jars under the hadoop-tools dir are not included in the classpath 
> of the hadoop command, so we fail to execute commands against the wasb or s3 
> file systems.
> {quote}
> $ ./hdfs dfs -ls wasb://d...@demo.blob.core.windows.net/
> ls: No FileSystem for scheme: wasb
> {quote}
> A simple solution is to add those jars to the classpath of the commands. 
> Suggestions are welcome~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12426:

Status: Open  (was: Patch Available)

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch
>
>
> If we had a little command line entry point for testing Kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?
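
For illustration, a few of these probes could look like the sketch below (illustrative only, not the attached patch):

{code}
import java.net.InetAddress;
import javax.crypto.Cipher;
import org.apache.hadoop.security.UserGroupInformation;

class KerberosChecksSketch {
  static void run() throws Exception {
    // JCE at full key length? 256-bit AES is the usual probe; a smaller
    // answer means the unlimited-strength policy files are missing.
    System.out.println("max AES key = " + Cipher.getMaxAllowedKeyLength("AES"));

    // "Do you know your own name?" -- basic DNS sanity.
    InetAddress local = InetAddress.getLocalHost();
    System.out.println("hostname = " + local.getCanonicalHostName());

    // Are hadoop security options on, and who is the current (kinit'd) user?
    System.out.println("secure = " + UserGroupInformation.isSecurityEnabled());
    System.out.println("user = " + UserGroupInformation.getCurrentUser());
  }
}
{code}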



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12426:

Assignee: Steve Loughran
  Status: Patch Available  (was: Open)

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch
>
>
> If we had a little command line entry point for testing Kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12426:

Attachment: HADOOP-12426-002.patch

Patch -002

# addresses findbugs issues
# addresses checkstyle issues related to line endings, encoding, and whitespace 
in statements
# pointedly refuses to comply with checkstyle's arbitrary policy about what 
constitutes valid indentation, on the basis that it is attempting to impose 
rules which are not in the Sun guidelines.

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch
>
>
> If we had a little command line entry point for testing Kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12721) Hadoop-tools jars should be included in the classpath of hadoop command

2016-01-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105747#comment-15105747
 ] 

Chris Nauroth commented on HADOOP-12721:


When I code reviewed HADOOP-11485, I tested it by writing my own shell profile 
to add the Azure jars to the default classpath.  This was only ~5 lines of code.

https://issues.apache.org/jira/browse/HADOOP-11485?focusedCommentId=14308135=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14308135

Perhaps we could address this by shipping the distro with a library of commonly 
useful shell profiles (azure, s3a, etc.).  They would be inactive by default 
for backwards-compatibility, but users could activate them quickly and easily 
by copying or symlinking.

> Hadoop-tools jars should be included in the classpath of hadoop command
> ---
>
> Key: HADOOP-12721
> URL: https://issues.apache.org/jira/browse/HADOOP-12721
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Attachments: HDFS-9656-v1.patch
>
>
> Currently, jars under the hadoop-tools dir are not included in the classpath 
> of the hadoop command, so we fail to execute commands against the wasb or s3 
> file systems.
> {quote}
> $ ./hdfs dfs -ls wasb://d...@demo.blob.core.windows.net/
> ls: No FileSystem for scheme: wasb
> {quote}
> A simple solution is to add those jars to the classpath of the commands. 
> Suggestions are welcome~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-01-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105754#comment-15105754
 ] 

Allen Wittenauer commented on HADOOP-12563:
---

bq. can we talk about the ability to have different formats now or do we have 
to talk about adding the ability in a follow up to this?

I'd prefer we cover them as a separate JIRA.  My plan was to only commit this 
to trunk, since we already know that some ecosystem projects (e.g., Spark) are 
doing questionable stuff like shading the Credentials class.  If everyone 
thinks the general framework here is OK, then let's commit this and spread 
out/move on to further enhancements such as adding other token formats.

(Yes, technically the changes here are "legally" compatible. But there have 
been enough surprising/damaging changes in branch-2 throughout its extremely 
long lifetime that unless one completely disregards the users, it's 
unconscionable to make the situation even worse by committing this there.)
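
For reference, today's token files are read and written through the existing {{Credentials}} API; a minimal sketch (the path is hypothetical):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

public class TokenFileSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/tmp/tokens.bin");   // hypothetical token file

    // Read a token file in the current format, which the new utility
    // must keep supporting for backward compatibility...
    Credentials creds = Credentials.readTokenStorageFile(file, conf);

    // ...and write it back out.
    creds.writeTokenStorageFile(file, conf);
  }
}
{code}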

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serialization, which is hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105991#comment-15105991
 ] 

Hadoop QA commented on HADOOP-12426:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 33s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 59 
new + 0 unchanged - 0 fixed = 59 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 59s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 40s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_91 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12782952/HADOOP-12426-003.patch
 |
| JIRA Issue | HADOOP-12426 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 51056509c464 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12356) CPU usage statistics on Windows

2016-01-18 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106108#comment-15106108
 ] 

Wangda Tan commented on HADOOP-12356:
-

Thanks [~cnauroth]. Manually kicked a Jenkins build; will commit once Jenkins 
comes back.

> CPU usage statistics on Windows
> ---
>
> Key: HADOOP-12356
> URL: https://issues.apache.org/jira/browse/HADOOP-12356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
> Environment: CPU: Intel Xeon
> OS: Windows server
>Reporter: Yunqi Zhang
>Assignee: Inigo Goiri
>  Labels: easyfix, newbie, patch
> Attachments: 0001-Correct-the-CPU-usage-calcualtion.patch, 
> 0001-Correct-the-CPU-usage-calcualtion.patch, HADOOP-12356-v3.patch, 
> HADOOP-12356-v4.patch, HADOOP-12356-v5.patch, HADOOP-12356-v6.patch, 
> HADOOP-12356-v7.patch, HADOOP-12356-v8.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The CPU usage information on Windows is computed incorrectly. The proposed 
> patch fixes the issue and unifies the interface with Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12426:

Attachment: HADOOP-12426-004.patch

Patch 004 fixes all the non-indentation-related checkstyle warnings.

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch, 
> HADOOP-12426-003.patch, HADOOP-12426-004.patch
>
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11505) Various native parts use bswap incorrectly and unportably

2016-01-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105949#comment-15105949
 ] 

Colin Patrick McCabe commented on HADOOP-11505:
---

I haven't looked at the build system in a while, but I believe it starts with a 
{{mvn clean}}.  That would wipe out {{config.h}}.  So saying that only 
{{hadoop-common}} can generate {{config.h}} is equivalent to saying every 
native build needs to start by building {{hadoop-common}}, whether or not 
anything in common changed.  Does that make sense?

bq. Is the problem that the current build infrastructure rebuilds it even when 
nothing has changed?  I've noticed that happening with other subcomponents.

No, that's not the problem.  Sorry if my explanation was confusing.

Basically, I am arguing that the savings in build time from generating 
{{config.h}} only once rather than multiple times is so small that it is not 
worth the extra complexity and cross-module dependencies.  And since 
hadoop-common takes a long time to build (about 20 seconds!), it would actually 
be much slower to have the cross-module dependency than to skip it.

> Various native parts use bswap incorrectly and unportably
> -
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Alan Burlison
> Fix For: 3.0.0
>
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
> HADOOP-11505.004.patch, HADOOP-11505.005.patch, HADOOP-11505.006.patch, 
> HADOOP-11505.007.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2016-01-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106097#comment-15106097
 ] 

ASF GitHub Bot commented on HADOOP-12587:
-

Github user steveloughran closed the pull request at:

https://github.com/apache/hadoop/pull/48


> Hadoop AuthToken refuses to work without a maxinactive attribute in issued 
> token
> 
>
> Key: HADOOP-12587
> URL: https://issues.apache.org/jira/browse/HADOOP-12587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
> Environment: OSX Heimdal Kerberos client against a Linux KDC, talking 
> to a Hadoop 2.6.0 cluster
>Reporter: Steve Loughran
>Assignee: Benoy Antony
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-12587-001.patch, HADOOP-12587-002.patch, 
> HADOOP-12587-003.patch
>
>
> If you don't have a max-inactive attribute in the auth token returned from 
> the web site, AuthToken will raise an exception. This stops callers without 
> this attribute from being able to submit jobs to a secure Hadoop 2.6 YARN 
> cluster with the timeline server enabled. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12426:

Status: Open  (was: Patch Available)

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch, 
> HADOOP-12426-003.patch
>
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12356) CPU usage statistics on Windows

2016-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106210#comment-15106210
 ] 

Hadoop QA commented on HADOOP-12356:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 5s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 46s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 56s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 29s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 9s 
{color} | {color:green} hadoop-gridmix in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 3s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s 
{color} | {color:green} hadoop-yarn-common in the patch passed with 

[jira] [Commented] (HADOOP-12722) Get rid of mockito-all 1.8.5

2016-01-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106247#comment-15106247
 ] 

Steve Loughran commented on HADOOP-12722:
-

Is it bundled as a redistributable? That shouldn't be needed, even if Hadoop 
stays on that version itself.

> Get rid of mockito-all 1.8.5
> 
>
> Key: HADOOP-12722
> URL: https://issues.apache.org/jira/browse/HADOOP-12722
> Project: Hadoop Common
>  Issue Type: Wish
>Affects Versions: 2.6.3
>Reporter: Paul Polishchuk
>
> Currently it is a big pain to write a custom 
> MapOutputCollector/ShuffleConsumerPlugin with mockito > 1.8.5, as it clashes 
> with the version bundled with Hadoop.
> It would be really nice to get rid of this dependency entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106164#comment-15106164
 ] 

Kai Zheng commented on HADOOP-12426:


Good work and a nice tool! Some comments; I hope they're helpful.

The tool could be improved further in follow-on tasks.
1. Looks like only the Oracle JVM is expected. Not sure how it will behave on 
other JVMs like the IBM JDK.
2. {{validateKrb5File}} could also be supported on Windows, since the krb5 conf 
file can be retrieved from JAVA_SECURITY_KRB5_CONF. But when that's null, it's 
certainly good to try, particularly for non-Windows machines.
3. A {{usage()}} function or the like would be nice to have. I know it's well 
documented here in the JIRA.
4. {{dumpKeytab}} could dump more than the principal names; information about 
keys, such as key type and key version, is sometimes also desired (see the 
sketch after this list).
5. A try-the-best model might be desired: not aborting immediately when hitting 
errors, but continuing to find more mismatch issues.
6. I wonder if the tool can also be used in clients, services and applications, 
being called at the very beginning and dumping the troubleshooting messages 
into the log (the security log?). If possible or desired, maybe the dump 
content could be returned instead of being written to {{System.out}}.
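
On point 4, a minimal sketch of such a key dump, using only the JDK's 
{{javax.security.auth.kerberos}} API (the class name and command-line arguments 
here are illustrative, not part of the patch):

{code}
import java.io.File;
import javax.security.auth.kerberos.KerberosKey;
import javax.security.auth.kerberos.KerberosPrincipal;
import javax.security.auth.kerberos.KeyTab;

public class KeytabKeyDump {
  public static void main(String[] args) {
    // args[0] = keytab path, args[1] = principal name (illustrative inputs)
    KeyTab keytab = KeyTab.getInstance(new File(args[0]));
    for (KerberosKey key : keytab.getKeys(new KerberosPrincipal(args[1]))) {
      // Key type (encryption type) and key version number, per point 4.
      System.out.printf("etype=%d kvno=%d%n",
          key.getKeyType(), key.getVersionNumber());
    }
  }
}
{code}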

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch, 
> HADOOP-12426-003.patch, HADOOP-12426-004.patch
>
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106241#comment-15106241
 ] 

Steve Loughran commented on HADOOP-12426:
-

Thanks for the comments. 

# You should know that I'm stabilising some Jenkins test-run failures on the 
slider branch: that test run fails if there's no default realm, i.e. you 
are testing on a machine that isn't set up for kerberos.
# ...I see what you mean about keytab contents, and that I can get at them. 
A timestamp would be good too.

> A try-the-best model might be desired

I see that; it's already handling the situation where security is off in 
core-site.xml but has been set on the command line. Checking principals and 
keytabs is something you can do without worrying about cluster security.

Maybe the {{failif()}} method could be made something that a {{--nofail}} 
option would downgrade to an error-log entry, and have it return a boolean so 
that follow-on operations which depend on the condition could be skipped. 

{code}
if (failif(!keytab.exists(), CAT_CONF, "no keytab %s", keytab)) {
  loginFromKeytab();
}
{code}

Of course, I'd have to invert the condition, to something like {{require(...)}}.
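
A minimal sketch of that inversion, assuming a {{nofail}} flag wired up from a 
hypothetical {{--nofail}} option (all names here are illustrative):

{code}
public class RequireSketch {
  private static final String CAT_CONF = "CONF";
  private static boolean nofail = true;   // would be set by --nofail

  // Inverse of failif(): true when the requirement holds; otherwise either
  // log and return false, or abort, depending on --nofail.
  static boolean require(boolean condition, String category,
                         String text, Object... args) {
    if (condition) {
      return true;
    }
    String message = "[" + category + "] " + String.format(text, args);
    if (nofail) {
      System.err.println(message);   // downgraded to an error-log entry
      return false;                  // caller skips dependent steps
    }
    throw new IllegalStateException(message);  // stand-in for the tool's own failure type
  }

  public static void main(String[] args) {
    java.io.File keytab = new java.io.File("/etc/krb5.keytab");
    if (require(keytab.exists(), CAT_CONF, "no keytab %s", keytab)) {
      System.out.println("would call loginFromKeytab() here");
    }
  }
}
{code}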

Regarding dumping, there's a {{--out}} option which can save it to a file. But 
as half the log info goes to stderr (all the sun.java stuff), you do need to 
capture both streams, ideally interleaved. And while I could briefly cache the 
System.out and System.err streams and replace them with something to catch the 
output, loggers really hate that.
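
A sketch of that stream-swapping trick in plain Java: it interleaves stdout and 
stderr into one buffer, but any logger that cached the original streams at 
startup will still bypass it:

{code}
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class CaptureBothStreams {
  public static void main(String[] args) {
    // Point both streams at one buffer so the output is interleaved.
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    PrintStream tee = new PrintStream(buf, true);
    PrintStream oldOut = System.out, oldErr = System.err;
    try {
      System.setOut(tee);
      System.setErr(tee);
      System.out.println("to stdout");
      System.err.println("to stderr");
    } finally {
      System.setOut(oldOut);    // always restore the real streams
      System.setErr(oldErr);
    }
    System.out.print(buf);      // the interleaved capture
  }
}
{code}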

As for startup, I think services would need to do the login stuff themselves. 
Once you start trying to log in, not only does UGI lock down, so do bits of 
the JVM's internal state (that is, {{UGI.reset()}} doesn't completely reset 
things). So I don't think I'd want to have it all there. 

What could be possible? 

* keylength (a check like the sketch below)
* keytab existing
* dump a keytab
* look for principal in a keytab
* All the relevant env vars and properties could be logged 
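
For the keylength item, the standard JCE check is small enough to inline (a 
sketch; KDiag may do this differently):

{code}
import javax.crypto.Cipher;

public class JceKeyLengthCheck {
  public static void main(String[] args) throws Exception {
    // With the unlimited-strength JCE policy installed this is Integer.MAX_VALUE;
    // a restricted JRE reports 128, which breaks AES-256 Kerberos tickets.
    System.out.println("Max AES key length: "
        + Cipher.getMaxAllowedKeyLength("AES"));
  }
}
{code}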

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch, 
> HADOOP-12426-003.patch, HADOOP-12426-004.patch
>
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-18 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106230#comment-15106230
 ] 

Kai Sasaki commented on HADOOP-12698:
-

It seems to be fixed thanks to YARN-4438. I'll close this.
https://issues.apache.org/jira/browse/YARN-4438

Thanks [~aw], [~wheat9] anyway.

> Set default Docker build uses JDK7
> --
>
> Key: HADOOP-12698
> URL: https://issues.apache.org/jira/browse/HADOOP-12698
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: build, docker
> Attachments: HADOOP-12698.01.patch
>
>
> The default JDK of the build environment created by {{start-build-env.sh}} is 
> JDK8, from HADOOP-12562. Since the current Hadoop trunk cannot be built with 
> JDK8 (HADOOP-11875), it is better to set the default JDK to JDK7 for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-18 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-12698:

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Set default Docker build uses JDK7
> --
>
> Key: HADOOP-12698
> URL: https://issues.apache.org/jira/browse/HADOOP-12698
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: build, docker
> Attachments: HADOOP-12698.01.patch
>
>
> The default JDK of the build environment created by {{start-build-env.sh}} is 
> JDK8, from HADOOP-12562. Since the current Hadoop trunk cannot be built with 
> JDK8 (HADOOP-11875), it is better to set the default JDK to JDK7 for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12722) Get rid of mockito-all 1.8.5

2016-01-18 Thread Paul Polishchuk (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106405#comment-15106405
 ] 

Paul Polishchuk commented on HADOOP-12722:
--

Could you please elaborate a bit?
Our ShuffleConsumerPlugin implementation is bundled into a jar with all its 
dependencies.

> Get rid of mockito-all 1.8.5
> 
>
> Key: HADOOP-12722
> URL: https://issues.apache.org/jira/browse/HADOOP-12722
> Project: Hadoop Common
>  Issue Type: Wish
>Affects Versions: 2.6.3
>Reporter: Paul Polishchuk
>
> Currently it is a big pain to write a custom 
> MapOutputCollector/ShuffleConsumerPlugin with mockito > 1.8.5, as it clashes 
> with the version bundled with Hadoop.
> It would be really nice to get rid of this dependency entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2016-01-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9969:
---
Status: Patch Available  (was: Open)

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta
> Environment: IBM JDK7
>Reporter: Yu Gao
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, the RPC client and Sasl client were changed to 
> respect the auth method advertised by the server, instead of blindly 
> attempting the configured one on the client side. However, when the TGT has 
> expired, an exception is thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType); at that point authMethod still holds its initial value, SIMPLE, 
> and never gets a chance to be updated with the one requested by the server, 
> so Kerberos relogin never happens.
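
For illustration only, the explicit relogin that long-running keytab-based 
services often perform before issuing RPCs, sidestepping the broken automatic 
path described above (a workaround sketch, not the attached patch):

{code}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public final class ReloginHelper {
  private ReloginHelper() {}

  /** Relogin from the keytab if the cached TGT is close to expiry. */
  public static void ensureFreshTgt() throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();
    ugi.checkTGTAndReloginFromKeytab();  // no-op while the TGT is still fresh
  }
}
{code}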



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12722) Get rid of mockito-all 1.8.5

2016-01-18 Thread Paul Polishchuk (JIRA)
Paul Polishchuk created HADOOP-12722:


 Summary: Get rid of mockito-all 1.8.5
 Key: HADOOP-12722
 URL: https://issues.apache.org/jira/browse/HADOOP-12722
 Project: Hadoop Common
  Issue Type: Wish
Affects Versions: 2.6.3
Reporter: Paul Polishchuk


Currently it is a big pain to write a custom 
MapOutputCollector/ShuffleConsumerPlugin with mockito > 1.8.5, as it clashes 
with the version bundled with Hadoop.

It would be really nice to get rid of this dependency entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-01-18 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105789#comment-15105789
 ] 

Larry McCay commented on HADOOP-12563:
--

That seems reasonable to me.

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serialization, which is hard or impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105826#comment-15105826
 ] 

Hadoop QA commented on HADOOP-12426:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 38s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 71 
new + 0 unchanged - 0 fixed = 71 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 16s 
{color} | {color:red} hadoop-common-project/hadoop-common generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 28s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 22s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Boxing/unboxing to parse a primitive 
org.apache.hadoop.security.KDiag.run(String[])  At 
KDiag.java:org.apache.hadoop.security.KDiag.run(String[])  At KDiag.java:[line 
146] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.security.KDiag.run(String[]):in 
org.apache.hadoop.security.KDiag.run(String[]): new 
java.io.PrintWriter(OutputStream)  At KDiag.java:[line 142] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  
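
For reference, the conventional fixes for the two findbugs warnings listed 
above (a sketch; the actual patch may resolve them differently):

{code}
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

public class FindbugsFixes {
  public static void main(String[] args) {
    // 1. "Boxing/unboxing to parse a primitive": parse straight to an int
    //    instead of going through Integer.valueOf(...).intValue().
    int keylen = Integer.parseInt("256");

    // 2. "Found reliance on default encoding": wrap the stream in a writer
    //    with an explicit charset before handing it to PrintWriter.
    PrintWriter out = new PrintWriter(
        new OutputStreamWriter(System.out, StandardCharsets.UTF_8), true);
    out.println("key length = " + keylen);
  }
}
{code}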

[jira] [Updated] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12426:

Status: Patch Available  (was: Open)

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch, 
> HADOOP-12426-003.patch
>
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12426:

Status: Open  (was: Patch Available)

Patch 002 didn't have the updated diffs in it.

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch, 
> HADOOP-12426-003.patch
>
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12426) Add Entry point for Kerberos health check

2016-01-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12426:

Attachment: HADOOP-12426-003.patch

Patch 003 has the improvements promised in 002.

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12426-001.patch, HADOOP-12426-002.patch, 
> HADOOP-12426-003.patch
>
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)