[jira] [Commented] (HADOOP-12821) Change "Auth successful" audit log level from info to debug

2016-02-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158491#comment-15158491
 ] 

Vinayakumar B commented on HADOOP-12821:


bq. We include -Dsecurity.audit.logger=WARN,your_favorite_appender
You mean {{-Dhadoop.security.logger}}?
I can see only that in log4j.properties, not {{-Dsecurity.audit.logger}}.
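For context, this is roughly how the security audit logger is wired in the log4j.properties that ships with Hadoop; a hedged sketch, since exact appender names vary between releases:

```properties
# Illustrative log4j.properties fragment. hadoop.security.logger is the
# variable the startup scripts override via -Dhadoop.security.logger=...
hadoop.security.logger=INFO,NullAppender
log4j.category.SecurityLogger=${hadoop.security.logger}
# Example file appender an operator might select instead of NullAppender
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hadoop.log.dir}/SecurityAuth-${user.name}.audit
```

There is no {{security.audit.logger}} variable in this file, which is the point of the question above.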

> Change "Auth successful" audit log level from info to debug
> ---
>
> Key: HADOOP-12821
> URL: https://issues.apache.org/jira/browse/HADOOP-12821
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Yong Zhang
>Assignee: Yong Zhang
>Priority: Minor
> Attachments: HADOOP-12821.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12821) Change "Auth successful" audit log level from info to debug

2016-02-22 Thread Yong Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158398#comment-15158398
 ] 

Yong Zhang commented on HADOOP-12821:
-

So if we change this log to DEBUG level, we don't need to worry about this confusion. 
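The gist of the proposed change can be sketched as follows; this is illustrative only (the class, logger name, and java.util.logging stand in for Hadoop's actual IPC server and logging framework, and are not the patch's code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hedged sketch: emit the "Auth successful" audit line at debug level
// instead of info, so successful auths no longer flood the audit log.
public class AuthAudit {
    private static final Logger AUDITLOG =
            Logger.getLogger("SecurityLogger.AuthAudit");
    static final String AUTH_SUCCESSFUL_FOR = "Auth successful for ";

    static String authMessage(String user) {
        return AUTH_SUCCESSFUL_FOR + user;
    }

    static void logAuthSuccess(String user) {
        // Before the patch this was logged at INFO; FINE plays the role
        // of DEBUG in java.util.logging.
        if (AUDITLOG.isLoggable(Level.FINE)) {
            AUDITLOG.fine(authMessage(user));
        }
    }
}
```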

> Change "Auth successful" audit log level from info to debug
> ---
>
> Key: HADOOP-12821
> URL: https://issues.apache.org/jira/browse/HADOOP-12821
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Yong Zhang
>Assignee: Yong Zhang
>Priority: Minor
> Attachments: HADOOP-12821.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-22 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158357#comment-15158357
 ] 

Aaron Fabbri commented on HADOOP-12666:
---

Thank you for your contributions.  This is a large patch with some dense spots,
which makes it hard for folks to find time to review it properly.  In the
future, please break the work up into multiple commits and associate patches
with JIRA subtasks.  This will make your life easier as well.

Summary of issues, this round:

1. There are still some parts I haven't carefully reviewed due to the size of
the patch.
2. FileStatusCacheManager seems to have local race conditions and zero
inter-node coherency.
3. There seems to be abuse of volatile / lack of locking in
BatchByteArrayInputStream.
4. How do Hadoop folks feel about this hadoop-tools/hadoop-azure-datalake code
declaring classes in the hadoop.hdfs.web package?  I feel it needs cleanup.
5. The config params still need to go into core-default.xml, with names made
lower case.

There are a bunch of other comments / questions inline below.  Search for "AF>"

{quote}


diff --git 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/FileStatusCacheManager.java
 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/FileStatusCacheManager.java
new file mode 100644
index 000..fd6a2ff
--- /dev/null
+++ 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/FileStatusCacheManager.java
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one

+ * ACID properties are maintained in overloaded api in @see
+ * PrivateAzureDataLakeFileSystem class.
+ */
+public final class FileStatusCacheManager {
+  private static final FileStatusCacheManager FILE_STATUS_CACHE_MANAGER = new
+  FileStatusCacheManager();
+  private Map syncMap = null;
+
+  /**
+   * Constructor.
+   */
+  private FileStatusCacheManager() {

AF> This class seems to have serious issues that need addressing:

1. Local race conditions in caller PrivateAzureDataLakeFileSystem
2. No mechanism for cache invalidation across nodes in the cluster.
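For illustration, a minimal thread-safe variant of the bounded LRU map under review could look like this (a sketch under assumptions, not the patch's code; key/value types are placeholders, and this only fixes races on the map itself, not check-then-act races in callers or cross-node invalidation):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch: bounded LRU cache with the map wrapped in a synchronized
// view so individual get/put/remove calls are atomic. The 5000-entry bound
// mirrors the MAX_ENTRIES constant in the patch.
public final class BoundedStatusCache<K, V> {
    private static final int MAX_ENTRIES = 5000;

    private final Map<K, V> map = Collections.synchronizedMap(
            new LinkedHashMap<K, V>(16, 0.75f, true) { // access-order = LRU
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > MAX_ENTRIES; // evict once over the bound
                }
            });

    public V get(K key) { return map.get(key); }
    public void put(K key, V value) { map.put(key, value); }
    public void remove(K key) { map.remove(key); }
    public int size() { return map.size(); }
}
```

Compound operations (e.g. "get, and if absent, fetch then put") would still need external synchronization or a ConcurrentMap-style computeIfAbsent.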

+LinkedHashMap map = new
+LinkedHashMap() {
+
+  private static final int MAX_ENTRIES = 5000;
+
+  @Override
+  protected boolean removeEldestEntry(Map.Entry eldest) {

diff --git 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/FileStatusCacheObject.java
 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/FileStatusCacheObject.java
new file mode 100644
index 000..5316443
--- /dev/null
+++ 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/FileStatusCacheObject.java
@@ -0,0 +1,59 @@

diff --git 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/PrivateAzureDataLake.java
 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/PrivateAzureDataLake.java
new file mode 100644
index 000..a0ca4a9
--- /dev/null
+++ 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/PrivateAzureDataLake.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one

+
+/**
+ * Create ADL filesystem delegation with Swebhdfs scheme. Intent to use by
+ * AdlFileSystem only.
+ */

AF> Update comment?  This uses "adl" scheme, right?

+public class PrivateAzureDataLake extends DelegateToFileSystem {
+  public static final int DEFAULT_PORT = 443;

AF> What is this class used for?  I didn't see any uses.

+
+  PrivateAzureDataLake(URI theUri, Configuration conf)
+  throws IOException, URISyntaxException {
+super(theUri, createFileSystem(conf), conf,
+PrivateAzureDataLakeFileSystem.SCHEME, false);

diff --git 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/PrivateAzureDataLakeFileSystem.java
 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/PrivateAzureDataLakeFileSystem.java
new file mode 100644
index 000..db4a83c
--- /dev/null
+++ 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/PrivateAzureDataLakeFileSystem.java
@@ -0,0 +1,1516 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one

+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.hdfs.web;
+
AF> Care to comment why this is in the ..hdfs.web package instead of fs.adl?
It lives in hadoop-tools/hadoop-azure-datalake in the source tree.

+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.BlockLocation;

+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * Extended @see SWebHdfsFileSystem API. This class contains Azure data lake
+ * specific stability, Reliability and performance improvement.
+ * 
+ * Motivation behind 

[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-22 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158253#comment-15158253
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


[~chris.douglas], [~cnauroth] and [~fabbri] - Do you have any further comments 
on the latest patch?

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-006.patch, 
> HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12825) Log slow name resolutions

2016-02-22 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158147#comment-15158147
 ] 

Sidharta Seethana commented on HADOOP-12825:


The test failure is unrelated to this patch.

> Log slow name resolutions 
> --
>
> Key: HADOOP-12825
> URL: https://issues.apache.org/jira/browse/HADOOP-12825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Attachments: HADOOP-12825.001.patch, HADOOP-12825.002.patch, 
> getByName-call-graph.txt
>
>
> Logging slow name resolutions would be useful in identifying DNS performance 
> issues in a cluster. Most resolutions go through 
> {{org.apache.hadoop.security.SecurityUtil.getByName}} ( see attached call 
> graph ). Adding additional logging to this method would expose such issues.
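The kind of instrumentation described above can be sketched as a timing wrapper around name resolution; the threshold, method name, and use of System.err are illustrative assumptions, not the actual {{SecurityUtil.getByName}} change:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hedged sketch: time each lookup and report it when it exceeds a
// threshold, so slow DNS shows up in the logs instead of as mystery
// latency elsewhere in the stack.
public class SlowResolveLogger {
    static final long SLOW_LOOKUP_THRESHOLD_MS = 1000; // illustrative

    public static InetAddress getByNameTimed(String host)
            throws UnknownHostException {
        long startNanos = System.nanoTime();
        try {
            return InetAddress.getByName(host);
        } finally {
            long elapsedMs = (System.nanoTime() - startNanos) / 1_000_000L;
            if (elapsedMs > SLOW_LOOKUP_THRESHOLD_MS) {
                // In Hadoop this would go through the class's real logger.
                System.err.println("Slow name resolution for " + host
                        + " took " + elapsedMs + " ms");
            }
        }
    }
}
```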



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158127#comment-15158127
 ] 

Hudson commented on HADOOP-12555:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9344 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9344/])
HADOOP-12555. WASB to read credentials from a credential provider. (cnauroth: 
rev 27b77751c1163ab4a1ce081a426e5190d1b8aff4)
* hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestWasbUriAndConfiguration.java
* hadoop-tools/hadoop-azure/src/site/markdown/index.md
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SimpleKeyProvider.java


> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch, HADOOP-12555-005.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158116#comment-15158116
 ] 

Larry McCay commented on HADOOP-12555:
--

Thanks, [~cnauroth]!

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch, HADOOP-12555-005.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12555:
---
Release Note: The hadoop-azure file system now supports configuration of 
the Azure Storage account credentials using the standard Hadoop Credential 
Provider API.  For details, please refer to the documentation on hadoop-azure 
and the Credential Provider API.
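As a hedged illustration of the workflow the release note describes, an operator would typically store the account key with the credential CLI and then point the cluster at the provider; the account name and provider path below are placeholders:

```
# Store the Azure Storage account key in a Java keystore provider
# (account name and jceks path are examples only):
hadoop credential create \
  fs.azure.account.key.youraccount.blob.core.windows.net \
  -provider jceks://hdfs@nn1.example.com:9001/user/hdfs/wasb.jceks
```

The same jceks URI then goes into {{hadoop.security.credential.provider.path}} in core-site.xml so WASB can resolve the key at runtime; see the hadoop-azure and Credential Provider API docs for the authoritative steps.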

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch, HADOOP-12555-005.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12555:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 for patch v005.  I have committed this to trunk, branch-2 and branch-2.8.  
[~lmccay], thank you for contributing this patch.

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch, HADOOP-12555-005.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12824) Collect network usage on the node in Windows

2016-02-22 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158099#comment-15158099
 ] 

Inigo Goiri commented on HADOOP-12824:
--

[~cnauroth], I think you were the one implementing the CPU and memory data 
collection for Windows; do you mind taking a look?
I couldn't find a better interface for collecting Network/Disk other than PDH.

> Collect network usage on the node in Windows
> 
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158055#comment-15158055
 ] 

Hadoop QA commented on HADOOP-12829:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 47s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 2s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 22s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.net.TestClusterTopology |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789086/HADOOP-12829.patch |
| JIRA Issue | HADOOP-12829 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9f779c158052 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158045#comment-15158045
 ] 

Hadoop QA commented on HADOOP-12555:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 16s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 35s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 20s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 44s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (HADOOP-12832) Implement unix-like 'FsShell -touch'

2016-02-22 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157985#comment-15157985
 ] 

Gera Shegalov commented on HADOOP-12832:


[~cmccabe], providing settimes sounds great. 

I would also argue that *touchz* should be deprecated and replaced by a more 
conventional *touch*.

> Implement unix-like 'FsShell -touch' 
> -
>
> Key: HADOOP-12832
> URL: https://issues.apache.org/jira/browse/HADOOP-12832
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>
> We needed to touch a bunch of files as in 
> https://en.wikipedia.org/wiki/Touch_(Unix) . 
> Because FsShell does not expose FileSystem#setTimes  , we had to do it 
> programmatically in Scalding REPL. Seems like it should not be this 
> complicated.
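The behavior being requested can be sketched in plain Java against a local path; a hedged sketch only — on HDFS the analogous call is {{FileSystem#setTimes(path, mtime, atime)}}, and the helper name here is illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

// Hedged sketch of what a Unix-like "FsShell -touch" would do: create the
// file if it does not exist, then set its modification time.
public class Touch {
    public static void touch(Path path, long mtimeMillis) throws IOException {
        if (Files.notExists(path)) {
            Files.createFile(path);           // the "touchz" half: create
        }
        // The setTimes half: update the modification time.
        Files.setLastModifiedTime(path, FileTime.fromMillis(mtimeMillis));
    }
}
```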



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12825) Log slow name resolutions

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157983#comment-15157983
 ] 

Hadoop QA commented on HADOOP-12825:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 19s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 17s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 33s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789067/HADOOP-12825.002.patch
 |
| JIRA Issue | HADOOP-12825 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  

[jira] [Commented] (HADOOP-12832) Implement unix-like 'FsShell -touch'

2016-02-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157930#comment-15157930
 ] 

Colin Patrick McCabe commented on HADOOP-12832:
---

Hmm.  Maybe we should just expose setTimes, rather than having another touch 
command.

> Implement unix-like 'FsShell -touch' 
> -
>
> Key: HADOOP-12832
> URL: https://issues.apache.org/jira/browse/HADOOP-12832
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>
> We needed to touch a bunch of files as in 
> https://en.wikipedia.org/wiki/Touch_(Unix) . 
> Because FsShell does not expose FileSystem#setTimes, we had to do it 
> programmatically in the Scalding REPL. It seems like it should not be this 
> complicated.
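The semantics FsShell would need are simple to state. A stand-alone sketch using java.nio stands in here; an actual FsShell implementation would call FileSystem#setTimes instead, and the file names and timestamp below are purely illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class Touch {
  // Create the file if absent, otherwise bump its modification time --
  // the unix-touch behavior an FsShell -touch built on
  // FileSystem#setTimes would mirror.
  static void touch(Path p, long millis) throws IOException {
    if (!Files.exists(p)) {
      Files.createFile(p);
    }
    Files.setLastModifiedTime(p, FileTime.fromMillis(millis));
  }

  public static void main(String[] args) throws IOException {
    Path p = Files.createTempFile("touch-demo", ".txt");
    touch(p, 1_000_000_000_000L);  // arbitrary fixed timestamp for the demo
    System.out.println(Files.getLastModifiedTime(p).toMillis());
    Files.delete(p);
  }
}
```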



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12668) Support excluding weak Ciphers in HttpServer2 through ssl-server.conf

2016-02-22 Thread Vijay Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157916#comment-15157916
 ] 

Vijay Singh commented on HADOOP-12668:
--

[~zhz]
Thanks a ton for all your suggestions and updating the license header file. 
Thank you again.

> Support excluding weak Ciphers in HttpServer2 through ssl-server.conf 
> --
>
> Key: HADOOP-12668
> URL: https://issues.apache.org/jira/browse/HADOOP-12668
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Vijay Singh
>Assignee: Vijay Singh
>Priority: Critical
>  Labels: common, ha, hadoop, hdfs, security
> Fix For: 2.8.0
>
> Attachments: Hadoop-12668.006.patch, Hadoop-12668.007.patch, 
> Hadoop-12668.008.patch, Hadoop-12668.009.patch, Hadoop-12668.010.patch, 
> Hadoop-12668.011.patch, Hadoop-12668.012.patch, test.log
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the embedded Jetty server used across all Hadoop services is 
> configured through the ssl-server.xml file from its respective configuration 
> section. However, the SSL/TLS protocol used by these Jetty servers can be 
> downgraded to weak cipher suites. This code change aims to add the following 
> functionality:
> 1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to 
> spawn Jetty servers with the ability to exclude weak cipher suites. I propose we 
> do this through ssl-server.xml so that each service can choose to disable 
> specific ciphers.
> 2) Modify DFSUtil.java used by HDFS code to supply the new parameter 
> ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude the 
> ciphers supplied through this key.
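For illustration, the ssl-server.xml knob described above might look like the following entry. The cipher suite names here are generic examples of suites commonly considered weak, not a list taken from the patch:

```xml
<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,
         SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
         SSL_RSA_WITH_DES_CBC_SHA</value>
  <description>Comma-separated list of cipher suites to exclude
  from the embedded Jetty server's SSL configuration.</description>
</property>
```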





[jira] [Updated] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-22 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-12829:

Attachment: HADOOP-12829.patch

Updated patch for Colin's comments.

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Attachments: HADOOP-12829.patch, HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fails after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in Hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.
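The interruption-respecting behavior argued for above can be sketched in plain Java. This is an illustrative stand-alone example, not the actual FileSystem$Statistics code: the cleaner restores the interrupt status and exits instead of logging and continuing.

```java
import java.lang.ref.ReferenceQueue;

public class InterruptibleCleaner implements Runnable {
  private final ReferenceQueue<Object> queue = new ReferenceQueue<>();

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        queue.remove();  // blocks until a reference is enqueued
        // ... clean up the resource associated with the reference ...
      } catch (InterruptedException e) {
        // Restore the interrupt status and exit, as PeerCache-style
        // threads do, rather than swallowing the exception.
        Thread.currentThread().interrupt();
        return;
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Thread t = new Thread(new InterruptibleCleaner());
    t.setDaemon(true);
    t.start();
    t.interrupt();   // request shutdown; the thread should terminate
    t.join(5000);
    System.out.println("alive=" + t.isAlive());
  }
}
```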





[jira] [Commented] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-22 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157866#comment-15157866
 ] 

Gregory Chanan commented on HADOOP-12829:
-

Thanks for taking a look [~cmccabe], will update the patch.

Also agree with sjlee, lowering the priority to minor.

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fails after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in Hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.





[jira] [Updated] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-22 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-12829:

Priority: Minor  (was: Major)

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Attachments: HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fails after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in Hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.





[jira] [Commented] (HADOOP-8065) distcp should have an option to compress data while copying.

2016-02-22 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157863#comment-15157863
 ] 

Ravi Prakash commented on HADOOP-8065:
--

Thanks for the initiative Stephen! Could you please rebase the patch against 
current trunk and ping me? I'm sorry you haven't received the necessary 
attention. I'll try to fix that.

> distcp should have an option to compress data while copying.
> 
>
> Key: HADOOP-8065
> URL: https://issues.apache.org/jira/browse/HADOOP-8065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.20.2
>Reporter: Suresh Antony
>Assignee: Stephen Veiss
>Priority: Minor
>  Labels: distcp
> Fix For: 0.20.2
>
> Attachments: HADOOP-8065-trunk_2015-11-03.patch, 
> HADOOP-8065-trunk_2015-11-04.patch, patch.distcp.2012-02-10
>
>
> We would like to compress the data while transferring it from our source 
> system to the target system. One way to do this is to write a map/reduce job 
> that compresses the data before/after it is transferred, but that looks 
> inefficient. 
> Since distcp is already reading and writing the data, it would be better if 
> it could compress it while doing so. 
> The flip side is that the distcp -update option cannot check the file size 
> before copying data; it can only check for the existence of the file. 
> So I propose that if the -compress option is given, the file size is not checked.
> Also, when we copy a file, the appropriate extension needs to be added 
> depending on the compression type.
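The compress-while-copying idea, including appending a codec extension to the target name, can be sketched with stdlib streams. Here java.util.zip stands in for Hadoop's compression codecs, and all names are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class CompressCopy {
  // The target gets the codec extension appended, since the copied
  // bytes are no longer the raw source bytes.
  static String targetName(String src, String ext) {
    return src + ext;
  }

  // Compress while copying: wrap the output stream in a codec stream,
  // so no second map/reduce pass over the data is needed.
  static void copyCompressed(InputStream in, OutputStream out) throws IOException {
    try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        gz.write(buf, 0, n);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] data = "hello distcp".getBytes(StandardCharsets.UTF_8);
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    copyCompressed(new ByteArrayInputStream(data), sink);
    // Round-trip to show the copy is lossless even though the sizes
    // differ (which is why -update could not compare file sizes).
    GZIPInputStream gunzip =
        new GZIPInputStream(new ByteArrayInputStream(sink.toByteArray()));
    System.out.println(new String(gunzip.readAllBytes(), StandardCharsets.UTF_8));
    System.out.println(targetName("part-00000", ".gz"));
  }
}
```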





[jira] [Commented] (HADOOP-12668) Support excluding weak Ciphers in HttpServer2 through ssl-server.conf

2016-02-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157852#comment-15157852
 ] 

Hudson commented on HADOOP-12668:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9342 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9342/])
HADOOP-12668. Support excluding weak Ciphers in HttpServer2 through (zhz: rev 
a2fdfff02daef85b651eda31e99868986aab5b28)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java
* hadoop-common-project/hadoop-common/src/main/conf/ssl-server.xml.example
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpCookieFlag.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java


> Support excluding weak Ciphers in HttpServer2 through ssl-server.conf 
> --
>
> Key: HADOOP-12668
> URL: https://issues.apache.org/jira/browse/HADOOP-12668
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Vijay Singh
>Assignee: Vijay Singh
>Priority: Critical
>  Labels: common, ha, hadoop, hdfs, security
> Fix For: 2.8.0
>
> Attachments: Hadoop-12668.006.patch, Hadoop-12668.007.patch, 
> Hadoop-12668.008.patch, Hadoop-12668.009.patch, Hadoop-12668.010.patch, 
> Hadoop-12668.011.patch, Hadoop-12668.012.patch, test.log
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the embedded Jetty server used across all Hadoop services is 
> configured through the ssl-server.xml file from its respective configuration 
> section. However, the SSL/TLS protocol used by these Jetty servers can be 
> downgraded to weak cipher suites. This code change aims to add the following 
> functionality:
> 1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to 
> spawn Jetty servers with the ability to exclude weak cipher suites. I propose we 
> do this through ssl-server.xml so that each service can choose to disable 
> specific ciphers.
> 2) Modify DFSUtil.java used by HDFS code to supply the new parameter 
> ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude the 
> ciphers supplied through this key.





[jira] [Commented] (HADOOP-11613) Remove commons-httpclient dependency from hadoop-azure

2016-02-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157853#comment-15157853
 ] 

Hudson commented on HADOOP-11613:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9342 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9342/])
HADOOP-11613. Remove commons-httpclient dependency from hadoop-azure. 
(cnauroth: rev d4f5fc23b208635e8f9a14c375d4101141aefa4a)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockStorageInterface.java


> Remove commons-httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Masatake Iwasaki
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.004.patch, HADOOP-11613.008.patch, 
> HADOOP-11613.009.patch, HADOOP-11613.05.patch, HADOOP-11613.06.patch, 
> HADOOP-11613.07.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.





[jira] [Updated] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-12555:
-
Status: Patch Available  (was: Open)

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch, HADOOP-12555-005.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.





[jira] [Updated] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-12555:
-
Attachment: HADOOP-12555-005.patch

v005 addresses review comments.

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch, HADOOP-12555-005.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.





[jira] [Updated] (HADOOP-12668) Support excluding weak Ciphers in HttpServer2 through ssl-server.conf

2016-02-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12668:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed v12 patch to trunk, branch-2, and branch-2.8. Thanks Vijay for the 
contribution!

> Support excluding weak Ciphers in HttpServer2 through ssl-server.conf 
> --
>
> Key: HADOOP-12668
> URL: https://issues.apache.org/jira/browse/HADOOP-12668
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Vijay Singh
>Assignee: Vijay Singh
>Priority: Critical
>  Labels: common, ha, hadoop, hdfs, security
> Fix For: 2.8.0
>
> Attachments: Hadoop-12668.006.patch, Hadoop-12668.007.patch, 
> Hadoop-12668.008.patch, Hadoop-12668.009.patch, Hadoop-12668.010.patch, 
> Hadoop-12668.011.patch, Hadoop-12668.012.patch, test.log
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the embedded Jetty server used across all Hadoop services is 
> configured through the ssl-server.xml file from its respective configuration 
> section. However, the SSL/TLS protocol used by these Jetty servers can be 
> downgraded to weak cipher suites. This code change aims to add the following 
> functionality:
> 1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to 
> spawn Jetty servers with the ability to exclude weak cipher suites. I propose we 
> do this through ssl-server.xml so that each service can choose to disable 
> specific ciphers.
> 2) Modify DFSUtil.java used by HDFS code to supply the new parameter 
> ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude the 
> ciphers supplied through this key.





[jira] [Updated] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-12555:
-
Status: Open  (was: Patch Available)

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.





[jira] [Updated] (HADOOP-12668) Support excluding weak Ciphers in HttpServer2 through ssl-server.conf

2016-02-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12668:
---
Summary: Support excluding weak Ciphers in HttpServer2 through 
ssl-server.conf   (was: Modify HDFS embeded jetty server logic in 
HttpServer2.java to exclude weak Ciphers through ssl-server.conf)

> Support excluding weak Ciphers in HttpServer2 through ssl-server.conf 
> --
>
> Key: HADOOP-12668
> URL: https://issues.apache.org/jira/browse/HADOOP-12668
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Vijay Singh
>Assignee: Vijay Singh
>Priority: Critical
>  Labels: common, ha, hadoop, hdfs, security
> Attachments: Hadoop-12668.006.patch, Hadoop-12668.007.patch, 
> Hadoop-12668.008.patch, Hadoop-12668.009.patch, Hadoop-12668.010.patch, 
> Hadoop-12668.011.patch, Hadoop-12668.012.patch, test.log
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the embedded Jetty server used across all Hadoop services is 
> configured through the ssl-server.xml file from its respective configuration 
> section. However, the SSL/TLS protocol used by these Jetty servers can be 
> downgraded to weak cipher suites. This code change aims to add the following 
> functionality:
> 1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to 
> spawn Jetty servers with the ability to exclude weak cipher suites. I propose we 
> do this through ssl-server.xml so that each service can choose to disable 
> specific ciphers.
> 2) Modify DFSUtil.java used by HDFS code to supply the new parameter 
> ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude the 
> ciphers supplied through this key.





[jira] [Updated] (HADOOP-12668) Modify HDFS embeded jetty server logic in HttpServer2.java to exclude weak Ciphers through ssl-server.conf

2016-02-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12668:
---
Attachment: Hadoop-12668.012.patch

Thanks [~SINGHVJD], +1 on the v11 patch except for the unnecessary license 
header change. I'm attaching v12 to address this. I will commit the patch soon.

> Modify HDFS embeded jetty server logic in HttpServer2.java to exclude weak 
> Ciphers through ssl-server.conf
> --
>
> Key: HADOOP-12668
> URL: https://issues.apache.org/jira/browse/HADOOP-12668
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Vijay Singh
>Assignee: Vijay Singh
>Priority: Critical
>  Labels: common, ha, hadoop, hdfs, security
> Attachments: Hadoop-12668.006.patch, Hadoop-12668.007.patch, 
> Hadoop-12668.008.patch, Hadoop-12668.009.patch, Hadoop-12668.010.patch, 
> Hadoop-12668.011.patch, Hadoop-12668.012.patch, test.log
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the embedded Jetty server used across all Hadoop services is 
> configured through the ssl-server.xml file from its respective configuration 
> section. However, the SSL/TLS protocol used by these Jetty servers can be 
> downgraded to weak cipher suites. This code change aims to add the following 
> functionality:
> 1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to 
> spawn Jetty servers with the ability to exclude weak cipher suites. I propose we 
> do this through ssl-server.xml so that each service can choose to disable 
> specific ciphers.
> 2) Modify DFSUtil.java used by HDFS code to supply the new parameter 
> ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude the 
> ciphers supplied through this key.





[jira] [Commented] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157811#comment-15157811
 ] 

Larry McCay commented on HADOOP-12555:
--

Will do!

Credenitals. :)

Good catch.

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.





[jira] [Commented] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157784#comment-15157784
 ] 

Chris Nauroth commented on HADOOP-12555:


Hi [~lmccay].  This looks great.  I'd just like to request a couple of minor 
changes:

# Please change {{SimpleKeyProvider#LOG}} from {{public}} to {{private}}.
# Please correct the typo in "credentials" in index.md: 
{code}
In addition to using the credential provider framework to protect your 
credenitals, it's
{code}


> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.





[jira] [Updated] (HADOOP-11613) Remove commons-httpclient dependency from hadoop-azure

2016-02-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11613:
---
Fix Version/s: 2.8.0

> Remove commons-httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Masatake Iwasaki
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.004.patch, HADOOP-11613.008.patch, 
> HADOOP-11613.009.patch, HADOOP-11613.05.patch, HADOOP-11613.06.patch, 
> HADOOP-11613.07.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.





[jira] [Updated] (HADOOP-11613) Remove commons-httpclient dependency from hadoop-azure

2016-02-22 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11613:
---
  Resolution: Fixed
Hadoop Flags: Reviewed  (was: Incompatible change)
  Status: Resolved  (was: Patch Available)

+1 for patch v009.  I committed this to trunk, branch-2 and branch-2.8.  
Masatake, thank you for contributing the patch.  Thanks also Brahma, Akira and 
Wei-Chiu for participation.

I have removed the Incompatible Change flag.  The final patch only changed 
implementation details in test code, so there is no compatibility concern.

> Remove commons-httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Masatake Iwasaki
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.004.patch, HADOOP-11613.008.patch, 
> HADOOP-11613.009.patch, HADOOP-11613.05.patch, HADOOP-11613.06.patch, 
> HADOOP-11613.07.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.





[jira] [Updated] (HADOOP-12825) Log slow name resolutions

2016-02-22 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated HADOOP-12825:
---
Attachment: HADOOP-12825.002.patch

Thanks, [~ste...@apache.org] - I didn't realize that the constructors were 
deprecated in a recent version of guava. I have made the changes you mentioned 
and uploaded a new patch - could you please take a look?

thanks,
-Sidharta

> Log slow name resolutions 
> --
>
> Key: HADOOP-12825
> URL: https://issues.apache.org/jira/browse/HADOOP-12825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Attachments: HADOOP-12825.001.patch, HADOOP-12825.002.patch, 
> getByName-call-graph.txt
>
>
> Logging slow name resolutions would be useful in identifying DNS performance 
> issues in a cluster. Most resolutions go through 
> {{org.apache.hadoop.security.SecurityUtil.getByName}} ( see attached call 
> graph ). Adding additional logging to this method would expose such issues.
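The instrumentation described above could be sketched as a thin wrapper around name resolution that reports lookups slower than a threshold. This is an illustrative plain-JDK sketch only; the class name, threshold value, and logging style are assumptions, and the actual patch instruments {{org.apache.hadoop.security.SecurityUtil.getByName}} rather than calling {{InetAddress}} directly.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative sketch: wrap name resolution and report slow lookups.
// The threshold below is a hypothetical value, not the patch's.
public class SlowResolveSketch {
  static final long SLOW_LOOKUP_THRESHOLD_MS = 1000;

  public static InetAddress getByName(String host) throws UnknownHostException {
    long start = System.nanoTime();
    try {
      return InetAddress.getByName(host);
    } finally {
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      if (elapsedMs > SLOW_LOOKUP_THRESHOLD_MS) {
        // A real implementation would use the class's logger instead.
        System.err.println("Slow name resolution for " + host
            + ": took " + elapsedMs + " ms");
      }
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(getByName("localhost").getHostAddress());
  }
}
```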





[jira] [Commented] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-22 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157717#comment-15157717
 ] 

Wei-Chiu Chuang commented on HADOOP-12711:
--

I'm still working on it. The issue is that there's no good equivalent of 
URIUtil.encodeWithinPath(), which is used to implement 
ServletUtil.encodeQueryValue(). But I have a plan to use URIBuilder() to 
achieve the same thing and remove ServletUtil.encodeQueryValue() entirely. I 
should be able to post a patch soon.
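For reference, one dependency-free way to percent-encode a query value is the multi-argument {{java.net.URI}} constructor, which quotes illegal characters in the query component. This is only a sketch of the idea; the comment above plans to use httpclient's URIBuilder instead, and the class and method names below are illustrative.

```java
import java.net.URI;
import java.net.URISyntaxException;

// Illustrative only: percent-encode a query value using java.net.URI,
// roughly what URIUtil.encodeWithinPath was used for here.
public class QueryEncodeSketch {
  public static String encodeQueryValue(String value) throws URISyntaxException {
    // The (scheme, authority, path, query, fragment) constructor quotes
    // characters that are not legal in the query component.
    URI uri = new URI(null, null, null, value, null);
    return uri.getRawQuery();
  }

  public static void main(String[] args) throws Exception {
    System.out.println(encodeQueryValue("a b"));
  }
}
```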

> Remove dependency on commons-httpclient for ServletUtil
> ---
>
> Key: HADOOP-12711
> URL: https://issues.apache.org/jira/browse/HADOOP-12711
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12711.001.patch
>
>
> This is a branch-2 only change, as ServletUtil for trunk removes the code 
> that depends on commons-httpclient.
> We need to retire the use of commons-httpclient in Hadoop to address the 
> security concern in CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> {noformat}
> import org.apache.commons.httpclient.URIException;
> import org.apache.commons.httpclient.util.URIUtil;
> {noformat}





[jira] [Commented] (HADOOP-12690) Consolidate access of sun.misc.Unsafe

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157594#comment-15157594
 ] 

Hadoop QA commented on HADOOP-12690:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 25s {color} 
| {color:red} root-jdk1.8.0_72 with JDK v1.8.0_72 generated 6 new + 731 
unchanged - 9 fixed = 737 total (was 740) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 56s 
{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 6 new + 726 
unchanged - 9 fixed = 732 total (was 735) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 11s 
{color} | {color:red} root: patch generated 4 new + 141 unchanged - 2 fixed = 
145 total (was 143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 32s 
{color} | {color:green} hadoop-common-project_hadoop-common-jdk1.7.0_95 with 
JDK v1.7.0_95 generated 0 new + 12 unchanged - 1 fixed = 12 total (was 13) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 9s 
{color} | 

[jira] [Created] (HADOOP-12832) Implement unix-like 'FsShell -touch'

2016-02-22 Thread Gera Shegalov (JIRA)
Gera Shegalov created HADOOP-12832:
--

 Summary: Implement unix-like 'FsShell -touch' 
 Key: HADOOP-12832
 URL: https://issues.apache.org/jira/browse/HADOOP-12832
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.6.4
Reporter: Gera Shegalov


We needed to touch a bunch of files as in 
https://en.wikipedia.org/wiki/Touch_(Unix) . 

Because FsShell does not expose FileSystem#setTimes, we had to do it 
programmatically in the Scalding REPL. It seems like it should not be this 
complicated.
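A local-filesystem analogue of unix touch, assuming an FsShell -touch would mirror POSIX semantics (create the file if missing, otherwise update its modification time via FileSystem#setTimes). The Hadoop wiring is not shown; this plain-JDK sketch only illustrates the behavior being requested.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

// Sketch of unix-like touch semantics on the local filesystem.
public class TouchSketch {
  public static void touch(Path p) throws IOException {
    if (Files.notExists(p)) {
      // touch creates an empty file when the target does not exist.
      Files.createFile(p);
    } else {
      // Otherwise it only bumps the modification time.
      Files.setLastModifiedTime(p, FileTime.fromMillis(System.currentTimeMillis()));
    }
  }
}
```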





[jira] [Commented] (HADOOP-12830) Bash environment for quick command operations

2016-02-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157464#comment-15157464
 ] 

Allen Wittenauer commented on HADOOP-12830:
---

bq. A malicious root user can attack more directly with "su  
-c". I think the attack from root is unavoidable.

Not necessarily.  If Kerberos is enabled, keys are stored locked in 
memory, etc., then su isn't guaranteed to work.

The more I think about this patch, the more I think making it a separate 
executable shell script makes it harder.  If this is merged into the main 
hadoop script, then not only is access to the functions easier and common env 
vars guaranteed, but there's also no question about which hadoop was used to 
trigger it.

Also, rather than using flock, why not just use the pid file with status 
support?  Sure, it's not as rock solid as flock, but it is much more 
portable, especially if you un-GNU the mkfifo command and actually use a POSIX 
command line.  That should make this function work pretty much everywhere.

> Bash environment for quick command operations
> -
>
> Key: HADOOP-12830
> URL: https://issues.apache.org/jira/browse/HADOOP-12830
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin
>Reporter: Kazuho Fujii
>Assignee: Kazuho Fujii
> Attachments: HADOOP-12830.001.patch
>
>
> Hadoop file system shell commands are slow. This issue is about building a 
> shell environment for quick command operations.
> An interactive shell was previously attempted in HADOOP-6541, but it 
> seemed poor because users are used to powerful shells like bash. This 
> issue is not about creating a new shell, but just about opening a new bash 
> process. Therefore, users can operate commands as before.
> {code}
> fjk@x240:~/hadoop-2.7.2$ ./bin/hadoop shell
> fjk@x240 hadoop> hadoop fs -ls /
> Found 2 items
> -rw-r--r--   3 fjk supergroup  0 2016-02-21 00:26 /file1
> -rw-r--r--   3 fjk supergroup  0 2016-02-21 00:26 /file2
> {code}
> The shell has a mini daemon process that lives until the shell is closed. 
> The hadoop fs command delegates the operation to the daemon; they communicate 
> via named pipes. The daemon performs the operation and returns the result to 
> the command.
> In this shell, hadoop fs command operations become quick. In a local 
> environment, the "hadoop fs -ls" command is about 100 times faster than the 
> normal command.
> {code}
> fjk@x240 hadoop> time hadoop fs -ls hdfs://localhost:8020/ > /dev/null
> real  0m0.021s
> user  0m0.003s
> sys   0m0.011s
> {code}
> Using bash's programmable completion, commands and file names are automatically completed.
> {code}
> fjk@x240 hadoop> hadoop fs -ch
> -checksum  -chgrp -chmod -chown
> fjk@x240 hadoop> hadoop fs -ls /file
> /file1  /file2  /file3
> {code}
> Additionally, we can make equivalents of bash built-in commands, e.g., cd and 
> umask. In this shell they can work because the daemon remembers the state.





[jira] [Updated] (HADOOP-12690) Consolidate access of sun.misc.Unsafe

2016-02-22 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12690:

Attachment: HADOOP-12690-v3.1.patch

The v3 patch was missing an import, which would cause a build failure. Updated 
it in the v3.1 patch.

> Consolidate access of sun.misc.Unsafe 
> --
>
> Key: HADOOP-12690
> URL: https://issues.apache.org/jira/browse/HADOOP-12690
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-12690-v2.1.patch, HADOOP-12690-v2.patch, 
> HADOOP-12690-v3.1.patch, HADOOP-12690-v3.patch, HADOOP-12690.patch
>
>
> Per discussion in Hadoop-12630 
> (https://issues.apache.org/jira/browse/HADOOP-12630?focusedCommentId=15082142=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15082142),
>  we found the access of sun.misc.Unsafe could be problematic for some JVMs in 
> other platforms. Also, hints from other comments, it is better to consolidate 
> it as a helper/utility method to shared with several places 
> (FastByteComparisons, NativeIO, ShortCircuitShm). 





[jira] [Commented] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157433#comment-15157433
 ] 

Hadoop QA commented on HADOOP-12829:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 43s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 8s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12788805/HADOOP-12829.patch |
| JIRA Issue | HADOOP-12829 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1b3a38ba5b74 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 

[jira] [Updated] (HADOOP-12690) Consolidate access of sun.misc.Unsafe

2016-02-22 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12690:

Attachment: HADOOP-12690-v3.patch

> Consolidate access of sun.misc.Unsafe 
> --
>
> Key: HADOOP-12690
> URL: https://issues.apache.org/jira/browse/HADOOP-12690
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-12690-v2.1.patch, HADOOP-12690-v2.patch, 
> HADOOP-12690-v3.patch, HADOOP-12690.patch
>
>
> Per discussion in Hadoop-12630 
> (https://issues.apache.org/jira/browse/HADOOP-12630?focusedCommentId=15082142=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15082142),
>  we found the access of sun.misc.Unsafe could be problematic for some JVMs in 
> other platforms. Also, hints from other comments, it is better to consolidate 
> it as a helper/utility method to shared with several places 
> (FastByteComparisons, NativeIO, ShortCircuitShm). 





[jira] [Commented] (HADOOP-12690) Consolidate access of sun.misc.Unsafe

2016-02-22 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157428#comment-15157428
 ] 

Junping Du commented on HADOOP-12690:
-

Thanks for the review, [~cmccabe]. That's a reasonable concern, and I 
incorporated it in the v3 patch. Also fixed some whitespace issues raised by Kai.

> Consolidate access of sun.misc.Unsafe 
> --
>
> Key: HADOOP-12690
> URL: https://issues.apache.org/jira/browse/HADOOP-12690
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-12690-v2.1.patch, HADOOP-12690-v2.patch, 
> HADOOP-12690.patch
>
>
> Per discussion in Hadoop-12630 
> (https://issues.apache.org/jira/browse/HADOOP-12630?focusedCommentId=15082142=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15082142),
>  we found the access of sun.misc.Unsafe could be problematic for some JVMs in 
> other platforms. Also, hints from other comments, it is better to consolidate 
> it as a helper/utility method to shared with several places 
> (FastByteComparisons, NativeIO, ShortCircuitShm). 





[jira] [Commented] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-22 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157413#comment-15157413
 ] 

Sangjin Lee commented on HADOOP-12829:
--

I am +1 with the change proposed here. That said, I'd like to add a little more 
context to this.

I agree that as a rule one should always restore the interrupt upon catching 
an {{InterruptedException}}, in order to keep the thread interruptible. 
However, in this particular case the issue becomes a bit academic. This thread 
is private to the {{FileSystem}} class, meaning that one cannot easily obtain 
a reference to it and interrupt it explicitly. It is also a daemon thread, so 
it will not hold up the process when the process is terminating. Those two 
facts combined make an uninterruptible daemon thread acceptable. In a non-test 
scenario, interrupting this thread should *not* happen. So in that sense, I 
don't think this is a major issue either way (and I would recommend lowering 
the priority of the issue to minor).

The most important goal here is to ensure that this thread does *NOT* 
terminate under any other condition (exceptions or errors), as that would have 
catastrophic consequences for memory; the patch still preserves that behavior.
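The behavior discussed above can be sketched with a generic daemon-thread loop: restore the interrupt status and stop on {{InterruptedException}}, but keep running through any other exception. The names here are generic illustrations, not Hadoop's actual classes, and the sleep stands in for the blocking {{ReferenceQueue#remove}} call.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative cleaner-style daemon loop: interruptible, but resilient to
// every other exception so cleanup never silently dies.
public class CleanerSketch implements Runnable {
  private final CountDownLatch stopped = new CountDownLatch(1);

  @Override
  public void run() {
    while (true) {
      try {
        Thread.sleep(10); // stands in for ReferenceQueue#remove blocking
      } catch (InterruptedException ie) {
        System.err.println("Cleaner thread interrupted, will stop");
        Thread.currentThread().interrupt(); // restore the interrupt status
        break;
      } catch (Throwable t) {
        // Any other error must not kill the thread; cleanup keeps running.
        System.err.println("Exception in cleaner thread, continuing: " + t);
      }
    }
    stopped.countDown();
  }

  public boolean awaitStop(long millis) throws InterruptedException {
    return stopped.await(millis, TimeUnit.MILLISECONDS);
  }
}
```

Interrupting such a thread makes it exit promptly, which is exactly what a test-suite thread-leak checker needs.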


> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107 swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fails after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in hadoop, i.e. 
> PeerCache respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.





[jira] [Updated] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-12555:
-
Status: Open  (was: Patch Available)

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.





[jira] [Updated] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-12555:
-
Status: Patch Available  (was: Open)

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.





[jira] [Commented] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157353#comment-15157353
 ] 

Larry McCay commented on HADOOP-12555:
--

I don't think the failure is related to this patch in any way.

bq. [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-common: There was a timeout or other error in the fork -> [Help 
1]

I'll try and resubmit the patch and see what happens.

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.





[jira] [Commented] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157348#comment-15157348
 ] 

Colin Patrick McCabe commented on HADOOP-12829:
---

Thank you, [~gchanan].  I can't think of any reason why this thread should 
swallow {{InterruptedException}} without a trace.  It is not performing any 
operations that should inherently generate an {{InterruptedException}}, as far 
as I can see, and if someone else sends one we ought to... interrupt the thread.

{code}
+  } catch (InterruptedException ie) {
+LOG.warn("cleaner thread interrupted, will stop");
+Thread.currentThread().interrupt();
{code}
Can you include the stack trace so that this exception has a better chance of 
getting noticed?  Also, capitalize the error?  +1 pending those changes

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107 swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fails after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in hadoop, i.e. 
> PeerCache respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.





[jira] [Commented] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2016-02-22 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157288#comment-15157288
 ] 

Eric Badger commented on HADOOP-8717:
-

Starting a pseudo-distributed cluster on Mac OS X El Capitan using the 
sbin/start-yarn.sh script encounters the JAVA_HOME issue on Hadoop 2.7.3. The 
culprit is the slaves.sh script, which is called by yarn-daemons.sh in a for 
loop over all of the slave nodes. slaves.sh ssh's into each slave machine and 
then runs the sbin/yarn-daemon.sh script to start the NM; in pseudo-distributed 
mode, this will just be localhost. Running sbin/yarn-daemon.sh manually to 
start the NM on localhost works fine, but running it indirectly through 
start-yarn.sh -> yarn-daemons.sh -> slaves.sh seems to unset or clear 
JAVA_HOME. I have not dug too far into the NM code, but it would make sense 
for JAVA_HOME to be unset/cleared, since the NM fails when starting containers 
because it tries to run "/bin/java" instead of "/somepath/bin/java". 

Steps to reproduce the failure. This will fail with exit code 127 due to the 
container failing to start.
{noformat}
$HADOOP_PREFIX/bin/hdfs namenode -format;
$HADOOP_PREFIX/sbin/start-dfs.sh;
$HADOOP_PREFIX/sbin/start-yarn.sh;
{noformat}

Running the following sleep job will give an error indicating that the 
container failed to start.
{noformat}
$HADOOP_PREFIX/bin/hadoop jar 
$HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-${HADOOP_VERSION}-tests.jar
 sleep -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1 -rt 1
{noformat}

{noformat}
2016-02-22 10:55:58,142 INFO  [main] mapreduce.Job 
(Job.java:monitorAndPrintJob(1449)) - Job job_1456160109510_0001 failed with 
state FAILED due to: Application application_1456160109510_0001 failed 3 times 
due to AM Container for appattempt_1456160109510_0001_03 exited with  
exitCode: 127
For more detailed output, check application tracking 
page:http://localhost:8088/cluster/app/application_1456160109510_0001Then, 
click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e53_1456160109510_0001_03_01
Exit code: 127
Stack trace: ExitCodeException exitCode=127: 
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 127
Failing this attempt. Failing the application.
2016-02-22 10:55:58,158 INFO  [main] mapreduce.Job 
(Job.java:monitorAndPrintJob(1454)) - Counters: 0
{noformat}

Steps that set up pseudo-distributed mode correctly. A sleep job run against 
this setup will succeed. 
{noformat}
$HADOOP_PREFIX/bin/hdfs namenode -format;
$HADOOP_PREFIX/sbin/start-dfs.sh;
$HADOOP_PREFIX/sbin/yarn-daemon.sh start resourcemanager;
$HADOOP_PREFIX/sbin/yarn-daemon.sh start nodemanager;
{noformat}

> JAVA_HOME detected in hadoop-config.sh under OS X does not work
> ---
>
> Key: HADOOP-8717
> URL: https://issues.apache.org/jira/browse/HADOOP-8717
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, scripts
> Environment: OS: Darwin 11.4.0 Darwin Kernel Version 11.4.0: Mon Apr  
> 9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64
> java version "1.6.0_33"
> Java(TM) SE Runtime Environment (build 1.6.0_33-b03-424-11M3720)
> Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03-424, mixed mode)
>Reporter: Jianbin Wei
>Assignee: Jianbin Wei
>Priority: Minor
>  Labels: newbie, scripts
> Attachments: HADOOP-8717.patch, HADOOP-8717.patch, HADOOP-8717.patch, 
> HADOOP-8717.patch
>
>
> After setting up a single node hadoop on mac, copy some text file to it and 
> run
> $ hadoop jar 
> ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar  
> wordcount /file.txt output
> It reports
> 12/08/21 15:32:18 INFO Job.java:mapreduce.Job:1265: Running job: 
> job_1345588312126_0001
> 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1286: Job 
> job_1345588312126_0001 running in uber mode : false
> 

[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157280#comment-15157280
 ] 

Hadoop QA commented on HADOOP-12827:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 38s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 39s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 55m 54s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-22 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157269#comment-15157269
 ] 

Xiaoyu Yao commented on HADOOP-12827:
-

One disadvantage of a configurable timeout key is that it is hard to tune. But 
I agree it is complementary to retries when the default timeout does not work.

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-22 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157243#comment-15157243
 ] 

Xiaoyu Yao commented on HADOOP-12827:
-

Thanks [~and1000] for reporting the issue and proposing the fix. We have seen 
similar customer issues recently and considered an approach similar to the one 
you proposed here. We worked around them with the existing webhdfs retry 
mechanism introduced by HDFS-5219/HDFS-5122, which requires no code changes 
and is orthogonal to the approach proposed here. 

The patch looks good to me overall. I like the use of a time unit suffix 
instead of hard-coding the duration to ms or seconds. Below are some of my 
suggestions:

1. Should we document the new keys in hdfs-default.xml?
2. Your manual test results look good to me. Can you add them as unit tests by 
enhancing TestWebHdfsTimeouts with the new configuration keys?
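The time-unit-suffix style mentioned above ("30s", "2m") can be illustrated with a small standalone parser. This is only a hedged sketch for illustration; Hadoop's real mechanism is Configuration.getTimeDuration, not this helper.

```java
import java.util.concurrent.TimeUnit;

public class DurationParser {
    /**
     * Parse strings like "500ms", "30s", "2m", "1h" into milliseconds.
     * A bare number is assumed to already be in milliseconds.
     */
    public static long toMillis(String value) {
        String v = value.trim().toLowerCase();
        if (v.endsWith("ms")) {
            return Long.parseLong(v.substring(0, v.length() - 2));
        } else if (v.endsWith("s")) {
            return TimeUnit.SECONDS.toMillis(Long.parseLong(v.substring(0, v.length() - 1)));
        } else if (v.endsWith("m")) {
            return TimeUnit.MINUTES.toMillis(Long.parseLong(v.substring(0, v.length() - 1)));
        } else if (v.endsWith("h")) {
            return TimeUnit.HOURS.toMillis(Long.parseLong(v.substring(0, v.length() - 1)));
        }
        return Long.parseLong(v);
    }
}
```

The "ms" check must come before the "s" check, since every "ms" value also ends in "s"; suffix-based keys avoid the ambiguity of a bare number whose unit the reader has to guess.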


> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12831) LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum set to 0

2016-02-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12831:

Summary: LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum  
set to 0  (was: FSOutputSummer NPEs in ctor if bytes per checksum  set to 0)

> LocalFS/FSOutputSummer NPEs in constructor if bytes per checksum  set to 0
> --
>
> Key: HADOOP-12831
> URL: https://issues.apache.org/jira/browse/HADOOP-12831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> If you set the number of bytes per checksum to zero, 
> {code}
> conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
> {code}
> then create a "file://" instance, you get to see a stack trace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12711:
-
Affects Version/s: 2.8.0

> Remove dependency on commons-httpclient for ServletUtil
> ---
>
> Key: HADOOP-12711
> URL: https://issues.apache.org/jira/browse/HADOOP-12711
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12711.001.patch
>
>
> This is a branch-2 only change, as ServletUtil for trunk removes the code 
> that depends on commons-httpclient.
> We need to retire the use of commons-httpclient in Hadoop to address the 
> security concern in CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> {noformat}
> import org.apache.commons.httpclient.URIException;
> import org.apache.commons.httpclient.util.URIUtil;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12831) FSOutputSummer NPEs in ctor if bytes per checksum set to 0

2016-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157199#comment-15157199
 ] 

Steve Loughran commented on HADOOP-12831:
-

I had hoped this would be a way to disable checksumming and buffering on file:, 
but instead I found a new way to break things:
{code}
 contains 1 event(s)
java.lang.NullPointerException
at org.apache.hadoop.fs.FSOutputSummer.<init>(FSOutputSummer.java:54)
at 
org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:390)
at 
org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
at 
org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:917)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:898)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:795)
at 
org.apache.hadoop.yarn.client.api.impl.FileSystemTimelineWriter$LogFD.createLogFileStream(FileSystemTimelineWriter.java:406)
at 
org.apache.hadoop.yarn.client.api.impl.FileSystemTimelineWriter$LogFD.prepareForWrite(FileSystemTimelineWriter.java:386)
at 
org.apache.hadoop.yarn.client.api.impl.FileSystemTimelineWriter$LogFD.<init>(FileSystemTimelineWriter.java:363)
at 
org.apache.hadoop.yarn.client.api.impl.FileSystemTimelineWriter$EntityLogFD.<init>(FileSystemTimelineWriter.java:329)
at 
org.apache.hadoop.yarn.client.api.impl.FileSystemTimelineWriter$LogFDsCache.createSummaryFDAndWrite(FileSystemTimelineWriter.java:842)
at 
org.apache.hadoop.yarn.client.api.impl.FileSystemTimelineWriter$LogFDsCache.writeSummmaryEntityLogs(FileSystemTimelineWriter.java:826)
at 
org.apache.hadoop.yarn.client.api.impl.FileSystemTimelineWriter$LogFDsCache.writeSummaryEntityLogs(FileSystemTimelineWriter.java:805)
at 
org.apache.hadoop.yarn.client.api.impl.FileSystemTimelineWriter.putEntities(FileSystemTimelineWriter.java:222)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:658)
at 
org.apache.spark.deploy.history.yarn.YarnHistoryService.postOneEntity(YarnHistoryService.scala:825)
at 
org.apache.spark.deploy.history.yarn.YarnHistoryService.org$apache$spark$deploy$history$yarn$YarnHistoryService$$postEntities(YarnHistoryService.scala:899)
at 
org.apache.spark.deploy.history.yarn.YarnHistoryService$EntityPoster.run(YarnHistoryService.scala:1105)
{code}
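The likely failure mode and an obvious hardening can be sketched with hypothetical classes (these are stand-ins, not the real FSOutputSummer/DataChecksum code): a factory returns null for an invalid chunk size, the constructor dereferences the result, and a precondition check would turn the opaque NPE into a clear error.

```java
public class SummerSketch {
    interface Checksum {
        int getBytesPerChecksum();
    }

    // Hypothetical factory: returns null when the size is invalid, the way a
    // checksum factory might silently reject bytesPerChecksum == 0.
    static Checksum newChecksum(int bytesPerChecksum) {
        if (bytesPerChecksum <= 0) {
            return null;
        }
        return () -> bytesPerChecksum;
    }

    final byte[] buf;

    SummerSketch(int bytesPerChecksum) {
        Checksum sum = newChecksum(bytesPerChecksum);
        // Without this guard, the buffer allocation below would throw an NPE
        // that says nothing about the bad configuration value.
        if (sum == null) {
            throw new IllegalArgumentException(
                "bytes per checksum must be > 0, got " + bytesPerChecksum);
        }
        buf = new byte[sum.getBytesPerChecksum() * 9];
    }
}
```

Validating the configuration value at construction time would fail fast with an actionable message instead of the stack trace above.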

> FSOutputSummer NPEs in ctor if bytes per checksum  set to 0
> ---
>
> Key: HADOOP-12831
> URL: https://issues.apache.org/jira/browse/HADOOP-12831
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> If you set the number of bytes per checksum to zero, 
> {code}
> conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
> {code}
> then create a "file://" instance, you get to see a stack trace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12831) FSOutputSummer NPEs in ctor if bytes per checksum set to 0

2016-02-22 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12831:
---

 Summary: FSOutputSummer NPEs in ctor if bytes per checksum  set to 0
 Key: HADOOP-12831
 URL: https://issues.apache.org/jira/browse/HADOOP-12831
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


If you set the number of bytes per checksum to zero, 
{code}
conf.setInt(LocalFileSystemConfigKeys.LOCAL_FS_BYTES_PER_CHECKSUM_KEY, 0)
{code}
then create a "file://" instance, you get to see a stack trace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157148#comment-15157148
 ] 

Hadoop QA commented on HADOOP-12555:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 31s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 26s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Timed out junit tests | 

[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-22 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Status: Patch Available  (was: Open)

Here's a patch which fixes this issue.

Testing: no new test code added, since this only adds two config options.

Manual testing: I tested the following scenarios:
 * New config not present.  Client timeout verified unchanged at 60s.
 * New config present: 30s for connect timeout, 2m for read timeout:
 - WebHdfs server not listening => client timeout at 30s as expected.
 - WebHdfs server up, but modified to stall data => client timeout at 2m as 
expected.
 - WebHdfs server up, operating normally => client operates normally.
 * Also tested with distCp. Before the patch, some transfers would time out; 
after the patch, with a longer (30m) read timeout set, distCp completes 
without timeouts.
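The mechanism these scenarios exercise can be sketched as follows. This is illustrative only; the patch's actual configuration keys and wiring may differ.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutSetter {
    /**
     * Open an HTTP connection with configurable connect and read timeouts,
     * in place of fixed 60-second defaults.
     */
    public static HttpURLConnection open(URL url, int connectMillis, int readMillis)
            throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(connectMillis);  // e.g. 30s for connect
        conn.setReadTimeout(readMillis);        // e.g. 2m for reads
        return conn;
    }
}
```

Note that openConnection does not touch the network, so the timeouts can be set and inspected before any connect attempt; a zero value means an infinite timeout in java.net semantics.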

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11613) Remove commons-httpclient dependency from hadoop-azure

2016-02-22 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157081#comment-15157081
 ] 

Akira AJISAKA commented on HADOOP-11613:


+1, thanks Masatake.

> Remove commons-httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Masatake Iwasaki
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.004.patch, HADOOP-11613.008.patch, 
> HADOOP-11613.009.patch, HADOOP-11613.05.patch, HADOOP-11613.06.patch, 
> HADOOP-11613.07.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-12555:
-
Status: Patch Available  (was: Open)

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-22 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Attachment: HADOOP-12827.001.patch

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-12555:
-
Status: Open  (was: Patch Available)

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12555) WASB to read credentials from a credential provider

2016-02-22 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-12555:
-
Attachment: HADOOP-12555-004.patch

Added logging of potential IOE from getPassword.
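The lookup-with-logging pattern can be sketched like this. The Conf interface is a hypothetical stand-in for Hadoop's Configuration/credential-provider APIs, not the patch's actual code.

```java
import java.io.IOException;

public class CredentialLookup {
    interface Conf {
        char[] getPassword(String key) throws IOException; // provider-backed
        String get(String key);                            // plain config
    }

    /**
     * Prefer the credential provider; log (rather than swallow) a provider
     * failure and fall back to the plain configuration value.
     */
    public static String resolveAccountKey(Conf conf, String key) {
        try {
            char[] pw = conf.getPassword(key);
            if (pw != null) {
                return new String(pw);
            }
        } catch (IOException e) {
            // Logging here keeps a broken provider path from failing the
            // filesystem silently while still allowing plain-config use.
            System.err.println("Credential provider lookup failed for "
                + key + ": " + e);
        }
        return conf.get(key);
    }
}
```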

> WASB to read credentials from a credential provider
> ---
>
> Key: HADOOP-12555
> URL: https://issues.apache.org/jira/browse/HADOOP-12555
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12555-001.patch, HADOOP-12555-002.patch, 
> HADOOP-12555-003.patch, HADOOP-12555-004.patch
>
>
> As HADOOP-12548 is going to do for s3, WASB should be able to read a password 
> from a credential provider.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11613) Remove commons-httpclient dependency from hadoop-azure

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156817#comment-15156817
 ] 

Hadoop QA commented on HADOOP-11613:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 25s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12788990/HADOOP-11613.009.patch
 |
| JIRA Issue | HADOOP-11613 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a11da055e769 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5e7d4d5 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Updated] (HADOOP-11613) Remove commons-httpclient dependency from hadoop-azure

2016-02-22 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11613:
--
Attachment: HADOOP-11613.009.patch

I attached 009 based on Akira's review comment.

> Remove commons-httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Masatake Iwasaki
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.004.patch, HADOOP-11613.008.patch, 
> HADOOP-11613.009.patch, HADOOP-11613.05.patch, HADOOP-11613.06.patch, 
> HADOOP-11613.07.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12830) Bash environment for quick command operations

2016-02-22 Thread Kazuho Fujii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156780#comment-15156780
 ] 

Kazuho Fujii commented on HADOOP-12830:
---

[~aw], thank you very much for the many suggestions. I will rewrite the source code.

I am worried about security issues, but I do not think the fifo itself can be a 
security hole: it has mode 0600 and its parent directory has mode 0700.
{code}
  chmod 700 ${HSH_TMP_DIR}
{code}
{code}
  mkfifo --mode 600 ${fifo_names}
{code}
A malicious root user can attack more directly with "su  -c"; 
I think attacks from root are unavoidable in any case.
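
The permission claim above can be reproduced in isolation. The snippet below is a toy illustration (the directory and fifo names are made up here; the patch uses {{HSH_TMP_DIR}} and its own fifo names):
{code}
# Sketch of the described setup: a private scratch directory that only
# the owner may traverse, containing a fifo only the owner may use.
dir="$(mktemp -d)"
chmod 700 "${dir}"                # drwx------ : owner-only access
mkfifo -m 600 "${dir}/request"    # prw------- : owner read/write only

# On GNU stat this prints the octal mode and file type of each path.
stat -c '%a %F' "${dir}" "${dir}/request"
rm -r "${dir}"
{code}
With these modes, no other non-root user can open or even list the fifo, which is the basis of the argument that only root could interfere.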



> Bash environment for quick command operations
> -
>
> Key: HADOOP-12830
> URL: https://issues.apache.org/jira/browse/HADOOP-12830
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin
>Reporter: Kazuho Fujii
>Assignee: Kazuho Fujii
> Attachments: HADOOP-12830.001.patch
>
>
> Hadoop file system shell commands are slow. This issue is about building a 
> shell environment for quick command operations.
> A previous attempt at an interactive shell was made in HADOOP-6541, but it 
> fell short because users are accustomed to powerful shells like bash. This 
> issue is not about creating a new shell, just about opening a new bash 
> process, so users can run commands as before.
> {code}
> fjk@x240:~/hadoop-2.7.2$ ./bin/hadoop shell
> fjk@x240 hadoop> hadoop fs -ls /
> Found 2 items
> -rw-r--r--   3 fjk supergroup  0 2016-02-21 00:26 /file1
> -rw-r--r--   3 fjk supergroup  0 2016-02-21 00:26 /file2
> {code}
> The shell has a mini daemon process that lives until the shell is closed. 
> The hadoop fs command delegates each operation to the daemon; they 
> communicate over named pipes. The daemon performs the operation and returns 
> the result to the command.
> In this shell, hadoop fs operations become quick. In a local environment, 
> the "hadoop fs -ls" command is about 100 times faster than the normal 
> command.
> {code}
> fjk@x240 hadoop> time hadoop fs -ls hdfs://localhost:8020/ > /dev/null
> real  0m0.021s
> user  0m0.003s
> sys   0m0.011s
> {code}
> Using bash's completion facilities, commands and file names are automatically completed.
> {code}
> fjk@x240 hadoop> hadoop fs -ch
> -checksum  -chgrp -chmod -chown
> fjk@x240 hadoop> hadoop fs -ls /file
> /file1  /file2  /file3
> {code}
> Additionally, we can make equivalents of bash built-in commands, e.g., cd and 
> umask. In this shell they can work because the daemon remembers the state.
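
The request/reply flow the description outlines can be sketched with plain named pipes. This is a toy single-request version, not the patch's actual protocol; the fifo names and message format here are invented for illustration:
{code}
# Toy daemon/client round trip over named pipes (illustrative only).
dir="$(mktemp -d)"
mkfifo -m 600 "${dir}/req" "${dir}/resp"

# "Daemon": read one command from the request fifo, answer on the
# response fifo, then exit.
( read -r cmd < "${dir}/req"
  printf 'handled: %s\n' "${cmd}" > "${dir}/resp" ) &

# "Client": write the command, then block until the reply arrives.
printf 'ls /\n' > "${dir}/req"
read -r reply < "${dir}/resp"
echo "${reply}"                   # prints "handled: ls /"

wait
rm -r "${dir}"
{code}
Because fifo opens block until both ends are present, the client and daemon rendezvous without any polling, which is what makes the per-command overhead so small compared to starting a new JVM.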





[jira] [Commented] (HADOOP-12825) Log slow name resolutions

2016-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156753#comment-15156753
 ] 

Steve Loughran commented on HADOOP-12825:
-

Not with the Guava stopwatch; that's one of the classes that isn't forwards 
compatible. Use {{org.apache.hadoop.util.StopWatch}} instead.

> Log slow name resolutions 
> --
>
> Key: HADOOP-12825
> URL: https://issues.apache.org/jira/browse/HADOOP-12825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Attachments: HADOOP-12825.001.patch, getByName-call-graph.txt
>
>
> Logging slow name resolutions would be useful in identifying DNS performance 
> issues in a cluster. Most resolutions go through 
> {{org.apache.hadoop.security.SecurityUtil.getByName}} ( see attached call 
> graph ). Adding additional logging to this method would expose such issues.





[jira] [Commented] (HADOOP-12825) Log slow name resolutions

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156709#comment-15156709
 ] 

Hadoop QA commented on HADOOP-12825:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 35s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 3 
new + 105 unchanged - 0 fixed = 108 total (was 105) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 19s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 6s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 3s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12788806/HADOOP-12825.001.patch
 |
| JIRA Issue | HADOOP-12825 |
| 

[jira] [Commented] (HADOOP-9946) NumAllSinks metrics shows lower value than NumActiveSinks

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156680#comment-15156680
 ] 

Hadoop QA commented on HADOOP-9946:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 39s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 30s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 7s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12788641/HADOOP-9946.02.patch |
| JIRA Issue | HADOOP-9946 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d79d0279845e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5e7d4d5 |
| 

[jira] [Commented] (HADOOP-12825) Log slow name resolutions

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156645#comment-15156645
 ] 

Hadoop QA commented on HADOOP-12825:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 3 
new + 105 unchanged - 0 fixed = 108 total (was 105) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 27s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 41s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 14s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12788806/HADOOP-12825.001.patch
 |
| JIRA Issue | HADOOP-12825 |
| Optional Tests | 

[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-02-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156637#comment-15156637
 ] 

Hadoop QA commented on HADOOP-9613:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 31 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
4s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 56s 
{color} | {color:red} root-jdk1.8.0_72 with JDK v1.8.0_72 generated 1 new + 740 
unchanged - 0 fixed = 741 total (was 740) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 20m 51s 
{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 735 
unchanged - 0 fixed = 736 total (was 735) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | 

[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2016-02-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156632#comment-15156632
 ] 

Kai Zheng commented on HADOOP-12090:


Just made a proposal suggesting that the related code be rebased on Apache 
Kerby, since the Apache Directory project has shifted its Kerberos effort to 
that sub-project.

> minikdc-related unit tests fail consistently on some platforms
> --
>
> Key: HADOOP-12090
> URL: https://issues.apache.org/jira/browse/HADOOP-12090
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch
>
>
> On some platforms all unit tests that use minikdc fail consistently. Those 
> tests include TestKMS, TestSaslDataTransfer, 
> TestTimelineAuthenticationFilter, etc.
> Typical failures on the unit tests:
> {noformat}
> java.lang.AssertionError: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Cannot get a 
> KDC reply)
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
> {noformat}
> The errors that cause this failure on the KDC server on the minikdc are a 
> NullPointerException:
> {noformat}
> org.apache.mina.filter.codec.ProtocolDecoderException: 
> java.lang.NullPointerException: message (Hexdump: ...)
>   at org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
>   at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
>   at org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
>   at org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
>   at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
>   at org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
>   at org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
>   at org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
>   at org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
>   at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
>   at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException: message
>   at org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
>   at org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
>   at org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12163) Add xattr APIs to the FileSystem specification

2016-02-22 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-12163:
--
Assignee: (was: Brahma Reddy Battula)

> Add xattr APIs to the FileSystem specification
> --
>
> Key: HADOOP-12163
> URL: https://issues.apache.org/jira/browse/HADOOP-12163
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Arpit Agarwal
>  Labels: newbie
>
> The following xattr APIs should be added to the [FileSystem 
> specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]
> # setXAttr(Path path, String name, byte[] value)
> # setXAttr(Path path, String name, byte[] value, EnumSet<XAttrSetFlag> flag)
> # getXAttr(Path path, String name)
> # Map<String, byte[]> getXAttrs(Path path, List<String> names)
> # listXAttrs(Path path)
> # removeXAttr(Path path, String name)
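The trickiest behavior for the spec to pin down is the {{EnumSet<XAttrSetFlag>}} overload of setXAttr. A minimal, self-contained model of that semantics (an in-memory map standing in for a path's attribute store; illustrative only, not Hadoop code):

```java
import java.util.*;

// Illustrative in-memory model of FileSystem xattr semantics; not Hadoop code.
public class XAttrModel {
    enum XAttrSetFlag { CREATE, REPLACE }

    private final Map<String, byte[]> xattrs = new HashMap<>();

    // setXAttr: CREATE is required to add a new attr, REPLACE to overwrite
    // an existing one; passing both allows either case.
    void setXAttr(String name, byte[] value, EnumSet<XAttrSetFlag> flag) {
        boolean exists = xattrs.containsKey(name);
        if (exists && !flag.contains(XAttrSetFlag.REPLACE))
            throw new IllegalArgumentException("xattr already exists: " + name);
        if (!exists && !flag.contains(XAttrSetFlag.CREATE))
            throw new IllegalArgumentException("no such xattr: " + name);
        xattrs.put(name, value);
    }

    byte[] getXAttr(String name) { return xattrs.get(name); }

    List<String> listXAttrs() { return new ArrayList<>(xattrs.keySet()); }

    void removeXAttr(String name) { xattrs.remove(name); }

    public static void main(String[] args) {
        XAttrModel m = new XAttrModel();
        m.setXAttr("user.tag", "v1".getBytes(), EnumSet.of(XAttrSetFlag.CREATE));
        m.setXAttr("user.tag", "v2".getBytes(), EnumSet.of(XAttrSetFlag.REPLACE));
        System.out.println(new String(m.getXAttr("user.tag"))); // prints v2
        System.out.println(m.listXAttrs().size());              // prints 1
    }
}
```

A specification written against this model would state, for each operation, the precondition on the attribute's existence and the postcondition on the store.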



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12162) Add ACL APIs to the FileSystem specification

2016-02-22 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-12162:
--
Assignee: (was: Brahma Reddy Battula)

> Add ACL APIs to the FileSystem specification
> 
>
> Key: HADOOP-12162
> URL: https://issues.apache.org/jira/browse/HADOOP-12162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Arpit Agarwal
>  Labels: newbie
>
> The following ACL APIs should be added to the [FileSystem 
> specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]
> # modifyAclEntries
> # removeAclEntries
> # removeDefaultAcl
> # removeAcl
> # setAcl
> # getAclStatus 
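One invariant the spec would need to make explicit is the difference between removeDefaultAcl and removeAcl: the former strips only DEFAULT-scoped entries and leaves ACCESS entries intact. A small illustrative model of that rule (plain Java, not Hadoop's AclEntry class):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative model of one ACL invariant; not Hadoop code.
public class AclModel {
    enum Scope { ACCESS, DEFAULT }

    static final class AclEntry {
        final Scope scope;
        final String who;
        AclEntry(Scope scope, String who) { this.scope = scope; this.who = who; }
    }

    private List<AclEntry> acl = new ArrayList<>();

    void setAcl(List<AclEntry> entries) { acl = new ArrayList<>(entries); }

    // removeDefaultAcl drops every DEFAULT-scoped entry; ACCESS entries survive.
    void removeDefaultAcl() { acl.removeIf(e -> e.scope == Scope.DEFAULT); }

    List<AclEntry> getAclStatus() { return Collections.unmodifiableList(acl); }

    public static void main(String[] args) {
        AclModel m = new AclModel();
        m.setAcl(List.of(new AclEntry(Scope.ACCESS, "user:alice"),
                         new AclEntry(Scope.DEFAULT, "user:alice")));
        m.removeDefaultAcl();
        System.out.println(m.getAclStatus().size());         // prints 1
        System.out.println(m.getAclStatus().get(0).scope);   // prints ACCESS
    }
}
```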



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9893) Ticket cache support for MiniKdc

2016-02-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156625#comment-15156625
 ] 

Kai Zheng commented on HADOOP-9893:
---

Hi [~ichattopadhyaya],
Sorry for the late reply; my plan to address this depends on some other work landing first. I'm not sure the code will be upgraded to the latest ApacheDS, since the Directory project has shifted its Kerberos effort to a standalone sub-project, [Apache Kerby|https://github.com/apache/directory-kerby]. I have proposed to the community that we rebase the relevant code on Kerby and hope that happens. If so, this issue can be addressed easily, as Kerby has good support for this.

> Ticket cache support for MiniKdc
> 
>
> Key: HADOOP-9893
> URL: https://issues.apache.org/jira/browse/HADOOP-9893
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> As discussed in HADOOP-9881, it would be good to support ticket cache 
> generation utilizing MiniKdc, which can be used to test some Kerberos cases 
> for UGI regarding user login via kinit or ticket cache. Currently it's not 
> supported and this issue is to implement it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12072) conftest raises a false alarm over the fair scheduler configuration file

2016-02-22 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-12072:
--
Assignee: (was: Brahma Reddy Battula)

> conftest raises a false alarm over the fair scheduler configuration file
> 
>
> Key: HADOOP-12072
> URL: https://issues.apache.org/jira/browse/HADOOP-12072
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kengo Seki
>
> The hadoop conftest subcommand validates the XML files in ${HADOOP_CONF_DIR} by 
> default, and assumes the root element of each XML file is <configuration>.
> But it is common to place the fair scheduler configuration file at 
> ${HADOOP_CONF_DIR}/fair-scheduler.xml, and its root element is <allocations>, 
> so conftest raises a false alarm.
> {code}
> [sekikn@mobile hadoop-3.0.0-SNAPSHOT]$ bin/hadoop conftest
> /Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/capacity-scheduler.xml:
>  valid
> /Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/core-site.xml:
>  valid
> /Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/fair-scheduler.xml:
>   bad conf file: top-level element not <configuration>
> /Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/hadoop-policy.xml:
>  valid
> /Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/hdfs-site.xml:
>  valid
> /Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/httpfs-site.xml:
>  valid
> /Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/kms-acls.xml:
>  valid
> /Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/kms-site.xml:
>  valid
> /Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/mapred-site.xml:
>  valid
> /Users/sekikn/hadoop/hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/yarn-site.xml:
>  valid
> Invalid file exists
> {code}
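The check conftest applies is essentially "is the top-level element <configuration>?". A self-contained sketch of that check (hypothetical helper, not the actual conftest source), showing why a file rooted at <allocations> is flagged:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class RootElementCheck {
    // True if the XML document's root element is <configuration> -- the only
    // root conftest accepts. Hypothetical helper, not the conftest source.
    static boolean hasConfigurationRoot(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return "configuration".equals(doc.getDocumentElement().getTagName());
    }

    public static void main(String[] args) throws Exception {
        // A core-site.xml-style file is accepted.
        System.out.println(hasConfigurationRoot(
                "<configuration><property/></configuration>")); // prints true
        // fair-scheduler.xml's root is <allocations>, so it is (falsely) flagged.
        System.out.println(hasConfigurationRoot(
                "<allocations><queue name=\"q\"/></allocations>")); // prints false
    }
}
```

A fix along the lines proposed here would either skip known non-site files or let the caller supply the expected root element.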



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)