[jira] [Commented] (HADOOP-11467) KerberosAuthenticator can connect to a non-secure cluster

2015-02-10 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315728#comment-14315728
 ] 

Robert Kanter commented on HADOOP-11467:


+1

I'll commit this in two days if nobody says anything.

> KerberosAuthenticator can connect to a non-secure cluster
> -
>
> Key: HADOOP-11467
> URL: https://issues.apache.org/jira/browse/HADOOP-11467
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HADOOP-11467.001.patch, HADOOP-11467.002.patch, 
> HADOOP-11467.003.patch, HADOOP-11467.004.patch
>
>
> While looking at HADOOP-10895, we discovered that the 
> {{KerberosAuthenticator}} can authenticate with a non-secure cluster, even 
> without falling back.
> The problematic code is here:
> {code:java}
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {  // <- A
>     LOG.debug("JDK performed authentication on our behalf.");
>     // If the JDK already did the SPNEGO back-and-forth for
>     // us, just pull out the token.
>     AuthenticatedURL.extractToken(conn, token);
>     return;
>   } else if (isNegotiate()) {  // <- B
>     LOG.debug("Performing our own SPNEGO sequence.");
>     doSpnegoSequence(token);
>   } else {  // <- C
>     LOG.debug("Using fallback authenticator sequence.");
>     Authenticator auth = getFallBackAuthenticator();
>     // Make sure that the fall back authenticator have the same
>     // ConnectionConfigurator, since the method might be overridden.
>     // Otherwise the fall back authenticator might not have the information
>     // to make the connection (e.g., SSL certificates)
>     auth.setConnectionConfigurator(connConfigurator);
>     auth.authenticate(url, token);
>   }
> }
> {code}
> Sometimes the JVM does the SPNEGO for us, and path A is used.  However, path 
> A also succeeds when the {{KerberosAuthenticator}} talks to a non-secure 
> cluster.
> More details can be found in this comment:
> https://issues.apache.org/jira/browse/HADOOP-10895?focusedCommentId=14247476&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14247476
> We've actually dealt with this before.  HADOOP-8883 tried to fix a related 
> problem by adding another condition to path A that would look for a header.  
> However, the JVM hides this header, making path A never occur.  We reverted 
> this change in HADOOP-10078, and didn't realize that there was still a 
> problem until now.
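To make the three branches concrete, here is a self-contained model of the quoted logic (the class and method names are illustrative, not the real Hadoop API). It shows why a plain HTTP 200 from a non-secure cluster is indistinguishable from path A:

```java
import java.net.HttpURLConnection;

// Illustrative model of the quoted branch, not the real KerberosAuthenticator.
public class AuthPathModel {
    public enum Path { A_JDK_SPNEGO, B_OWN_SPNEGO, C_FALLBACK }

    // Mirrors the branch: path A wins whenever the response is 200,
    // regardless of whether any Kerberos exchange actually happened.
    public static Path choose(int responseCode, boolean negotiateChallenged) {
        if (responseCode == HttpURLConnection.HTTP_OK) {
            return Path.A_JDK_SPNEGO;   // <- A: also hit by a non-secure cluster
        } else if (negotiateChallenged) {
            return Path.B_OWN_SPNEGO;   // <- B
        } else {
            return Path.C_FALLBACK;     // <- C
        }
    }
}
```

Since a non-secure cluster answers the unauthenticated probe with 200 and no Negotiate challenge, choose(200, false) still lands on path A, which is exactly the reported bug.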



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10181) GangliaContext does not work with multicast ganglia setup

2015-02-10 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315720#comment-14315720
 ] 

Todd Lipcon commented on HADOOP-10181:
--

Sorry for the late reply, I was on vacation the last few weeks. The patch looks 
fine to me. Thanks for reviewing and committing, Chris.

> GangliaContext does not work with multicast ganglia setup
> -
>
> Key: HADOOP-10181
> URL: https://issues.apache.org/jira/browse/HADOOP-10181
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.6.0
>Reporter: Andrew Otto
>Assignee: Andrew Johnson
>Priority: Minor
>  Labels: ganglia, hadoop, metrics, multicast
> Fix For: 2.7.0
>
> Attachments: HADOOP-10181.001.patch, HADOOP-10181.002.patch, 
> HADOOP-10181.003.patch
>
>
> The GangliaContext class which is used to send Hadoop metrics to Ganglia uses 
> a DatagramSocket to send these metrics.  This works fine for Ganglia 
> multicast setups that are all on the same VLAN.  However, when working with 
> multiple VLANs, a packet sent via DatagramSocket to a multicast address will 
> end up with a TTL of 1.  Multicast TTL indicates the number of network hops 
> for which a particular multicast packet is valid.  The packets sent by 
> GangliaContext do not make it to ganglia aggregators that are on the same 
> multicast group but in different VLANs.
> To fix, we'd need a configuration property that specifies that multicast is 
> to be used, and another that allows setting of the multicast packet TTL.  
> With these set, we could then use MulticastSocket setTimeToLive() instead of 
> just plain ol' DatagramSocket.
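The proposed fix can be sketched as follows; the configuration wiring is hypothetical (the real property names would come from the patch), but the socket handling is the standard java.net API:

```java
import java.io.IOException;
import java.net.MulticastSocket;

// Hedged sketch of the proposed fix: send via MulticastSocket with a
// configurable TTL instead of a plain DatagramSocket. The two configuration
// properties are the description's idea; no real Hadoop property names are
// assumed here.
public class GangliaMulticastSketch {
    // Builds the socket GangliaContext would emit metrics on when the
    // (hypothetical) "use multicast" property is set.
    public static MulticastSocket newMulticastSocket(int ttl) throws IOException {
        MulticastSocket socket = new MulticastSocket();
        // A plain DatagramSocket leaves the multicast TTL at 1, so packets die
        // at the first router; a larger TTL lets them cross VLAN boundaries.
        socket.setTimeToLive(ttl);
        return socket;
    }
}
```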





[jira] [Updated] (HADOOP-11581) Multithreaded correctness Warnings #org.apache.hadoop.fs.shell.Ls

2015-02-10 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11581:
--
Description: 
Please check the following findbugs warnings:

{noformat}
Bug type STCAL_INVOKE_ON_STATIC_DATE_FORMAT_INSTANCE (click for details)
In class org.apache.hadoop.fs.shell.Ls
In method org.apache.hadoop.fs.shell.Ls.processPath(PathData)
Called method java.text.SimpleDateFormat.format(Date)
Field org.apache.hadoop.fs.shell.Ls.dateFormat
At Ls.java:[line 204]


Bug type STCAL_STATIC_SIMPLE_DATE_FORMAT_INSTANCE (click for details)
In class org.apache.hadoop.fs.shell.Ls
Field org.apache.hadoop.fs.shell.Ls.dateFormat
In Ls.java
{noformat}
https://builds.apache.org/job/PreCommit-HADOOP-Build/5646//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html#Warnings_MT_CORRECTNESS

  was:
Please check the following for same..

https://builds.apache.org/job/PreCommit-HADOOP-Build/5646//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html#Warnings_MT_CORRECTNESS


> Multithreaded correctness Warnings #org.apache.hadoop.fs.shell.Ls
> -
>
> Key: HADOOP-11581
> URL: https://issues.apache.org/jira/browse/HADOOP-11581
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> Please check the following findbugs warnings:
>   
> {noformat}
> Bug type STCAL_INVOKE_ON_STATIC_DATE_FORMAT_INSTANCE (click for details)
> In class org.apache.hadoop.fs.shell.Ls
> In method org.apache.hadoop.fs.shell.Ls.processPath(PathData)
> Called method java.text.SimpleDateFormat.format(Date)
> Field org.apache.hadoop.fs.shell.Ls.dateFormat
> At Ls.java:[line 204]
>   
> Bug type STCAL_STATIC_SIMPLE_DATE_FORMAT_INSTANCE (click for details)
> In class org.apache.hadoop.fs.shell.Ls
> Field org.apache.hadoop.fs.shell.Ls.dateFormat
> In Ls.java
> {noformat}
> https://builds.apache.org/job/PreCommit-HADOOP-Build/5646//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html#Warnings_MT_CORRECTNESS
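For context, the usual fix for this findbugs pattern is to stop sharing a single SimpleDateFormat across threads, for example via a ThreadLocal. This is a hedged sketch; whether the committed patch took exactly this route is not shown by the report:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// SimpleDateFormat is not thread-safe, so a static instance shared by
// concurrent callers can corrupt output (the STCAL warnings above).
// A ThreadLocal gives each thread its own formatter instance.
public class DateFormatHolder {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm"));

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }
}
```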





[jira] [Updated] (HADOOP-11581) Fix Multithreaded correctness Warnings #org.apache.hadoop.fs.shell.Ls

2015-02-10 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11581:
--
Summary: Fix Multithreaded correctness Warnings 
#org.apache.hadoop.fs.shell.Ls  (was: Multithreaded correctness Warnings 
#org.apache.hadoop.fs.shell.Ls)

> Fix Multithreaded correctness Warnings #org.apache.hadoop.fs.shell.Ls
> -
>
> Key: HADOOP-11581
> URL: https://issues.apache.org/jira/browse/HADOOP-11581
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> Please check the following findbugs warnings:
>   
> {noformat}
> Bug type STCAL_INVOKE_ON_STATIC_DATE_FORMAT_INSTANCE (click for details)
> In class org.apache.hadoop.fs.shell.Ls
> In method org.apache.hadoop.fs.shell.Ls.processPath(PathData)
> Called method java.text.SimpleDateFormat.format(Date)
> Field org.apache.hadoop.fs.shell.Ls.dateFormat
> At Ls.java:[line 204]
>   
> Bug type STCAL_STATIC_SIMPLE_DATE_FORMAT_INSTANCE (click for details)
> In class org.apache.hadoop.fs.shell.Ls
> Field org.apache.hadoop.fs.shell.Ls.dateFormat
> In Ls.java
> {noformat}
> https://builds.apache.org/job/PreCommit-HADOOP-Build/5646//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html#Warnings_MT_CORRECTNESS





[jira] [Created] (HADOOP-11581) Multithreaded correctness Warnings #org.apache.hadoop.fs.shell.Ls

2015-02-10 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11581:
-

 Summary: Multithreaded correctness Warnings 
#org.apache.hadoop.fs.shell.Ls
 Key: HADOOP-11581
 URL: https://issues.apache.org/jira/browse/HADOOP-11581
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Please check the following findbugs warnings:

https://builds.apache.org/job/PreCommit-HADOOP-Build/5646//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html#Warnings_MT_CORRECTNESS





[jira] [Commented] (HADOOP-11554) Expose HadoopKerberosName as a hadoop subcommand

2015-02-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315631#comment-14315631
 ] 

Chris Nauroth commented on HADOOP-11554:


The lack of tests is expected, since this patch doesn't touch Java code, and 
the Findbugs warnings are unrelated.

> Expose HadoopKerberosName as a hadoop subcommand
> 
>
> Key: HADOOP-11554
> URL: https://issues.apache.org/jira/browse/HADOOP-11554
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11554-01.patch, HADOOP-11554-02.patch, 
> HADOOP-11554.patch
>
>
> HadoopKerberosName has been around as a "secret hack" for quite a while.  We 
> should clean up the output and make it official by exposing it via the hadoop 
> command.





[jira] [Commented] (HADOOP-11580) Remove SingleNodeSetup.md from trunk

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315595#comment-14315595
 ] 

Hadoop QA commented on HADOOP-11580:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697967/HADOOP-11580.patch
  against trunk revision 7c6b654.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-common-project/hadoop-common:

org.apache.hadoop.ha.TestZKFailoverController

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5652//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5652//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5652//console

This message is automatically generated.

> Remove SingleNodeSetup.md from trunk
> 
>
> Key: HADOOP-11580
> URL: https://issues.apache.org/jira/browse/HADOOP-11580
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Akira AJISAKA
> Attachments: HADOOP-11580.patch
>
>
> This document is slated to go away "for the next major release" according to 
> itself.  So let's remove it then!





[jira] [Commented] (HADOOP-11467) KerberosAuthenticator can connect to a non-secure cluster

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1431#comment-1431
 ] 

Hadoop QA commented on HADOOP-11467:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12697856/HADOOP-11467.004.patch
  against trunk revision 7c6b654.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5651//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5651//console

This message is automatically generated.

> KerberosAuthenticator can connect to a non-secure cluster
> -
>
> Key: HADOOP-11467
> URL: https://issues.apache.org/jira/browse/HADOOP-11467
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HADOOP-11467.001.patch, HADOOP-11467.002.patch, 
> HADOOP-11467.003.patch, HADOOP-11467.004.patch
>
>
> While looking at HADOOP-10895, we discovered that the 
> {{KerberosAuthenticator}} can authenticate with a non-secure cluster, even 
> without falling back.
> The problematic code is here:
> {code:java}
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {  // <- A
>     LOG.debug("JDK performed authentication on our behalf.");
>     // If the JDK already did the SPNEGO back-and-forth for
>     // us, just pull out the token.
>     AuthenticatedURL.extractToken(conn, token);
>     return;
>   } else if (isNegotiate()) {  // <- B
>     LOG.debug("Performing our own SPNEGO sequence.");
>     doSpnegoSequence(token);
>   } else {  // <- C
>     LOG.debug("Using fallback authenticator sequence.");
>     Authenticator auth = getFallBackAuthenticator();
>     // Make sure that the fall back authenticator have the same
>     // ConnectionConfigurator, since the method might be overridden.
>     // Otherwise the fall back authenticator might not have the information
>     // to make the connection (e.g., SSL certificates)
>     auth.setConnectionConfigurator(connConfigurator);
>     auth.authenticate(url, token);
>   }
> }
> {code}
> Sometimes the JVM does the SPNEGO for us, and path A is used.  However, path 
> A also succeeds when the {{KerberosAuthenticator}} talks to a non-secure 
> cluster.
> More details can be found in this comment:
> https://issues.apache.org/jira/browse/HADOOP-10895?focusedCommentId=14247476&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14247476
> We've actually dealt with this before.  HADOOP-8883 tried to fix a related 
> problem by adding another condition to path A that would look for a header.  
> However, the JVM hides this header, making path A never occur.  We reverted 
> this change in HADOOP-10078, and didn't realize that there was still a 
> problem until now.





[jira] [Commented] (HADOOP-11554) Expose HadoopKerberosName as a hadoop subcommand

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315551#comment-14315551
 ] 

Hadoop QA commented on HADOOP-11554:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697940/HADOOP-11554-02.patch
  against trunk revision 7c6b654.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5650//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5650//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5650//console

This message is automatically generated.

> Expose HadoopKerberosName as a hadoop subcommand
> 
>
> Key: HADOOP-11554
> URL: https://issues.apache.org/jira/browse/HADOOP-11554
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11554-01.patch, HADOOP-11554-02.patch, 
> HADOOP-11554.patch
>
>
> HadoopKerberosName has been around as a "secret hack" for quite a while.  We 
> should clean up the output and make it official by exposing it via the hadoop 
> command.





[jira] [Commented] (HADOOP-11565) Add --slaves shell option

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315488#comment-14315488
 ] 

Hadoop QA commented on HADOOP-11565:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697857/HADOOP-11565-00.patch
  against trunk revision d5855c0.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5646//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5646//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5646//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5646//console

This message is automatically generated.

> Add --slaves shell option
> -
>
> Key: HADOOP-11565
> URL: https://issues.apache.org/jira/browse/HADOOP-11565
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11565-00.patch
>
>
> Add a --slaves shell option to hadoop-config.sh to trigger the given command 
> on slave nodes.  This is required to deprecate hadoop-daemons.sh and 
> yarn-daemons.sh.





[jira] [Commented] (HADOOP-11579) Documentation for truncate

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315486#comment-14315486
 ] 

Hadoop QA commented on HADOOP-11579:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697861/HDFS-7665.patch
  against trunk revision 7c6b654.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5649//console

This message is automatically generated.

> Documentation for truncate
> --
>
> Key: HADOOP-11579
> URL: https://issues.apache.org/jira/browse/HADOOP-11579
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Konstantin Shvachko
> Attachments: HDFS-7665.patch
>
>
> With the addition of a major new feature to filesystems, the filesystem 
> specification in hadoop-common/site is now out of sync. 
> This means that
> # there's no strict specification of what it should do
> # you can't derive tests from that specification
> # other people trying to implement the API will have to infer what to do from 
> the HDFS source
> # there's no way to decide whether or not the HDFS implementation does what 
> is intended.
> # without matching tests against the raw local FS, differences between the 
> HDFS impl and the Posix standard one won't be caught until it is potentially 
> too late to fix.
> The operation should be relatively easy to define (after a truncate, the 
> file's bytes [0...len-1] must equal the original bytes, length(file)==len, etc.)
> The truncate tests already written could then be pulled up into contract 
> tests which any filesystem implementation can run against.
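The property stated above (bytes [0...len-1] equal the original prefix, length(file)==len) can be written down as a tiny model-based check. This operates on a byte-array stand-in rather than a real FileSystem, since only the specification is being illustrated:

```java
import java.util.Arrays;

// Hedged sketch of the truncate contract from the description, modeled on a
// byte array instead of a real file.
public class TruncateContract {
    // Reference semantics: keep exactly the first newLength bytes.
    static byte[] truncate(byte[] data, int newLength) {
        return Arrays.copyOf(data, newLength);
    }

    // Checks the two conditions the spec would assert after truncate(file, len).
    static boolean holdsContract(byte[] original, int len) {
        byte[] truncated = truncate(original, len);
        return truncated.length == len
            && Arrays.equals(truncated, Arrays.copyOfRange(original, 0, len));
    }
}
```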





[jira] [Commented] (HADOOP-10578) Find command - add navigation and execution expressions to find command

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315477#comment-14315477
 ] 

Hadoop QA commented on HADOOP-10578:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697921/HADOOP-10578.patch
  against trunk revision 7c6b654.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5648//console

This message is automatically generated.

> Find command - add navigation and execution expressions to find command
> ---
>
> Key: HADOOP-10578
> URL: https://issues.apache.org/jira/browse/HADOOP-10578
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Attachments: HADOOP-10578.patch, HADOOP-10578.patch
>
>
> Add the navigation and execution expressions to the find command created 
> under HADOOP-8989, e.g.
> - exec
> - maxDepth
> - minDepth
> - prune
> - depth





[jira] [Commented] (HADOOP-11552) Allow handoff on the server side for RPC requests

2015-02-10 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315474#comment-14315474
 ] 

Sanjay Radia commented on HADOOP-11552:
---

bq.  If we move to an offer-based system like Mesos,
You are mixing layers. Sid is talking about the RPC layer. The layers above 
RPC, such as how YARN resources are obtained and used, will be unaffected.

bq. have the resource manager make outgoing connections to the executors
Making outgoing connections as you suggest is another valid approach. For that 
to work well we need client-side async support, while this jira is proposing a 
server-side "async" (I put async in quotes because in my mind the hand-off is 
not async RPC, since the RPC client blocks till the work is done).

Another good use case for this jira is the write operations on the NN that 
write to the journal. Such operations should be handed off to a worker thread 
that writes to the journal and then replies. The original handler thread goes 
back to serving new requests as soon as the hand-off is done. If we do this we 
could drastically reduce the number of handler threads needed in the NN (you 
already noted the reduction in handler threads for the other use case).
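The journal-write hand-off described above can be sketched as follows (illustrative names, not the Hadoop IPC API): the handler thread queues the slow work and frees itself, while a worker completes the response later. The caller still blocks on the result, matching the point that this is not async RPC:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hedged sketch of server-side hand-off; class and method names are
// hypothetical, not real Hadoop IPC identifiers.
public class HandoffServer {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Called on a handler thread; returns right after queuing the work.
    public CompletableFuture<String> handle(String request) {
        CompletableFuture<String> response = new CompletableFuture<>();
        workers.submit(() -> {
            // Slow part, e.g. writing the operation to the edit journal.
            response.complete("done:" + request);
        });
        return response;  // the handler thread is free for the next request
    }

    public void close() {
        workers.shutdown();
    }
}
```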



> Allow handoff on the server side for RPC requests
> -
>
> Key: HADOOP-11552
> URL: https://issues.apache.org/jira/browse/HADOOP-11552
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HADOOP-11552.1.wip.txt
>
>
> An RPC server handler thread is tied up for each incoming RPC request. This 
> isn't ideal, since this essentially implies that RPC operations should be 
> short lived, and most operations which could take time end up falling back to 
> a polling mechanism.
> Some use cases where this would be useful:
> - YARN submitApplication - which currently submits, followed by a poll to 
> check if the application is accepted while the submit operation is written 
> out to storage. This can be collapsed into a single call.
> - YARN allocate - requests and allocations use the same protocol. New 
> allocations are received via polling.
> The allocate protocol could be split into a request/heartbeat along with a 
> 'awaitResponse'. The request/heartbeat is sent only when there's a request or 
> on a much longer heartbeat interval. awaitResponse is always left active with 
> the RM - and returns the moment something is available.
> MapReduce/Tez task to AM communication is another example of this pattern.
> The same pattern of splitting calls can be used for other protocols as well. 
> This should serve to improve latency, as well as reduce network traffic since 
> the keep-alive heartbeat can be sent less frequently.
> I believe there are some cases in HDFS as well, where the DN gets told to 
> perform some operations when it heartbeats into the NN.





[jira] [Commented] (HADOOP-11552) Allow handoff on the server side for RPC requests

2015-02-10 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315460#comment-14315460
 ] 

Sanjay Radia commented on HADOOP-11552:
---

bq. Are you proposing to keep the TCP session open, but reuse the handler 
thread for something else, while the RPC is progressing?
bq. Yes, the intent is to keep the TCP session open and re-use the handlers

Note that our RPC system forces the handler thread to send the response, and 
hence we have to have a large number of handler threads, since some requests 
(such as a write operation on the NN) take longer because they have to write to 
the journal. Other RPC systems, and also request-response message-passing 
systems, allow hand-off to any thread to do the work and reply. The TCP 
connection being kept open is not due to the handler thread-binding; rather, 
our RPC layer depends on a connection close to detect server failures (and I 
believe we send some heartbeat bytes to detect server failures promptly). So we 
need to keep the connection open while the RPC operation is not completed.
Now, the impact on RPC connections that you raised:
* For normal end-clients (e.g., HDFS clients) the connections will remain open 
as in the original case, i.e., until the request is completed and the reply is 
sent. Hence the number of such connections will be the same.
* For internal clients where the request is of the type "do you have more work 
for me" (as sent by the DN or NM), the number of connections will increase but 
will be bounded. Here we could have a hybrid approach where the RM keeps a few 
requests blocked and replies only when work is available, and for other such 
requests it says "no work, but try 2 seconds later".

> Allow handoff on the server side for RPC requests
> -
>
> Key: HADOOP-11552
> URL: https://issues.apache.org/jira/browse/HADOOP-11552
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HADOOP-11552.1.wip.txt
>
>
> An RPC server handler thread is tied up for each incoming RPC request. This 
> isn't ideal, since this essentially implies that RPC operations should be 
> short lived, and most operations which could take time end up falling back to 
> a polling mechanism.
> Some use cases where this would be useful:
> - YARN submitApplication - which currently submits, followed by a poll to 
> check if the application is accepted while the submit operation is written 
> out to storage. This can be collapsed into a single call.
> - YARN allocate - requests and allocations use the same protocol. New 
> allocations are received via polling.
> The allocate protocol could be split into a request/heartbeat along with a 
> 'awaitResponse'. The request/heartbeat is sent only when there's a request or 
> on a much longer heartbeat interval. awaitResponse is always left active with 
> the RM - and returns the moment something is available.
> MapReduce/Tez task to AM communication is another example of this pattern.
> The same pattern of splitting calls can be used for other protocols as well. 
> This should serve to improve latency, as well as reduce network traffic since 
> the keep-alive heartbeat can be sent less frequently.
> I believe there are some cases in HDFS as well, where the DN gets told to 
> perform some operations when it heartbeats into the NN.





[jira] [Commented] (HADOOP-11468) Remove Findbugs dependency from mvn package -Pdocs command

2015-02-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315431#comment-14315431
 ] 

Akira AJISAKA commented on HADOOP-11468:


The test failure and findbugs warnings are not related to the patch. See 
MAPREDUCE-6223 and MAPREDUCE-6225.

> Remove Findbugs dependency from mvn package -Pdocs command
> --
>
> Key: HADOOP-11468
> URL: https://issues.apache.org/jira/browse/HADOOP-11468
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.6.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: build
> Attachments: HADOOP-11468-001.patch, HADOOP-11468-002.patch
>
>
> "mvn package -Pdist,docs,src -DskipTests -Dtar" fails without installing 
> Findbugs.
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project 
> hadoop-common: An Ant BuildException has occured: stylesheet 
> /Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/${env.FINDBUGS_HOME}/src/xsl/default.xsl
>  doesn't exist.
> [ERROR] around Ant part ... style="${env.FINDBUGS_HOME}/src/xsl/default.xsl" 
> in="/Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/findbugsXml.xml"
>  
> out="/Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/site/findbugs.html"/>...
>  @ 44:245 in 
> /Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> {code}
> Maven now automatically downloads Findbugs, so it's better to remove the 
> dependency so that users can build without installing Findbugs.





[jira] [Commented] (HADOOP-11559) Add links to RackAwareness and InterfaceClassification to site index

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315415#comment-14315415
 ] 

Hudson commented on HADOOP-11559:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #7066 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7066/])
HADOOP-11559. Add links to RackAwareness and InterfaceClassification to site 
index (Masatake Iwasaki via aw) (aw: rev 
7eeca90daabd74934d4c94af6f07fd598abdb4ed)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
* hadoop-project/src/site/site.xml
* 
hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md


> Add links to RackAwareness and InterfaceClassification to site index
> 
>
> Key: HADOOP-11559
> URL: https://issues.apache.org/jira/browse/HADOOP-11559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-11559.001.patch
>
>
> RackAwareness.html and InterfaceClassification.html are not linked from the 
> site index. Add links to them in site.xml if the contents are not outdated.





[jira] [Commented] (HADOOP-11467) KerberosAuthenticator can connect to a non-secure cluster

2015-02-10 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315412#comment-14315412
 ] 

Yongjun Zhang commented on HADOOP-11467:


Thanks Robert!


> KerberosAuthenticator can connect to a non-secure cluster
> -
>
> Key: HADOOP-11467
> URL: https://issues.apache.org/jira/browse/HADOOP-11467
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HADOOP-11467.001.patch, HADOOP-11467.002.patch, 
> HADOOP-11467.003.patch, HADOOP-11467.004.patch
>
>
> While looking at HADOOP-10895, we discovered that the 
> {{KerberosAuthenticator}} can authenticate with a non-secure cluster, even 
> without falling back.
> The problematic code is here:
> {code:java}
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) { // <- A
>     LOG.debug("JDK performed authentication on our behalf.");
>     // If the JDK already did the SPNEGO back-and-forth for
>     // us, just pull out the token.
>     AuthenticatedURL.extractToken(conn, token);
>     return;
>   } else if (isNegotiate()) { // <- B
>     LOG.debug("Performing our own SPNEGO sequence.");
>     doSpnegoSequence(token);
>   } else { // <- C
>     LOG.debug("Using fallback authenticator sequence.");
>     Authenticator auth = getFallBackAuthenticator();
>     // Make sure that the fall back authenticator have the same
>     // ConnectionConfigurator, since the method might be overridden.
>     // Otherwise the fall back authenticator might not have the information
>     // to make the connection (e.g., SSL certificates)
>     auth.setConnectionConfigurator(connConfigurator);
>     auth.authenticate(url, token);
>   }
> }
> {code}
> Sometimes the JVM does the SPNEGO for us, and path A is used.  However, if 
> the {{KerberosAuthenticator}} tries to talk to a non-secure cluster, path A 
> also succeeds in this case.  
> More details can be found in this comment:
> https://issues.apache.org/jira/browse/HADOOP-10895?focusedCommentId=14247476&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14247476
> We've actually dealt with this before.  HADOOP-8883 tried to fix a related 
> problem by adding another condition to path A that would look for a header.  
> However, the JVM hides this header, making path A never occur.  We reverted 
> this change in HADOOP-10078, and didn't realize that there was still a 
> problem until now.
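To make the flaw concrete, here is a minimal sketch (illustrative names, not the actual patch) of the quoted branch selection reduced to a pure function. It shows that any HTTP 200 selects path A, even from a non-secure server that never performed SPNEGO at all; the serverSentNegotiate flag stands in for what isNegotiate() checks.

```java
/**
 * Sketch of the path-A/B/C selection above, reduced to a pure function
 * so the flaw is easy to see: a plain 200 always takes path A.
 */
public class AuthPathSketch {
  enum Path { A_EXTRACT_TOKEN, B_SPNEGO, C_FALLBACK }

  // responseCode: HTTP status of the initial request.
  // serverSentNegotiate: whether the server replied with
  // "WWW-Authenticate: Negotiate" (what isNegotiate() looks for in path B).
  static Path choosePath(int responseCode, boolean serverSentNegotiate) {
    if (responseCode == 200) {
      return Path.A_EXTRACT_TOKEN;   // <- A: taken even when no SPNEGO happened
    } else if (serverSentNegotiate) {
      return Path.B_SPNEGO;          // <- B: do our own SPNEGO sequence
    } else {
      return Path.C_FALLBACK;        // <- C: fall back authenticator
    }
  }

  public static void main(String[] args) {
    // A non-secure server happily returns 200 with no Negotiate header,
    // yet the logic still takes path A and looks "authenticated".
    System.out.println(choosePath(200, false));
  }
}
```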





[jira] [Resolved] (HADOOP-11423) [Umbrella] Support Java 10 in Hadoop

2015-02-10 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas resolved HADOOP-11423.

Resolution: Later

Let's wait for Java 10 to be announced.

> [Umbrella] Support Java 10 in Hadoop
> 
>
> Key: HADOOP-11423
> URL: https://issues.apache.org/jira/browse/HADOOP-11423
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: sneaky
>Priority: Minor
>
> Java 10 is coming quickly to various clusters. Making sure Hadoop seamlessly 
> works with Java 10 is important for the Apache community.





[jira] [Commented] (HADOOP-11467) KerberosAuthenticator can connect to a non-secure cluster

2015-02-10 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315396#comment-14315396
 ] 

Robert Kanter commented on HADOOP-11467:


Looking at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5645//testReport/, it 
looks like everything passed, so I'm not sure what Jenkins is unhappy with; I've 
retriggered the job so we can try again.

> KerberosAuthenticator can connect to a non-secure cluster
> -
>
> Key: HADOOP-11467
> URL: https://issues.apache.org/jira/browse/HADOOP-11467
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HADOOP-11467.001.patch, HADOOP-11467.002.patch, 
> HADOOP-11467.003.patch, HADOOP-11467.004.patch
>
>
> While looking at HADOOP-10895, we discovered that the 
> {{KerberosAuthenticator}} can authenticate with a non-secure cluster, even 
> without falling back.
> The problematic code is here:
> {code:java}
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) { // <- A
>     LOG.debug("JDK performed authentication on our behalf.");
>     // If the JDK already did the SPNEGO back-and-forth for
>     // us, just pull out the token.
>     AuthenticatedURL.extractToken(conn, token);
>     return;
>   } else if (isNegotiate()) { // <- B
>     LOG.debug("Performing our own SPNEGO sequence.");
>     doSpnegoSequence(token);
>   } else { // <- C
>     LOG.debug("Using fallback authenticator sequence.");
>     Authenticator auth = getFallBackAuthenticator();
>     // Make sure that the fall back authenticator have the same
>     // ConnectionConfigurator, since the method might be overridden.
>     // Otherwise the fall back authenticator might not have the information
>     // to make the connection (e.g., SSL certificates)
>     auth.setConnectionConfigurator(connConfigurator);
>     auth.authenticate(url, token);
>   }
> }
> {code}
> Sometimes the JVM does the SPNEGO for us, and path A is used.  However, if 
> the {{KerberosAuthenticator}} tries to talk to a non-secure cluster, path A 
> also succeeds in this case.  
> More details can be found in this comment:
> https://issues.apache.org/jira/browse/HADOOP-10895?focusedCommentId=14247476&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14247476
> We've actually dealt with this before.  HADOOP-8883 tried to fix a related 
> problem by adding another condition to path A that would look for a header.  
> However, the JVM hides this header, making path A never occur.  We reverted 
> this change in HADOOP-10078, and didn't realize that there was still a 
> problem until now.





[jira] [Commented] (HADOOP-11467) KerberosAuthenticator can connect to a non-secure cluster

2015-02-10 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315397#comment-14315397
 ] 

Robert Kanter commented on HADOOP-11467:


*retriggered

> KerberosAuthenticator can connect to a non-secure cluster
> -
>
> Key: HADOOP-11467
> URL: https://issues.apache.org/jira/browse/HADOOP-11467
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HADOOP-11467.001.patch, HADOOP-11467.002.patch, 
> HADOOP-11467.003.patch, HADOOP-11467.004.patch
>
>
> While looking at HADOOP-10895, we discovered that the 
> {{KerberosAuthenticator}} can authenticate with a non-secure cluster, even 
> without falling back.
> The problematic code is here:
> {code:java}
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) { // <- A
>     LOG.debug("JDK performed authentication on our behalf.");
>     // If the JDK already did the SPNEGO back-and-forth for
>     // us, just pull out the token.
>     AuthenticatedURL.extractToken(conn, token);
>     return;
>   } else if (isNegotiate()) { // <- B
>     LOG.debug("Performing our own SPNEGO sequence.");
>     doSpnegoSequence(token);
>   } else { // <- C
>     LOG.debug("Using fallback authenticator sequence.");
>     Authenticator auth = getFallBackAuthenticator();
>     // Make sure that the fall back authenticator have the same
>     // ConnectionConfigurator, since the method might be overridden.
>     // Otherwise the fall back authenticator might not have the information
>     // to make the connection (e.g., SSL certificates)
>     auth.setConnectionConfigurator(connConfigurator);
>     auth.authenticate(url, token);
>   }
> }
> {code}
> Sometimes the JVM does the SPNEGO for us, and path A is used.  However, if 
> the {{KerberosAuthenticator}} tries to talk to a non-secure cluster, path A 
> also succeeds in this case.  
> More details can be found in this comment:
> https://issues.apache.org/jira/browse/HADOOP-10895?focusedCommentId=14247476&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14247476
> We've actually dealt with this before.  HADOOP-8883 tried to fix a related 
> problem by adding another condition to path A that would look for a header.  
> However, the JVM hides this header, making path A never occur.  We reverted 
> this change in HADOOP-10078, and didn't realize that there was still a 
> problem until now.





[jira] [Updated] (HADOOP-11580) Remove SingleNodeSetup.md from trunk

2015-02-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11580:
---
Attachment: HADOOP-11580.patch

Attaching a patch to remove the document.

> Remove SingleNodeSetup.md from trunk
> 
>
> Key: HADOOP-11580
> URL: https://issues.apache.org/jira/browse/HADOOP-11580
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Akira AJISAKA
> Attachments: HADOOP-11580.patch
>
>
> This document is slated to go away "for the next major release" according to 
> itself.  So let's remove it then!





[jira] [Updated] (HADOOP-11580) Remove SingleNodeSetup.md from trunk

2015-02-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11580:
---
Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

> Remove SingleNodeSetup.md from trunk
> 
>
> Key: HADOOP-11580
> URL: https://issues.apache.org/jira/browse/HADOOP-11580
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Akira AJISAKA
> Attachments: HADOOP-11580.patch
>
>
> This document is slated to go away "for the next major release" according to 
> itself.  So let's remove it then!





[jira] [Commented] (HADOOP-11580) Remove SingleNodeSetup.md from trunk

2015-02-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315388#comment-14315388
 ] 

Akira AJISAKA commented on HADOOP-11580:


This document was supposed to be removed by HADOOP-10618, but we forgot to 
remove it from trunk. Thanks [~aw] for pointing this out.

> Remove SingleNodeSetup.md from trunk
> 
>
> Key: HADOOP-11580
> URL: https://issues.apache.org/jira/browse/HADOOP-11580
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Akira AJISAKA
>
> This document is slated to go away "for the next major release" according to 
> itself.  So let's remove it then!





[jira] [Assigned] (HADOOP-11580) Remove SingleNodeSetup.md from trunk

2015-02-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-11580:
--

Assignee: Akira AJISAKA

> Remove SingleNodeSetup.md from trunk
> 
>
> Key: HADOOP-11580
> URL: https://issues.apache.org/jira/browse/HADOOP-11580
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Akira AJISAKA
>
> This document is slated to go away "for the next major release" according to 
> itself.  So let's remove it then!





[jira] [Updated] (HADOOP-11539) add Apache Thrift support to hadoop-maven-plugins

2015-02-10 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-11539:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Instead of adding a dependency on Thrift to Hadoop, copying this code into 
those projects is probably easier to maintain. The Hadoop project needs to 
build its message types, but that's not a problem it solves for downstream 
components.

I'm going to close this as WONTFIX, but please reopen it if I haven't 
appreciated the problem you're solving.

> add Apache Thrift support to hadoop-maven-plugins
> -
>
> Key: HADOOP-11539
> URL: https://issues.apache.org/jira/browse/HADOOP-11539
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.6.0
>Reporter: John Wang
>Assignee: John Wang
> Attachments: HADOOP-11539.patch
>
>
> Make generating Java code from Thrift IDL easier when there are many 
> input files. Good news for hbase-thrift, hive-service, etc.





[jira] [Commented] (HADOOP-11539) add Apache Thrift support to hadoop-maven-plugins

2015-02-10 Thread John Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315342#comment-14315342
 ] 

John Wang commented on HADOOP-11539:


Hi [~chris.douglas], another reason I did not push this patch to Apache 
Thrift is that the patch code is borrowed wholesale from hadoop-maven-plugins, 
i.e., it may not be good to specify one fact in several places. Thanks.

> add Apache Thrift support to hadoop-maven-plugins
> -
>
> Key: HADOOP-11539
> URL: https://issues.apache.org/jira/browse/HADOOP-11539
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.6.0
>Reporter: John Wang
>Assignee: John Wang
> Attachments: HADOOP-11539.patch
>
>
> Make generating Java code from Thrift IDL easier when there are many 
> input files. Good news for hbase-thrift, hive-service, etc.
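The core of such a mojo reduces to building one Thrift compiler invocation per IDL file. The sketch below is illustrative only, not the attached patch; the flags shown (--gen, -out) are standard Thrift compiler options, while the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Sketch of a Thrift code-gen helper: one compiler command per IDL file.
 */
public class ThriftCommandSketch {
  static List<List<String>> thriftCommands(String thriftExe, String outDir,
                                           List<String> idlFiles) {
    List<List<String>> commands = new ArrayList<>();
    for (String idl : idlFiles) {
      // Equivalent to: thrift --gen java -out <outDir> <file.thrift>
      commands.add(Arrays.asList(thriftExe, "--gen", "java", "-out", outDir, idl));
    }
    return commands;
  }

  public static void main(String[] args) {
    System.out.println(thriftCommands("thrift", "target/generated-sources",
        Arrays.asList("a.thrift", "b.thrift")));
  }
}
```

A plugin like this saves each downstream build from hand-writing an antrun/exec stanza per IDL file, which is exactly the "massive input files" pain the issue describes.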





[jira] [Updated] (HADOOP-11559) Add links to RackAwareness and InterfaceClassification to site index

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11559:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

+1 committed to trunk

Thanks!

> Add links to RackAwareness and InterfaceClassification to site index
> 
>
> Key: HADOOP-11559
> URL: https://issues.apache.org/jira/browse/HADOOP-11559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-11559.001.patch
>
>
> RackAwareness.html and InterfaceClassification.html are not linked from the 
> site index. Add links to them in site.xml if the contents are not outdated.





[jira] [Commented] (HADOOP-11554) Expose HadoopKerberosName as a hadoop subcommand

2015-02-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315321#comment-14315321
 ] 

Chris Nauroth commented on HADOOP-11554:


+1 for patch v02 pending Jenkins.

> Expose HadoopKerberosName as a hadoop subcommand
> 
>
> Key: HADOOP-11554
> URL: https://issues.apache.org/jira/browse/HADOOP-11554
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11554-01.patch, HADOOP-11554-02.patch, 
> HADOOP-11554.patch
>
>
> HadoopKerberosName has been around as a "secret hack" for quite a while.  We 
> should clean up the output and make it official by exposing it via the hadoop 
> command.
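HadoopKerberosName applies configurable auth_to_local rules to map a Kerberos principal to a short user name. As a rough, hedged illustration only (not the real class, which handles realms and rule chains), the default effect on a typical principal resembles:

```java
/**
 * Simplified sketch of the translation HadoopKerberosName performs:
 * a principal of the form primary[/instance][@REALM] maps to its
 * primary component. The real class is rule-driven; this is the
 * default "drop instance and realm" behaviour only.
 */
public class KerberosShortNameSketch {
  static String toShortName(String principal) {
    String p = principal;
    int at = p.indexOf('@');
    if (at >= 0) p = p.substring(0, at);       // drop @REALM
    int slash = p.indexOf('/');
    if (slash >= 0) p = p.substring(0, slash); // drop /instance
    return p;
  }

  public static void main(String[] args) {
    System.out.println(toShortName("hdfs/namenode.example.com@EXAMPLE.COM"));
  }
}
```

Exposing this as a subcommand lets admins check how their auth_to_local rules will resolve a given principal before deploying them.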





[jira] [Updated] (HADOOP-11559) Add links to RackAwareness and InterfaceClassification to site index

2015-02-10 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11559:
--
Status: Patch Available  (was: Open)

> Add links to RackAwareness and InterfaceClassification to site index
> 
>
> Key: HADOOP-11559
> URL: https://issues.apache.org/jira/browse/HADOOP-11559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-11559.001.patch
>
>
> RackAwareness.html and InterfaceClassification.html are not linked from the 
> site index. Add links to them in site.xml if the contents are not outdated.





[jira] [Updated] (HADOOP-11559) Add links to RackAwareness and InterfaceClassification to site index

2015-02-10 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11559:
--
Attachment: HADOOP-11559.001.patch

I attached a patch including the following fixes:
* added links to RackAwareness and InterfaceClassification to site.xml
* updated RackAwareness.md
** added newlines to overly long lines
** replaced deprecated config keys with new ones
** removed the sentence about mapreduce.jobtracker.taskcache.levels
* updated InterfaceClassification.md
** added newlines to overly long lines
** fixed formatting for ease of editing/reading


> Add links to RackAwareness and InterfaceClassification to site index
> 
>
> Key: HADOOP-11559
> URL: https://issues.apache.org/jira/browse/HADOOP-11559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-11559.001.patch
>
>
> RackAwareness.html and InterfaceClassification.html are not linked from the 
> site index. Add links to them in site.xml if the contents are not outdated.





[jira] [Created] (HADOOP-11580) Remove SingleNodeSetup.md from trunk

2015-02-10 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11580:
-

 Summary: Remove SingleNodeSetup.md from trunk
 Key: HADOOP-11580
 URL: https://issues.apache.org/jira/browse/HADOOP-11580
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


This document is slated to go away "for the next major release" according to 
itself.  So let's remove it then!





[jira] [Commented] (HADOOP-9329) document native build dependencies in BUILDING.txt

2015-02-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315285#comment-14315285
 ] 

Colin Patrick McCabe commented on HADOOP-9329:
--

I think BUILDING.txt could use a new coat of paint and probably some dusting 
off.  It still talks about jdk1.6 :(

Those are also nowhere near all the native dependencies, although many native 
deps are optional.  If you look in CMakeLists.txt, you'll find many more 
libraries we should probably add to BUILDING.txt.

> document native build dependencies in BUILDING.txt
> --
>
> Key: HADOOP-9329
> URL: https://issues.apache.org/jira/browse/HADOOP-9329
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.1.0-beta
>Reporter: Colin Patrick McCabe
>Assignee: Vijay Bhat
>Priority: Trivial
>
> {{BUILDING.txt}} describes {{-Pnative}}, but it does not specify what native 
> libraries are needed for the build.  We should address this.





[jira] [Updated] (HADOOP-11554) Expose HadoopKerberosName as a hadoop subcommand

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11554:
--
Status: Patch Available  (was: Open)

> Expose HadoopKerberosName as a hadoop subcommand
> 
>
> Key: HADOOP-11554
> URL: https://issues.apache.org/jira/browse/HADOOP-11554
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11554-01.patch, HADOOP-11554-02.patch, 
> HADOOP-11554.patch
>
>
> HadoopKerberosName has been around as a "secret hack" for quite a while.  We 
> should clean up the output and make it official by exposing it via the hadoop 
> command.





[jira] [Updated] (HADOOP-11554) Expose HadoopKerberosName as a hadoop subcommand

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11554:
--
Status: Open  (was: Patch Available)

> Expose HadoopKerberosName as a hadoop subcommand
> 
>
> Key: HADOOP-11554
> URL: https://issues.apache.org/jira/browse/HADOOP-11554
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11554-01.patch, HADOOP-11554-02.patch, 
> HADOOP-11554.patch
>
>
> HadoopKerberosName has been around as a "secret hack" for quite a while.  We 
> should clean up the output and make it official by exposing it via the hadoop 
> command.





[jira] [Updated] (HADOOP-11554) Expose HadoopKerberosName as a hadoop subcommand

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11554:
--
Attachment: HADOOP-11554-02.patch

-02:
* new patch generated to match updated doc format.

> Expose HadoopKerberosName as a hadoop subcommand
> 
>
> Key: HADOOP-11554
> URL: https://issues.apache.org/jira/browse/HADOOP-11554
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11554-01.patch, HADOOP-11554-02.patch, 
> HADOOP-11554.patch
>
>
> HadoopKerberosName has been around as a "secret hack" for quite a while.  We 
> should clean up the output and make it official by exposing it via the hadoop 
> command.





[jira] [Updated] (HADOOP-11579) Documentation for truncate

2015-02-10 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-11579:
-
Status: Patch Available  (was: Open)

> Documentation for truncate
> --
>
> Key: HADOOP-11579
> URL: https://issues.apache.org/jira/browse/HADOOP-11579
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Konstantin Shvachko
> Attachments: HDFS-7665.patch
>
>
> With the addition of a major new feature to filesystems, the filesystem 
> specification in hadoop-common/site is now out of sync. 
> This means that
> # there's no strict specification of what it should do
> # you can't derive tests from that specification
> # other people trying to implement the API will have to infer what to do from 
> the HDFS source
> # there's no way to decide whether or not the HDFS implementation does what 
> is intended.
> # without matching tests against the raw local FS, differences between the 
> HDFS impl and the Posix standard one won't be caught until it is potentially 
> too late to fix.
> The operation should be relatively easy to define (after a truncate, the 
> file's bytes [0...len-1] must equal the original bytes, length(file)==len, etc.)
> The truncate tests already written could then be pulled up into contract 
> tests which any filesystem implementation can run against.
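The postcondition such a specification would state can be sketched as a contract-style check. The sketch below exercises it against the local filesystem with java.nio rather than the HDFS truncate API; it is an illustration of the invariant, not the contract tests themselves.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;

/**
 * Sketch of the truncate postcondition: after truncate(len),
 * length(file) == len and bytes [0..len-1] equal the original bytes.
 */
public class TruncateContractSketch {
  static byte[] truncateAndRead(byte[] original, int len) throws IOException {
    Path f = Files.createTempFile("truncate", ".bin");
    Files.write(f, original);
    try (FileChannel ch = FileChannel.open(f, StandardOpenOption.WRITE)) {
      ch.truncate(len);  // shrink the file to len bytes
    }
    byte[] result = Files.readAllBytes(f);
    Files.delete(f);
    return result;
  }

  public static void main(String[] args) throws IOException {
    byte[] out = truncateAndRead(new byte[]{1, 2, 3, 4, 5}, 3);
    System.out.println("after truncate(3): " + Arrays.toString(out));
  }
}
```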





[jira] [Updated] (HADOOP-11579) Documentation for truncate

2015-02-10 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-11579:
-
Target Version/s: 2.7.0
Assignee: Konstantin Shvachko
 Summary: Documentation for truncate  (was: Add definition of 
truncate preconditions/postconditions to filesystem specification)

Moved to hadoop-common.
Changed the Summary, which was
"Add definition of truncate preconditions/postconditions to filesystem 
specification"

> Documentation for truncate
> --
>
> Key: HADOOP-11579
> URL: https://issues.apache.org/jira/browse/HADOOP-11579
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Konstantin Shvachko
> Attachments: HDFS-7665.patch
>
>
> With the addition of a major new feature to filesystems, the filesystem 
> specification in hadoop-common/site is now out of sync. 
> This means that
> # there's no strict specification of what it should do
> # you can't derive tests from that specification
> # other people trying to implement the API will have to infer what to do from 
> the HDFS source
> # there's no way to decide whether or not the HDFS implementation does what 
> is intended.
> # without matching tests against the raw local FS, differences between the 
> HDFS impl and the Posix standard one won't be caught until it is potentially 
> too late to fix.
> The operation should be relatively easy to define (after a truncate, the 
> file's bytes [0...len-1] must equal the original bytes, length(file)==len, etc.)
> The truncate tests already written could then be pulled up into contract 
> tests which any filesystem implementation can run against.





[jira] [Moved] (HADOOP-11579) Add definition of truncate preconditions/postconditions to filesystem specification

2015-02-10 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko moved HDFS-7665 to HADOOP-11579:


  Component/s: (was: documentation)
   documentation
Fix Version/s: (was: 3.0.0)
Affects Version/s: (was: 3.0.0)
   2.7.0
  Key: HADOOP-11579  (was: HDFS-7665)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Add definition of truncate preconditions/postconditions to filesystem 
> specification
> ---
>
> Key: HADOOP-11579
> URL: https://issues.apache.org/jira/browse/HADOOP-11579
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
> Attachments: HDFS-7665.patch
>
>
> With the addition of a major new feature to filesystems, the filesystem 
> specification in hadoop-common/site is now out of sync. 
> This means that
> # there's no strict specification of what it should do
> # you can't derive tests from that specification
> # other people trying to implement the API will have to infer what to do from 
> the HDFS source
> # there's no way to decide whether or not the HDFS implementation does what 
> is intended.
> # without matching tests against the raw local FS, differences between the 
> HDFS impl and the Posix standard one won't be caught until it is potentially 
> too late to fix.
> The operation should be relatively easy to define (after a truncate, the 
> file's bytes [0...len-1] must equal the original bytes, length(file)==len, etc.)
> The truncate tests already written could then be pulled up into contract 
> tests which any filesystem implementation can run against.





[jira] [Commented] (HADOOP-11554) Expose HadoopKerberosName as a hadoop subcommand

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315223#comment-14315223
 ] 

Hadoop QA commented on HADOOP-11554:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697142/HADOOP-11554-01.patch
  against trunk revision d5855c0.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5647//console

This message is automatically generated.

> Expose HadoopKerberosName as a hadoop subcommand
> 
>
> Key: HADOOP-11554
> URL: https://issues.apache.org/jira/browse/HADOOP-11554
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11554-01.patch, HADOOP-11554.patch
>
>
> HadoopKerberosName has been around as a "secret hack" for quite a while.  We 
> should clean up the output and make it official by exposing it via the hadoop 
> command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10578) Find command - add navigation and execution expressions to find command

2015-02-10 Thread Jonathan Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Allen updated HADOOP-10578:

Status: Patch Available  (was: In Progress)

> Find command - add navigation and execution expressions to find command
> ---
>
> Key: HADOOP-10578
> URL: https://issues.apache.org/jira/browse/HADOOP-10578
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Attachments: HADOOP-10578.patch, HADOOP-10578.patch
>
>
> Add the navigation and execution expressions to the find command created 
> under HADOOP-8989, e.g.
> - exec
> - maxDepth
> - minDepth
> - prune
> - depth



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11578) Hadoop Azure file system does not track all FileSystem.Statistics

2015-02-10 Thread Ivan Mitic (JIRA)
Ivan Mitic created HADOOP-11578:
---

 Summary: Hadoop Azure file system does not track all 
FileSystem.Statistics
 Key: HADOOP-11578
 URL: https://issues.apache.org/jira/browse/HADOOP-11578
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


Just noticed that Azure file system does not implement all counters from 
FileSystem.Statistics.

Missing counters are:
Number of read operations
Number of large read operations
Number of write operations
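For context, these counters are plain per-scheme tallies that a filesystem implementation is expected to bump on each operation. Below is a minimal stand-in, not the actual hadoop-azure or FileSystem.Statistics code; the method names mirror the Statistics accessors, and the convention that the read-op total includes large reads is an assumption made for illustration.

```java
import java.util.concurrent.atomic.AtomicLong;

public class StatisticsSketch {
    // Stand-ins for the three missing counters named above.
    private final AtomicLong readOps = new AtomicLong();
    private final AtomicLong largeReadOps = new AtomicLong();
    private final AtomicLong writeOps = new AtomicLong();

    // A filesystem implementation would call these from its
    // read/list/write code paths.
    public void incrementReadOps(int count)      { readOps.addAndGet(count); }
    public void incrementLargeReadOps(int count) { largeReadOps.addAndGet(count); }
    public void incrementWriteOps(int count)     { writeOps.addAndGet(count); }

    // Assumption: the read-op total folds in large read operations.
    public long getReadOps()      { return readOps.get() + largeReadOps.get(); }
    public long getLargeReadOps() { return largeReadOps.get(); }
    public long getWriteOps()     { return writeOps.get(); }
}
```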



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10578) Find command - add navigation and execution expressions to find command

2015-02-10 Thread Jonathan Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Allen updated HADOOP-10578:

Attachment: HADOOP-10578.patch

> Find command - add navigation and execution expressions to find command
> ---
>
> Key: HADOOP-10578
> URL: https://issues.apache.org/jira/browse/HADOOP-10578
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Attachments: HADOOP-10578.patch, HADOOP-10578.patch
>
>
> Add the navigation and execution expressions to the find command created 
> under HADOOP-8989, e.g.
> - exec
> - maxDepth
> - minDepth
> - prune
> - depth



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11575) Daemon log documentation is misleading

2015-02-10 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315204#comment-14315204
 ] 

Naganarasimha G R commented on HADOOP-11575:


Hi [~aw],
Thanks for moving it, but I had a few queries:
#  Can we update the documentation for both YARN and Hadoop Common in this 
jira, given that daemonlog is documented separately for 
[Hadoop|http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/CommandsManual.html#daemonlog]
 and 
[YARN|http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-site/YarnCommands.html#daemonlog]?
#  I feel the daemonlog usage can be updated as I mentioned earlier; please 
provide your opinion on this:
{quote}
Usage: General options are:
[-getlevel <host:httpPort> <classname>]
[-setlevel <host:httpPort> <classname> <level>]
{quote}
#  ??These levels are defined by log4j and defined as uppercase everywhere in 
both code and config. Making it mixed case here means supporting mixed case 
everywhere...??
OK, I mentioned this because it's failing at the validation done at the 
servlet level, while Level.toLevel(string) takes care of the appropriate case 
conversions:
{code}
if (!level.equals(org.apache.log4j.Level.toLevel(level).toString())) {
  out.println(MARKER + "Bad level : " + level + "");
} else {
{code}
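The strict equals check quoted above rejects mixed-case input even though log4j would resolve it. A self-contained sketch of that behavior; the toLevel stand-in below mimics log4j 1.x (case-insensitive match, falling back to DEBUG for unrecognized input) rather than depending on the log4j jar:

```java
import java.util.Arrays;
import java.util.List;

public class LevelCheckSketch {
    static final List<String> LEVELS =
        Arrays.asList("ALL", "TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL", "OFF");

    // Stand-in for log4j 1.x Level.toLevel(String): case-insensitive
    // match, defaulting to DEBUG for unrecognized input.
    static String toLevel(String s) {
        String upper = s.toUpperCase();
        return LEVELS.contains(upper) ? upper : "DEBUG";
    }

    // The servlet's strict check: mixed case is rejected ("Bad level")
    // even though toLevel() would have resolved it.
    static boolean rejectedByServlet(String level) {
        return !level.equals(toLevel(level));
    }

    public static void main(String[] args) {
        System.out.println(rejectedByServlet("DEBUG")); // false: accepted
        System.out.println(rejectedByServlet("debug")); // true: rejected
    }
}
```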


> Daemon log documentation is misleading
> --
>
> Key: HADOOP-11575
> URL: https://issues.apache.org/jira/browse/HADOOP-11575
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jagadesh Kiran N
>Assignee: Naganarasimha G R
>
> a. Execute the command:
> ./yarn daemonlog -setlevel xx.xx.xx.xxx:45020 ResourceManager DEBUG
> b. The change is not reflected in the process logs, even after performing 
> client-level operations.
> c. The log level is not changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8502) Quota accounting should be calculated based on actual size rather than block size

2015-02-10 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-8502.
-
Resolution: Not a Problem

If the file is known to be small, it can use a small block size.  In this 
example, setting the block size to 16 kB avoids the quota exception.

Resolving as not-a-problem.  Please feel free to reopen if you disagree.

> Quota accounting should be calculated based on actual size rather than block 
> size
> -
>
> Key: HADOOP-8502
> URL: https://issues.apache.org/jira/browse/HADOOP-8502
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: E. Sammer
>
> When calculating quotas, the block size is used rather than the actual size 
> of the file. This limits the granularity of quota enforcement to increments 
> of the block size, which is wasteful and limits the usefulness (i.e. it's 
> possible to violate the quota in a way that's not at all intuitive).
> {code}
> [esammer@xxx ~]$ hadoop fs -count -q /user/esammer/quota-test
> none inf 1048576 1048576 1 2 0 hdfs://xxx/user/esammer/quota-test
> [esammer@xxx ~]$ du /etc/passwd
> 4   /etc/passwd
> [esammer@xxx ~]$ hadoop fs -put /etc/passwd /user/esammer/quota-test/
> 12/06/09 13:56:16 WARN hdfs.DFSClient: DataStreamer Exception: 
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: 
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of 
> /user/esammer/quota-test is exceeded: quota=1048576 diskspace consumed=384.0m
> ...
> {code}
> Obviously the file in question would only occupy 12KB, not 384MB, and should 
> easily fit within the 1MB quota.
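The 384 MB figure in the quoted log is consistent with the quota being charged per allocated block and replica rather than per byte written: with a 128 MB block size and replication factor 3, one open block reserves 384 MB. A small arithmetic sketch; the charging model here is an assumption drawn from the discussion, not the actual namenode code:

```java
public class QuotaChargeSketch {
    // Assumption: at block allocation the namenode charges a full block
    // per replica against the space quota, regardless of bytes written.
    static long blockCharge(long blockSize, int replication) {
        return blockSize * replication;
    }

    public static void main(String[] args) {
        long charged = blockCharge(128L << 20, 3); // 128 MB blocks, 3 replicas
        System.out.println(charged >> 20);         // 384 (MB), matching the log

        // With a 16 kB block size, as suggested in the resolution, the
        // charge fits comfortably inside the 1 MB quota:
        System.out.println(blockCharge(16L << 10, 3) <= (1L << 20)); // true
    }
}
```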



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10544) Find command - add operator functions to find command

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315123#comment-14315123
 ] 

Hadoop QA commented on HADOOP-10544:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697599/HADOOP-10544.patch
  against trunk revision 3f5431a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5643//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5643//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5643//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5643//console

This message is automatically generated.

> Find command - add operator functions to find command
> -
>
> Key: HADOOP-10544
> URL: https://issues.apache.org/jira/browse/HADOOP-10544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Attachments: HADOOP-10544.patch, HADOOP-10544.patch, 
> HADOOP-10544.patch, HADOOP-10544.patch
>
>
> Add operator functions (OR, NOT) to the find command created under 
> HADOOP-8989.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11495) Convert site documentation from apt to markdown

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315096#comment-14315096
 ] 

Hudson commented on HADOOP-11495:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7064 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7064/])
HADOOP-11495. Convert site documentation from apt to markdown (Masatake Iwasaki 
via aw) (aw: rev e9d26fe9eb16a0482d3581504ecad22b4cd65077)
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md
* hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
* hadoop-common-project/hadoop-common/src/site/apt/HttpAuthentication.apt.vm
* 
hadoop-common-project/hadoop-common/src/site/apt/InterfaceClassification.apt.vm
* hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
* hadoop-common-project/hadoop-common/src/site/markdown/Superusers.md
* hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
* hadoop-common-project/hadoop-common/src/site/apt/ServiceLevelAuth.apt.vm
* hadoop-common-project/hadoop-common/src/site/apt/Metrics.apt.vm
* hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
* hadoop-common-project/hadoop-common/src/site/apt/Compatibility.apt.vm
* hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md
* hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
* hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
* hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm
* hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
* hadoop-common-project/hadoop-common/src/site/markdown/HttpAuthentication.md
* hadoop-common-project/hadoop-common/src/site/apt/RackAwareness.apt.vm
* hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
* hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* 
hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
* hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/apt/Tracing.apt.vm
* hadoop-common-project/hadoop-common/src/site/markdown/SingleNodeSetup.md
* hadoop-common-project/hadoop-common/src/site/apt/SingleCluster.apt.vm
* hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
* hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
* hadoop-common-project/hadoop-common/src/site/markdown/DeprecatedProperties.md
* hadoop-common-project/hadoop-common/src/site/apt/DeprecatedProperties.apt.vm
* hadoop-common-project/hadoop-common/src/site/apt/CLIMiniCluster.apt.vm
* hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* hadoop-common-project/hadoop-common/src/site/apt/SecureMode.apt.vm
* hadoop-common-project/hadoop-common/src/site/apt/Superusers.apt.vm


> Convert site documentation from apt to markdown
> ---
>
> Key: HADOOP-11495
> URL: https://issues.apache.org/jira/browse/HADOOP-11495
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
> Fix For: 3.0.0
>
> Attachments: HADOOP-11495-02.patch, HADOOP-11495-03.patch, 
> HADOOP-11495-04.patch, HADOOP-11495-05.patch, HADOOP-11496-00.patch, 
> HADOOP-11496-01.patch
>
>
> Almost Plain Text (aka APT) lost.  Markdown won.
> As a result, there are a ton of tools and online resources for Markdown that 
> would make editing and using our documentation much easier.  It would be 
> extremely beneficial for the community as a whole to move from apt to 
> markdown.
> This JIRA proposes to do this migration for the common project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10953) NetworkTopology#add calls NetworkTopology#toString without holding the netlock

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315095#comment-14315095
 ] 

Hudson commented on HADOOP-10953:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7064 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7064/])
HADOOP-10953. NetworkTopology#add calls NetworkTopology#toString without 
holding the netlock (Liang Xie via Colin P. McCabe) (cmccabe: rev 
6338ce3ae8870548cac5abe2f685748b5efb13c1)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> NetworkTopology#add calls NetworkTopology#toString without holding the netlock
> --
>
> Key: HADOOP-10953
> URL: https://issues.apache.org/jira/browse/HADOOP-10953
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-10953.txt
>
>
> Found this issue while reading the related code. The 
> NetworkTopology.toString() method makes no thread-safety guarantee of its 
> own; it is called from add/remove, and inside add/remove most of the 
> this.toString() calls are protected by the rwlock, except for a couple of 
> error-handling paths. One possible fix is to move those into the lock as 
> well; since they are not heavy operations, no obvious degradation should be 
> observed.
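The proposed fix amounts to building the string representation while the lock is still held. A minimal sketch of that pattern with a ReentrantReadWriteLock; this is illustrative only, not the actual NetworkTopology code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TopologySketch {
    private final ReentrantReadWriteLock netlock = new ReentrantReadWriteLock();
    private final List<String> nodes = new ArrayList<>();

    public String add(String node) {
        netlock.writeLock().lock();
        try {
            nodes.add(node);
            // Error-handling paths should snapshot the state here, while
            // the lock is still held, rather than calling toString() after
            // releasing it.
            return toStringLocked();
        } finally {
            netlock.writeLock().unlock();
        }
    }

    // Caller must hold netlock (read or write).
    private String toStringLocked() {
        return "Topology" + nodes;
    }

    @Override
    public String toString() {
        netlock.readLock().lock();
        try {
            return toStringLocked();
        } finally {
            netlock.readLock().unlock();
        }
    }
}
```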



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11565) Add --slaves shell option

2015-02-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315029#comment-14315029
 ] 

Allen Wittenauer commented on HADOOP-11565:
---

slaves specifically means "use the slaves file".  We reference it like this 
throughout large chunks of the documentation.  Without --slaves, the command 
(effectively) runs on the local host only, i.e., no ssh; that is the current 
functionality.

> Add --slaves shell option
> -
>
> Key: HADOOP-11565
> URL: https://issues.apache.org/jira/browse/HADOOP-11565
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11565-00.patch
>
>
> Add a --slaves shell option to hadoop-config.sh to trigger the given command 
> on slave nodes.  This is required to deprecate hadoop-daemons.sh and 
> yarn-daemons.sh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11565) Add --slaves shell option

2015-02-10 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14315012#comment-14315012
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-11565:
--

What is the use of the slaves mode? What is the difference between starting a 
master and a slave?

> Add --slaves shell option
> -
>
> Key: HADOOP-11565
> URL: https://issues.apache.org/jira/browse/HADOOP-11565
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11565-00.patch
>
>
> Add a --slaves shell option to hadoop-config.sh to trigger the given command 
> on slave nodes.  This is required to deprecate hadoop-daemons.sh and 
> yarn-daemons.sh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11495) Convert site documentation from apt to markdown

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11495:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

+1 committing to trunk.

Thanks!

> Convert site documentation from apt to markdown
> ---
>
> Key: HADOOP-11495
> URL: https://issues.apache.org/jira/browse/HADOOP-11495
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
> Fix For: 3.0.0
>
> Attachments: HADOOP-11495-02.patch, HADOOP-11495-03.patch, 
> HADOOP-11495-04.patch, HADOOP-11495-05.patch, HADOOP-11496-00.patch, 
> HADOOP-11496-01.patch
>
>
> Almost Plain Text (aka APT) lost.  Markdown won.
> As a result, there are a ton of tools and online resources for Markdown that 
> would make editing and using our documentation much easier.  It would be 
> extremely beneficial for the community as a whole to move from apt to 
> markdown.
> This JIRA proposes to do this migration for the common project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10953) NetworkTopology#add calls NetworkTopology#toString without holding the netlock

2015-02-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10953:
--
  Resolution: Fixed
   Fix Version/s: 2.7.0
Target Version/s: 2.7.0
  Status: Resolved  (was: Patch Available)

> NetworkTopology#add calls NetworkTopology#toString without holding the netlock
> --
>
> Key: HADOOP-10953
> URL: https://issues.apache.org/jira/browse/HADOOP-10953
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-10953.txt
>
>
> Found this issue while reading the related code. The 
> NetworkTopology.toString() method makes no thread-safety guarantee of its 
> own; it is called from add/remove, and inside add/remove most of the 
> this.toString() calls are protected by the rwlock, except for a couple of 
> error-handling paths. One possible fix is to move those into the lock as 
> well; since they are not heavy operations, no obvious degradation should be 
> observed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10953) NetworkTopology#add calls NetworkTopology#toString without holding the netlock

2015-02-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314967#comment-14314967
 ] 

Colin Patrick McCabe commented on HADOOP-10953:
---

findbugs warnings are for {{org.apache.hadoop.fs.shell.Ls.processPath}} and 
{{org.apache.hadoop.fs.shell.Ls.dateFormat}}, neither of which were modified by 
this patch.

committing... thanks Liang Xie!

> NetworkTopology#add calls NetworkTopology#toString without holding the netlock
> --
>
> Key: HADOOP-10953
> URL: https://issues.apache.org/jira/browse/HADOOP-10953
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>Priority: Minor
> Attachments: HADOOP-10953.txt
>
>
> Found this issue while reading the related code. The 
> NetworkTopology.toString() method makes no thread-safety guarantee of its 
> own; it is called from add/remove, and inside add/remove most of the 
> this.toString() calls are protected by the rwlock, except for a couple of 
> error-handling paths. One possible fix is to move those into the lock as 
> well; since they are not heavy operations, no obvious degradation should be 
> observed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11495) Convert site documentation from apt to markdown

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11495:
--
Status: Open  (was: Patch Available)

> Convert site documentation from apt to markdown
> ---
>
> Key: HADOOP-11495
> URL: https://issues.apache.org/jira/browse/HADOOP-11495
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-11495-02.patch, HADOOP-11495-03.patch, 
> HADOOP-11495-04.patch, HADOOP-11495-05.patch, HADOOP-11496-00.patch, 
> HADOOP-11496-01.patch
>
>
> Almost Plain Text (aka APT) lost.  Markdown won.
> As a result, there are a ton of tools and online resources for Markdown that 
> would make editing and using our documentation much easier.  It would be 
> extremely beneficial for the community as a whole to move from apt to 
> markdown.
> This JIRA proposes to do this migration for the common project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11495) Convert site documentation from apt to markdown

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11495:
--
Status: Patch Available  (was: Open)

> Convert site documentation from apt to markdown
> ---
>
> Key: HADOOP-11495
> URL: https://issues.apache.org/jira/browse/HADOOP-11495
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-11495-02.patch, HADOOP-11495-03.patch, 
> HADOOP-11495-04.patch, HADOOP-11495-05.patch, HADOOP-11496-00.patch, 
> HADOOP-11496-01.patch
>
>
> Almost Plain Text (aka APT) lost.  Markdown won.
> As a result, there are a ton of tools and online resources for Markdown that 
> would make editing and using our documentation much easier.  It would be 
> extremely beneficial for the community as a whole to move from apt to 
> markdown.
> This JIRA proposes to do this migration for the common project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11495) Convert site documentation from apt to markdown

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11495:
--
Assignee: Masatake Iwasaki

> Convert site documentation from apt to markdown
> ---
>
> Key: HADOOP-11495
> URL: https://issues.apache.org/jira/browse/HADOOP-11495
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-11495-02.patch, HADOOP-11495-03.patch, 
> HADOOP-11495-04.patch, HADOOP-11495-05.patch, HADOOP-11496-00.patch, 
> HADOOP-11496-01.patch
>
>
> Almost Plain Text (aka APT) lost.  Markdown won.
> As a result, there are a ton of tools and online resources for Markdown that 
> would make editing and using our documentation much easier.  It would be 
> extremely beneficial for the community as a whole to move from apt to 
> markdown.
> This JIRA proposes to do this migration for the common project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11495) Convert site documentation from apt to markdown

2015-02-10 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11495:
--
Attachment: HADOOP-11495-05.patch

I attached the 05 patch, fixing the nits above and the additional ones below:
* escaped double dashes in command opts
* removed blank lines between items in lists
* reflected the changes added to FileSystemShell.apt.vm after 04 was created

I fixed only the formatting and left some dead links as-is. I will fix them in 
follow-up JIRAs.


> Convert site documentation from apt to markdown
> ---
>
> Key: HADOOP-11495
> URL: https://issues.apache.org/jira/browse/HADOOP-11495
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11495-02.patch, HADOOP-11495-03.patch, 
> HADOOP-11495-04.patch, HADOOP-11495-05.patch, HADOOP-11496-00.patch, 
> HADOOP-11496-01.patch
>
>
> Almost Plain Text (aka APT) lost.  Markdown won.
> As a result, there are a ton of tools and online resources for Markdown that 
> would make editing and using our documentation much easier.  It would be 
> extremely beneficial for the community as a whole to move from apt to 
> markdown.
> This JIRA proposes to do this migration for the common project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11467) KerberosAuthenticator can connect to a non-secure cluster

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314918#comment-14314918
 ] 

Hadoop QA commented on HADOOP-11467:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12697856/HADOOP-11467.004.patch
  against trunk revision 3f5431a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-common-project/hadoop-auth 

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5645//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5645//console

This message is automatically generated.

> KerberosAuthenticator can connect to a non-secure cluster
> -
>
> Key: HADOOP-11467
> URL: https://issues.apache.org/jira/browse/HADOOP-11467
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HADOOP-11467.001.patch, HADOOP-11467.002.patch, 
> HADOOP-11467.003.patch, HADOOP-11467.004.patch
>
>
> While looking at HADOOP-10895, we discovered that the 
> {{KerberosAuthenticator}} can authenticate with a non-secure cluster, even 
> without falling back.
> The problematic code is here:
> {code:java}
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {            // <- A
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else if (isNegotiate()) {                                           // <- B
> LOG.debug("Performing our own SPNEGO sequence.");
> doSpnegoSequence(token);
>   } else {                                                              // <- C
> LOG.debug("Using fallback authenticator sequence.");
> Authenticator auth = getFallBackAuthenticator();
> // Make sure that the fall back authenticator has the same
> // ConnectionConfigurator, since the method might be overridden.
> // Otherwise the fall back authenticator might not have the information
> // to make the connection (e.g., SSL certificates)
> auth.setConnectionConfigurator(connConfigurator);
> auth.authenticate(url, token);
>   }
> }
> {code}
> Sometimes the JVM does the SPNEGO for us, and path A is used.  However, if 
> the {{KerberosAuthenticator}} tries to talk to a non-secure cluster, path A 
> succeeds there as well.
> More details can be found in this comment:
> https://issues.apache.org/jira/browse/HADOOP-10895?focusedCommentId=14247476&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14247476
> We've actually dealt with this before.  HADOOP-8883 tried to fix a related 
> problem by adding another condition to path A that would look for a header.  
> However, the JVM hides this header, making path A never occur.  We reverted 
> this change in HADOOP-10078, and didn't realize that there was still a 
> problem until now.
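A server that actually performed SPNEGO announces it in its challenge/response headers, which is what the isNegotiate() branch (path B) keys on. A self-contained sketch of that header check; the helper name and parsing here are illustrative, not the actual KerberosAuthenticator code, and the patched fix itself is not reproduced:

```java
public class NegotiateSketch {
    // Illustrative: a secure (SPNEGO-enabled) server answers the initial
    // unauthenticated request with a "WWW-Authenticate: Negotiate ..."
    // header; a non-secure server sends no such challenge.
    static boolean isNegotiateHeader(String wwwAuthenticate) {
        return wwwAuthenticate != null
            && wwwAuthenticate.trim().startsWith("Negotiate");
    }

    public static void main(String[] args) {
        System.out.println(isNegotiateHeader("Negotiate YIIC..."));
        System.out.println(isNegotiateHeader(null)); // non-secure server
    }
}
```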



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11577) Need document for storage type label of data node storage locations under dfs.data.dir

2015-02-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-11577:

Description: 
HDFS-2832 enables support for heterogeneous storages in HDFS, which allows a DN 
to manage a collection of storages with different types. However, I can't find 
documentation on how to label the different storage types in the following two 
documents; I found the information in the design spec. It would be good to 
document this for admins and users of the related archival storage and storage 
policy features. 

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

This JIRA is opened to add documentation for the new storage type labels. 

1. Add an example under the ArchivalStorage.html#Configuration section:

{code}
  <property>
    <name>dfs.data.dir</name>
    <value>[DISK]file:///hddata/dn/disk0,[SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
  </property>
{code}

2. Add a short description of the [DISK/SSD/ARCHIVE/RAM_DISK] options to 
hdfs-default.xml#dfs.data.dir, and document DISK as the default storage type 
when no storage type is labeled in the data node storage location configuration. 


  was:
HDFS-2832 enable support for heterogeneous storages in HDFS, which allows DN as 
a collection of storages with different types. However, I can't find document 
on how to label different storage types from the following two documents. I 
found the information from the design spec.It will be good we document this for 
admins and users to use the related Archival storage and storage policy 
features. 

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

This JIRA is opened to add document for the new storage type labels. 

1. Add an example under ArchivalStorage.html#Configuration section:

{code}
<property>
  <name>dfs.data.dir</name>
  <value>[DISK]file:///hddata/dn/disk0,
[SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
</property>
{code}

2. Add a short description of [DISK/SSD/ARCHIVE/RAM_DISK] options in 
hdfs-default.xml#dfs.data.dir and document DISK as storage type if no storage 
type is labeled in the data node storage location configuration. 



> Need document for storage type label of data node storage locations under 
> dfs.data.dir
> --
>
> Key: HADOOP-11577
> URL: https://issues.apache.org/jira/browse/HADOOP-11577
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> HDFS-2832 enables support for heterogeneous storages in HDFS, which allows a 
> DN to be treated as a collection of storages of different types. However, I 
> can't find documentation on how to label the different storage types in the 
> following two documents; I found the information in the design spec. It would 
> be good to document this for admins and users of the related Archival Storage 
> and storage policy features. 
> http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html
> http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
> This JIRA is opened to add documentation for the new storage type labels. 
> 1. Add an example under the ArchivalStorage.html#Configuration section:
> {code}
> <property>
>   <name>dfs.data.dir</name>
>   <value>[DISK]file:///hddata/dn/disk0,
> [SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
> </property>
> {code}
> 2. Add a short description of the [DISK/SSD/ARCHIVE/RAM_DISK] options to 
> hdfs-default.xml#dfs.data.dir, and document DISK as the default storage type 
> when no storage type is labeled in a DataNode storage location. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11565) Add --slaves shell option

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11565:
--
Status: Patch Available  (was: Open)

> Add --slaves shell option
> -
>
> Key: HADOOP-11565
> URL: https://issues.apache.org/jira/browse/HADOOP-11565
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11565-00.patch
>
>
> Add a --slaves shell option to hadoop-config.sh to trigger the given command 
> on slave nodes.  This is required to deprecate hadoop-daemons.sh and 
> yarn-daemons.sh.





[jira] [Updated] (HADOOP-11577) Need document for storage type label of data node storage locations under dfs.data.dir

2015-02-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-11577:

Description: 
HDFS-2832 enables support for heterogeneous storages in HDFS, which allows a 
DN to be treated as a collection of storages of different types. However, I 
can't find documentation on how to label the different storage types in the 
following two documents; I found the information in the design spec. It would 
be good to document this for admins and users of the related Archival Storage 
and storage policy features. 

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

This JIRA is opened to add documentation for the new storage type labels. 

1. Add an example under the ArchivalStorage.html#Configuration section:

{code}
<property>
  <name>dfs.data.dir</name>
  <value>[DISK]file:///hddata/dn/disk0,
[SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
</property>
{code}

2. Add a short description of the [DISK/SSD/ARCHIVE/RAM_DISK] options to 
hdfs-default.xml#dfs.data.dir, and document DISK as the default storage type 
when no storage type is labeled in a DataNode storage location. 


  was:
HDFS-2832 enable support for heterogeneous storages in HDFS, which allows DN as 
a collection of storages with different types. However, I can't find document 
on how to label different storage types from the following two documents. I 
found the information from the design spec.It will be good we document this for 
admins and users to use the related Archival storage and storage policy 
features. 

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

This JIRA is opened to add document for the new storage type labels. I propose 
to add an example under ArchivalStorage.html#Configuration section:

{code}
<property>
  <name>dfs.data.dir</name>
  <value>[DISK]file:///hddata/dn/disk0,
[SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
</property>
{code}

Also add a short description of [DISK/SSD/ARCHIVE/RAM_DISK] options in 
hdfs-default.xml#dfs.data.dir and document DISK as storage type if no storage 
type is labeled in the data node storage location configuration. 



> Need document for storage type label of data node storage locations under 
> dfs.data.dir
> --
>
> Key: HADOOP-11577
> URL: https://issues.apache.org/jira/browse/HADOOP-11577
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> HDFS-2832 enables support for heterogeneous storages in HDFS, which allows a 
> DN to be treated as a collection of storages of different types. However, I 
> can't find documentation on how to label the different storage types in the 
> following two documents; I found the information in the design spec. It would 
> be good to document this for admins and users of the related Archival Storage 
> and storage policy features. 
> http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html
> http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
> This JIRA is opened to add documentation for the new storage type labels. 
> 1. Add an example under the ArchivalStorage.html#Configuration section:
> {code}
> <property>
>   <name>dfs.data.dir</name>
>   <value>[DISK]file:///hddata/dn/disk0,
> [SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
> </property>
> {code}
> 2. Add a short description of the [DISK/SSD/ARCHIVE/RAM_DISK] options to 
> hdfs-default.xml#dfs.data.dir, and document DISK as the default storage type 
> when no storage type is labeled in a DataNode storage location. 





[jira] [Created] (HADOOP-11577) Need document for storage type label of data node storage locations under dfs.data.dir

2015-02-10 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-11577:
---

 Summary: Need document for storage type label of data node storage 
locations under dfs.data.dir
 Key: HADOOP-11577
 URL: https://issues.apache.org/jira/browse/HADOOP-11577
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HDFS-2832 enables support for heterogeneous storages in HDFS, which allows a 
DN to be treated as a collection of storages of different types. However, I 
can't find documentation on how to label the different storage types in the 
following two documents; I found the information in the design spec. It would 
be good to document this for admins and users of the related Archival Storage 
and storage policy features. 

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

This JIRA is opened to add documentation for the new storage type labels. I 
propose to add an example under the ArchivalStorage.html#Configuration section:

{code}
<property>
  <name>dfs.data.dir</name>
  <value>[DISK]file:///hddata/dn/disk0,
[SSD]file:///hddata/dn/ssd0,[ARCHIVAL]file:///hddata/dn/archival0,</value>
</property>
{code}

Also add a short description of the [DISK/SSD/ARCHIVAL/RAM_DISK] options to 
hdfs-default.xml#dfs.data.dir, and document DISK as the default storage type 
when no storage type is labeled in a DataNode storage location. 






[jira] [Updated] (HADOOP-11577) Need document for storage type label of data node storage locations under dfs.data.dir

2015-02-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-11577:

Description: 
HDFS-2832 enables support for heterogeneous storages in HDFS, which allows a 
DN to be treated as a collection of storages of different types. However, I 
can't find documentation on how to label the different storage types in the 
following two documents; I found the information in the design spec. It would 
be good to document this for admins and users of the related Archival Storage 
and storage policy features. 

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

This JIRA is opened to add documentation for the new storage type labels. I 
propose to add an example under the ArchivalStorage.html#Configuration section:

{code}
<property>
  <name>dfs.data.dir</name>
  <value>[DISK]file:///hddata/dn/disk0,
[SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0,</value>
</property>
{code}

Also add a short description of the [DISK/SSD/ARCHIVE/RAM_DISK] options to 
hdfs-default.xml#dfs.data.dir, and document DISK as the default storage type 
when no storage type is labeled in a DataNode storage location. 


  was:
HDFS-2832 enable support for heterogeneous storages in HDFS, which allows DN as 
a collection of storages with different types. However, I can't find document 
on how to label different storage types from the following two documents. I 
found the information from the design spec.It will be good we document this for 
admins and users to use the related Archival storage and storage policy 
features. 

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

This JIRA is opened to add document for the new storage type labels. I propose 
to add an example under ArchivalStorage.html#Configuration section:

{code}
<property>
  <name>dfs.data.dir</name>
  <value>[DISK]file:///hddata/dn/disk0,
[SSD]file:///hddata/dn/ssd0,[ARCHIVAL]file:///hddata/dn/archival0,</value>
</property>
{code}

Also add a short description of [DISK/SSD/ARCHIVAL/RAM_DISK] options in 
hdfs-default.xml#dfs.data.dir and document DISK as storage type if no storage 
type is labeled in the data node storage location configuration. 



> Need document for storage type label of data node storage locations under 
> dfs.data.dir
> --
>
> Key: HADOOP-11577
> URL: https://issues.apache.org/jira/browse/HADOOP-11577
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> HDFS-2832 enables support for heterogeneous storages in HDFS, which allows a 
> DN to be treated as a collection of storages of different types. However, I 
> can't find documentation on how to label the different storage types in the 
> following two documents; I found the information in the design spec. It would 
> be good to document this for admins and users of the related Archival Storage 
> and storage policy features. 
> http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html
> http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
> This JIRA is opened to add documentation for the new storage type labels. I 
> propose to add an example under the ArchivalStorage.html#Configuration section:
> {code}
> <property>
>   <name>dfs.data.dir</name>
>   <value>[DISK]file:///hddata/dn/disk0,
> [SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0,</value>
> </property>
> {code}
> Also add a short description of the [DISK/SSD/ARCHIVE/RAM_DISK] options to 
> hdfs-default.xml#dfs.data.dir, and document DISK as the default storage type 
> when no storage type is labeled in a DataNode storage location. 





[jira] [Updated] (HADOOP-11577) Need document for storage type label of data node storage locations under dfs.data.dir

2015-02-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-11577:

Description: 
HDFS-2832 enables support for heterogeneous storages in HDFS, which allows a 
DN to be treated as a collection of storages of different types. However, I 
can't find documentation on how to label the different storage types in the 
following two documents; I found the information in the design spec. It would 
be good to document this for admins and users of the related Archival Storage 
and storage policy features. 

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

This JIRA is opened to add documentation for the new storage type labels. I 
propose to add an example under the ArchivalStorage.html#Configuration section:

{code}
<property>
  <name>dfs.data.dir</name>
  <value>[DISK]file:///hddata/dn/disk0,
[SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
</property>
{code}

Also add a short description of the [DISK/SSD/ARCHIVE/RAM_DISK] options to 
hdfs-default.xml#dfs.data.dir, and document DISK as the default storage type 
when no storage type is labeled in a DataNode storage location. 


  was:
HDFS-2832 enable support for heterogeneous storages in HDFS, which allows DN as 
a collection of storages with different types. However, I can't find document 
on how to label different storage types from the following two documents. I 
found the information from the design spec.It will be good we document this for 
admins and users to use the related Archival storage and storage policy 
features. 

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

This JIRA is opened to add document for the new storage type labels. I propose 
to add an example under ArchivalStorage.html#Configuration section:

{code}
<property>
  <name>dfs.data.dir</name>
  <value>[DISK]file:///hddata/dn/disk0,
[SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0,</value>
</property>
{code}

Also add a short description of [DISK/SSD/ARCHIVE/RAM_DISK] options in 
hdfs-default.xml#dfs.data.dir and document DISK as storage type if no storage 
type is labeled in the data node storage location configuration. 



> Need document for storage type label of data node storage locations under 
> dfs.data.dir
> --
>
> Key: HADOOP-11577
> URL: https://issues.apache.org/jira/browse/HADOOP-11577
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> HDFS-2832 enables support for heterogeneous storages in HDFS, which allows a 
> DN to be treated as a collection of storages of different types. However, I 
> can't find documentation on how to label the different storage types in the 
> following two documents; I found the information in the design spec. It would 
> be good to document this for admins and users of the related Archival Storage 
> and storage policy features. 
> http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html
> http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
> This JIRA is opened to add documentation for the new storage type labels. I 
> propose to add an example under the ArchivalStorage.html#Configuration section:
> {code}
> <property>
>   <name>dfs.data.dir</name>
>   <value>[DISK]file:///hddata/dn/disk0,
> [SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
> </property>
> {code}
> Also add a short description of the [DISK/SSD/ARCHIVE/RAM_DISK] options to 
> hdfs-default.xml#dfs.data.dir, and document DISK as the default storage type 
> when no storage type is labeled in a DataNode storage location. 





[jira] [Commented] (HADOOP-10953) NetworkTopology#add calls NetworkTopology#toString without holding the netlock

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314824#comment-14314824
 ] 

Hadoop QA commented on HADOOP-10953:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12660965/HADOOP-10953.txt
  against trunk revision 3f5431a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5644//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5644//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5644//console

This message is automatically generated.

> NetworkTopology#add calls NetworkTopology#toString without holding the netlock
> --
>
> Key: HADOOP-10953
> URL: https://issues.apache.org/jira/browse/HADOOP-10953
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>Priority: Minor
> Attachments: HADOOP-10953.txt
>
>
> Found this issue while reading the related code. NetworkTopology.toString() 
> itself provides no thread-safety guarantee. It is called from add/remove, and 
> inside add/remove most this.toString() calls are protected by the rwlock, 
> except for a couple of error-handling paths. One possible fix is to move 
> those calls inside the lock as well; since they are not heavy operations, no 
> noticeable performance degradation should be observed, per my current 
> knowledge.
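The locking pattern proposed above can be sketched with a simplified stand-in class (hypothetical names; not the actual NetworkTopology code): toString() stays unsynchronized, and every caller, including error-handling paths, takes the read lock around it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified stand-in for NetworkTopology: toString() gives no thread-safety
// guarantee on its own, so callers must hold the rwlock when invoking it.
class TopologySketch {
  private final ReentrantReadWriteLock netlock = new ReentrantReadWriteLock();
  private final List<String> nodes = new ArrayList<>();

  void add(String node) {
    netlock.writeLock().lock();
    try {
      nodes.add(node);
    } finally {
      netlock.writeLock().unlock();
    }
  }

  // Not synchronized by itself; callers are expected to hold netlock.
  @Override
  public String toString() {
    return "Topology" + nodes;
  }

  // The proposed fix in miniature: guard the toString() call with the
  // read lock so the snapshot is consistent with the tree state.
  String snapshot() {
    netlock.readLock().lock();
    try {
      return this.toString();
    } finally {
      netlock.readLock().unlock();
    }
  }
}
```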





[jira] [Commented] (HADOOP-11565) Add --slaves shell option

2015-02-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314813#comment-14314813
 ] 

Allen Wittenauer commented on HADOOP-11565:
---

(Note, I'll fix documentation later, after the markdown conversion.)

> Add --slaves shell option
> -
>
> Key: HADOOP-11565
> URL: https://issues.apache.org/jira/browse/HADOOP-11565
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11565-00.patch
>
>
> Add a --slaves shell option to hadoop-config.sh to trigger the given command 
> on slave nodes.  This is required to deprecate hadoop-daemons.sh and 
> yarn-daemons.sh.





[jira] [Updated] (HADOOP-11565) Add --slaves shell option

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11565:
--
Attachment: HADOOP-11565-00.patch

-00: 
* Initial version
* hadoop, hdfs, mapred, yarn support for --slaves option
* deprecate sbin/yarn-daemons.sh, sbin/hadoop-daemons.sh

It's worth noting that --slaves supports a lot more options than current OR 
PREVIOUS incarnations of the *-daemons.sh code!

> Add --slaves shell option
> -
>
> Key: HADOOP-11565
> URL: https://issues.apache.org/jira/browse/HADOOP-11565
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11565-00.patch
>
>
> Add a --slaves shell option to hadoop-config.sh to trigger the given command 
> on slave nodes.  This is required to deprecate hadoop-daemons.sh and 
> yarn-daemons.sh.





[jira] [Commented] (HADOOP-11467) KerberosAuthenticator can connect to a non-secure cluster

2015-02-10 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314798#comment-14314798
 ] 

Yongjun Zhang commented on HADOOP-11467:


Hi [~rkanter],

Thanks a lot for taking a look and catching that. Sorry for "losing" the file. 
I just uploaded rev 004 to add it.



> KerberosAuthenticator can connect to a non-secure cluster
> -
>
> Key: HADOOP-11467
> URL: https://issues.apache.org/jira/browse/HADOOP-11467
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HADOOP-11467.001.patch, HADOOP-11467.002.patch, 
> HADOOP-11467.003.patch, HADOOP-11467.004.patch
>
>
> While looking at HADOOP-10895, we discovered that the 
> {{KerberosAuthenticator}} can authenticate with a non-secure cluster, even 
> without falling back.
> The problematic code is here:
> {code:java}
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {// <- 
> A
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else if (isNegotiate()) {   // <- 
> B
> LOG.debug("Performing our own SPNEGO sequence.");
> doSpnegoSequence(token);
>   } else {  // <- 
> C
> LOG.debug("Using fallback authenticator sequence.");
> Authenticator auth = getFallBackAuthenticator();
> // Make sure that the fall back authenticator have the same
> // ConnectionConfigurator, since the method might be overridden.
> // Otherwise the fall back authenticator might not have the 
> information
> // to make the connection (e.g., SSL certificates)
> auth.setConnectionConfigurator(connConfigurator);
> auth.authenticate(url, token);
>   }
> }
> {code}
> Sometimes the JVM does the SPNEGO for us, and path A is used.  However, if 
> the {{KerberosAuthenticator}} tries to talk to a non-secure cluster, path A 
> also succeeds in this case.  
> More details can be found in this comment:
> https://issues.apache.org/jira/browse/HADOOP-10895?focusedCommentId=14247476&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14247476
> We've actually dealt with this before.  HADOOP-8883 tried to fix a related 
> problem by adding another condition to path A that would look for a header.  
> However, the JVM hides this header, making path A never occur.  We reverted 
> this change in HADOOP-10078, and didn't realize that there was still a 
> problem until now.
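The A/B/C branching described above can be sketched as a pure decision function (hypothetical helper, not the committed patch): the idea under discussion is that a 200 response should only be trusted as Kerberos-authenticated when the server actually issued a Negotiate challenge.

```java
// Hedged sketch of the path selection discussed above. These names are
// assumptions for illustration: a 200 is treated as JDK-performed SPNEGO
// (path A) only if the server challenged with WWW-Authenticate: Negotiate;
// otherwise a non-secure server falls through to the fallback authenticator.
final class SpnegoPathChooser {
  enum Path { EXTRACT_TOKEN, SPNEGO, FALLBACK }

  static Path choose(int responseCode, boolean serverSentNegotiate) {
    if (responseCode == 200 && serverSentNegotiate) {
      return Path.EXTRACT_TOKEN; // path A: JDK already did SPNEGO for us
    } else if (serverSentNegotiate) {
      return Path.SPNEGO;        // path B: do our own SPNEGO sequence
    } else {
      return Path.FALLBACK;      // path C: non-secure server, use fallback
    }
  }
}
```

With this shape, a 200 from a non-secure server (no Negotiate challenge) can no longer be mistaken for path A.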





[jira] [Updated] (HADOOP-11467) KerberosAuthenticator can connect to a non-secure cluster

2015-02-10 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-11467:
---
Attachment: HADOOP-11467.004.patch

> KerberosAuthenticator can connect to a non-secure cluster
> -
>
> Key: HADOOP-11467
> URL: https://issues.apache.org/jira/browse/HADOOP-11467
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HADOOP-11467.001.patch, HADOOP-11467.002.patch, 
> HADOOP-11467.003.patch, HADOOP-11467.004.patch
>
>
> While looking at HADOOP-10895, we discovered that the 
> {{KerberosAuthenticator}} can authenticate with a non-secure cluster, even 
> without falling back.
> The problematic code is here:
> {code:java}
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {// <- 
> A
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else if (isNegotiate()) {   // <- 
> B
> LOG.debug("Performing our own SPNEGO sequence.");
> doSpnegoSequence(token);
>   } else {  // <- 
> C
> LOG.debug("Using fallback authenticator sequence.");
> Authenticator auth = getFallBackAuthenticator();
> // Make sure that the fall back authenticator have the same
> // ConnectionConfigurator, since the method might be overridden.
> // Otherwise the fall back authenticator might not have the 
> information
> // to make the connection (e.g., SSL certificates)
> auth.setConnectionConfigurator(connConfigurator);
> auth.authenticate(url, token);
>   }
> }
> {code}
> Sometimes the JVM does the SPNEGO for us, and path A is used.  However, if 
> the {{KerberosAuthenticator}} tries to talk to a non-secure cluster, path A 
> also succeeds in this case.  
> More details can be found in this comment:
> https://issues.apache.org/jira/browse/HADOOP-10895?focusedCommentId=14247476&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14247476
> We've actually dealt with this before.  HADOOP-8883 tried to fix a related 
> problem by adding another condition to path A that would look for a header.  
> However, the JVM hides this header, making path A never occur.  We reverted 
> this change in HADOOP-10078, and didn't realize that there was still a 
> problem until now.





[jira] [Commented] (HADOOP-11573) verify that s3a handles / in secret key

2015-02-10 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314686#comment-14314686
 ] 

Ravi Prakash commented on HADOOP-11573:
---

Hi Steve!
This has been a long-standing issue: 
https://issues.apache.org/jira/browse/HADOOP-3733

> verify that s3a handles / in secret key
> ---
>
> Key: HADOOP-11573
> URL: https://issues.apache.org/jira/browse/HADOOP-11573
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
> Fix For: 2.7.0
>
>
> s3: and s3n: both get into trouble if the AWS secret key has a "/" in it
> verify that s3a doesn't have this problem, and fix if it does
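One way the "/" bites: when credentials are embedded in the filesystem URI authority (e.g. s3a://ACCESS:SECRET@bucket/path), a raw slash in the secret terminates the authority component early. A minimal illustration of the percent-encoding workaround (hypothetical helper; setting the secret in configuration avoids URI embedding entirely):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustration only: percent-encode an AWS secret so a "/" inside it cannot
// end the URI authority early when embedded in an s3a:// URI.
class SecretKeyEncoding {
  static String encode(String secret) {
    try {
      return URLEncoder.encode(secret, StandardCharsets.UTF_8.name());
    } catch (UnsupportedEncodingException e) {
      throw new AssertionError("UTF-8 is always supported", e);
    }
  }
}
```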





[jira] [Commented] (HADOOP-9329) document native build dependencies in BUILDING.txt

2015-02-10 Thread Vijay Bhat (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314679#comment-14314679
 ] 

Vijay Bhat commented on HADOOP-9329:


I've built Hadoop with native code on Ubuntu 14.04 LTS. Rick is correct that 
the Windows requirements section in BUILDING.txt addresses the needed native 
libraries. The only software / libraries I needed to install were:

* Java 1.7
* Maven 3.0.5
* Protobuf-2.5.0
* CMake 2.8.12.2
 
The command I ran for compiling native code was 

  mvn clean package -Pdist -DskipTests -Pnative

[~cmccabe], do you think I should add a section to BUILDING.txt that goes over 
requirements for building on Ubuntu? Or do you think we should close out this 
issue as is?

> document native build dependencies in BUILDING.txt
> --
>
> Key: HADOOP-9329
> URL: https://issues.apache.org/jira/browse/HADOOP-9329
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.1.0-beta
>Reporter: Colin Patrick McCabe
>Assignee: Vijay Bhat
>Priority: Trivial
>
> {{BUILDING.txt}} describes {{-Pnative}}, but it does not specify what native 
> libraries are needed for the build.  We should address this.





[jira] [Commented] (HADOOP-11543) Improve help message for hadoop/yarn command

2015-02-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314667#comment-14314667
 ] 

Allen Wittenauer commented on HADOOP-11543:
---

Created HADOOP-11576 to cover the generic shell opt usage bits.

> Improve help message for hadoop/yarn command
> 
>
> Key: HADOOP-11543
> URL: https://issues.apache.org/jira/browse/HADOOP-11543
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Jagadesh Kiran N
>Assignee: Brahma Reddy Battula
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HADOOP-11543-001.patch, HADOOP-11543-002.patch, 
> HADOOP-11543-002.patch, HADOOP-11543-branch-2-003.patch, 
> HADOOP-11543-branch-2.001.patch, HADOOP-11543-branch-2.002, 
> HADOOP-11543.patch, OrinYarncommnad.png, YARN-3128-01.patch, YARN-3128.patch
>
>
> Pls check the snapshot attached





[jira] [Assigned] (HADOOP-9329) document native build dependencies in BUILDING.txt

2015-02-10 Thread Vijay Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay Bhat reassigned HADOOP-9329:
--

Assignee: Vijay Bhat

> document native build dependencies in BUILDING.txt
> --
>
> Key: HADOOP-9329
> URL: https://issues.apache.org/jira/browse/HADOOP-9329
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.1.0-beta
>Reporter: Colin Patrick McCabe
>Assignee: Vijay Bhat
>Priority: Trivial
>
> {{BUILDING.txt}} describes {{-Pnative}}, but it does not specify what native 
> libraries are needed for the build.  We should address this.





[jira] [Created] (HADOOP-11576) genericize the shell options message for all hadoop_usage functions

2015-02-10 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11576:
-

 Summary: genericize the shell options message for all hadoop_usage 
functions
 Key: HADOOP-11576
 URL: https://issues.apache.org/jira/browse/HADOOP-11576
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Allen Wittenauer


Rather than have every shell command provide a list of \-\- options in its 
usage, they should be able to reference a generic function that lists all of 
them out.





[jira] [Updated] (HADOOP-10544) Find command - add operator functions to find command

2015-02-10 Thread Jonathan Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Allen updated HADOOP-10544:

Status: Patch Available  (was: In Progress)

> Find command - add operator functions to find command
> -
>
> Key: HADOOP-10544
> URL: https://issues.apache.org/jira/browse/HADOOP-10544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Attachments: HADOOP-10544.patch, HADOOP-10544.patch, 
> HADOOP-10544.patch, HADOOP-10544.patch
>
>
> Add operator functions (OR, NOT) to the find command created under 
> HADOOP-8989.





[jira] [Updated] (HADOOP-11575) Daemon log documentation is misleading

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11575:
--
Component/s: documentation

> Daemon log documentation is misleading
> --
>
> Key: HADOOP-11575
> URL: https://issues.apache.org/jira/browse/HADOOP-11575
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jagadesh Kiran N
>Assignee: Naganarasimha G R
>
> a. Execute the command:
> ./yarn daemonlog -setlevel xx.xx.xx.xxx:45020 ResourceManager DEBUG
> b. The change is not reflected in the process logs, even after performing 
> client-level operations.
> c. The log level is not changed.





[jira] [Moved] (HADOOP-11575) [YARN] Daemon log 'set level' and 'get level' is not reflecting in Process logs

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved YARN-3129 to HADOOP-11575:
-

Issue Type: Improvement  (was: Bug)
   Key: HADOOP-11575  (was: YARN-3129)
   Project: Hadoop Common  (was: Hadoop YARN)

> [YARN] Daemon log 'set level' and 'get level' is not reflecting in Process 
> logs 
> 
>
> Key: HADOOP-11575
> URL: https://issues.apache.org/jira/browse/HADOOP-11575
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jagadesh Kiran N
>Assignee: Naganarasimha G R
>
> a. Execute the command:
> ./yarn daemonlog -setlevel xx.xx.xx.xxx:45020 ResourceManager DEBUG
> b. The change is not reflected in the process logs, even after performing 
> client-level operations.
> c. The log level is not changed.





[jira] [Updated] (HADOOP-11575) Daemon log documentation is misleading

2015-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11575:
--
Summary: Daemon log documentation is misleading  (was: [YARN] Daemon log 
'set level' and 'get level' is not reflecting in Process logs )

> Daemon log documentation is misleading
> --
>
> Key: HADOOP-11575
> URL: https://issues.apache.org/jira/browse/HADOOP-11575
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jagadesh Kiran N
>Assignee: Naganarasimha G R
>
> a. Execute the command:
> ./yarn daemonlog -setlevel xx.xx.xx.xxx:45020 ResourceManager DEBUG
> b. The change is not reflected in the process logs, even after performing 
> client-level operations.
> c. The log level is not changed.





[jira] [Commented] (HADOOP-11467) KerberosAuthenticator can connect to a non-secure cluster

2015-02-10 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314646#comment-14314646
 ] 

Robert Kanter commented on HADOOP-11467:


[~yzhangal], it looks like TestAuthToken.java got lost from the 003 patch.

> KerberosAuthenticator can connect to a non-secure cluster
> -
>
> Key: HADOOP-11467
> URL: https://issues.apache.org/jira/browse/HADOOP-11467
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Yongjun Zhang
>Priority: Critical
> Attachments: HADOOP-11467.001.patch, HADOOP-11467.002.patch, 
> HADOOP-11467.003.patch
>
>
> While looking at HADOOP-10895, we discovered that the 
> {{KerberosAuthenticator}} can authenticate with a non-secure cluster, even 
> without falling back.
> The problematic code is here:
> {code:java}
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {// <- 
> A
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else if (isNegotiate()) {   // <- 
> B
> LOG.debug("Performing our own SPNEGO sequence.");
> doSpnegoSequence(token);
>   } else {  // <- 
> C
> LOG.debug("Using fallback authenticator sequence.");
> Authenticator auth = getFallBackAuthenticator();
> // Make sure that the fall back authenticator have the same
> // ConnectionConfigurator, since the method might be overridden.
> // Otherwise the fall back authenticator might not have the 
> information
> // to make the connection (e.g., SSL certificates)
> auth.setConnectionConfigurator(connConfigurator);
> auth.authenticate(url, token);
>   }
> }
> {code}
> Sometimes the JVM does the SPNEGO for us, and path A is used.  However, when 
> the {{KerberosAuthenticator}} talks to a non-secure cluster, path A also 
> succeeds.  
> More details can be found in this comment:
> https://issues.apache.org/jira/browse/HADOOP-10895?focusedCommentId=14247476&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14247476
> We've actually dealt with this before.  HADOOP-8883 tried to fix a related 
> problem by adding another condition to path A that would look for a header.  
> However, the JVM hides this header, making path A never occur.  We reverted 
> this change in HADOOP-10078, and didn't realize that there was still a 
> problem until now.





[jira] [Created] (HADOOP-11574) Uber-JIRA: improve Hadoop network resilience & diagnostics

2015-02-10 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11574:
---

 Summary: Uber-JIRA: improve Hadoop network resilience & diagnostics
 Key: HADOOP-11574
 URL: https://issues.apache.org/jira/browse/HADOOP-11574
 Project: Hadoop Common
  Issue Type: Task
  Components: net
Affects Versions: 2.6.0
Reporter: Steve Loughran


Improve Hadoop's resilience to bad network conditions/problems, including

* improving recognition of problem states
* improving diagnostics
* better handling of IPv6 addresses, even if the protocol is unsupported
* better client-side behaviour when there are connectivity problems (i.e. while 
some errors can safely be retried, DNS failures are not among them)





[jira] [Commented] (HADOOP-9890) single cluster setup docs don't work

2015-02-10 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314438#comment-14314438
 ] 

Brahma Reddy Battula commented on HADOOP-9890:
--

I feel this can be closed now, since the issue no longer exists.

> single cluster setup docs don't work
> 
>
> Key: HADOOP-9890
> URL: https://issues.apache.org/jira/browse/HADOOP-9890
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.1.1-beta
>Reporter: Steve Loughran
>Priority: Minor
>
> The instructions in the SingleClusterSetup.md to build a tarball don't work





[jira] [Commented] (HADOOP-11570) S3AInputStream.close() downloads the remaining bytes of the object from S3

2015-02-10 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314416#comment-14314416
 ] 

Thomas Demoor commented on HADOOP-11570:


Source looks good. High impact patch! 

Have you run the tests (by setting auth-keys.xml etc.)? I can give them a try 
in a couple of days: I want to finish up HADOOP-11522 and HADOOP-9565 first.

> S3AInputStream.close() downloads the remaining bytes of the object from S3
> --
>
> Key: HADOOP-11570
> URL: https://issues.apache.org/jira/browse/HADOOP-11570
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Dan Hecht
> Attachments: HADOOP-11570-001.patch
>
>
> Currently, S3AInputStream.close() calls S3Object.close().  But, 
> S3Object.close() will read the remaining bytes of the S3 object, potentially 
> transferring a lot of bytes from S3 that are discarded.  Instead, the wrapped 
> stream should be aborted to avoid transferring discarded bytes (unless the 
> preceding read() finished at contentLength).  For example, reading only the 
> first byte of a 1 GB object and then closing the stream will result in all 1 
> GB transferred from S3.
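The decision the description calls for can be sketched in isolation (the helper name {{shouldAbort}} is illustrative, not the actual S3A code): abort the wrapped HTTP stream when unread bytes remain, and close it normally only when the last read already reached contentLength.

```java
// Sketch of the close-vs-abort decision described above.
// shouldAbort is a hypothetical helper, not the real S3A API.
public class S3ACloseSketch {
    /**
     * Returns true when draining the wrapped HTTP stream on close()
     * would transfer leftover bytes, i.e. the reader stopped short
     * of contentLength.
     */
    static boolean shouldAbort(long pos, long contentLength) {
        return pos < contentLength;
    }

    public static void main(String[] args) {
        long oneGiB = 1L << 30;
        // Read only the first byte of a 1 GB object: abort, otherwise
        // close() drains the remaining ~1 GB from S3.
        System.out.println(shouldAbort(1, oneGiB));      // true -> abort
        // Read to the end: a plain close() transfers nothing extra.
        System.out.println(shouldAbort(oneGiB, oneGiB)); // false -> close
    }
}
```

Aborting tears the connection down instead of returning it to the pool, so it trades connection reuse for avoiding the wasted transfer.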





[jira] [Commented] (HADOOP-11570) S3AInputStream.close() downloads the remaining bytes of the object from S3

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314380#comment-14314380
 ] 

Hadoop QA commented on HADOOP-11570:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12697670/HADOOP-11570-001.patch
  against trunk revision e0ec071.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-aws.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5642//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5642//console

This message is automatically generated.

> S3AInputStream.close() downloads the remaining bytes of the object from S3
> --
>
> Key: HADOOP-11570
> URL: https://issues.apache.org/jira/browse/HADOOP-11570
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Dan Hecht
> Attachments: HADOOP-11570-001.patch
>
>
> Currently, S3AInputStream.close() calls S3Object.close().  But, 
> S3Object.close() will read the remaining bytes of the S3 object, potentially 
> transferring a lot of bytes from S3 that are discarded.  Instead, the wrapped 
> stream should be aborted to avoid transferring discarded bytes (unless the 
> preceding read() finished at contentLength).  For example, reading only the 
> first byte of a 1 GB object and then closing the stream will result in all 1 
> GB transferred from S3.





[jira] [Commented] (HADOOP-11192) Change old subversion links to git

2015-02-10 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314363#comment-14314363
 ] 

Brahma Reddy Battula commented on HADOOP-11192:
---

AFAIK the old SVN links are no longer present in hadoop-project/src/site/site.xml. 
Can we close this JIRA?

> Change old subversion links to git
> --
>
> Key: HADOOP-11192
> URL: https://issues.apache.org/jira/browse/HADOOP-11192
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ravi Prakash
>
> e.g. hadoop-project/src/site/site.xml still references SVN. 
> We should probably check our wiki's and other documentation. 





[jira] [Commented] (HADOOP-11543) Improve help message for hadoop/yarn command

2015-02-10 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314331#comment-14314331
 ] 

Tsuyoshi OZAWA commented on HADOOP-11543:
-

+1 for branch-2 patch. I'll commit it to branch-2 in 1 or 2 days.

> Improve help message for hadoop/yarn command
> 
>
> Key: HADOOP-11543
> URL: https://issues.apache.org/jira/browse/HADOOP-11543
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Jagadesh Kiran N
>Assignee: Brahma Reddy Battula
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HADOOP-11543-001.patch, HADOOP-11543-002.patch, 
> HADOOP-11543-002.patch, HADOOP-11543-branch-2-003.patch, 
> HADOOP-11543-branch-2.001.patch, HADOOP-11543-branch-2.002, 
> HADOOP-11543.patch, OrinYarncommnad.png, YARN-3128-01.patch, YARN-3128.patch
>
>
> Please check the attached snapshot.





[jira] [Commented] (HADOOP-11543) Improve help message for hadoop/yarn command

2015-02-10 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314329#comment-14314329
 ] 

Tsuyoshi OZAWA commented on HADOOP-11543:
-

[~aw] Makes sense. For trunk, I think it's better and more intuitive to add a 
message like [GENERIC SHELL OPTS]. As you mentioned, for branch-2 the --loglevel 
entry doesn't add useful information, so let's remove it there.


> Improve help message for hadoop/yarn command
> 
>
> Key: HADOOP-11543
> URL: https://issues.apache.org/jira/browse/HADOOP-11543
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Jagadesh Kiran N
>Assignee: Brahma Reddy Battula
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HADOOP-11543-001.patch, HADOOP-11543-002.patch, 
> HADOOP-11543-002.patch, HADOOP-11543-branch-2-003.patch, 
> HADOOP-11543-branch-2.001.patch, HADOOP-11543-branch-2.002, 
> HADOOP-11543.patch, OrinYarncommnad.png, YARN-3128-01.patch, YARN-3128.patch
>
>
> Please check the attached snapshot.





[jira] [Commented] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314309#comment-14314309
 ] 

Hudson commented on HADOOP-11512:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2051 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2051/])
HADOOP-11512. Use getTrimmedStrings when reading serialization keys. 
Contributed by Ryan P. (harsh: rev e0ec0718d033e84bda2ebeab7beb00b7dbd990c0)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/serializer/TestSerializationFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Use getTrimmedStrings when reading serialization keys
> -
>
> Key: HADOOP-11512
> URL: https://issues.apache.org/jira/browse/HADOOP-11512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ryan P
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11512.branch-2.patch, HADOOP-11512.patch, 
> HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch
>
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
>  we grab the IO_SERIALIZATIONS_KEY config as Configuration#getStrings(…) 
> which does not trim the input. This could cause confusing user issues if 
> someone manually overrides the key in the XML files/Configuration object 
> without using the dynamic approach.
> The call should instead use Configuration#getTrimmedStrings(…), so the 
> whitespace is trimmed before the class names are searched on the classpath.
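The failure mode is easy to reproduce with plain string handling. This standalone sketch (it does not use the real {{Configuration}} class, whose methods behave analogously) contrasts untrimmed and trimmed splitting of a comma-separated class list:

```java
import java.util.Arrays;

// Standalone illustration of why untrimmed config values break class
// lookup; Configuration#getStrings vs #getTrimmedStrings behave the
// same way on comma-separated lists.
public class TrimDemo {
    static String[] getStrings(String v) {        // no trimming
        return v.split(",");
    }

    static String[] getTrimmedStrings(String v) { // trims each entry
        return Arrays.stream(v.split(","))
                .map(String::trim)
                .toArray(String[]::new);
    }

    public static void main(String[] args) {
        // A value hand-edited into an XML file, with stray whitespace.
        String v = "org.apache.hadoop.io.serializer.WritableSerialization,"
                 + " org.apache.hadoop.io.serializer.avro.AvroReflectSerialization";
        // " org...AvroReflectSerialization" keeps its leading space, so
        // Class.forName would fail to find it on the classpath.
        System.out.println(getStrings(v)[1].startsWith(" "));          // true
        // The trimmed variant yields a loadable class name.
        System.out.println(getTrimmedStrings(v)[1].startsWith("org")); // true
    }
}
```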





[jira] [Commented] (HADOOP-11510) Expose truncate API via FileContext

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314311#comment-14314311
 ] 

Hudson commented on HADOOP-11510:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2051 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2051/])
HADOOP-11510. Expose truncate API via FileContext. (yliu) (yliu: rev 
1b56d1ce324165688d40c238858e1e19a1e60f7e)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHDFSFileContextMainOperations.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestAfsCheckPath.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java


> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.7.0
>
> Attachments: HADOOP-11510.001.patch, HADOOP-11510.002.patch, 
> HADOOP-11510.003.patch
>
>
> We also need to expose truncate API via {{org.apache.hadoop.fs.FileContext}}.





[jira] [Commented] (HADOOP-11570) S3AInputStream.close() downloads the remaining bytes of the object from S3

2015-02-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314308#comment-14314308
 ] 

Ted Yu commented on HADOOP-11570:
-

lgtm

> S3AInputStream.close() downloads the remaining bytes of the object from S3
> --
>
> Key: HADOOP-11570
> URL: https://issues.apache.org/jira/browse/HADOOP-11570
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Dan Hecht
> Attachments: HADOOP-11570-001.patch
>
>
> Currently, S3AInputStream.close() calls S3Object.close().  But, 
> S3Object.close() will read the remaining bytes of the S3 object, potentially 
> transferring a lot of bytes from S3 that are discarded.  Instead, the wrapped 
> stream should be aborted to avoid transferring discarded bytes (unless the 
> preceding read() finished at contentLength).  For example, reading only the 
> first byte of a 1 GB object and then closing the stream will result in all 1 
> GB transferred from S3.





[jira] [Updated] (HADOOP-11570) S3AInputStream.close() downloads the remaining bytes of the object from S3

2015-02-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-11570:

Status: Patch Available  (was: Open)

> S3AInputStream.close() downloads the remaining bytes of the object from S3
> --
>
> Key: HADOOP-11570
> URL: https://issues.apache.org/jira/browse/HADOOP-11570
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Dan Hecht
> Attachments: HADOOP-11570-001.patch
>
>
> Currently, S3AInputStream.close() calls S3Object.close().  But, 
> S3Object.close() will read the remaining bytes of the S3 object, potentially 
> transferring a lot of bytes from S3 that are discarded.  Instead, the wrapped 
> stream should be aborted to avoid transferring discarded bytes (unless the 
> preceding read() finished at contentLength).  For example, reading only the 
> first byte of a 1 GB object and then closing the stream will result in all 1 
> GB transferred from S3.





[jira] [Commented] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314275#comment-14314275
 ] 

Hudson commented on HADOOP-11512:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #101 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/101/])
HADOOP-11512. Use getTrimmedStrings when reading serialization keys. 
Contributed by Ryan P. (harsh: rev e0ec0718d033e84bda2ebeab7beb00b7dbd990c0)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/serializer/TestSerializationFactory.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Use getTrimmedStrings when reading serialization keys
> -
>
> Key: HADOOP-11512
> URL: https://issues.apache.org/jira/browse/HADOOP-11512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ryan P
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11512.branch-2.patch, HADOOP-11512.patch, 
> HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch
>
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
>  we grab the IO_SERIALIZATIONS_KEY config as Configuration#getStrings(…) 
> which does not trim the input. This could cause confusing user issues if 
> someone manually overrides the key in the XML files/Configuration object 
> without using the dynamic approach.
> The call should instead use Configuration#getTrimmedStrings(…), so the 
> whitespace is trimmed before the class names are searched on the classpath.





[jira] [Commented] (HADOOP-11510) Expose truncate API via FileContext

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314277#comment-14314277
 ] 

Hudson commented on HADOOP-11510:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #101 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/101/])
HADOOP-11510. Expose truncate API via FileContext. (yliu) (yliu: rev 
1b56d1ce324165688d40c238858e1e19a1e60f7e)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestAfsCheckPath.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHDFSFileContextMainOperations.java


> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.7.0
>
> Attachments: HADOOP-11510.001.patch, HADOOP-11510.002.patch, 
> HADOOP-11510.003.patch
>
>
> We also need to expose truncate API via {{org.apache.hadoop.fs.FileContext}}.





[jira] [Commented] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314121#comment-14314121
 ] 

Hudson commented on HADOOP-11512:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2032 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2032/])
HADOOP-11512. Use getTrimmedStrings when reading serialization keys. 
Contributed by Ryan P. (harsh: rev e0ec0718d033e84bda2ebeab7beb00b7dbd990c0)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/serializer/TestSerializationFactory.java


> Use getTrimmedStrings when reading serialization keys
> -
>
> Key: HADOOP-11512
> URL: https://issues.apache.org/jira/browse/HADOOP-11512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ryan P
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11512.branch-2.patch, HADOOP-11512.patch, 
> HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch
>
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
>  we grab the IO_SERIALIZATIONS_KEY config as Configuration#getStrings(…) 
> which does not trim the input. This could cause confusing user issues if 
> someone manually overrides the key in the XML files/Configuration object 
> without using the dynamic approach.
> The call should instead use Configuration#getTrimmedStrings(…), so the 
> whitespace is trimmed before the class names are searched on the classpath.





[jira] [Commented] (HADOOP-8934) Shell command ls should include sort options

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314126#comment-14314126
 ] 

Hudson commented on HADOOP-8934:


FAILURE: Integrated in Hadoop-Hdfs-trunk #2032 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2032/])
HADOOP-8934. Shell command ls should include sort options (Jonathan Allen via 
aw) (aw: rev 30b797ee9df30260314eeadffc7d51492871b352)
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
* hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Ls.java
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-8934. Shell command ls should include sort options (Jonathan Allen via 
aw) (missed file) (aw: rev 576459801c4e21effc4e3bca796527896b6e4f4b)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestLs.java


> Shell command ls should include sort options
> 
>
> Key: HADOOP-8934
> URL: https://issues.apache.org/jira/browse/HADOOP-8934
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
> HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
> HADOOP-8934.patch
>
>
> The shell command ls should include options to sort the output similar to the 
> unix ls command.  The following options seem appropriate:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : use access time rather than modification time for sort and display
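A rough sketch of how those options could compose (toy types only; the real logic lives in {{fs/shell/Ls.java}}): pick a comparator for -t or -S, then optionally reverse it for -r.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrates the proposed -t / -S / -r sort semantics with a toy
// file record; hypothetical sketch, not the actual Ls implementation.
public class LsSortSketch {
    record Entry(String name, long size, long mtime) {}

    static Comparator<Entry> comparator(boolean byTime, boolean bySize,
                                        boolean reverse) {
        Comparator<Entry> c = Comparator.comparing(Entry::name);  // default: by name
        if (byTime) {
            c = Comparator.comparingLong(Entry::mtime).reversed(); // -t: newest first
        } else if (bySize) {
            c = Comparator.comparingLong(Entry::size).reversed();  // -S: largest first
        }
        return reverse ? c.reversed() : c;                         // -r: flip order
    }

    public static void main(String[] args) {
        List<Entry> l = new ArrayList<>(List.of(
            new Entry("a", 10, 300),
            new Entry("b", 30, 100),
            new Entry("c", 20, 200)));
        l.sort(comparator(true, false, false));   // -t: newest first
        System.out.println(l.get(0).name());      // a (mtime 300)
        l.sort(comparator(false, true, true));    // -S -r: smallest first
        System.out.println(l.get(0).name());      // a (size 10)
    }
}
```

The -u option would simply swap which timestamp field feeds the -t comparator.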





[jira] [Commented] (HADOOP-11510) Expose truncate API via FileContext

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314123#comment-14314123
 ] 

Hudson commented on HADOOP-11510:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2032 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2032/])
HADOOP-11510. Expose truncate API via FileContext. (yliu) (yliu: rev 
1b56d1ce324165688d40c238858e1e19a1e60f7e)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestAfsCheckPath.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHDFSFileContextMainOperations.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java


> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.7.0
>
> Attachments: HADOOP-11510.001.patch, HADOOP-11510.002.patch, 
> HADOOP-11510.003.patch
>
>
> We also need to expose truncate API via {{org.apache.hadoop.fs.FileContext}}.





[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314102#comment-14314102
 ] 

Hadoop QA commented on HADOOP-11569:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12697710/HADOOP-11569-003.patch
  against trunk revision e0ec071.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5641//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5641//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5641//console

This message is automatically generated.

> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch
>
>
> If there are multiple similar MapFiles with the same key and value classes, 
> they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging is straightforward because MapFiles are already sorted.
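[Editorial note: because each MapFile is sorted, the core of such a merge is a k-way merge of sorted key streams. The sketch below (JDK only, hypothetical helper, not the patch's actual code) shows that core with a priority queue keyed on each input's current smallest element.]

```java
// Sketch: k-way merge of already-sorted inputs, the principle behind
// merging sorted MapFiles. Not Hadoop code; JDK collections only.
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

public class SortedMerge {
    static List<String> merge(List<List<String>> sortedInputs) {
        // Heap entries pair an input's current element with that input's iterator.
        PriorityQueue<Map.Entry<String, Iterator<String>>> heap =
            new PriorityQueue<>(Map.Entry.comparingByKey());
        for (List<String> input : sortedInputs) {
            Iterator<String> it = input.iterator();
            if (it.hasNext()) heap.add(new AbstractMap.SimpleEntry<>(it.next(), it));
        }
        List<String> merged = new ArrayList<>();
        while (!heap.isEmpty()) {
            Map.Entry<String, Iterator<String>> smallest = heap.poll();
            merged.add(smallest.getKey());        // emit the globally smallest key
            Iterator<String> it = smallest.getValue();
            if (it.hasNext()) heap.add(new AbstractMap.SimpleEntry<>(it.next(), it));
        }
        return merged;
    }

    public static void main(String[] args) {
        System.out.println(merge(Arrays.asList(
            Arrays.asList("a", "d"), Arrays.asList("b", "c", "e"))));
        // prints [a, b, c, d, e]
    }
}
```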





[jira] [Commented] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314035#comment-14314035
 ] 

Hudson commented on HADOOP-11512:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #834 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/834/])
HADOOP-11512. Use getTrimmedStrings when reading serialization keys. 
Contributed by Ryan P. (harsh: rev e0ec0718d033e84bda2ebeab7beb00b7dbd990c0)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/serializer/TestSerializationFactory.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java


> Use getTrimmedStrings when reading serialization keys
> -
>
> Key: HADOOP-11512
> URL: https://issues.apache.org/jira/browse/HADOOP-11512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ryan P
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11512.branch-2.patch, HADOOP-11512.patch, 
> HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch
>
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
>  we read the IO_SERIALIZATIONS_KEY config via Configuration#getStrings(…), 
> which does not trim the input. This could cause confusing user issues if 
> someone manually overrides the key in the XML files/Configuration object 
> without using the dynamic approach.
> The call should instead use Configuration#getTrimmedStrings(…), so the 
> whitespace is trimmed before the class names are searched on the classpath.
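[Editorial note: the whitespace problem can be reproduced with the JDK alone. This sketch (hypothetical names, not Hadoop's actual code) shows why an untrimmed class-name list fails to resolve while the trimmed one loads fine.]

```java
// Minimal reproduction (JDK only): a comma-separated class list copied into
// an XML file often carries stray whitespace and newlines around each entry.
public class TrimDemo {
    public static void main(String[] args) {
        String configured = "java.lang.String,\n    java.util.ArrayList";
        for (String name : configured.split(",")) {
            try {
                // Untrimmed lookup, analogous to Configuration#getStrings:
                Class.forName(name);
                System.out.println("loaded: " + name);
            } catch (ClassNotFoundException e) {
                // "\n    java.util.ArrayList" is not a valid class name.
                System.out.println("failed untrimmed: <" + name + ">");
            }
            try {
                // Trimmed lookup, analogous to Configuration#getTrimmedStrings:
                System.out.println("loaded trimmed: "
                    + Class.forName(name.trim()).getName());
            } catch (ClassNotFoundException e) {
                System.out.println("failed trimmed: " + name.trim());
            }
        }
    }
}
```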





[jira] [Commented] (HADOOP-8934) Shell command ls should include sort options

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314040#comment-14314040
 ] 

Hudson commented on HADOOP-8934:


FAILURE: Integrated in Hadoop-Yarn-trunk #834 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/834/])
HADOOP-8934. Shell command ls should include sort options (Jonathan Allen via 
aw) (aw: rev 30b797ee9df30260314eeadffc7d51492871b352)
* hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Ls.java
HADOOP-8934. Shell command ls should include sort options (Jonathan Allen via 
aw) (missed file) (aw: rev 576459801c4e21effc4e3bca796527896b6e4f4b)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestLs.java


> Shell command ls should include sort options
> 
>
> Key: HADOOP-8934
> URL: https://issues.apache.org/jira/browse/HADOOP-8934
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
> HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
> HADOOP-8934.patch
>
>
> The shell command ls should include options to sort the output similar to the 
> unix ls command.  The following options seem appropriate:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : use access time rather than modification time for sort and display
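[Editorial note: the flag semantics above boil down to comparator composition. This JDK-only sketch (hypothetical names, not the patch's actual code) shows how -t, -S, and -r compose: time or size ordering first, then an optional reversal.]

```java
// Sketch: comparator composition behind ls-style sort flags.
// -t sorts by modification time (newest first), -S by size (largest first),
// -r reverses whichever ordering is active; default is by name.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LsSort {
    static final class Entry {
        final String name;
        final long mtime, size;
        Entry(String name, long mtime, long size) {
            this.name = name; this.mtime = mtime; this.size = size;
        }
    }

    static Comparator<Entry> order(boolean byTime, boolean bySize, boolean reverse) {
        Comparator<Entry> c;
        if (byTime)      c = Comparator.comparingLong((Entry e) -> e.mtime).reversed();
        else if (bySize) c = Comparator.comparingLong((Entry e) -> e.size).reversed();
        else             c = Comparator.comparing((Entry e) -> e.name);
        return reverse ? c.reversed() : c;
    }

    public static void main(String[] args) {
        List<Entry> dir = new ArrayList<>(Arrays.asList(
            new Entry("old-big", 100, 9000),
            new Entry("new-small", 200, 10)));
        dir.sort(order(true, false, false));    // like -t
        System.out.println(dir.get(0).name);    // prints new-small (newer)
        dir.sort(order(false, true, false));    // like -S
        System.out.println(dir.get(0).name);    // prints old-big (larger)
    }
}
```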





[jira] [Commented] (HADOOP-11510) Expose truncate API via FileContext

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14314037#comment-14314037
 ] 

Hudson commented on HADOOP-11510:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #834 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/834/])
HADOOP-11510. Expose truncate API via FileContext. (yliu) (yliu: rev 
1b56d1ce324165688d40c238858e1e19a1e60f7e)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestAfsCheckPath.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHDFSFileContextMainOperations.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java


> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: 2.7.0
>
> Attachments: HADOOP-11510.001.patch, HADOOP-11510.002.patch, 
> HADOOP-11510.003.patch
>
>
> We also need to expose truncate API via {{org.apache.hadoop.fs.FileContext}}.





[jira] [Updated] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-10 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-11569:
---
Attachment: HADOOP-11569-003.patch

Updated {{close()}} to set closed streams to null.

> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch
>
>
> If there are multiple similar MapFiles with the same key and value classes, 
> they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging is straightforward because MapFiles are already sorted.





[jira] [Commented] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313976#comment-14313976
 ] 

Hudson commented on HADOOP-11512:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #100 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/100/])
HADOOP-11512. Use getTrimmedStrings when reading serialization keys. 
Contributed by Ryan P. (harsh: rev e0ec0718d033e84bda2ebeab7beb00b7dbd990c0)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/serializer/TestSerializationFactory.java


> Use getTrimmedStrings when reading serialization keys
> -
>
> Key: HADOOP-11512
> URL: https://issues.apache.org/jira/browse/HADOOP-11512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ryan P
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11512.branch-2.patch, HADOOP-11512.patch, 
> HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch
>
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
>  we read the IO_SERIALIZATIONS_KEY config via Configuration#getStrings(…), 
> which does not trim the input. This could cause confusing user issues if 
> someone manually overrides the key in the XML files/Configuration object 
> without using the dynamic approach.
> The call should instead use Configuration#getTrimmedStrings(…), so the 
> whitespace is trimmed before the class names are searched on the classpath.




