[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature

2014-04-30 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986396#comment-13986396
 ] 

Akira AJISAKA commented on HADOOP-8989:
---

Thanks [~jonallen] for splitting the patch!
{quote}
I think you get the results as full paths
{code}
hdfs://127.0.0.1:59894/texttest
hdfs://127.0.0.1:59894/texttest/file..bz2
hdfs://127.0.0.1:59894/texttest/file.deflate
hdfs://127.0.0.1:59894/texttest/file.gz
{code}
This should be relative paths I think from where you are finding, no?
{quote}
Would you please address [~umamaheswararao]'s comment above?
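For illustration, the relative form suggested in the quoted comment could be derived by relativizing each absolute result against the search root. This is a standalone sketch using java.net.URI, not code from the patch:

```java
import java.net.URI;

// Sketch: turn an absolute result such as hdfs://127.0.0.1:59894/texttest/file.gz
// into a path relative to the directory being searched. Illustrative only; the
// actual patch may compute relative paths differently.
public class RelativeResult {
    public static String relativize(String searchRoot, String result) {
        // URI.relativize strips the scheme/authority and the root's path prefix
        // when the root is a prefix of the result.
        return URI.create(searchRoot).relativize(URI.create(result)).toString();
    }

    public static void main(String[] args) {
        System.out.println(relativize("hdfs://127.0.0.1:59894/",
                "hdfs://127.0.0.1:59894/texttest/file.gz"));
    }
}
```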

A new comment:
* Tests for the -iname option are needed.

Two minor nits:
* Typo in TestAnd.java
{code}
// test both expressions failining
{code}
* There are some lines longer than 80 characters in TestFind.java.

> hadoop dfs -find feature
> 
>
> Key: HADOOP-8989
> URL: https://issues.apache.org/jira/browse/HADOOP-8989
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Marco Nicosia
>Assignee: Jonathan Allen
> Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
> HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch
>
>
> Both sysadmins and users make frequent use of the unix 'find' command, but 
> Hadoop has no correlate. Without this, users are writing scripts which make 
> heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hdfs 
> -lsr is somewhat taxing on the NameNode, and a really slow experience on the 
> client side. Possibly an in-NameNode find operation would be only a bit more 
> taxing on the NameNode, but significantly faster from the client's point of 
> view?
> The minimum set of options I can think of which would make a Hadoop find 
> command generally useful is (in priority order):
> * -type (file or directory, for now)
> * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
> * -print0 (for piping to xargs -0)
> * -depth
> * -owner/-group (and -nouser/-nogroup)
> * -name (allowing for shell pattern, or even regex?)
> * -perm
> * -size
> One possible special case, but could possibly be really cool if it ran from 
> within the NameNode:
> * -delete
> The "hadoop dfs -lsr | hadoop dfs -rm" cycle is really, really slow.
> Lower priority, some people do use operators, mostly to execute -or searches 
> such as:
> * find / \(-nouser -or -nogroup\)
> Finally, I thought I'd include a link to the [Posix spec for 
> find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10556) Add toLowerCase support to auth_to_local rules for service name

2014-04-30 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986375#comment-13986375
 ] 

Alejandro Abdelnur commented on HADOOP-10556:
-

By adding a /L option (similar to the existing /g), we could handle lowercasing.

Because Java regexes don't support /L 
(http://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html), we 
will have to handle that explicitly in the KerberosName rule-handling logic.
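Since Java regexes lack \L, one way to handle a hypothetical /L flag explicitly is to lowercase the result after the substitution. A minimal sketch, assuming a sed-style rule already parsed into its parts (not the committed KerberosName logic):

```java
import java.util.regex.Pattern;

// Sketch: apply a substitution rule and, when the rule carries the /L flag,
// lowercase the result explicitly, since java.util.regex has no \L construct.
public class LowercaseRule {
    public static String apply(String input, String regex,
                               String replacement, boolean lowercase) {
        String result = Pattern.compile(regex).matcher(input).replaceAll(replacement);
        return lowercase ? result.toLowerCase() : result;
    }

    public static void main(String[] args) {
        // e.g. strip the realm and lowercase the short name
        System.out.println(apply("FooBar@EXAMPLE.COM", "@EXAMPLE\\.COM$", "", true));
    }
}
```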


> Add toLowerCase support to auth_to_local rules for service name
> ---
>
> Key: HADOOP-10556
> URL: https://issues.apache.org/jira/browse/HADOOP-10556
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>
> When using Vintela to integrate Linux with AD, principals are lowercased. If 
> the accounts in AD have uppercase characters (i.e. FooBar), the Kerberos 
> principals also have uppercase characters (i.e. FooBar/). Because of 
> this, when a service (Yarn/HDFS) extracts the service name from the Kerberos 
> principal (FooBar) and uses it to obtain groups, the user is not found 
> because on Linux the user FooBar is unknown; it has been converted to foobar.





[jira] [Created] (HADOOP-10556) Add toLowerCase support to auth_to_local rules for service name

2014-04-30 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-10556:
---

 Summary: Add toLowerCase support to auth_to_local rules for 
service name
 Key: HADOOP-10556
 URL: https://issues.apache.org/jira/browse/HADOOP-10556
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.4.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur


When using Vintela to integrate Linux with AD, principals are lowercased. If 
the accounts in AD have uppercase characters (i.e. FooBar), the Kerberos 
principals also have uppercase characters (i.e. FooBar/). Because of this, 
when a service (Yarn/HDFS) extracts the service name from the Kerberos 
principal (FooBar) and uses it to obtain groups, the user is not found because 
on Linux the user FooBar is unknown; it has been converted to foobar.






[jira] [Commented] (HADOOP-10376) Refactor refresh*Protocols into a single generic refreshConfigProtocol

2014-04-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986355#comment-13986355
 ] 

Hadoop QA commented on HADOOP-10376:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12642725/RefreshFrameworkProposal.pdf
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3880//console

This message is automatically generated.

> Refactor refresh*Protocols into a single generic refreshConfigProtocol
> --
>
> Key: HADOOP-10376
> URL: https://issues.apache.org/jira/browse/HADOOP-10376
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Li
>Assignee: Chris Li
>Priority: Minor
> Attachments: RefreshFrameworkProposal.pdf
>
>
> See https://issues.apache.org/jira/browse/HADOOP-10285
> There are starting to be too many refresh*Protocols. We can refactor them to 
> use a single protocol with a variable payload to choose what to do.
> Thereafter, we can return an indication of success or failure.





[jira] [Resolved] (HADOOP-10389) Native RPCv9 client

2014-04-30 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10389.
---

Resolution: Fixed

committed to branch

> Native RPCv9 client
> ---
>
> Key: HADOOP-10389
> URL: https://issues.apache.org/jira/browse/HADOOP-10389
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-10388
>Reporter: Binglin Chang
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, 
> HADOOP-10389.004.patch, HADOOP-10389.005.patch
>
>






[jira] [Commented] (HADOOP-10389) Native RPCv9 client

2014-04-30 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986295#comment-13986295
 ] 

Colin Patrick McCabe commented on HADOOP-10389:
---

bq. I couldn't tell if this had already made it into the hadoop-common build 
system?

It's not hooked up to Maven yet, no.  We'll use the same method we use to hook 
CMake into the current Maven poms, I think.  Shouldn't be too difficult.

bq. I'm assuming this is a preliminary patch? I think that we'll need more test 
cases and a valgrind run?

I have run valgrind on it and it came up clean.  There is one test case here; 
more will follow.

bq. I think in the long term the cmake file will need to be cleaned up. I saw a 
couple of hacks (i.e. find_library(PROTOC_LIB NAMES libprotoc.so HINTS 
/opt/protobuf-2.5/lib64/)).

Fixed, thanks

bq. Do we need a cap on how large '&reactor->inbox.pending_calls' can be? I'm 
concerned that some systems can be intensive enough to cause OOM exceptions. 
Also, client side throttling may set good boundaries?

I think the first thing to do is implement RPC timeouts, which we can do in a 
follow-on.  This should avoid having queues that grow too much, unless the 
client is truly unwise about making a ton of calls in a short period.
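The client-side cap idea can be illustrated with a bounded queue whose submission fails fast instead of growing without limit. A hypothetical sketch with invented names, not code from the patch (which is C, not Java):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: cap the number of pending calls so an unbounded backlog cannot
// exhaust memory; callers that hit the cap can throttle or fail fast.
public class BoundedCallQueue {
    private final BlockingQueue<Runnable> pendingCalls;

    public BoundedCallQueue(int capacity) {
        this.pendingCalls = new ArrayBlockingQueue<>(capacity);
    }

    // Returns false instead of queueing when the cap is reached.
    public boolean trySubmit(Runnable call) {
        return pendingCalls.offer(call);
    }

    public int size() {
        return pendingCalls.size();
    }
}
```

An RPC timeout serves the same goal from the other direction: rather than refusing new calls, it bounds how long a queued call can linger.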

[~vicaya]: yes, we will do user / group security in a follow-up.  I think for 
the "plain" auth method, it will be as simple as just filling in the fields in 
the header.  SASL will be more work.  There is definitely lots of work for 
branch committers here :)

bq. +1

Will commit to branch.  And file follow-ups...

> Native RPCv9 client
> ---
>
> Key: HADOOP-10389
> URL: https://issues.apache.org/jira/browse/HADOOP-10389
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-10388
>Reporter: Binglin Chang
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, 
> HADOOP-10389.004.patch
>
>






[jira] [Updated] (HADOOP-10389) Native RPCv9 client

2014-04-30 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10389:
--

Attachment: HADOOP-10389.005.patch

> Native RPCv9 client
> ---
>
> Key: HADOOP-10389
> URL: https://issues.apache.org/jira/browse/HADOOP-10389
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-10388
>Reporter: Binglin Chang
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, 
> HADOOP-10389.004.patch, HADOOP-10389.005.patch
>
>






[jira] [Commented] (HADOOP-10555) add offset support to MurmurHash

2014-04-30 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986197#comment-13986197
 ] 

Sergey Shelukhin commented on HADOOP-10555:
---

[~t3rmin4t0r] fyi

> add offset support to MurmurHash
> 
>
> Key: HADOOP-10555
> URL: https://issues.apache.org/jira/browse/HADOOP-10555
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Trivial
> Attachments: HADOOP-10555.patch
>
>
> From HIVE-6430 code review





[jira] [Updated] (HADOOP-10555) add offset support to MurmurHash

2014-04-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HADOOP-10555:
--

Status: Patch Available  (was: Open)

> add offset support to MurmurHash
> 
>
> Key: HADOOP-10555
> URL: https://issues.apache.org/jira/browse/HADOOP-10555
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Trivial
> Attachments: HADOOP-10555.patch
>
>
> From HIVE-6430 code review





[jira] [Updated] (HADOOP-10555) add offset support to MurmurHash

2014-04-30 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HADOOP-10555:
--

Attachment: HADOOP-10555.patch

Can someone please assign this to me? I don't have permission to assign issues.

> add offset support to MurmurHash
> 
>
> Key: HADOOP-10555
> URL: https://issues.apache.org/jira/browse/HADOOP-10555
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Priority: Trivial
> Attachments: HADOOP-10555.patch
>
>
> From HIVE-6430 code review





[jira] [Created] (HADOOP-10555) add offset support to MurmurHash

2014-04-30 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HADOOP-10555:
-

 Summary: add offset support to MurmurHash
 Key: HADOOP-10555
 URL: https://issues.apache.org/jira/browse/HADOOP-10555
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Priority: Trivial


From HIVE-6430 code review





[jira] [Updated] (HADOOP-10554) Performance: Scan metrics for 2.4 are down compared to 0.23.9

2014-04-30 Thread patrick white (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

patrick white updated HADOOP-10554:
---

Summary: Performance: Scan metrics for 2.4 are down compared to 0.23.9  
(was: Performance: Scan metrics for 2.4 are notably down compared to 0.23.9)

> Performance: Scan metrics for 2.4 are down compared to 0.23.9
> -
>
> Key: HADOOP-10554
> URL: https://issues.apache.org/jira/browse/HADOOP-10554
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: patrick white
>
> Performance comparison benchmarks for the Scan test's runtime and throughput 
> metrics are slightly outside the 5% tolerance in 2.x compared with 0.23. The 
> trend is consistent across later releases in both lines; the latest release 
> numbers are:
> Runtime:
> 2.4.0.0 -> 73.6 seconds (avg 5 passes)
> 0.23.9.12 -> 69.4 seconds (avg 5 passes)
> Diff: -5.7%
> Throughput:
> 2.4.0.0 -> 28.67 GB/s (avg 5 passes)
> 0.23.9.12 -> 30.41 GB/s (avg 5 passes)
> Diff: -6.1%
> The Scan test specifically measures the average map's input read performance. 
> The diff is consistent when run in a larger (350 node) perf environment; we 
> are in the process of seeing whether this reproduces on a smaller cluster, 
> using appropriately scaled inputs.





[jira] [Created] (HADOOP-10554) Performance: Scan metrics for 2.4 are notably down compared to 0.23.9

2014-04-30 Thread patrick white (JIRA)
patrick white created HADOOP-10554:
--

 Summary: Performance: Scan metrics for 2.4 are notably down 
compared to 0.23.9
 Key: HADOOP-10554
 URL: https://issues.apache.org/jira/browse/HADOOP-10554
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: patrick white


Performance comparison benchmarks for the Scan test's runtime and throughput 
metrics are slightly outside the 5% tolerance in 2.x compared with 0.23. The 
trend is consistent across later releases in both lines; the latest release 
numbers are:

Runtime:
2.4.0.0 -> 73.6 seconds (avg 5 passes)
0.23.9.12 -> 69.4 seconds (avg 5 passes)
Diff: -5.7%

Throughput:
2.4.0.0 -> 28.67 GB/s (avg 5 passes)
0.23.9.12 -> 30.41 GB/s (avg 5 passes)
Diff: -6.1%

The Scan test specifically measures the average map's input read performance. 
The diff is consistent when run in a larger (350 node) perf environment; we are 
in the process of seeing whether this reproduces on a smaller cluster, using 
appropriately scaled inputs.






[jira] [Created] (HADOOP-10553) Performance: AM scaleability is 10% slower in 2.4 compared to 0.23.9

2014-04-30 Thread patrick white (JIRA)
patrick white created HADOOP-10553:
--

 Summary: Performance: AM scaleability is 10% slower in 2.4 
compared to 0.23.9
 Key: HADOOP-10553
 URL: https://issues.apache.org/jira/browse/HADOOP-10553
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: patrick white


Performance comparison benchmarks of 2.x against 0.23 show that the AM 
scalability benchmark's runtime is approximately 10% slower in 2.4.0. The trend 
is consistent across later releases in both lines; the latest release numbers 
are:

2.4.0.0 runtime 255.6 seconds (avg 5 passes)
0.23.9.12 runtime 230.4 seconds (avg 5 passes)
Diff: -9.9%

The AM scalability test is essentially a sleep job that measures the time to 
launch and complete a large number of mappers.

The diff is consistent and has been reproduced in both a larger (350 node, 
100,000 mappers) perf environment and a small (10 node, 2,900 mappers) demo 
cluster.






[jira] [Updated] (HADOOP-10285) Admin interface to swap callqueue at runtime

2014-04-30 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10285:
---

Assignee: Chris Li

> Admin interface to swap callqueue at runtime
> 
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
>Assignee: Chris Li
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10285.patch, HADOOP-10285.patch, 
> HADOOP-10285.patch, HADOOP-10285.patch, bisection-test.patch, 
> bisection-test.patch, bisection-test.patch, bisection-test.patch, 
> bisection-test.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue





[jira] [Commented] (HADOOP-10389) Native RPCv9 client

2014-04-30 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986134#comment-13986134
 ] 

Luke Lu commented on HADOOP-10389:
--

I assume this is a prototype/partial demo, as all the security stuff is 
missing. What else is missing (including work not yet published)? Is there any 
work left for the newly minted branch committers? :) 

> Native RPCv9 client
> ---
>
> Key: HADOOP-10389
> URL: https://issues.apache.org/jira/browse/HADOOP-10389
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-10388
>Reporter: Binglin Chang
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, 
> HADOOP-10389.004.patch
>
>






[jira] [Commented] (HADOOP-10389) Native RPCv9 client

2014-04-30 Thread Abraham Elmahrek (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986118#comment-13986118
 ] 

Abraham Elmahrek commented on HADOOP-10389:
---

In general, +1. Please take a look at the following comments:
* I couldn't tell if this had already made it into the hadoop-common build 
system?
* I'm assuming this is a preliminary patch? I think that we'll need more test 
cases and a valgrind run?
* I think in the long term the cmake file will need to be cleaned up. I saw a 
couple of hacks (i.e. find_library(PROTOC_LIB NAMES libprotoc.so HINTS 
/opt/protobuf-2.5/lib64/)).
* Do we need a cap on how large '&reactor->inbox.pending_calls' can be? I'm 
concerned that some systems can be intensive enough to cause OOM exceptions. 
Also, client side throttling may set good boundaries?

> Native RPCv9 client
> ---
>
> Key: HADOOP-10389
> URL: https://issues.apache.org/jira/browse/HADOOP-10389
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-10388
>Reporter: Binglin Chang
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, 
> HADOOP-10389.004.patch
>
>






[jira] [Updated] (HADOOP-10376) Refactor refresh*Protocols into a single generic refreshConfigProtocol

2014-04-30 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10376:
--

Attachment: (was: HADOOP-10376.patch)

> Refactor refresh*Protocols into a single generic refreshConfigProtocol
> --
>
> Key: HADOOP-10376
> URL: https://issues.apache.org/jira/browse/HADOOP-10376
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Li
>Assignee: Chris Li
>Priority: Minor
> Attachments: RefreshFrameworkProposal.pdf
>
>
> See https://issues.apache.org/jira/browse/HADOOP-10285
> There are starting to be too many refresh*Protocols. We can refactor them to 
> use a single protocol with a variable payload to choose what to do.
> Thereafter, we can return an indication of success or failure.





[jira] [Updated] (HADOOP-10376) Refactor refresh*Protocols into a single generic refreshConfigProtocol

2014-04-30 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10376:
--

Attachment: RefreshFrameworkProposal.pdf

Attached brief proposal for this feature

> Refactor refresh*Protocols into a single generic refreshConfigProtocol
> --
>
> Key: HADOOP-10376
> URL: https://issues.apache.org/jira/browse/HADOOP-10376
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Li
>Assignee: Chris Li
>Priority: Minor
> Attachments: RefreshFrameworkProposal.pdf
>
>
> See https://issues.apache.org/jira/browse/HADOOP-10285
> There are starting to be too many refresh*Protocols. We can refactor them to 
> use a single protocol with a variable payload to choose what to do.
> Thereafter, we can return an indication of success or failure.





[jira] [Commented] (HADOOP-10433) Key Management Server based on KeyProvider API

2014-04-30 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986019#comment-13986019
 ] 

Alejandro Abdelnur commented on HADOOP-10433:
-

Sure, I'll wait till Friday. Thanks again.

> Key Management Server based on KeyProvider API
> --
>
> Key: HADOOP-10433
> URL: https://issues.apache.org/jira/browse/HADOOP-10433
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10433.patch, HADOOP-10433.patch, 
> HADOOP-10433.patch, HADOOP-10433.patch, HADOOP-10433.patch, 
> HADOOP-10433.patch, HADOOP-10433.patch, HADOOP-10433.patch, 
> HADOOP-10433.patch, HADOOP-10433.patch, HADOOP-10433.patch, 
> HadoopKMSDocsv2.pdf, KMS-doc.pdf
>
>
> (from HDFS-6134 proposal)
> Hadoop KMS is the gateway, for Hadoop and Hadoop clients, to the underlying 
> KMS. It provides an interface that works with existing Hadoop security 
> components (authentication, confidentiality).
> Hadoop KMS will be implemented leveraging the work being done in HADOOP-10141 
> and HADOOP-10177.
> Hadoop KMS will provide an additional implementation of the Hadoop 
> KeyProvider class. This implementation will be a client-server implementation.
> The client-server protocol will be secure:
> * Kerberos HTTP SPNEGO (authentication)
> * HTTPS for transport (confidentiality and integrity)
> * Hadoop ACLs (authorization)
> The Hadoop KMS implementation will not provide additional ACL to access 
> encrypted files. For sophisticated access control requirements, HDFS ACLs 
> (HDFS-4685) should be used.
> Basic key administration will be supported by the Hadoop KMS via the already 
> available Hadoop KeyShell command-line tool.
> There are minor changes that must be done in Hadoop KeyProvider functionality:
> The KeyProvider contract, and the existing implementations, must be 
> thread-safe
> KeyProvider API should have an API to generate the key material internally
> JavaKeyStoreProvider should use, if present, a password provided via 
> configuration
> KeyProvider Option and Metadata should include a label (for easier 
> cross-referencing)
> To avoid overloading the underlying KeyProvider implementation, the Hadoop 
> KMS will cache keys using a TTL policy.
> Scalability and High Availability of the Hadoop KMS can be achieved by running 
> multiple instances behind a VIP/Load-Balancer. For High Availability, the 
> underlying KeyProvider implementation used by the Hadoop KMS must be High 
> Available.





[jira] [Commented] (HADOOP-10433) Key Management Server based on KeyProvider API

2014-04-30 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985782#comment-13985782
 ] 

Andrew Wang commented on HADOOP-10433:
--

+1 from me, thanks Tucu. Really nice work here. Let's wait a bit before 
committing to see if anyone else still has review comments.

> Key Management Server based on KeyProvider API
> --
>
> Key: HADOOP-10433
> URL: https://issues.apache.org/jira/browse/HADOOP-10433
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10433.patch, HADOOP-10433.patch, 
> HADOOP-10433.patch, HADOOP-10433.patch, HADOOP-10433.patch, 
> HADOOP-10433.patch, HADOOP-10433.patch, HADOOP-10433.patch, 
> HADOOP-10433.patch, HADOOP-10433.patch, HADOOP-10433.patch, 
> HadoopKMSDocsv2.pdf, KMS-doc.pdf
>
>
> (from HDFS-6134 proposal)
> Hadoop KMS is the gateway, for Hadoop and Hadoop clients, to the underlying 
> KMS. It provides an interface that works with existing Hadoop security 
> components (authentication, confidentiality).
> Hadoop KMS will be implemented leveraging the work being done in HADOOP-10141 
> and HADOOP-10177.
> Hadoop KMS will provide an additional implementation of the Hadoop 
> KeyProvider class. This implementation will be a client-server implementation.
> The client-server protocol will be secure:
> * Kerberos HTTP SPNEGO (authentication)
> * HTTPS for transport (confidentiality and integrity)
> * Hadoop ACLs (authorization)
> The Hadoop KMS implementation will not provide additional ACL to access 
> encrypted files. For sophisticated access control requirements, HDFS ACLs 
> (HDFS-4685) should be used.
> Basic key administration will be supported by the Hadoop KMS via the already 
> available Hadoop KeyShell command-line tool.
> There are minor changes that must be done in Hadoop KeyProvider functionality:
> The KeyProvider contract, and the existing implementations, must be 
> thread-safe
> KeyProvider API should have an API to generate the key material internally
> JavaKeyStoreProvider should use, if present, a password provided via 
> configuration
> KeyProvider Option and Metadata should include a label (for easier 
> cross-referencing)
> To avoid overloading the underlying KeyProvider implementation, the Hadoop 
> KMS will cache keys using a TTL policy.
> Scalability and High Availability of the Hadoop KMS can be achieved by running 
> multiple instances behind a VIP/Load-Balancer. For High Availability, the 
> underlying KeyProvider implementation used by the Hadoop KMS must be High 
> Available.





[jira] [Commented] (HADOOP-10519) In HDFS HA mode, Distcp/SLive with webhdfs on secure cluster fails with Client cannot authenticate via:[TOKEN, KERBEROS] error

2014-04-30 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985703#comment-13985703
 ] 

Daryn Sharp commented on HADOOP-10519:
--

I never liked the way hdfs tokens are managed.  There is no difference between 
an hdfs, (s)webhdfs, hftp, etc. token, so the token kind should be the same.  
Unfortunately the service field represents the issuer's address for renewal as 
well as the key for token selection for connections, so token-duping hacks are 
currently used.  I've always meant to move to servers returning an opaque 
server-id for token selection, which would make the protocol irrelevant...  For 
HA servers, the opaque server-id would be the HA logical name, so the same token 
would work with both hdfs and webhdfs.  But I digress.

All that said, the short answer for now is the service for logical HA webhdfs 
tokens should be "ha-webhdfs:hostname".

> In HDFS HA mode, Distcp/SLive with webhdfs on secure cluster fails with 
> Client cannot authenticate via:[TOKEN, KERBEROS] error
> --
>
> Key: HADOOP-10519
> URL: https://issues.apache.org/jira/browse/HADOOP-10519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Jian He
>
> Opening on [~arpitgupta]'s behalf.
> We observed that, in HDFS HA mode, running Distcp/SLive with webhdfs will 
> fail on YARN.  In non-HA mode, it'll pass. 
> The reason is that in HA mode, only the webhdfs delegation token is generated 
> for the job, but YARN also requires the regular hdfs token to do localization, 
> log-aggregation, etc.
> In non-HA mode, both tokens are generated for the job.





[jira] [Commented] (HADOOP-10547) Give SaslPropertiesResolver.getDefaultProperties() public scope

2014-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985633#comment-13985633
 ] 

Hudson commented on HADOOP-10547:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1773 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1773/])
HADOOP-10547. Fix CHANGES.txt (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591098)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HADOOP-10547. Give SaslPropertiesResolver.getDefaultProperties() public scope. 
(Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591095)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPropertiesResolver.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Give SaslPropertiesResolver.getDefaultProperties() public scope
> ---
>
> Key: HADOOP-10547
> URL: https://issues.apache.org/jira/browse/HADOOP-10547
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Jason Dere
>Assignee: Benoy Antony
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HADOOP-10547.patch
>
>
> Trying to use SaslPropertiesResolver.getDefaultProperties() in Hive project 
> but the method has protected scope. Please make this a public method if 
> appropriate.





[jira] [Commented] (HADOOP-10543) RemoteException's unwrapRemoteException method failed for PathIOException

2014-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985629#comment-13985629
 ] 

Hudson commented on HADOOP-10543:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1773 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1773/])
HADOOP-10543. RemoteException's unwrapRemoteException method failed for 
PathIOException. Contributed by Yongjun Zhang. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591181)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java


> RemoteException's unwrapRemoteException method failed for PathIOException
> -
>
> Key: HADOOP-10543
> URL: https://issues.apache.org/jira/browse/HADOOP-10543
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.5.0
>
> Attachments: HADOOP-10543.001.patch
>
>
> If the cause of a RemoteException is a PathIOException, RemoteException's 
> unwrapRemoteException methods fail, because some PathIOException 
> constructors initialize the cause to null, which makes Throwable throw an 
> exception at
> {code}
> public synchronized Throwable initCause(Throwable cause) {
>     if (this.cause != this)
>         throw new IllegalStateException("Can't overwrite cause");
> {code}
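The failure above follows from plain java.lang.Throwable semantics, independent of Hadoop: constructing an exception with an explicit cause argument (even null) marks the cause as set, so a later initCause() call throws IllegalStateException. A minimal sketch (class name and messages are illustrative):

```java
import java.io.IOException;

public class InitCauseDemo {
    public static void main(String[] args) {
        // Passing an explicit cause to the constructor -- even null --
        // leaves this.cause != this, so initCause() is rejected later.
        IOException preset = new IOException("path error", null);
        try {
            preset.initCause(new IOException("underlying"));
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }

        // Constructing without a cause keeps this.cause == this,
        // so initCause() may be called exactly once.
        IOException fresh = new IOException("path error");
        fresh.initCause(new IOException("underlying"));
        System.out.println("cause: " + fresh.getCause().getMessage());
    }
}
```

This is why unwrapping, which calls initCause() on the reconstructed exception, breaks for PathIOException constructors that pass null as the cause.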





[jira] [Commented] (HADOOP-10547) Give SaslPropertiesResolver.getDefaultProperties() public scope

2014-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985544#comment-13985544
 ] 

Hudson commented on HADOOP-10547:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1747 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1747/])
HADOOP-10547. Fix CHANGES.txt (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591098)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HADOOP-10547. Give SaslPropertiesResolver.getDefaultProperties() public scope. 
(Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591095)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPropertiesResolver.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Give SaslPropertiesResolver.getDefaultProperties() public scope
> ---
>
> Key: HADOOP-10547
> URL: https://issues.apache.org/jira/browse/HADOOP-10547
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Jason Dere
>Assignee: Benoy Antony
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HADOOP-10547.patch
>
>
> Trying to use SaslPropertiesResolver.getDefaultProperties() in Hive project 
> but the method has protected scope. Please make this a public method if 
> appropriate.





[jira] [Commented] (HADOOP-10543) RemoteException's unwrapRemoteException method failed for PathIOException

2014-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985540#comment-13985540
 ] 

Hudson commented on HADOOP-10543:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1747 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1747/])
HADOOP-10543. RemoteException's unwrapRemoteException method failed for 
PathIOException. Contributed by Yongjun Zhang. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591181)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java


> RemoteException's unwrapRemoteException method failed for PathIOException
> -
>
> Key: HADOOP-10543
> URL: https://issues.apache.org/jira/browse/HADOOP-10543
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.5.0
>
> Attachments: HADOOP-10543.001.patch
>
>
> If the cause of a RemoteException is a PathIOException, RemoteException's 
> unwrapRemoteException methods fail, because some PathIOException 
> constructors initialize the cause to null, which makes Throwable throw an 
> exception at
> {code}
> public synchronized Throwable initCause(Throwable cause) {
>     if (this.cause != this)
>         throw new IllegalStateException("Can't overwrite cause");
> {code}





[jira] [Commented] (HADOOP-10547) Give SaslPropertiesResolver.getDefaultProperties() public scope

2014-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985389#comment-13985389
 ] 

Hudson commented on HADOOP-10547:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #556 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/556/])
HADOOP-10547. Fix CHANGES.txt (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591098)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HADOOP-10547. Give SaslPropertiesResolver.getDefaultProperties() public scope. 
(Contributed by Benoy Antony) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591095)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPropertiesResolver.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Give SaslPropertiesResolver.getDefaultProperties() public scope
> ---
>
> Key: HADOOP-10547
> URL: https://issues.apache.org/jira/browse/HADOOP-10547
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.0
>Reporter: Jason Dere
>Assignee: Benoy Antony
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HADOOP-10547.patch
>
>
> Trying to use SaslPropertiesResolver.getDefaultProperties() in Hive project 
> but the method has protected scope. Please make this a public method if 
> appropriate.





[jira] [Commented] (HADOOP-10543) RemoteException's unwrapRemoteException method failed for PathIOException

2014-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985385#comment-13985385
 ] 

Hudson commented on HADOOP-10543:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #556 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/556/])
HADOOP-10543. RemoteException's unwrapRemoteException method failed for 
PathIOException. Contributed by Yongjun Zhang. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591181)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PathIOException.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestPathExceptions.java


> RemoteException's unwrapRemoteException method failed for PathIOException
> -
>
> Key: HADOOP-10543
> URL: https://issues.apache.org/jira/browse/HADOOP-10543
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.5.0
>
> Attachments: HADOOP-10543.001.patch
>
>
> If the cause of a RemoteException is a PathIOException, RemoteException's 
> unwrapRemoteException methods fail, because some PathIOException 
> constructors initialize the cause to null, which makes Throwable throw an 
> exception at
> {code}
> public synchronized Throwable initCause(Throwable cause) {
>     if (this.cause != this)
>         throw new IllegalStateException("Can't overwrite cause");
> {code}





[jira] [Commented] (HADOOP-10552) Fix usage and example at FileSystemShell.apt.vm

2014-04-30 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985327#comment-13985327
 ] 

Harsh J commented on HADOOP-10552:
--

This is not really correct. The "hadoop fs" command is the right way to consume 
the FS Shell utilities. The "hdfs dfs" command is just an alias for it and isn't 
meant to replace "hadoop fs".

See also HDFS-557.
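For illustration, both entry points run the same FS Shell class; the following commands are equivalent (the path and a configured client are assumed, shown here only as a sketch):

```shell
# Both invoke org.apache.hadoop.fs.FsShell: "hadoop fs" works against any
# configured FileSystem, while "hdfs dfs" is the hdfs script's alias for it.
hadoop fs -ls /user/alice
hdfs dfs -ls /user/alice
```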

> Fix usage and example at FileSystemShell.apt.vm
> ---
>
> Key: HADOOP-10552
> URL: https://issues.apache.org/jira/browse/HADOOP-10552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: Kenji Kikushima
>Priority: Trivial
> Attachments: HADOOP-10552.patch
>
>
> The usage for moveFromLocal needs the "hdfs" command, and the example for 
> touchz should use "hdfs dfs".





[jira] [Commented] (HADOOP-10552) Fix usage and example at FileSystemShell.apt.vm

2014-04-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985261#comment-13985261
 ] 

Hadoop QA commented on HADOOP-10552:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12642610/HADOOP-10552.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3879//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3879//console

This message is automatically generated.

> Fix usage and example at FileSystemShell.apt.vm
> ---
>
> Key: HADOOP-10552
> URL: https://issues.apache.org/jira/browse/HADOOP-10552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: Kenji Kikushima
>Priority: Trivial
> Attachments: HADOOP-10552.patch
>
>
> The usage for moveFromLocal needs the "hdfs" command, and the example for 
> touchz should use "hdfs dfs".





[jira] [Updated] (HADOOP-10552) Fix usage and example at FileSystemShell.apt.vm

2014-04-30 Thread Kenji Kikushima (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenji Kikushima updated HADOOP-10552:
-

Status: Patch Available  (was: Open)

> Fix usage and example at FileSystemShell.apt.vm
> ---
>
> Key: HADOOP-10552
> URL: https://issues.apache.org/jira/browse/HADOOP-10552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: Kenji Kikushima
>Priority: Trivial
> Attachments: HADOOP-10552.patch
>
>
> The usage for moveFromLocal needs the "hdfs" command, and the example for 
> touchz should use "hdfs dfs".





[jira] [Updated] (HADOOP-10552) Fix usage and example at FileSystemShell.apt.vm

2014-04-30 Thread Kenji Kikushima (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenji Kikushima updated HADOOP-10552:
-

Attachment: HADOOP-10552.patch

Attached a patch.

> Fix usage and example at FileSystemShell.apt.vm
> ---
>
> Key: HADOOP-10552
> URL: https://issues.apache.org/jira/browse/HADOOP-10552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: Kenji Kikushima
>Priority: Trivial
> Attachments: HADOOP-10552.patch
>
>
> The usage for moveFromLocal needs the "hdfs" command, and the example for 
> touchz should use "hdfs dfs".





[jira] [Created] (HADOOP-10552) Fix usage and example at FileSystemShell.apt.vm

2014-04-30 Thread Kenji Kikushima (JIRA)
Kenji Kikushima created HADOOP-10552:


 Summary: Fix usage and example at FileSystemShell.apt.vm
 Key: HADOOP-10552
 URL: https://issues.apache.org/jira/browse/HADOOP-10552
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0
Reporter: Kenji Kikushima
Priority: Trivial


The usage for moveFromLocal needs the "hdfs" command, and the example for touchz 
should use "hdfs dfs".



