[jira] [Assigned] (HADOOP-9473) typo in FileUtil copy() method

2013-04-13 Thread Glen Mazza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Mazza reassigned HADOOP-9473:
--

Assignee: (was: Glen Mazza)

Suresh, please don't assign this JIRA to me -- I'm just a user entering a JIRA 
over a typo -- the only people who should be assigned JIRAs are committers. 
 If the Hadoop project unfortunately requires committers to jump through a 
dozen hoops in order to fix a silly typo (no other Apache project I'm aware of 
requires JUnit tests for typos), that's a procedural problem you need to 
change, instead of just farming out such busywork to the people reporting the 
errors.  And the alternative of having me type up a justification of why a 
JUnit test is not necessary for a typo is too silly to consider.

If my further non-participation in this JIRA item means you need to close this 
item as Won't Fix while keeping the misspelling in the code base, so be it.  
The Hadoop team is making it too crippling a process to resolve problems with 
its system, with the result that fewer and fewer people are going to bother 
reporting issues.

> typo in FileUtil copy() method
> --
>
> Key: HADOOP-9473
> URL: https://issues.apache.org/jira/browse/HADOOP-9473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0-alpha, 1.1.2
>Reporter: Glen Mazza
>Priority: Trivial
> Fix For: 1.2.0, 2.0.5-beta
>
> Attachments: HADOOP-9473.branch-1.patch, HADOOP-9473.patch
>
>
> typo:
> {code}
> Index: src/core/org/apache/hadoop/fs/FileUtil.java
> ===
> --- src/core/org/apache/hadoop/fs/FileUtil.java   (revision 1467295)
> +++ src/core/org/apache/hadoop/fs/FileUtil.java   (working copy)
> @@ -178,7 +178,7 @@
>  // Check if dest is directory
>  if (!dstFS.exists(dst)) {
>throw new IOException("`" + dst +"': specified destination directory " 
> +
> -"doest not exist");
> +"does not exist");
>  } else {
>FileStatus sdst = dstFS.getFileStatus(dst);
>if (!sdst.isDir()) 
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method

2013-04-13 Thread Glen Mazza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13631160#comment-13631160
 ] 

Glen Mazza commented on HADOOP-9473:


OK, glad to hear you committed it, Suresh.  I *refuse* to type up JUnit test 
cases for typos.   :)




[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method

2013-04-12 Thread Glen Mazza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630416#comment-13630416
 ] 

Glen Mazza commented on HADOOP-9473:


Yes, I'll be quite happy to switch to the 2.0.x branch once HADOOP-9206 is 
fixed, and will attach patches from now on.  Thanks!

> typo in FileUtil copy() method
> --
>
> Key: HADOOP-9473
> URL: https://issues.apache.org/jira/browse/HADOOP-9473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0-alpha, 1.1.2
>Reporter: Glen Mazza
>Assignee: Suresh Srinivas
>Priority: Trivial
> Attachments: HADOOP-9473.branch-1.patch, HADOOP-9473.patch
>
>
> typo:
> {code}
> Index: src/core/org/apache/hadoop/fs/FileUtil.java
> ===
> --- src/core/org/apache/hadoop/fs/FileUtil.java   (revision 1467295)
> +++ src/core/org/apache/hadoop/fs/FileUtil.java   (working copy)
> @@ -178,7 +178,7 @@
>  // Check if dest is directory
>  if (!dstFS.exists(dst)) {
>throw new IOException("`" + dst +"': specified destination directory " 
> +
> -"doest not exist");
> +"does not exist");
>  } else {
>FileStatus sdst = dstFS.getFileStatus(dst);
>if (!sdst.isDir()) 
> {code}



[jira] [Updated] (HADOOP-9206) "Setting up a Single Node Cluster" instructions need improvement in 0.23.5/2.0.2-alpha branches

2013-04-12 Thread Glen Mazza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Mazza updated HADOOP-9206:
---

Description: 
Hi, in contrast to the easy-to-follow 1.0.4 instructions 
(http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html) the 0.23.5 and 
2.0.2-alpha instructions 
(http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html)
 need more clarification -- they seem to be written for people who already know 
and understand Hadoop.  In particular, these points need clarification:

1.) Text: "You should be able to obtain the MapReduce tarball from the release."

Question: What is the MapReduce tarball?  What is its name?  I don't see such 
an object within the hadoop-0.23.5.tar.gz download.

2.) Quote: "NOTE: You will need protoc installed of version 2.4.1 or greater."

Protoc doesn't have a website you can link to (it's just mentioned offhand when 
you Google it) -- is it really the case today that Hadoop has a dependency on 
such a minor project?  At any rate, a link to where one goes to get/install 
protoc would be good.

3.) Quote: "Assuming you have installed hadoop-common/hadoop-hdfs and exported 
$HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME, untar hadoop mapreduce tarball and set 
environment variable $HADOOP_MAPRED_HOME to the untarred directory."

I'm not sure what you mean by the forward slashes: hadoop-common/hadoop-hdfs 
and $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME -- do you mean & (install both) or 
*or* just install one of the two?  This needs clarification -- please remove 
the forward slash and replace it with what you're trying to say.  The audience 
here is complete newbies, and they've been brought to this page from here: 
http://hadoop.apache.org/docs/r0.23.5/ (same with r2.0.2-alpha/) (quote: 
"Getting Started - The Hadoop documentation includes the information you need 
to get started using Hadoop. Begin with the Single Node Setup which shows you 
how to set up a single-node Hadoop installation."), they've downloaded 
hadoop-0.23.5.tar.gz and want to know what to do next.  Why are there 
potentially two applications -- hadoop-common and hadoop-hdfs and not just one? 
 (The download doesn't appear to have two separate apps) -- if there is indeed 
just one app can we remove the other from the above text to avoid confusion?

Again, I just downloaded hadoop-0.23.5.tar.gz -- do I need to download more?  
If so, let us know in the docs here.

Also, the fragment: "Assuming you have installed hadoop-common/hadoop-hdfs..."  
No, I haven't, that's what *this* page is supposed to explain to me how to do 
-- how do I install these two (or just one of these two)?

Also, what do I set $HADOOP_COMMON_HOME and/or $HADOOP_HDFS_HOME to?

4.) Quote: "NOTE: The following instructions assume you have hdfs running."  
No, I don't--how do I do this?  Again, this page is supposed to teach me that.

5.) Quote: "To start the ResourceManager and NodeManager, you will have to 
update the configs. Assuming your $HADOOP_CONF_DIR is the configuration 
directory..."

Could you clarify here what the "configuration directory" is?  It doesn't 
exist in the 0.23.5 download -- I just see bin, etc, include, lib, libexec, 
sbin, share folders but no "conf" one.

6.) Quote: "Assuming that the environment variables $HADOOP_COMMON_HOME, 
$HADOOP_HDFS_HOME, $HADOO_MAPRED_HOME, $YARN_HOME, $JAVA_HOME and 
$HADOOP_CONF_DIR have been set appropriately."

We'll need to know what to set YARN_HOME to here.

Thanks!
Glen

  was:
Hi, in contrast to the easy-to-follow 1.0.4 instructions 
(http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html) the 0.23.5 and 
2.0.2-alpha instructions 
(http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html)
 need more clarification -- it seems to be written for people who already know 
and understand hadoop.  In particular, these points need clarification:

1.) Text: "You should be able to obtain the MapReduce tarball from the release."

Question: What is the MapReduce tarball?  What is its name?  I don't see such 
an object within the hadoop-0.23.5.tar.gz download.

2.) Quote: "NOTE: You will need protoc installed of version 2.4.1 or greater."

Protoc doesn't have a website you can link to (it's just mentioned offhand when 
you Google it) -- is it really the case today that Hadoop has a dependency on 
such a minor project?  At any rate, if you can have a link of where one goes to 
get/install Protoc that would be good.

3.) Quote: "Assuming you have installed hadoop-common/hadoop-hdfs and exported 
$HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME, untar hadoop mapreduce tarball and set 
environment variable $HADOOP_MAPRED_HOME to the untarred directory."

I'm not sure what you mean by the forward slashes: hadoop-common/hadoop-hdfs 
and $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME -- do you mean & (install both) or 
*or* just install one of the two?

[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method

2013-04-12 Thread Glen Mazza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630359#comment-13630359
 ] 

Glen Mazza commented on HADOOP-9473:


I don't know and don't care if the problem is in trunk; HADOOP-9206 makes trunk 
useless for me, forcing me to stay with the 1.1.x branch.


> typo in FileUtil copy() method
> --
>
> Key: HADOOP-9473
> URL: https://issues.apache.org/jira/browse/HADOOP-9473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.1.2
>Reporter: Glen Mazza
>Assignee: Glen Mazza
>Priority: Trivial
> Attachments: HADOOP-9473.patch
>
>
> typo:
> {code}
> Index: src/core/org/apache/hadoop/fs/FileUtil.java
> ===
> --- src/core/org/apache/hadoop/fs/FileUtil.java   (revision 1467295)
> +++ src/core/org/apache/hadoop/fs/FileUtil.java   (working copy)
> @@ -178,7 +178,7 @@
>  // Check if dest is directory
>  if (!dstFS.exists(dst)) {
>throw new IOException("`" + dst +"': specified destination directory " 
> +
> -"doest not exist");
> +"does not exist");
>  } else {
>FileStatus sdst = dstFS.getFileStatus(dst);
>if (!sdst.isDir()) 
> {code}



[jira] [Updated] (HADOOP-9474) fs -put command doesn't work if selecting certain files from a local folder

2013-04-12 Thread Glen Mazza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Mazza updated HADOOP-9474:
---

Description: 
The following four commands (a) - (d) were run sequentially.  From (a) - (c) 
HDFS folder "inputABC" does not yet exist.

(a) and (b) are improperly refusing to put the files from conf/*.xml into 
inputABC because folder inputABC doesn't yet exist.  However, in (c), when I 
make the same request with just "conf" (and not "conf/*.xml"), HDFS correctly 
creates inputABC and copies the files over.  We see in (d) that inputABC now 
exists: when I subsequently try to copy the conf/*.xml files, it correctly 
complains that the files already exist there.

IOW, I can put "conf" into a nonexistent HDFS folder and fs will create the 
folder for me, but I can't do the same with "conf/*.xml" -- yet the latter 
should work equally well.  The problem appears to be in 
org.apache.hadoop.fs.FileUtil, line 176, which properly routes "conf" to have 
its files copied but has "conf/*.xml" subsequently return a "nonexisting 
folder" error.

{noformat}
a) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml 
inputABC
put: `inputABC': specified destination directory doest not exist
b) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml 
inputABC
put: `inputABC': specified destination directory doest not exist
c) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf 
inputABC
d) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml 
inputABC
put: Target inputABC/capacity-scheduler.xml already exists
Target inputABC/core-site.xml already exists
Target inputABC/fair-scheduler.xml already exists
Target inputABC/hadoop-policy.xml already exists
Target inputABC/hdfs-site.xml already exists
Target inputABC/mapred-queue-acls.xml already exists
Target inputABC/mapred-site.xml already exists
{noformat}

  was:
The following four commands (a) - (d) were run sequentially.  From (a) - (c) 
HDFS folder "inputABC" does not yet exist.

(a) and (b) are improperly refusing to put the files from conf/*.xml into 
inputABC because folder inputABC doesn't yet exist.  However, in (c) when I 
make the same request except with just "conf" (and not "conf/*.xml") HDFS will 
correctly create inputABC and copy the folders over.  We see that inputABC now 
exists in (d) when I subsequently try to copy the conf/*.xml folders, it 
complains that its files already exist there.

IOW, I can put "conf" into a nonexisting HDFS folder and fs will create the 
folder for me, but I can't do the same with "conf/*.xml" -- but the latter 
should work equally as well.  The problem appears to be in 
org.apache.hadoop.fs.FileUtil, line 176, which properly routes "conf" to have 
its files copied but will have "conf/*.xml" subsequently return a "nonexisting 
folder" error.

{noformat}
a) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml 
inputABC
put: `inputABC': specified destination directory doest not exist
b) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml 
inputABC
put: `inputABC': specified destination directory doest not exist
c) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf 
inputABC
d) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml 
inputABC
put: Target inputABC/capacity-scheduler.xml already exists
Target inputABC/core-site.xml already exists
Target inputABC/fair-scheduler.xml already exists
Target inputABC/hadoop-policy.xml already exists
Target inputABC/hdfs-site.xml already exists
Target inputABC/mapred-queue-acls.xml already exists
Target inputABC/mapred-site.xml already exists
{noformat}

Summary: fs -put command doesn't work if selecting certain files from a 
local folder  (was: fs -put command doesn't work if I selecting certain files 
from a local folder)

> fs -put command doesn't work if selecting certain files from a local folder
> ---
>
> Key: HADOOP-9474
> URL: https://issues.apache.org/jira/browse/HADOOP-9474
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.1.2
>Reporter: Glen Mazza
>
> The following four commands (a) - (d) were run sequentially.  From (a) - (c) 
> HDFS folder "inputABC" does not yet exist.
> (a) and (b) are improperly refusing to put the files from conf/*.xml into 
> inputABC because folder inputABC doesn't yet exist.  However, in (c) when I 
> make the same request except with just "conf" (and not "conf/*.xml") HDFS 
> will correctly create inputABC and copy the folders over.  We see that 
> inputABC now exists in (d) when I subsequently try to copy the conf/*.xml 
> folders, it correctly complains that the files already exist the

[jira] [Commented] (HADOOP-9473) typo in FileUtil copy() method

2013-04-12 Thread Glen Mazza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630303#comment-13630303
 ] 

Glen Mazza commented on HADOOP-9473:


Suresh, I just posted the patch, it's in the description.

> typo in FileUtil copy() method
> --
>
> Key: HADOOP-9473
> URL: https://issues.apache.org/jira/browse/HADOOP-9473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.1.2
>Reporter: Glen Mazza
>Assignee: Glen Mazza
>Priority: Trivial
>
> typo:
> {code}
> Index: src/core/org/apache/hadoop/fs/FileUtil.java
> ===
> --- src/core/org/apache/hadoop/fs/FileUtil.java   (revision 1467295)
> +++ src/core/org/apache/hadoop/fs/FileUtil.java   (working copy)
> @@ -178,7 +178,7 @@
>  // Check if dest is directory
>  if (!dstFS.exists(dst)) {
>throw new IOException("`" + dst +"': specified destination directory " 
> +
> -"doest not exist");
> +"does not exist");
>  } else {
>FileStatus sdst = dstFS.getFileStatus(dst);
>if (!sdst.isDir()) 
> {code}



[jira] [Created] (HADOOP-9474) fs -put command doesn't work if I selecting certain files from a local folder

2013-04-12 Thread Glen Mazza (JIRA)
Glen Mazza created HADOOP-9474:
--

 Summary: fs -put command doesn't work if I selecting certain files 
from a local folder
 Key: HADOOP-9474
 URL: https://issues.apache.org/jira/browse/HADOOP-9474
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.1.2
Reporter: Glen Mazza


The following four commands (a) - (d) were run sequentially.  From (a) - (c) 
HDFS folder "inputABC" does not yet exist.

(a) and (b) are improperly refusing to put the files from conf/*.xml into 
inputABC because folder inputABC doesn't yet exist.  However, in (c), when I 
make the same request with just "conf" (and not "conf/*.xml"), HDFS correctly 
creates inputABC and copies the files over.  We see in (d) that inputABC now 
exists: when I subsequently try to copy the conf/*.xml files, it complains 
that the files already exist there.

IOW, I can put "conf" into a nonexistent HDFS folder and fs will create the 
folder for me, but I can't do the same with "conf/*.xml" -- yet the latter 
should work equally well.  The problem appears to be in 
org.apache.hadoop.fs.FileUtil, line 176, which properly routes "conf" to have 
its files copied but has "conf/*.xml" subsequently return a "nonexisting 
folder" error.

{noformat}
a) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml 
inputABC
put: `inputABC': specified destination directory doest not exist
b) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml 
inputABC
put: `inputABC': specified destination directory doest not exist
c) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf 
inputABC
d) gmazza@gmazza-work:/media/work1/hadoop-1.1.2$ bin/hadoop fs -put conf/*.xml 
inputABC
put: Target inputABC/capacity-scheduler.xml already exists
Target inputABC/core-site.xml already exists
Target inputABC/fair-scheduler.xml already exists
Target inputABC/hadoop-policy.xml already exists
Target inputABC/hdfs-site.xml already exists
Target inputABC/mapred-queue-acls.xml already exists
Target inputABC/mapred-site.xml already exists
{noformat}
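The asymmetry described above can be sketched with plain java.io (a simplified illustration of the destination check around FileUtil line 176 -- the names, structure, and message here are paraphrased for illustration, this is not the actual Hadoop code):

```java
import java.io.File;
import java.io.IOException;

public class CopyCheckSketch {

    // With multiple sources, the destination directory must already exist;
    // with a single source, a missing destination is simply treated as the
    // name to copy to.  This mirrors the reported fs -put behavior.
    static void copy(File[] srcs, File dst) throws IOException {
        if (srcs.length > 1 && !dst.exists()) {
            // The branch behind the quoted "specified destination
            // directory ... not exist" error.
            throw new IOException("`" + dst
                + "': specified destination directory does not exist");
        }
        // ... copying (and creating dst if needed) would happen here ...
    }

    public static void main(String[] args) throws IOException {
        File missing = new File("inputABC");  // assumed not to exist yet

        try {  // cases (a)/(b): glob expanded to many files -> rejected
            copy(new File[] { new File("conf/a.xml"), new File("conf/b.xml") },
                 missing);
        } catch (IOException e) {
            System.out.println("multi-source: " + e.getMessage());
        }

        // case (c): single source -> accepted, destination would be created
        copy(new File[] { new File("conf") }, missing);
        System.out.println("single-source: accepted");
    }
}
```

A fix would presumably either create the destination in the multi-source branch too, or at least make both cases behave the same way.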



[jira] [Updated] (HADOOP-9473) typo in FileUtil copy() method

2013-04-12 Thread Glen Mazza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Mazza updated HADOOP-9473:
---

Description: 
typo:
{code}
Index: src/core/org/apache/hadoop/fs/FileUtil.java
===
--- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295)
+++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy)
@@ -178,7 +178,7 @@
 // Check if dest is directory
 if (!dstFS.exists(dst)) {
   throw new IOException("`" + dst +"': specified destination directory " +
-"doest not exist");
+"does not exist");
 } else {
   FileStatus sdst = dstFS.getFileStatus(dst);
   if (!sdst.isDir()) 
{code}

  was:
typo:

Index: src/core/org/apache/hadoop/fs/FileUtil.java
===
--- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295)
+++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy)
@@ -178,7 +178,7 @@
 // Check if dest is directory
 if (!dstFS.exists(dst)) {
   throw new IOException("`" + dst +"': specified destination directory " +
-"doest not exist");
+"does not exist");
 } else {
   FileStatus sdst = dstFS.getFileStatus(dst);
   if (!sdst.isDir()) 



> typo in FileUtil copy() method
> --
>
> Key: HADOOP-9473
> URL: https://issues.apache.org/jira/browse/HADOOP-9473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.1.2
>Reporter: Glen Mazza
>Priority: Trivial
>
> typo:
> {code}
> Index: src/core/org/apache/hadoop/fs/FileUtil.java
> ===
> --- src/core/org/apache/hadoop/fs/FileUtil.java   (revision 1467295)
> +++ src/core/org/apache/hadoop/fs/FileUtil.java   (working copy)
> @@ -178,7 +178,7 @@
>  // Check if dest is directory
>  if (!dstFS.exists(dst)) {
>throw new IOException("`" + dst +"': specified destination directory " 
> +
> -"doest not exist");
> +"does not exist");
>  } else {
>FileStatus sdst = dstFS.getFileStatus(dst);
>if (!sdst.isDir()) 
> {code}



[jira] [Created] (HADOOP-9473) typo in FileUtil copy() method

2013-04-12 Thread Glen Mazza (JIRA)
Glen Mazza created HADOOP-9473:
--

 Summary: typo in FileUtil copy() method
 Key: HADOOP-9473
 URL: https://issues.apache.org/jira/browse/HADOOP-9473
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.1.2
Reporter: Glen Mazza
Priority: Trivial


typo:

Index: src/core/org/apache/hadoop/fs/FileUtil.java
===
--- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295)
+++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy)
@@ -178,7 +178,7 @@
 // Check if dest is directory
 if (!dstFS.exists(dst)) {
   throw new IOException("`" + dst +"': specified destination directory " +
-"doest not exist");
+"does not exist");
 } else {
   FileStatus sdst = dstFS.getFileStatus(dst);
   if (!sdst.isDir()) 




[jira] [Created] (HADOOP-9206) "Setting up a Single Node Cluster" instructions need improvement in 0.23.5/2.0.2-alpha branches

2013-01-14 Thread Glen Mazza (JIRA)
Glen Mazza created HADOOP-9206:
--

 Summary: "Setting up a Single Node Cluster" instructions need 
improvement in 0.23.5/2.0.2-alpha branches
 Key: HADOOP-9206
 URL: https://issues.apache.org/jira/browse/HADOOP-9206
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.23.5, 2.0.2-alpha
Reporter: Glen Mazza


Hi, in contrast to the easy-to-follow 1.0.4 instructions 
(http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html) the 0.23.5 and 
2.0.2-alpha instructions 
(http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html)
 need more clarification -- they seem to be written for people who already know 
and understand Hadoop.  In particular, these points need clarification:

1.) Text: "You should be able to obtain the MapReduce tarball from the release."

Question: What is the MapReduce tarball?  What is its name?  I don't see such 
an object within the hadoop-0.23.5.tar.gz download.

2.) Quote: "NOTE: You will need protoc installed of version 2.4.1 or greater."

Protoc doesn't have a website you can link to (it's just mentioned offhand when 
you Google it) -- is it really the case today that Hadoop has a dependency on 
such a minor project?  At any rate, a link to where one goes to get/install 
protoc would be good.

3.) Quote: "Assuming you have installed hadoop-common/hadoop-hdfs and exported 
$HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME, untar hadoop mapreduce tarball and set 
environment variable $HADOOP_MAPRED_HOME to the untarred directory."

I'm not sure what you mean by the forward slashes: hadoop-common/hadoop-hdfs 
and $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME -- do you mean & (install both) or 
*or* just install one of the two?  This needs clarification -- please remove 
the forward slash and replace it with what you're trying to say.  The audience 
here is complete newbies, and they've been brought to this page from here: 
http://hadoop.apache.org/docs/r0.23.5/ (same with r2.0.2-alpha/) (quote: 
"Getting Started - The Hadoop documentation includes the information you need 
to get started using Hadoop. Begin with the Single Node Setup which shows you 
how to set up a single-node Hadoop installation."), they've downloaded 
hadoop-0.23.5.tar.gz and want to know what to do next.  Why are there 
potentially two applications -- hadoop-common and hadoop-hdfs and not just one? 
 (The download doesn't appear to have two separate apps) -- if there is indeed 
just one app, can we remove the other from the above text to avoid confusion?

Again, I just downloaded hadoop-0.23.5.tar.gz -- do I need to download more?  
If so, let us know in the docs here.

Also, the fragment: "Assuming you have installed hadoop-common/hadoop-hdfs..."  
No, I haven't, that's what *this* page is supposed to explain to me how to do 
-- how do I install these two (or just one of these two)?

Also, what do I set $HADOOP_COMMON_HOME and/or $HADOOP_HDFS_HOME to?

4.) Quote: "NOTE: The following instructions assume you have hdfs running."  
No, I don't--how do I do this?  Again, this page is supposed to teach me that.

5.) Quote: "To start the ResourceManager and NodeManager, you will have to 
update the configs. Assuming your $HADOOP_CONF_DIR is the configuration 
directory..."

Could you clarify here what the "configuration directory" is?  It doesn't 
exist in the 0.23.5 download -- I just see bin, etc, include, lib, libexec, 
sbin, share folders but no "conf" one.

6.) Quote: "Assuming that the environment variables $HADOOP_COMMON_HOME, 
$HADOOP_HDFS_HOME, $HADOO_MAPRED_HOME, $YARN_HOME, $JAVA_HOME and 
$HADOOP_CONF_DIR have been set appropriately."

We'll need to know what to set YARN_HOME to here.

Thanks!
Glen
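Since several of the questions above come down to what the environment variables should point at, here is one plausible single-node layout -- an assumption drawn from the tarball contents described above, not something the 0.23.5/2.0.2-alpha docs confirm: everything ships in the one download, so every *_HOME can point at the same untarred directory.

```shell
# Hypothetical single-node settings; the install path and the etc/hadoop
# location are assumptions based on the hadoop-0.23.5.tar.gz layout
# (bin, etc, include, lib, libexec, sbin, share) described above.
export HADOOP_PREFIX=/opt/hadoop-0.23.5
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export YARN_HOME=$HADOOP_PREFIX
# There is no top-level "conf" dir in this layout; etc/hadoop holds the
# *-site.xml files instead.
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
```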



[jira] [Commented] (HADOOP-9197) Some little confusion in official documentation

2013-01-14 Thread Glen Mazza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13552896#comment-13552896
 ] 

Glen Mazza commented on HADOOP-9197:


I don't see the problem -- who says the documentation for different versions of 
Hadoop must be the same?  That's as nonsensical as saying the source code has 
to be identical for different versions.  What's the purpose of versions if you 
can't improve the source code and documentation over time?  It doesn't matter 
that version 1.0 says "X", version 2.0 says "Y", and version 3.0 says "Z" -- it 
only matters if version 3.0's "Z" is incorrect.  Jason needs to pick a single 
version of Hadoop he wishes to work on, focus on that version's documentation, 
and ignore the others.  If he finds bugs in that version's docs, then he 
should submit a JIRA over them -- not a JIRA because the documentation, like 
the source code, has changed across versions.

> Some little confusion in official documentation
> ---
>
> Key: HADOOP-9197
> URL: https://issues.apache.org/jira/browse/HADOOP-9197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jason Lee
>Priority: Trivial
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am just a newbie to Hadoop; recently I have been self-studying it.  When I 
> read the official documentation, I find it a little confusing for beginners 
> like me.  For example, look at the documents about the HDFS shell guide:
> In 0.17, the prefix for HDFS shell commands is "hadoop dfs":
> http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html
> In 0.19, the prefix is "hadoop fs":
> http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr
> In 1.0.4, the prefix is "hdfs dfs":
> http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls
> As a beginner, I find reading them a struggle.
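For what it's worth, the prefixes listed in the quoted report largely front the same shell code in later releases; a quick illustration (assuming a working installation with the `hadoop` and `hdfs` launchers on the PATH -- these exact invocations are an assumption, check your release's docs):

```shell
# All three spellings have appeared across release docs; on 1.x installs
# they run the same FsShell listing against the configured filesystem.
# "hadoop fs" is the generic form; "hadoop dfs" is the older HDFS-specific
# spelling, deprecated in favor of "hdfs dfs".
hadoop fs -ls /user
hadoop dfs -ls /user
hdfs dfs -ls /user
```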
