[jira] Updated: (HADOOP-6536) FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as the argument

2010-07-19 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated HADOOP-6536:


Status: Open  (was: Patch Available)

Changes look good.

bq. I would also add that fullyDelete should delete dangling links (it 
currently does but we should add a test).
Can you also add a test for deleting dangling links?
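
A minimal sketch of such a test, assuming a POSIX 'ln -s' is available; the 
directory layout and the use of ProcessBuilder are illustrative only, not from 
the patch:

{code:java}
import java.io.File;
import org.apache.hadoop.fs.FileUtil;

public class TestFullyDeleteDanglingLink {
  public void testDeleteDanglingLink() throws Exception {
    File dir = new File(System.getProperty("java.io.tmpdir"), "fullyDeleteTest");
    dir.mkdirs();
    File target = new File(dir, "target");      // intentionally never created
    File link = new File(dir, "danglingLink");
    // Create a symlink whose target does not exist (a dangling link).
    new ProcessBuilder("ln", "-s", target.getPath(), link.getPath())
        .start().waitFor();
    // fullyDelete(dir) should remove the dangling link itself and return true.
    if (!FileUtil.fullyDelete(dir)) {
      throw new AssertionError("fullyDelete failed on dir with dangling link");
    }
  }
}
{code}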

 FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as 
 the argument
 

 Key: HADOOP-6536
 URL: https://issues.apache.org/jira/browse/HADOOP-6536
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: HADOOP-6536.patch, HADOOP-6536.v1.patch


 FileUtil.fullyDelete(dir) deletes the contents of a sym-linked directory when 
 we pass a symlink. If this is the intended behavior, it should be documented 
 as such. Otherwise, it should be changed not to delete the contents of the 
 sym-linked directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6866) Tool interface should also support getUsage()

2010-07-19 Thread Amar Kamat (JIRA)
Tool interface should also support getUsage()
-

 Key: HADOOP-6866
 URL: https://issues.apache.org/jira/browse/HADOOP-6866
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Amar Kamat


Currently, each and every _tool_ implementing the {{Tool}} interface is forced 
to manage its own usage string. Since this is a common piece of code, it is 
better to factor it out. This can be useful in the following ways:
# A proper lib like support for usage strings
# Forcing _tools_ (implementers of {{Tool}}) to expose their usage string
# Test cases can now use these well defined and exposed usage strings to test
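
For illustration, a hedged sketch of what the proposed interface could look 
like; only the method name getUsage() comes from this issue, the javadoc and 
exact signature are assumptions:

{code:java}
import org.apache.hadoop.conf.Configurable;

public interface Tool extends Configurable {
  /** Existing contract: run the tool with the given arguments. */
  int run(String[] args) throws Exception;

  /** Proposed addition: a well-defined usage string that drivers can print
   *  and test cases can assert against. */
  String getUsage();
}
{code}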

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6865) there will be ant error if ant ran without network connected

2010-07-19 Thread Evan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12889790#action_12889790
 ] 

Evan Wang commented on HADOOP-6865:
---

There is a way to solve this problem. When ant builds the project, 
ivy-2.0.0-rc2.jar is rewritten. If the network is not connected, this jar ends 
up corrupted in a way that is not repaired automatically. But if you replace it 
with a pristine ivy-2.0.0-rc2.jar, the ant build succeeds.
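
A quick way to confirm the symptom described above is to try opening the jar 
as a zip; a corrupted ivy-2.0.0-rc2.jar fails immediately. The default path 
below is illustrative:

{code:java}
import java.util.zip.ZipException;
import java.util.zip.ZipFile;

public class CheckIvyJar {
  public static void main(String[] args) throws Exception {
    String jar = args.length > 0 ? args[0] : "ivy/ivy-2.0.0-rc2.jar";
    try {
      new ZipFile(jar).close();   // throws ZipException on a truncated jar
      System.out.println(jar + " is a valid zip");
    } catch (ZipException e) {
      System.out.println(jar + " is corrupt; replace it with a pristine copy");
    }
  }
}
{code}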

 there will be ant error if ant ran without network connected
 

 Key: HADOOP-6865
 URL: https://issues.apache.org/jira/browse/HADOOP-6865
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.20.2
 Environment: centos 5.4
Reporter: Evan Wang

 If you run `ant` without a network connection, the error below occurs. And 
 even if you then connect your network, the error persists.
 ivy-init-antlib:
   [typedef] java.util.zip.ZipException: error in opening zip file
   [typedef] at java.util.zip.ZipFile.open(Native Method)
   [typedef] at java.util.zip.ZipFile.init(ZipFile.java:114)
   [typedef] at java.util.zip.ZipFile.init(ZipFile.java:131)
   [typedef] at 
 org.apache.tools.ant.AntClassLoader.getResourceURL(AntClassLoader.java:1028)
   [typedef] at 
 org.apache.tools.ant.AntClassLoader$ResourceEnumeration.findNextResource(AntClassLoader.java:147)
   [typedef] at 
 org.apache.tools.ant.AntClassLoader$ResourceEnumeration.init(AntClassLoader.java:109)
   [typedef] at 
 org.apache.tools.ant.AntClassLoader.findResources(AntClassLoader.java:975)
   [typedef] at java.lang.ClassLoader.getResources(ClassLoader.java:1016)
   [typedef] at 
 org.apache.tools.ant.taskdefs.Definer.resourceToURLs(Definer.java:364)
   [typedef] at 
 org.apache.tools.ant.taskdefs.Definer.execute(Definer.java:256)
   [typedef] at 
 org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
   [typedef] at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
   [typedef] at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   [typedef] at java.lang.reflect.Method.invoke(Method.java:597)
   [typedef] at 
 org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
   [typedef] at org.apache.tools.ant.Task.perform(Task.java:348)
   [typedef] at org.apache.tools.ant.Target.execute(Target.java:357)
   [typedef] at org.apache.tools.ant.Target.performTasks(Target.java:385)
   [typedef] at 
 org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337)
   [typedef] at 
 org.apache.tools.ant.Project.executeTarget(Project.java:1306)
   [typedef] at 
 org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
   [typedef] at 
 org.apache.tools.ant.Project.executeTargets(Project.java:1189)
   [typedef] at org.apache.tools.ant.Main.runBuild(Main.java:758)
   [typedef] at org.apache.tools.ant.Main.startAnt(Main.java:217)
   [typedef] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257)
   [typedef] at 
 org.apache.tools.ant.launch.Launcher.main(Launcher.java:104)
   [typedef] Could not load definitions from resource 
 org/apache/ivy/ant/antlib.xml. It could not be found.
 BUILD FAILED
 /opt/hadoop-0.20.2/build.xml:1644: You need Apache Ivy 2.0 or later from 
 http://ant.apache.org/
   It could not be loaded from 
 http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.0.0-rc2/ivy-2.0.0-rc2.jar

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6866) Tool interface should also support getUsage()

2010-07-19 Thread Jeff Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12889792#action_12889792
 ] 

Jeff Zhang commented on HADOOP-6866:


This is a good point, but I am concerned that it will introduce an 
incompatibility: it will force users of the old Tool interface to add the 
getUsage() method.
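
One hedged way to sidestep that incompatibility would be to leave {{Tool}} 
unchanged and add usage support through an optional sub-interface that drivers 
probe with instanceof; the name ToolWithUsage is purely illustrative, not from 
any patch:

{code:java}
import org.apache.hadoop.util.Tool;

public interface ToolWithUsage extends Tool {
  String getUsage();
}

// Driver side (e.g. inside a hypothetical ToolRunner helper):
//   String usage = (tool instanceof ToolWithUsage)
//       ? ((ToolWithUsage) tool).getUsage()
//       : "no usage string provided";
{code}

Old implementations would keep compiling; only tools that opt in expose a 
usage string.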




 Tool interface should also support getUsage()
 -

 Key: HADOOP-6866
 URL: https://issues.apache.org/jira/browse/HADOOP-6866
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Amar Kamat

 Currently, each and every _tool_ implementing the {{Tool}} interface is forced 
 to manage its own usage string. Since this is a common piece of code, it is 
 better to factor it out. This can be useful in the following ways:
 # A proper lib like support for usage strings
 # Forcing _tools_ (implementers of {{Tool}}) to expose their usage string
 # Test cases can now use these well defined and exposed usage strings to test

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6536) FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as the argument

2010-07-19 Thread Ravi Gummadi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Gummadi updated HADOOP-6536:
-

Attachment: HADOOP-6536.v1.1.patch

Attaching a patch that adds more test cases.

 FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as 
 the argument
 

 Key: HADOOP-6536
 URL: https://issues.apache.org/jira/browse/HADOOP-6536
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: HADOOP-6536.patch, HADOOP-6536.v1.1.patch, 
 HADOOP-6536.v1.patch


 FileUtil.fullyDelete(dir) deletes the contents of a sym-linked directory when 
 we pass a symlink. If this is the intended behavior, it should be documented 
 as such. Otherwise, it should be changed not to delete the contents of the 
 sym-linked directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6536) FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as the argument

2010-07-19 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated HADOOP-6536:


Status: Patch Available  (was: Open)

Patch looks good.

 FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as 
 the argument
 

 Key: HADOOP-6536
 URL: https://issues.apache.org/jira/browse/HADOOP-6536
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: HADOOP-6536.patch, HADOOP-6536.v1.1.patch, 
 HADOOP-6536.v1.patch


 FileUtil.fullyDelete(dir) deletes the contents of a sym-linked directory when 
 we pass a symlink. If this is the intended behavior, it should be documented 
 as such. Otherwise, it should be changed not to delete the contents of the 
 sym-linked directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6536) FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as the argument

2010-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12889822#action_12889822
 ] 

Hadoop QA commented on HADOOP-6536:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12449821/HADOOP-6536.v1.1.patch
  against trunk revision 964993.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated 1 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/623/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/623/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/623/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/623/console

This message is automatically generated.

 FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as 
 the argument
 

 Key: HADOOP-6536
 URL: https://issues.apache.org/jira/browse/HADOOP-6536
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: HADOOP-6536.patch, HADOOP-6536.v1.1.patch, 
 HADOOP-6536.v1.patch


 FileUtil.fullyDelete(dir) deletes the contents of a sym-linked directory when 
 we pass a symlink. If this is the intended behavior, it should be documented 
 as such. Otherwise, it should be changed not to delete the contents of the 
 sym-linked directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6349) Implement FastLZCodec for fastlz/lzo algorithm

2010-07-19 Thread Nicholas Carlini (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12889893#action_12889893
 ] 

Nicholas Carlini commented on HADOOP-6349:
--

Eli - did you make an updated patch? If you haven't, that's okay -- I can 
rebase it on trunk.

 Implement FastLZCodec for fastlz/lzo algorithm
 --

 Key: HADOOP-6349
 URL: https://issues.apache.org/jira/browse/HADOOP-6349
 Project: Hadoop Common
  Issue Type: New Feature
  Components: io
Reporter: William Kinney
 Attachments: HADOOP-6349-TestFastLZCodec.patch, HADOOP-6349.patch, 
 TestCodecPerformance.java, TestCodecPerformance.java, testCodecPerfResults.tsv


 Per  [HADOOP-4874|http://issues.apache.org/jira/browse/HADOOP-4874], FastLZ 
 is a good (speed, license) alternative to LZO. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6805) add buildDTServiceName method to SecurityUtil (as part of MAPREDUCE-1718)

2010-07-19 Thread Boris Shkolnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12889914#action_12889914
 ] 

Boris Shkolnik commented on HADOOP-6805:


javadoc - these are the old javadoc warnings (about sun private packages) 
introduced elsewhere.
no test - this is not new code; this patch moves a method from one class to 
another.
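
For readers following along, a hedged sketch of what a buildDTServiceName-style 
helper does: derive a "host:port" service string for a delegation token from a 
filesystem URI. The real code lives in SecurityUtil after this patch; the class 
name and details below are illustrative:

{code:java}
import java.net.URI;

public final class ServiceNames {
  /** Illustrative only: build a token service name from a URI. */
  public static String buildDTServiceName(URI uri, int defaultPort) {
    int port = uri.getPort() == -1 ? defaultPort : uri.getPort();
    return uri.getHost() + ":" + port;
  }
}
{code}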

 add buildDTServiceName method to SecurityUtil (as part of MAPREDUCE-1718)
 -

 Key: HADOOP-6805
 URL: https://issues.apache.org/jira/browse/HADOOP-6805
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Attachments: HADOOP-6805.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6805) add buildDTServiceName method to SecurityUtil (as part of MAPREDUCE-1718)

2010-07-19 Thread Boris Shkolnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Shkolnik updated HADOOP-6805:
---


Committed to trunk.

 add buildDTServiceName method to SecurityUtil (as part of MAPREDUCE-1718)
 -

 Key: HADOOP-6805
 URL: https://issues.apache.org/jira/browse/HADOOP-6805
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Attachments: HADOOP-6805.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6805) add buildDTServiceName method to SecurityUtil (as part of MAPREDUCE-1718)

2010-07-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12889936#action_12889936
 ] 

Hudson commented on HADOOP-6805:


Integrated in Hadoop-Common-trunk-Commit #330 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk-Commit/330/])
HADOOP-6805. add buildDTServiceName method to SecurityUtil (as part of 
MAPREDUCE-1718)


 add buildDTServiceName method to SecurityUtil (as part of MAPREDUCE-1718)
 -

 Key: HADOOP-6805
 URL: https://issues.apache.org/jira/browse/HADOOP-6805
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Attachments: HADOOP-6805.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6837) Support for LZMA compression

2010-07-19 Thread Nicholas Carlini (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Carlini updated HADOOP-6837:
-

Attachment: HADOOP-6837-lzma-c-20100719.patch

Uploaded C code with LzmaNativeInputStream and LzmaNativeOutputStream. Testing 
is the same as for the Java code. The documentation is limited on the C side, 
and there are still (commented-out) debug statements scattered throughout.
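
For context, a sketch of how any Hadoop CompressionCodec - including the LZMA 
one in this patch - is typically driven; the class name 
org.apache.hadoop.io.compress.LzmaCodec is an assumption, not confirmed by the 
attachment:

{code:java}
import java.io.*;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class CodecRoundTrip {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(
        conf.getClassByName("org.apache.hadoop.io.compress.LzmaCodec"), conf);
    byte[] data = "hello lzma".getBytes();
    // Compress into an in-memory buffer.
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    OutputStream out = codec.createOutputStream(buf);
    out.write(data);
    out.close();
    // Decompress and read the bytes back (single read is enough here).
    InputStream in = codec.createInputStream(
        new ByteArrayInputStream(buf.toByteArray()));
    byte[] roundTrip = new byte[data.length];
    in.read(roundTrip);
    in.close();
    System.out.println(new String(roundTrip));
  }
}
{code}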

 Support for LZMA compression
 

 Key: HADOOP-6837
 URL: https://issues.apache.org/jira/browse/HADOOP-6837
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Nicholas Carlini
Assignee: Nicholas Carlini
 Attachments: HADOOP-6837-lzma-c-20100719.patch, 
 HADOOP-6837-lzma-java-20100623.patch


 Add support for LZMA (http://www.7-zip.org/sdk.html) compression, which 
 generally achieves higher compression ratios than both gzip and bzip2.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6855) Add ability to get groups for ACLs from 'getent netgroup'

2010-07-19 Thread Erik Steffl (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890041#action_12890041
 ] 

Erik Steffl commented on HADOOP-6855:
-

This patch is for people who want to use the 'getent netgroup $group' command 
to provide the user-to-groups mapping. It does not affect people who do not 
explicitly configure usage of the group mapping service added in this patch 
(ShellBasedUnixGroupsNetgroupMapping).

There is another patch coming that will provide a JNI (or JNA) based 
implementation; see https://issues.apache.org/jira/browse/HADOOP-6864
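
For readers unfamiliar with the mechanism, a rough sketch of what a 'getent 
netgroup' membership lookup involves; the parsing below is simplified and 
illustrative - the patch's ShellBasedUnixGroupsNetgroupMapping is the 
authoritative version:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class NetgroupLookup {
  // getent prints lines like: mygroup (host1,user1,domain) (host2,user2,domain)
  public static void main(String[] args) throws Exception {
    Process p = new ProcessBuilder("getent", "netgroup", args[0]).start();
    BufferedReader r =
        new BufferedReader(new InputStreamReader(p.getInputStream()));
    String line;
    while ((line = r.readLine()) != null) {
      for (String triple : line.split("\\(")) {
        String[] fields = triple.split(",");
        if (fields.length == 3) {
          System.out.println("member user: " + fields[1]); // middle field
        }
      }
    }
    p.waitFor();
  }
}
{code}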

 Add ability to get groups for ACLs from 'getent netgroup'
 -

 Key: HADOOP-6855
 URL: https://issues.apache.org/jira/browse/HADOOP-6855
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 0.22.0
Reporter: Erik Steffl
 Fix For: 0.22.0

 Attachments: HADOOP-6855-0.20-1.patch, HADOOP-6855-0.20-2.patch, 
 HADOOP-6855-0.20.patch


 Add ability to specify netgroups in ACLs (see class AccessControlList.java). 
 Membership of users in netgroups will be determined by running 'getent 
 netgroup $groupName'. Netgroups will be differentiated from regular unix 
 groups by having '@' as the first character.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6837) Support for LZMA compression

2010-07-19 Thread Hong Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890043#action_12890043
 ] 

Hong Tang commented on HADOOP-6837:
---

@nicholas, per our offline conversation last week, have you looked into whether 
the licensing of liblzma is suitable for inclusion in Hadoop? liblzma seems 
better in the sense that its API closely resembles the APIs of other 
compression libraries like bzip2 or zlib, which should shrink the amount of 
coding needed to support C (and Java over JNI).

 Support for LZMA compression
 

 Key: HADOOP-6837
 URL: https://issues.apache.org/jira/browse/HADOOP-6837
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Nicholas Carlini
Assignee: Nicholas Carlini
 Attachments: HADOOP-6837-lzma-c-20100719.patch, 
 HADOOP-6837-lzma-java-20100623.patch


 Add support for LZMA (http://www.7-zip.org/sdk.html) compression, which 
 generally achieves higher compression ratios than both gzip and bzip2.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6837) Support for LZMA compression

2010-07-19 Thread Nicholas Carlini (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890051#action_12890051
 ] 

Nicholas Carlini commented on HADOOP-6837:
--

I spoke with Greg about it just now, and he said it would probably be better 
for me to work on FastLZ first and come back to this later.

 Support for LZMA compression
 

 Key: HADOOP-6837
 URL: https://issues.apache.org/jira/browse/HADOOP-6837
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Nicholas Carlini
Assignee: Nicholas Carlini
 Attachments: HADOOP-6837-lzma-c-20100719.patch, 
 HADOOP-6837-lzma-java-20100623.patch


 Add support for LZMA (http://www.7-zip.org/sdk.html) compression, which 
 generally achieves higher compression ratios than both gzip and bzip2.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6857) FsShell should report raw disk usage including replication factor

2010-07-19 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-6857:


 Hadoop Flags: [Incompatible change]
Fix Version/s: 0.22.0
Affects Version/s: (was: 0.20.2)

Hey Aaron,

Patch looks good. Mind creating o.a.h.fs.TestFsShell.java and adding a test 
showing that files with two different replication levels work? (You might need 
to mock the replication level.) Also, please verify that TestShell and 
TestHDFSCLI in HDFS still pass, for sanity. A sketch of the replication-aware 
check follows.
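
A hedged sketch of the kind of check being asked for - raw usage should be the 
sum of length times replication over the files; the helper and its name are 
assumptions, not from the patch, and the FileStatus mocking is left out:

{code:java}
import org.apache.hadoop.fs.FileStatus;

public class TestFsShellRawUsage {
  /** Raw disk usage: bytes multiplied by each file's replication factor. */
  static long rawUsage(FileStatus[] files) {
    long total = 0;
    for (FileStatus f : files) {
      total += f.getLen() * f.getReplication(); // getReplication is not HDFS-specific
    }
    return total;
  }
  // The test would build two FileStatus entries with replication 2 and 3
  // and assert that rawUsage() equals 2*len1 + 3*len2.
}
{code}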

Wrt the rationale, I think this change is kosher since 
FileStatus#getReplication is not HDFS-specific.

Marking the jira as an incompatible change since, IIRC, FsShell is considered 
a public API.

Thanks,
Eli

 FsShell should report raw disk usage including replication factor
 -

 Key: HADOOP-6857
 URL: https://issues.apache.org/jira/browse/HADOOP-6857
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Alex Kozlov
 Fix For: 0.22.0

 Attachments: show-space-consumed.txt


 Currently FsShell reports HDFS usage with the 'hadoop fs -dus <path>' command. 
 Since the replication level is set per file, it would be nice to add raw disk 
 usage including the replication factor (maybe 'hadoop fs -dus -raw <path>'?). 
 This will allow assessing resource usage more accurately.  -- Alex K

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6857) FsShell should report raw disk usage including replication factor

2010-07-19 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890070#action_12890070
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-6857:


We already have 'fs -count <path>', which counts bytes including replication. 
Is that good enough?

 FsShell should report raw disk usage including replication factor
 -

 Key: HADOOP-6857
 URL: https://issues.apache.org/jira/browse/HADOOP-6857
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Alex Kozlov
 Fix For: 0.22.0

 Attachments: show-space-consumed.txt


 Currently FsShell reports HDFS usage with the 'hadoop fs -dus <path>' command. 
 Since the replication level is set per file, it would be nice to add raw disk 
 usage including the replication factor (maybe 'hadoop fs -dus -raw <path>'?). 
 This will allow assessing resource usage more accurately.  -- Alex K

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6632) Support for using different Kerberos keys for different instances of Hadoop services

2010-07-19 Thread Kan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kan Zhang updated HADOOP-6632:
--

Attachment: c6632-07.patch

Uploading a new patch that simply merges with the latest trunk changes. No 
semantic change from the previous patch.

 Support for using different Kerberos keys for different instances of Hadoop 
 services
 

 Key: HADOOP-6632
 URL: https://issues.apache.org/jira/browse/HADOOP-6632
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: 6632.mr.patch, c6632-05.patch, c6632-07.patch, 
 HADOOP-6632-Y20S-18.patch, HADOOP-6632-Y20S-22.patch


 We tested using the same Kerberos key for all datanodes in an HDFS cluster or 
 the same Kerberos key for all TaskTrackers in a MapRed cluster. But it 
 doesn't work. The reason is that when datanodes try to authenticate to the 
 namenode all at once, the Kerberos authenticators they send to the namenode 
 may have the same timestamp and will be rejected as replay requests. This 
 JIRA makes it possible to use a unique key for each service instance.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6632) Support for using different Kerberos keys for different instances of Hadoop services

2010-07-19 Thread Kan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kan Zhang updated HADOOP-6632:
--

Status: Open  (was: Patch Available)

 Support for using different Kerberos keys for different instances of Hadoop 
 services
 

 Key: HADOOP-6632
 URL: https://issues.apache.org/jira/browse/HADOOP-6632
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: 6632.mr.patch, c6632-05.patch, c6632-07.patch, 
 HADOOP-6632-Y20S-18.patch, HADOOP-6632-Y20S-22.patch


 We tested using the same Kerberos key for all datanodes in an HDFS cluster or 
 the same Kerberos key for all TaskTrackers in a MapRed cluster. But it 
 doesn't work. The reason is that when datanodes try to authenticate to the 
 namenode all at once, the Kerberos authenticators they send to the namenode 
 may have the same timestamp and will be rejected as replay requests. This 
 JIRA makes it possible to use a unique key for each service instance.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6632) Support for using different Kerberos keys for different instances of Hadoop services

2010-07-19 Thread Kan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kan Zhang updated HADOOP-6632:
--

Status: Patch Available  (was: Open)

 Support for using different Kerberos keys for different instances of Hadoop 
 services
 

 Key: HADOOP-6632
 URL: https://issues.apache.org/jira/browse/HADOOP-6632
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: 6632.mr.patch, c6632-05.patch, c6632-07.patch, 
 HADOOP-6632-Y20S-18.patch, HADOOP-6632-Y20S-22.patch


 We tested using the same Kerberos key for all datanodes in an HDFS cluster or 
 the same Kerberos key for all TaskTrackers in a MapRed cluster. But it 
 doesn't work. The reason is that when datanodes try to authenticate to the 
 namenode all at once, the Kerberos authenticators they send to the namenode 
 may have the same timestamp and will be rejected as replay requests. This 
 JIRA makes it possible to use a unique key for each service instance.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6632) Support for using different Kerberos keys for different instances of Hadoop services

2010-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890104#action_12890104
 ] 

Hadoop QA commented on HADOOP-6632:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12449900/c6632-07.patch
  against trunk revision 965556.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated 1 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/624/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/624/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/624/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/624/console

This message is automatically generated.

 Support for using different Kerberos keys for different instances of Hadoop 
 services
 

 Key: HADOOP-6632
 URL: https://issues.apache.org/jira/browse/HADOOP-6632
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: 6632.mr.patch, c6632-05.patch, c6632-07.patch, 
 HADOOP-6632-Y20S-18.patch, HADOOP-6632-Y20S-22.patch


 We tested using the same Kerberos key for all datanodes in an HDFS cluster or 
 the same Kerberos key for all TaskTrackers in a MapRed cluster. But it 
 doesn't work. The reason is that when datanodes try to authenticate to the 
 namenode all at once, the Kerberos authenticators they send to the namenode 
 may have the same timestamp and will be rejected as replay requests. This 
 JIRA makes it possible to use a unique key for each service instance.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6632) Support for using different Kerberos keys for different instances of Hadoop services

2010-07-19 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6632:


   Status: Resolved  (was: Patch Available)
Fix Version/s: 0.22.0
   Resolution: Fixed

I just committed this. Thanks, Kan &amp; Jitendra!

 Support for using different Kerberos keys for different instances of Hadoop 
 services
 

 Key: HADOOP-6632
 URL: https://issues.apache.org/jira/browse/HADOOP-6632
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang
 Fix For: 0.22.0

 Attachments: 6632.mr.patch, c6632-05.patch, c6632-07.patch, 
 HADOOP-6632-Y20S-18.patch, HADOOP-6632-Y20S-22.patch


 We tested using the same Kerberos key for all datanodes in an HDFS cluster or 
 the same Kerberos key for all TaskTrackers in a MapRed cluster. But it 
 doesn't work. The reason is that when datanodes try to authenticate to the 
 namenode all at once, the Kerberos authenticators they send to the namenode 
 may have the same timestamp and will be rejected as replay requests. This 
 JIRA makes it possible to use a unique key for each service instance.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6632) Support for using different Kerberos keys for different instances of Hadoop services

2010-07-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890112#action_12890112
 ] 

Hudson commented on HADOOP-6632:


Integrated in Hadoop-Common-trunk-Commit #331 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk-Commit/331/])
HADOOP-6632. Adds support for using different keytabs for different servers 
in a Hadoop cluster. In the earlier implementation, all servers of a certain 
type \(like TaskTracker\) would have the same keytab and the same principal. 
Now the principal name is a pattern that has _HOST in it. Contributed by Kan 
Zhang &amp; Jitendra Pandey.
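
A minimal sketch of the _HOST convention the commit message describes: a 
per-host principal is derived by substituting the local fully-qualified 
hostname into the configured pattern. The helper below is illustrative, not 
the committed code:

{code:java}
import java.net.InetAddress;

public final class PrincipalPattern {
  /** e.g. "dn/_HOST@EXAMPLE.COM" -> "dn/dn42.example.com@EXAMPLE.COM" */
  public static String resolve(String pattern) throws Exception {
    String fqdn = InetAddress.getLocalHost().getCanonicalHostName();
    return pattern.replace("_HOST", fqdn);
  }
}
{code}

Because each server now authenticates with a principal tied to its own host, 
simultaneous authenticators no longer collide as replays.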


 Support for using different Kerberos keys for different instances of Hadoop 
 services
 

 Key: HADOOP-6632
 URL: https://issues.apache.org/jira/browse/HADOOP-6632
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang
 Fix For: 0.22.0

 Attachments: 6632.mr.patch, c6632-05.patch, c6632-07.patch, 
 HADOOP-6632-Y20S-18.patch, HADOOP-6632-Y20S-22.patch


 We tested using the same Kerberos key for all datanodes in an HDFS cluster or 
 the same Kerberos key for all TaskTrackers in a MapRed cluster. But it 
 doesn't work. The reason is that when datanodes try to authenticate to the 
 namenode all at once, the Kerberos authenticators they send to the namenode 
 may have the same timestamp and will be rejected as replay requests. This 
 JIRA makes it possible to use a unique key for each service instance.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6349) Implement FastLZCodec for fastlz/lzo algorithm

2010-07-19 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-6349:


Attachment: hadoop-6349-1.patch

Patch attached; this was as far as I got before getting distracted.
* Re-based on trunk. Cleans up carriage returns and indenting; removes most 
gratuitous uses of {{this}}.
* Fixes some bugs in TestCodecPerformance.java.
* Got JFastLZCompressor working, at least for the seeds that were breaking (the 
finished method was incorrect).
* Made some progress on JFastLZDecompressor, but found a seed (see the one used 
in TestCodecPerformance) that causes a checksum mismatch; getRemaining still 
needs to be implemented.

 Implement FastLZCodec for fastlz/lzo algorithm
 --

 Key: HADOOP-6349
 URL: https://issues.apache.org/jira/browse/HADOOP-6349
 Project: Hadoop Common
  Issue Type: New Feature
  Components: io
Reporter: William Kinney
 Attachments: hadoop-6349-1.patch, HADOOP-6349-TestFastLZCodec.patch, 
 HADOOP-6349.patch, TestCodecPerformance.java, TestCodecPerformance.java, 
 testCodecPerfResults.tsv


 Per  [HADOOP-4874|http://issues.apache.org/jira/browse/HADOOP-4874], FastLZ 
 is a good (speed, license) alternative to LZO. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6349) Implement FastLZCodec for fastlz/lzo algorithm

2010-07-19 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890128#action_12890128
 ] 

Eli Collins commented on HADOOP-6349:
-

Here's some performance data I collected a while back. I compressed and 
uncompressed a 135MB file (~1 HDFS block) of JSON data using each codec's 
default command line utility, compiled with -Wall -O3 -fomit-frame-pointer, on 
a Nehalem-based system. I used an in-memory file system so the times were 
stable, and took the best of 5 runs. These numbers should be taken with a 
grain of salt since the same command line utility was not used for each codec. 
According to [1], zippy is 22% faster than lzo; fastlz is also 22% faster than 
lzo in the data below, so the ballpark performance is reasonable. We could 
optimize the native version further; I'm curious what the overhead of calling 
out to JNI is.

||codec||fastlz(1)||fastlz(2)||quicklz(1)||quicklz(2)||quicklz(3)||lzo (default)||lzo (-9)||lzf||
|size|70M|63M|58M|49M|48M|61M|47M|69M|
|compress|1.092s|1.113s|0.913s|1.409s|5.343s|1.414s|20.495s|1.110s|
|decompress|0.649s|0.639s|0.697s|0.699s|0.486s|0.630s|0.665s|0.729s|

1. http://feedblog.org/2008/10/12/google-bigtable-compression-zippy-and-bmdiff
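
For anyone wanting to reproduce the methodology, a sketch of the best-of-5 
timing loop described above; the codec command line is passed in as arguments 
and everything else is a placeholder:

{code:java}
public class BestOfFive {
  public static void main(String[] args) throws Exception {
    long best = Long.MAX_VALUE;
    for (int i = 0; i < 5; i++) {
      long start = System.nanoTime();
      // args is the codec's command line, e.g. the compress invocation
      new ProcessBuilder(args).start().waitFor();
      best = Math.min(best, System.nanoTime() - start);
    }
    System.out.printf("best of 5: %.3fs%n", best / 1e9);
  }
}
{code}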

 Implement FastLZCodec for fastlz/lzo algorithm
 --

 Key: HADOOP-6349
 URL: https://issues.apache.org/jira/browse/HADOOP-6349
 Project: Hadoop Common
  Issue Type: New Feature
  Components: io
Reporter: William Kinney
 Attachments: hadoop-6349-1.patch, HADOOP-6349-TestFastLZCodec.patch, 
 HADOOP-6349.patch, TestCodecPerformance.java, TestCodecPerformance.java, 
 testCodecPerfResults.tsv


 Per  [HADOOP-4874|http://issues.apache.org/jira/browse/HADOOP-4874], FastLZ 
 is a good (speed, license) alternative to LZO. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6865) there will be ant error if ant ran without network connected

2010-07-19 Thread Evan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890134#action_12890134
 ] 

Evan Wang commented on HADOOP-6865:
---

Yep, it is possible that ivy-2.0.0-rc2.jar is the broken Ivy zip file. I 
removed it, put in a pristine ivy-2.0.0-rc2.jar, and ant works well. But how 
do we revise the ant build to do this automatically, and what about building 
without network access? By the way, I am just a new user of ant, looking for 
a solution.

 there will be ant error if ant ran without network connected
 

 Key: HADOOP-6865
 URL: https://issues.apache.org/jira/browse/HADOOP-6865
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.20.2
 Environment: centos 5.4
Reporter: Evan Wang

 If you run `ant` without a network connection, the error below occurs. And 
 even if you then connect your network, the error persists.
 ivy-init-antlib:
   [typedef] java.util.zip.ZipException: error in opening zip file
   [typedef] at java.util.zip.ZipFile.open(Native Method)
   [typedef] at java.util.zip.ZipFile.init(ZipFile.java:114)
   [typedef] at java.util.zip.ZipFile.init(ZipFile.java:131)
   [typedef] at 
 org.apache.tools.ant.AntClassLoader.getResourceURL(AntClassLoader.java:1028)
   [typedef] at 
 org.apache.tools.ant.AntClassLoader$ResourceEnumeration.findNextResource(AntClassLoader.java:147)
   [typedef] at 
 org.apache.tools.ant.AntClassLoader$ResourceEnumeration.init(AntClassLoader.java:109)
   [typedef] at 
 org.apache.tools.ant.AntClassLoader.findResources(AntClassLoader.java:975)
   [typedef] at java.lang.ClassLoader.getResources(ClassLoader.java:1016)
   [typedef] at 
 org.apache.tools.ant.taskdefs.Definer.resourceToURLs(Definer.java:364)
   [typedef] at 
 org.apache.tools.ant.taskdefs.Definer.execute(Definer.java:256)
   [typedef] at 
 org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
   [typedef] at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
   [typedef] at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   [typedef] at java.lang.reflect.Method.invoke(Method.java:597)
   [typedef] at 
 org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
   [typedef] at org.apache.tools.ant.Task.perform(Task.java:348)
   [typedef] at org.apache.tools.ant.Target.execute(Target.java:357)
   [typedef] at org.apache.tools.ant.Target.performTasks(Target.java:385)
   [typedef] at 
 org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337)
   [typedef] at 
 org.apache.tools.ant.Project.executeTarget(Project.java:1306)
   [typedef] at 
 org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
   [typedef] at 
 org.apache.tools.ant.Project.executeTargets(Project.java:1189)
   [typedef] at org.apache.tools.ant.Main.runBuild(Main.java:758)
   [typedef] at org.apache.tools.ant.Main.startAnt(Main.java:217)
   [typedef] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257)
   [typedef] at 
 org.apache.tools.ant.launch.Launcher.main(Launcher.java:104)
   [typedef] Could not load definitions from resource 
 org/apache/ivy/ant/antlib.xml. It could not be found.
 BUILD FAILED
 /opt/hadoop-0.20.2/build.xml:1644: You need Apache Ivy 2.0 or later from 
 http://ant.apache.org/
   It could not be loaded from 
 http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.0.0-rc2/ivy-2.0.0-rc2.jar

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6632) Support for using different Kerberos keys for different instances of Hadoop services

2010-07-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890132#action_12890132
 ] 

Hudson commented on HADOOP-6632:


Integrated in Hadoop-Hdfs-trunk-Commit #346 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/346/])
HDFS-1201. The HDFS component for HADOOP-6632. Contributed by Kan Zhang &amp; 
Jitendra Pandey.


 Support for using different Kerberos keys for different instances of Hadoop 
 services
 

 Key: HADOOP-6632
 URL: https://issues.apache.org/jira/browse/HADOOP-6632
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang
 Fix For: 0.22.0

 Attachments: 6632.mr.patch, c6632-05.patch, c6632-07.patch, 
 HADOOP-6632-Y20S-18.patch, HADOOP-6632-Y20S-22.patch


 We tested using the same Kerberos key for all datanodes in an HDFS cluster or 
 the same Kerberos key for all TaskTrackers in a MapRed cluster. But it 
 doesn't work. The reason is that when datanodes try to authenticate to the 
 namenode all at once, the Kerberos authenticators they send to the namenode 
 may have the same timestamp and will be rejected as replay requests. This 
 JIRA makes it possible to use a unique key for each service instance.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6867) Using socket address for datanode registry breaks multihoming

2010-07-19 Thread Jordan Sissel (JIRA)
Using socket address for datanode registry breaks multihoming
-

 Key: HADOOP-6867
 URL: https://issues.apache.org/jira/browse/HADOOP-6867
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
 Environment: hadoop-0.20-0.20.2+228-1, centos 5, distcp
Reporter: Jordan Sissel


Related: 
* https://issues.apache.org/jira/browse/HADOOP-985
* https://issues.apache.org/jira/secure/attachment/12350813/HADOOP-985-1.patch
* http://old.nabble.com/public-IP-for-datanode-on-EC2-td19336240.html
* 
http://www.cloudera.com/blog/2008/12/securing-a-hadoop-cluster-through-a-gateway/
 

Datanodes register using their DNS name (even configurable with 
dfs.datanode.dns.interface). However, the Namenode only really uses the source 
address that the registration came from when sharing it with clients that want 
to write to HDFS.

Specific environment that causes this problem:
* Datanode and Namenode multihomed on two networks.
* Datanode registers to namenode using dns name on network #1
* Client (distcp) connects to namenode on network #2 (*) and is told to write 
to datanodes on network #1, which doesn't work for us.

(*) Allowing contact to the namenode on multiple networks was achieved with a 
socat proxy hack that tunnels network#2 to network#1 port 8020. This is 
unrelated to the issue at hand.


The Cloudera link above recommends proxying for reasons other than 
multihoming; it would work, but it doesn't sound like it would work well 
(bandwidth, multiplicity, multi-tenancy, etc.).

Our specific scenario is wanting to distcp over a different network interface 
than the one the datanodes register themselves on, but it would be nice if 
both (all) interfaces worked. We are internally going to patch hadoop to roll 
back parts of the patch mentioned above so that we rely on the datanode name 
rather than the socket address it uses to talk to the namenode. The alternate 
option is to push config changes to all nodes that force them to 
listen/register on one specific interface only. This works around our specific 
problem, but doesn't really help with multihoming. 

I would propose that datanodes register all interface addresses during the 
registration/heartbeat process, and that HDFS clients be given all addresses 
for a specific node to perform operations against, so they can select 
accordingly (or take whichever works first), just like round-robin DNS does.
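
A sketch of the enumeration a datanode would need in order to register every 
interface address, as proposed above; illustrative only:

{code:java}
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LocalAddresses {
  /** All non-loopback addresses a datanode could register. */
  public static List<String> all() throws Exception {
    List<String> addrs = new ArrayList<String>();
    for (NetworkInterface nic :
         Collections.list(NetworkInterface.getNetworkInterfaces())) {
      for (InetAddress a : Collections.list(nic.getInetAddresses())) {
        if (!a.isLoopbackAddress()) {
          addrs.add(a.getHostAddress());
        }
      }
    }
    return addrs;
  }
}
{code}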


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6536) FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as the argument

2010-07-19 Thread Ravi Gummadi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Gummadi updated HADOOP-6536:
-

Status: Open  (was: Patch Available)

 FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as 
 the argument
 

 Key: HADOOP-6536
 URL: https://issues.apache.org/jira/browse/HADOOP-6536
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: HADOOP-6536.patch, HADOOP-6536.v1.1.patch, 
 HADOOP-6536.v1.patch


 FileUtil.fullyDelete(dir) deletes the contents of a sym-linked directory when 
 we pass a symlink. If this is the intended behavior, it should be documented 
 as such. Otherwise, it should be changed not to delete the contents of the 
 sym-linked directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6536) FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as the argument

2010-07-19 Thread Ravi Gummadi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Gummadi updated HADOOP-6536:
-

Status: Patch Available  (was: Open)

The failed test TestTrash passes on my local machine. Letting it go through 
Hudson again...

 FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as 
 the argument
 

 Key: HADOOP-6536
 URL: https://issues.apache.org/jira/browse/HADOOP-6536
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: HADOOP-6536.patch, HADOOP-6536.v1.1.patch, 
 HADOOP-6536.v1.patch


 FileUtil.fullyDelete(dir) deletes the contents of a sym-linked directory when 
 we pass a symlink. If this is the intended behavior, it should be documented 
 as such. Otherwise, it should be changed not to delete the contents of the 
 sym-linked directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6536) FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as the argument

2010-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890180#action_12890180
 ] 

Hadoop QA commented on HADOOP-6536:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12449821/HADOOP-6536.v1.1.patch
  against trunk revision 965696.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated 1 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/625/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/625/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/625/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/625/console

This message is automatically generated.

 FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as 
 the argument
 

 Key: HADOOP-6536
 URL: https://issues.apache.org/jira/browse/HADOOP-6536
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: HADOOP-6536.patch, HADOOP-6536.v1.1.patch, 
 HADOOP-6536.v1.patch


 FileUtil.fullyDelete(dir) deletes the contents of a sym-linked directory when 
 we pass a symlink. If this is the intended behavior, it should be documented 
 as such. Otherwise, it should be changed not to delete the contents of the 
 sym-linked directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6536) FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as the argument

2010-07-19 Thread Ravi Gummadi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890183#action_12890183
 ] 

Ravi Gummadi commented on HADOOP-6536:
--

bq. -1 javadoc. The javadoc tool appears to have generated 1 warning messages.
No javadoc warning is introduced by this patch.

 FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as 
 the argument
 

 Key: HADOOP-6536
 URL: https://issues.apache.org/jira/browse/HADOOP-6536
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: HADOOP-6536.patch, HADOOP-6536.v1.1.patch, 
 HADOOP-6536.v1.patch


 FileUtil.fullyDelete(dir) deletes the contents of a sym-linked directory when 
 we pass a symlink. If this is the intended behavior, it should be documented 
 as such. Otherwise, it should be changed not to delete the contents of the 
 sym-linked directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6536) FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as the argument

2010-07-19 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated HADOOP-6536:


  Status: Resolved  (was: Patch Available)
Hadoop Flags: [Reviewed]
  Resolution: Fixed

I just committed this. Thanks Ravi!

 FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as 
 the argument
 

 Key: HADOOP-6536
 URL: https://issues.apache.org/jira/browse/HADOOP-6536
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: HADOOP-6536.patch, HADOOP-6536.v1.1.patch, 
 HADOOP-6536.v1.patch


 FileUtil.fullyDelete(dir) deletes the contents of a sym-linked directory when 
 we pass a symlink. If this is the intended behavior, it should be documented 
 as such. Otherwise, it should be changed not to delete the contents of the 
 sym-linked directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6536) FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as the argument

2010-07-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12890188#action_12890188
 ] 

Hudson commented on HADOOP-6536:


Integrated in Hadoop-Common-trunk-Commit #332 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk-Commit/332/])
HADOOP-6536. Fixes FileUtil.fullyDelete() not to delete the contents of the 
sym-linked directory. Contributed by Ravi Gummadi


 FileUtil.fullyDelete(dir) behavior is not defined when we pass a symlink as 
 the argument
 

 Key: HADOOP-6536
 URL: https://issues.apache.org/jira/browse/HADOOP-6536
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Amareshwari Sriramadasu
Assignee: Ravi Gummadi
 Fix For: 0.22.0

 Attachments: HADOOP-6536.patch, HADOOP-6536.v1.1.patch, 
 HADOOP-6536.v1.patch


 FileUtil.fullyDelete(dir) deletes the contents of a sym-linked directory when 
 we pass a symlink. If this is the intended behavior, it should be documented 
 as such. Otherwise, it should be changed not to delete the contents of the 
 sym-linked directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.