[jira] Updated: (HADOOP-6857) FsShell should report raw disk usage including replication factor

2010-07-17 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-6857:
-----------------------------------

Status: Patch Available  (was: Open)

 FsShell should report raw disk usage including replication factor
 ------------------------------------------------------------------

 Key: HADOOP-6857
 URL: https://issues.apache.org/jira/browse/HADOOP-6857
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.2
Reporter: Alex Kozlov
 Attachments: show-space-consumed.txt


 Currently FsShell reports HDFS usage with the hadoop fs -dus path command.
 Since the replication level is set per file, it would be nice to add raw disk
 usage including the replication factor (maybe hadoop fs -dus -raw path?).
 This would allow assessing resource usage more accurately.  -- Alex K

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6857) FsShell should report raw disk usage including replication factor

2010-07-17 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-6857:
-----------------------------------

Attachment: show-space-consumed.txt

This patch adds a new column to the output of hadoop fs -du and hadoop fs 
-dus which shows the disk space consumed (file size * per-file replication 
factor) of the paths matched by these commands.
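
For readers following along, here is a rough Java sketch of the computation
described above (illustrative only: the class, its recursion, and the main
driver are invented for this example, though FileStatus.getLen() and
FileStatus.getReplication() are the real accessors such a change relies on):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SpaceConsumedSketch {
  // Raw disk usage of a path: the sum over all files of
  // (file length * that file's replication factor).
  public static long spaceConsumed(FileSystem fs, Path p) throws IOException {
    long total = 0;
    FileStatus[] stats = fs.listStatus(p);
    if (stats == null) {          // nonexistent path
      return 0;
    }
    for (FileStatus stat : stats) {
      if (stat.isDir()) {
        total += spaceConsumed(fs, stat.getPath()); // recurse into directories
      } else {
        total += stat.getLen() * stat.getReplication();
      }
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    System.out.println(spaceConsumed(fs, new Path(args[0])));
  }
}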

 FsShell should report raw disk usage including replication factor
 ------------------------------------------------------------------

 Key: HADOOP-6857
 URL: https://issues.apache.org/jira/browse/HADOOP-6857
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.2
Reporter: Alex Kozlov
 Attachments: show-space-consumed.txt


 Currently FsShell reports HDFS usage with the hadoop fs -dus path command.
 Since the replication level is set per file, it would be nice to add raw disk
 usage including the replication factor (maybe hadoop fs -dus -raw path?).
 This would allow assessing resource usage more accurately.  -- Alex K

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6857) FsShell should report raw disk usage including replication factor

2010-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12889481#action_12889481
 ] 

Hadoop QA commented on HADOOP-6857:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12449749/show-space-consumed.txt
  against trunk revision 964993.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified
tests.  Please justify why no new tests are needed for this patch.  Also
please list what manual steps were performed to verify this patch.

-1 javadoc.  The javadoc tool appears to have generated 1 warning message.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/622/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/622/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/622/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/622/console

This message is automatically generated.

 FsShell should report raw disk usage including replication factor
 ------------------------------------------------------------------

 Key: HADOOP-6857
 URL: https://issues.apache.org/jira/browse/HADOOP-6857
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.2
Reporter: Alex Kozlov
 Attachments: show-space-consumed.txt


 Currently FsShell reports HDFS usage with the hadoop fs -dus path command.
 Since the replication level is set per file, it would be nice to add raw disk
 usage including the replication factor (maybe hadoop fs -dus -raw path?).
 This would allow assessing resource usage more accurately.  -- Alex K

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6670) UserGroupInformation doesn't support use in hash tables

2010-07-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12889493#action_12889493
 ] 

Hudson commented on HADOOP-6670:
--------------------------------

Integrated in Hadoop-Common-trunk #395 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/395/])
HADOOP-6670. Use the UserGroupInformation's Subject as the criteria for 
equals and hashCode. Contributed by Owen O'Malley and Kan Zhang.


 UserGroupInformation doesn't support use in hash tables
 -------------------------------------------------------

 Key: HADOOP-6670
 URL: https://issues.apache.org/jira/browse/HADOOP-6670
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.22.0

 Attachments: 6670-gridmix-test.bugfix.patch, c6670-02.patch, 
 c6670-03.patch, fs-close.patch, fs-close.patch, fs-close.patch


 The UserGroupInformation objects are mutable, but they are used as keys in 
 hash tables. This leads to serious problems in the FileSystem cache and RPC 
 connection cache. We need to change the hashCode to be the identity hash code 
 of the Subject and change equals to use == between the Subjects.
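
 For illustration, a minimal sketch of that identity-based scheme (this is
 not necessarily the committed patch; it assumes UserGroupInformation keeps
 its Subject in a field named subject):

 @Override
 public int hashCode() {
   // Identity hash of the Subject, stable even if the UGI is later mutated.
   return System.identityHashCode(subject);
 }

 @Override
 public boolean equals(Object o) {
   if (o == this) {
     return true;
   }
   if (o == null || getClass() != o.getClass()) {
     return false;
   }
   // Reference comparison: equal only if both wrap the very same Subject.
   return subject == ((UserGroupInformation) o).subject;
 }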

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6834) TFile.append compares initial key against null lastKey

2010-07-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12889494#action_12889494
 ] 

Hudson commented on HADOOP-6834:
--------------------------------

Integrated in Hadoop-Common-trunk #395 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/395/])
HADOOP-6834. TFile.append compares initial key against null lastKey (hong 
tang via mahadev)


 TFile.append compares initial key against null lastKey
 -------------------------------------------------------

 Key: HADOOP-6834
 URL: https://issues.apache.org/jira/browse/HADOOP-6834
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.1, 0.20.2
Reporter: Ahad Rana
Assignee: Hong Tang
 Fix For: 0.22.0

 Attachments: hadoop-6834-20100715.patch


 The following code in TFile.KeyRegister.close():
 byte[] lastKey = lastKeyBufferOS.getBuffer();
 int lastLen = lastKeyBufferOS.size();
 if (tfileMeta.getComparator().compare(key, 0, len, lastKey, 0,
     lastLen) < 0) {
   throw new IOException("Keys are not added in sorted order");
 }
 compares the initial key (passed in via TFile.Writer.append) against a
 technically NULL lastKey. lastKey is not initialized until after the first
 call to TFile.Writer.append. The underlying RawComparator interface used for
 comparisons does not stipulate the proper behavior when either length
 argument (l1 or l2) is zero. In the case of LongWritable, its
 WritableComparator implementation does an unsafe read on the passed-in byte
 arrays b1 and b2. Since TFile pre-allocates the buffer used for storing
 lastKey, this passes a valid buffer with zero count to LongWritable's
 comparator, which ignores the lengths and thus produces incorrect results.
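
 One way to guard against this, sketched here under the assumption that the
 writer can tell whether any key has been appended yet (for example via a
 record counter like tfileMeta.getRecordCount(); this is a sketch of the
 idea, not the attached patch):

 byte[] lastKey = lastKeyBufferOS.getBuffer();
 int lastLen = lastKeyBufferOS.size();
 // Only compare once a first key has actually been recorded; before that,
 // lastKey is a pre-allocated buffer whose valid length is zero.
 if (tfileMeta.getRecordCount() > 0
     && tfileMeta.getComparator().compare(key, 0, len, lastKey, 0,
         lastLen) < 0) {
   throw new IOException("Keys are not added in sorted order");
 }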

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6860) 'compile-fault-inject' should never be called directly.

2010-07-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12889495#action_12889495
 ] 

Hudson commented on HADOOP-6860:
--------------------------------

Integrated in Hadoop-Common-trunk #395 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/395/])
HADOOP-6860. 'compile-fault-inject' should never be called directly. 
Contributed by Konstantin Boudnik.


 'compile-fault-inject' should never be called directly.
 --------------------------------------------------------

 Key: HADOOP-6860
 URL: https://issues.apache.org/jira/browse/HADOOP-6860
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.21.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
Priority: Minor
 Fix For: 0.21.0

 Attachments: HADOOP-6860.patch


 Similar to HDFS-1299, the helper target 'compile-fault-inject' should never
 be called directly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6865) there will be an ant error if ant is run without a network connection

2010-07-17 Thread Evan Wang (JIRA)
there will be an ant error if ant is run without a network connection
----------------------------------------------------------------------

 Key: HADOOP-6865
 URL: https://issues.apache.org/jira/browse/HADOOP-6865
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.20.2
 Environment: centos 5.4
Reporter: Evan Wang


If you run `ant` without a network connection, the build fails with the error
below. Even after the network is reconnected, the error persists (presumably
because the failed download leaves a corrupt ivy jar in the local cache, which
later runs then fail to open as a zip file).

ivy-init-antlib:
  [typedef] java.util.zip.ZipException: error in opening zip file
  [typedef] at java.util.zip.ZipFile.open(Native Method)
  [typedef] at java.util.zip.ZipFile.<init>(ZipFile.java:114)
  [typedef] at java.util.zip.ZipFile.<init>(ZipFile.java:131)
  [typedef] at 
org.apache.tools.ant.AntClassLoader.getResourceURL(AntClassLoader.java:1028)
  [typedef] at 
org.apache.tools.ant.AntClassLoader$ResourceEnumeration.findNextResource(AntClassLoader.java:147)
  [typedef] at 
org.apache.tools.ant.AntClassLoader$ResourceEnumeration.<init>(AntClassLoader.java:109)
  [typedef] at 
org.apache.tools.ant.AntClassLoader.findResources(AntClassLoader.java:975)
  [typedef] at java.lang.ClassLoader.getResources(ClassLoader.java:1016)
  [typedef] at 
org.apache.tools.ant.taskdefs.Definer.resourceToURLs(Definer.java:364)
  [typedef] at 
org.apache.tools.ant.taskdefs.Definer.execute(Definer.java:256)
  [typedef] at 
org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
  [typedef] at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
  [typedef] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  [typedef] at java.lang.reflect.Method.invoke(Method.java:597)
  [typedef] at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
  [typedef] at org.apache.tools.ant.Task.perform(Task.java:348)
  [typedef] at org.apache.tools.ant.Target.execute(Target.java:357)
  [typedef] at org.apache.tools.ant.Target.performTasks(Target.java:385)
  [typedef] at 
org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337)
  [typedef] at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
  [typedef] at 
org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
  [typedef] at 
org.apache.tools.ant.Project.executeTargets(Project.java:1189)
  [typedef] at org.apache.tools.ant.Main.runBuild(Main.java:758)
  [typedef] at org.apache.tools.ant.Main.startAnt(Main.java:217)
  [typedef] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257)
  [typedef] at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104)
  [typedef] Could not load definitions from resource 
org/apache/ivy/ant/antlib.xml. It could not be found.

BUILD FAILED
/opt/hadoop-0.20.2/build.xml:1644: You need Apache Ivy 2.0 or later from 
http://ant.apache.org/
  It could not be loaded from 
http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.0.0-rc2/ivy-2.0.0-rc2.jar


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6863) Compile-native still fails on Mac OS X.

2010-07-17 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12889540#action_12889540
 ] 

Allen Wittenauer commented on HADOOP-6863:
--

I haven't had a chance to try the patch yet, but it should be noted that it
looks like it was generated with a newer automake (and, I'm guessing, a newer
libtool) than the base versions we have documented...

 Compile-native still fails on Mac OS X.
 ---

 Key: HADOOP-6863
 URL: https://issues.apache.org/jira/browse/HADOOP-6863
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hong Tang
Assignee: Hong Tang
 Attachments: hadoop-6863-20100715.patch


 I am still getting failures when I try to compile the native library on Mac
 OS X (10.5.8 Leopard). The problems are twofold:
 - Although aclocal.m4 was changed after HADOOP-3659 was committed, the
 corresponding configure script was not regenerated.
 - It cannot find libjvm.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.