[jira] [Commented] (HADOOP-14907) Memory leak in FileSystem cache

2017-09-26 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181120#comment-16181120
 ] 

Thomas Graves commented on HADOOP-14907:


Can you give more details on where the heap dump is from?  It looks like you 
are running Spark.  Are you using the --keytab option?

> Memory leak in FileSystem cache
> ---
>
> Key: HADOOP-14907
> URL: https://issues.apache.org/jira/browse/HADOOP-14907
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.4
>Reporter: cen yuhai
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> There is a memory leak in the FileSystem cache. It can consume a lot of memory. I 
> think the root cause is that the equals function in class Key is not right. 
> You can see in screenshot-1.png that the same user, etl, appears in different keys... 
> The FileSystem cache should also be an LRU cache.
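For background, FileSystem.get() caches instances keyed roughly on (scheme, authority, 
UserGroupInformation), so a client that keeps constructing new UGI instances (for example a 
long-running Spark driver relogging in from a keytab) accumulates a new cache entry per UGI. 
A minimal sketch of two independent mitigations; the class name is made up for illustration, 
while FileSystem.closeAllForUGI and the fs.hdfs.impl.disable.cache setting are standard 
Hadoop client facilities:

{code}
// Illustrative sketch only; the class name is hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class FsCacheExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Mitigation 1: bypass the FileSystem cache for hdfs:// URIs in this client.
    conf.setBoolean("fs.hdfs.impl.disable.cache", true);

    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    FileSystem fs = FileSystem.get(conf);
    fs.exists(new Path("/tmp"));

    // Mitigation 2 (independent of the first): when a UGI is no longer needed,
    // release any FileSystem instances cached under it.
    FileSystem.closeAllForUGI(ugi);
  }
}
{code}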



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-07 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318477#comment-15318477
 ] 

Thomas Graves commented on HADOOP-13184:


My vote would be option 4.

> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>Assignee: Abhishek
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2014-06-27 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-10615:
---

Target Version/s: 2.5.0  (was: 0.23.11, 2.5.0)

 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
 Attachments: HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is not closed upon exit of main.
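 One hedged illustration of the kind of fix (not the attached patch): release the stream in a 
 finally block so it is closed even if reading fails.
 {code}
 // Sketch only; the read loop is schematic, not the actual JenkinsHash#main body.
 import java.io.FileInputStream;
 import java.io.IOException;

 public class CloseStreamSketch {
   public static void main(String[] args) throws IOException {
     FileInputStream in = new FileInputStream(args[0]);
     try {
       byte[] buf = new byte[512];
       int bytesRead;
       while ((bytesRead = in.read(buf)) > 0) {
         // ... hash buf[0..bytesRead) here ...
       }
     } finally {
       in.close();   // released even if reading throws
     }
   }
 }
 {code}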



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10661) Ineffective user/passsword check in FTPFileSystem#initialize()

2014-06-27 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-10661:
---

Target Version/s: 2.5.0  (was: 0.23.11, 2.5.0)

 Ineffective user/passsword check in FTPFileSystem#initialize()
 --

 Key: HADOOP-10661
 URL: https://issues.apache.org/jira/browse/HADOOP-10661
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
 Attachments: HADOOP-10661.patch


 Here is related code:
 {code}
   userAndPassword = (conf.get("fs.ftp.user." + host, null) + ":" + conf
       .get("fs.ftp.password." + host, null));
   if (userAndPassword == null) {
     throw new IOException("Invalid user/passsword specified");
   }
 {code}
 The intention seems to be checking that username / password should not be 
 null.
 But due to the presence of colon, the above check is not effective.
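 A minimal sketch of an effective check, assuming the intent is simply that both values must be 
 non-null before concatenating (the helper class and method names are hypothetical):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;

 public class FtpCredentialCheck {
   // Validate each value before concatenating, since "null" + ":" + "null"
   // is a non-null String and never fails the original check.
   static String userAndPassword(Configuration conf, String host) throws IOException {
     String user = conf.get("fs.ftp.user." + host);
     String password = conf.get("fs.ftp.password." + host);
     if (user == null || password == null) {
       throw new IOException("Invalid user/password specified");
     }
     return user + ":" + password;
   }
 }
 {code}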



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10506) LimitedPrivate annotation not useful

2014-05-20 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003318#comment-14003318
 ] 

Thomas Graves commented on HADOOP-10506:


Sorry for my delay, I somehow missed your comment when it went by. 

One of the main ones is UserGroupInformation.  As I mentioned above, you can 
tell just by how many components are listed in the LimitedPrivate clause. 

I've filed separate jiras in YARN land for a few there as well.  Vinod nicely 
bundled them under https://issues.apache.org/jira/browse/YARN-1953.

I've been distracted by other things recently, though, and haven't finished 
trying to convert everything to public interfaces, so there are likely a few 
more. 

Since there haven't been any other disagreements with this, perhaps I will file 
a jira to at least update the docs about LimitedPrivate for the 2 bullets I 
mention above:

1) Clarify in the documentation what LimitedPrivate is. This means us internally 
agreeing on what it really means. 
2) No new classes/interfaces should use this tag. They should be properly 
classified as either public or private. If there was a bug and a class was not 
properly tagged, it's fine to use the tag there.

 LimitedPrivate annotation not useful
 

 Key: HADOOP-10506
 URL: https://issues.apache.org/jira/browse/HADOOP-10506
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Thomas Graves

 The LimitedPrivate annotation isn't useful.  The intention seems to have been 
 those interfaces were only intended to be used by these components.  But in 
 many cases those components are separate from core hadoop.  This means any 
 changes to them will break backwards compatibility with those, which breaks 
 the new compatibility rules in Hadoop.  
 Note that many of the annotation are also not marked properly, or have fallen 
 out of date.  I see Public Interfaces that use LimitedPrivate classes in the 
 api.  (TokenCache using Credentials is an example). 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10506) LimitedPrivate annotation not useful

2014-04-15 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-10506:
--

 Summary: LimitedPrivate annotation not useful
 Key: HADOOP-10506
 URL: https://issues.apache.org/jira/browse/HADOOP-10506
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0, 3.0.0
Reporter: Thomas Graves






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10506) LimitedPrivate annotation not useful

2014-04-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13970085#comment-13970085
 ] 

Thomas Graves commented on HADOOP-10506:


So you are saying it's purely informational, and if I'm a closely related 
product I can use it and should update the annotation? Backwards compatibility 
guarantees come from the InterfaceStability tag, but with LimitedPrivate we will 
contact/negotiate with the components listed before making any changes. 

So I guess the question is: what is a closely related product?  Do all Apache 
products fall into that category?

One example of this that seems a bit ridiculous is UserGroupInformation:
 @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce", "HBase", "Hive", 
"Oozie"})
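
For reference, the audience/stability annotations live in org.apache.hadoop.classification. A 
sketch of how a new interface would instead be classified Public (the interface itself is made 
up for illustration):

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Hypothetical interface, shown only to illustrate a Public/Stable classification
// instead of LimitedPrivate.
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface TokenProvider {
  String renewToken(String tokenId);
}
{code}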


 LimitedPrivate annotation not useful
 

 Key: HADOOP-10506
 URL: https://issues.apache.org/jira/browse/HADOOP-10506
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Thomas Graves

 The LimitedPrivate annotation isn't useful.  The intention seems to have been 
 those interfaces were only intended to be used by these components.  But in 
 many cases those components are separate from core hadoop.  This means any 
 changes to them will break backwards compatibility with those, which breaks 
 the new compatibility rules in Hadoop.  
 Note that many of the annotation are also not marked properly, or have fallen 
 out of date.  I see Public Interfaces that use LimitedPrivate classes in the 
 api.  (TokenCache using Credentials is an example). 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10506) LimitedPrivate annotation not useful

2014-04-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13970243#comment-13970243
 ] 

Thomas Graves commented on HADOOP-10506:


Also to clarify, I do understand that LimitedPrivate was probably added 
for historical reasons (the APIs weren't properly categorized before), so 
components used them.  We made great improvements to get them categorized 
for the official 2.2 release, but we couldn't get everything fixed up due to 
timing.

So perhaps the title of the jira isn't correct and should be updated.  
Personally I think the following should be done:

1) Clarify in the documentation what LimitedPrivate is. This means us internally 
agreeing on what it really means. 
2) No new classes/interfaces should use this tag. They should be properly 
classified as either public or private. If there was a bug and a class was not 
properly tagged, it's fine to use the tag there.
3) All existing classes with this tag should eventually be deprecated and we 
should work towards that.  I realize this isn't going to happen immediately as 
everyone has to balance this with features and other bug fixes.

If one component is using the interface, more than likely it's useful for other 
components, and more than likely other components are already using it.  Thus, 
in my opinion, documenting the list of components it is private to isn't 
very useful. 

The longer we wait on this, the more applications (in the case of YARN) and 
components will use these APIs, and the harder it will be to change them.  Yes, 
we can say it was marked a certain way and the other components shouldn't have 
used it, but when it's ambiguous like this and we don't provide an equivalent 
Public API, I don't see how we can defend that.  Also, as more and more 
components are created it gets harder and harder to upgrade/deploy them all at 
once; hence all the push for HA and rolling upgrades. Customers don't want 
downtime.

I do agree we need to keep the option of moving forward.  But I think this can 
be done with a proper set of Public APIs.

The reason I ran into this is that I am writing an application to run on 
YARN and another one to just read HDFS files.  It's almost impossible (if 
not impossible) to do that properly (with security) without using classes 
marked LimitedPrivate right now.  If it is possible, it would require copying 
lots of code. We should be making writing applications as easy as possible.

 LimitedPrivate annotation not useful
 

 Key: HADOOP-10506
 URL: https://issues.apache.org/jira/browse/HADOOP-10506
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Thomas Graves

 The LimitedPrivate annotation isn't useful.  The intention seems to have been 
 those interfaces were only intended to be used by these components.  But in 
 many cases those components are separate from core hadoop.  This means any 
 changes to them will break backwards compatibility with those, which breaks 
 the new compatibility rules in Hadoop.  
 Note that many of the annotation are also not marked properly, or have fallen 
 out of date.  I see Public Interfaces that use LimitedPrivate classes in the 
 api.  (TokenCache using Credentials is an example). 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8746) TestNativeIO fails when run with jdk7

2014-04-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966471#comment-13966471
 ] 

Thomas Graves commented on HADOOP-8746:
---

If you are seeing the failure on 2.5 go ahead and move it.  Otherwise go ahead 
and close it.

 TestNativeIO fails when run with jdk7
 -

 Key: HADOOP-8746
 URL: https://issues.apache.org/jira/browse/HADOOP-8746
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 2.0.2-alpha
Reporter: Thomas Graves
Assignee: Thomas Graves
  Labels: java7

 TestNativeIo fails when run with jdk7.
 Test set: org.apache.hadoop.io.nativeio.TestNativeIO
 ---
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.232 sec <<< 
 FAILURE!
 testSyncFileRange(org.apache.hadoop.io.nativeio.TestNativeIO)  Time elapsed: 
 0.166 sec  <<< ERROR!
 EINVAL: Invalid argument
 at org.apache.hadoop.io.nativeio.NativeIO.sync_file_range(Native 
 Method)
 at 
 org.apache.hadoop.io.nativeio.TestNativeIO.testSyncFileRange(TestNativeIO.java:254)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10081) Client.setupIOStreams can leak socket resources on exception or error

2013-12-02 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-10081:
---

Target Version/s: 3.0.0, 0.23.11, 2.3.0  (was: 3.0.0, 2.4.0, 0.23.11)

 Client.setupIOStreams can leak socket resources on exception or error
 -

 Key: HADOOP-10081
 URL: https://issues.apache.org/jira/browse/HADOOP-10081
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.9, 2.2.0
Reporter: Jason Lowe
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: HADOOP-10081.1.patch


 The setupIOStreams method in org.apache.hadoop.ipc.Client can leak socket 
 resources if an exception is thrown before the inStream and outStream local 
 variables are assigned to this.in and this.out, respectively.  
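 A hedged sketch of the general pattern being described (not the attached patch): the temporary 
 streams are closed on the error path and only handed off to the fields once setup succeeds.
 {code}
 // Schematic only; method and field names mirror the description, not the real Client code.
 import java.io.*;
 import java.net.Socket;
 import org.apache.hadoop.io.IOUtils;

 class StreamSetupSketch {
   private DataInputStream in;
   private DataOutputStream out;

   void setupIOStreams(Socket socket) throws IOException {
     DataInputStream inStream = null;
     DataOutputStream outStream = null;
     try {
       inStream = new DataInputStream(new BufferedInputStream(socket.getInputStream()));
       outStream = new DataOutputStream(new BufferedOutputStream(socket.getOutputStream()));
       // ... connection setup that may throw goes here ...
       this.in = inStream;    // ownership transfers only on success
       this.out = outStream;
     } catch (IOException e) {
       IOUtils.closeStream(inStream);    // null-safe cleanup of the temporaries
       IOUtils.closeStream(outStream);
       throw e;
     }
   }
 }
 {code}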



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9757) Har metadata cache can grow without limit

2013-07-25 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9757:
--

Assignee: Cristina L. Abad

 Har metadata cache can grow without limit
 -

 Key: HADOOP-9757
 URL: https://issues.apache.org/jira/browse/HADOOP-9757
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.4-alpha, 0.23.9
Reporter: Jason Lowe
Assignee: Cristina L. Abad

 MAPREDUCE-2459 added a metadata cache to the har filesystem, but the cache 
 has no upper limit.  A long-running process that accesses many har archives 
 will eventually run out of memory due to a har metadata cache that never 
 retires entries.
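
 One way to bound such a cache, shown only as an assumption about the shape of a fix: an 
 access-ordered LinkedHashMap that evicts its eldest entry past a configured size.
 {code}
 import java.util.Collections;
 import java.util.LinkedHashMap;
 import java.util.Map;

 // Hypothetical helper; the limit value is arbitrary for illustration.
 class BoundedMetadataCache<K, V> {
   private static final int MAX_ENTRIES = 1000;

   private final Map<K, V> cache = Collections.synchronizedMap(
       new LinkedHashMap<K, V>(16, 0.75f, true) {
         @Override
         protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
           // evict least-recently-accessed entries once the cap is exceeded
           return size() > MAX_ENTRIES;
         }
       });

   V get(K key) { return cache.get(key); }
   void put(K key, V value) { cache.put(key, value); }
 }
 {code}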

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-07-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9438:
--

Target Version/s: 3.0.0, 2.1.0-beta, 0.23.10  (was: 3.0.0, 2.1.0-beta, 
0.23.9)

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-9438.20130501.1.patch, 
 HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch


 according to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 mkdir should throw a FileAlreadyExistsException if the directory already exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.
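
 A small sketch of the documented contract the snippet above violates: with createParent == 
 false, the second mkdir on an existing directory should surface FileAlreadyExistsException. 
 This shows the expected behavior, not the committed fix.
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileContext;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;

 public class MkdirContractCheck {
   public static void main(String[] args) throws Exception {
     FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
     Path p = new Path("/tmp/bobby.12345");
     FsPermission perms = new FsPermission((short) 0755);
     lfc.mkdir(p, perms, false);
     try {
       lfc.mkdir(p, perms, false);       // second call on an existing directory
       System.out.println("BUG: no FileAlreadyExistsException was thrown");
     } catch (FileAlreadyExistsException expected) {
       System.out.println("Documented behavior: " + expected);
     }
   }
 }
 {code}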

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9504) MetricsDynamicMBeanBase has concurrency issues in createMBeanInfo

2013-07-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9504:
--

Target Version/s: 0.23.8, 2.1.0-beta, 1.2.1  (was: 2.1.0-beta, 1.2.1, 
0.23.9)

 MetricsDynamicMBeanBase has concurrency issues in createMBeanInfo
 -

 Key: HADOOP-9504
 URL: https://issues.apache.org/jira/browse/HADOOP-9504
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Critical
 Fix For: 2.1.0-beta, 0.23.8

 Attachments: HADOOP-9504-branch-1.txt, HADOOP-9504.txt, 
 HADOOP-9504-v2.txt


 Please see HBASE-8416 for detailed information.
 We need to take care of the synchronization for HashMap put(), otherwise it 
 may lead to a spin loop.
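
 Illustrative only (not the attached patch): an unsynchronized HashMap.put() from concurrent 
 threads can corrupt the bucket chain and spin; one remedy is a concurrent map, or synchronizing 
 the update. The class and field names below are made up.
 {code}
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;

 class AttributeCacheSketch {
   // hypothetical stand-in for the attribute-name -> metadata map
   private final Map<String, Object> attributeCache =
       new ConcurrentHashMap<String, Object>();

   void cacheAttribute(String name, Object info) {
     attributeCache.put(name, info);   // safe under concurrent callers
   }
 }
 {code}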

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9317) User cannot specify a kerberos keytab for commands

2013-07-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9317:
--

Target Version/s: 3.0.0, 2.1.0-beta, 0.23.10  (was: 3.0.0, 2.1.0-beta, 
0.23.9)

 User cannot specify a kerberos keytab for commands
 --

 Key: HADOOP-9317
 URL: https://issues.apache.org/jira/browse/HADOOP-9317
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-9317.branch-23.patch, 
 HADOOP-9317.branch-23.patch, HADOOP-9317.patch, HADOOP-9317.patch, 
 HADOOP-9317.patch, HADOOP-9317.patch


 {{UserGroupInformation}} only allows kerberos users to be logged in via the 
 ticket cache when running hadoop commands.  {{UGI}} allows a keytab to be 
 used, but it's only exposed programmatically.  This forces keytab-based users 
 running hadoop commands to periodically issue a kinit from the keytab.  A 
 race condition exists during the kinit when the ticket cache is deleted and 
 re-created.  Hadoop commands will fail when the ticket cache does not 
 momentarily exist.
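
 A sketch of the programmatic keytab login the description refers to; the principal and keytab 
 path are placeholders.
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.UserGroupInformation;

 public class KeytabLoginExample {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     conf.set("hadoop.security.authentication", "kerberos");
     UserGroupInformation.setConfiguration(conf);
     // Log in directly from the keytab instead of relying on an external kinit.
     UserGroupInformation.loginUserFromKeytab(
         "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab");
     System.out.println("Logged in as " + UserGroupInformation.getCurrentUser());
   }
 }
 {code}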

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9317) User cannot specify a kerberos keytab for commands

2013-05-28 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9317:
--

Target Version/s: 3.0.0, 2.0.5-beta, 0.23.9  (was: 3.0.0, 2.0.5-beta, 
0.23.8)

 User cannot specify a kerberos keytab for commands
 --

 Key: HADOOP-9317
 URL: https://issues.apache.org/jira/browse/HADOOP-9317
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-9317.branch-23.patch, 
 HADOOP-9317.branch-23.patch, HADOOP-9317.patch, HADOOP-9317.patch, 
 HADOOP-9317.patch, HADOOP-9317.patch


 {{UserGroupInformation}} only allows kerberos users to be logged in via the 
 ticket cache when running hadoop commands.  {{UGI}} allows a keytab to be 
 used, but it's only exposed programatically.  This forces keytab-based users 
 running hadoop commands to periodically issue a kinit from the keytab.  A 
 race condition exists during the kinit when the ticket cache is deleted and 
 re-created.  Hadoop commands will fail when the ticket cache does not 
 momentarily exist.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-05-28 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9438:
--

Target Version/s: 3.0.0, 2.0.5-beta, 0.23.9  (was: 3.0.0, 2.0.5-beta, 
0.23.8)

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-9438.20130501.1.patch, 
 HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch


 according to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 should throw a FileAlreadyExistsException if the directory already exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9504) MetricsDynamicMBeanBase has concurrency issues in createMBeanInfo

2013-05-28 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9504:
--

Target Version/s: 2.0.5-beta, 1.2.1, 0.23.9  (was: 2.0.5-beta, 0.23.8, 
1.2.1)

 MetricsDynamicMBeanBase has concurrency issues in createMBeanInfo
 -

 Key: HADOOP-9504
 URL: https://issues.apache.org/jira/browse/HADOOP-9504
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Critical
 Fix For: 2.0.5-beta, 0.23.8

 Attachments: HADOOP-9504-branch-1.txt, HADOOP-9504.txt, 
 HADOOP-9504-v2.txt


 Please see HBASE-8416 for detail information.
 we need to take care of the synchronization for HashMap put(), otherwise it 
 may lead to spin loop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9514) Hadoop CLI's have inconsistent usages

2013-04-26 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-9514:
-

 Summary: Hadoop CLI's have inconsistent usages
 Key: HADOOP-9514
 URL: https://issues.apache.org/jira/browse/HADOOP-9514
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.4-alpha, 3.0.0
Reporter: Thomas Graves


Many of the hadoop command line interfaces (yarn/mapred/hdfs/hadoop) and 
subcommands have inconsistent usages, in many cases have options that don't 
apply (-archives/-files/-jt), and due to the usage of GenericOptionsParser 
print the usage as bin/hadoop command [genericOptions] [commandOptions] even 
though you were running yarn or hdfs commands.

This makes for a bad user experience and it's confusing as to what options are 
really available and how to use them.
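
For background, a minimal sketch of how such commands are typically wired through ToolRunner, 
which routes the arguments through GenericOptionsParser and is where the shared generic usage 
text comes from; the command class here is made up for illustration.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ExampleCommand extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // By this point GenericOptionsParser has already consumed the generic options
    // (-conf, -D, -fs, -jt, -files, -archives, ...), whether or not they make sense
    // for this particular command.
    System.out.println("remaining args: " + args.length);
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new ExampleCommand(), args));
  }
}
{code}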



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9514) Hadoop CLI's have inconsistent usages

2013-04-26 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13642856#comment-13642856
 ] 

Thomas Graves commented on HADOOP-9514:
---

It would be nice if we could make the usage across the commands consistent.  
It's nice to have a GenericOptionsParser, but we either need to pull out the 
things that aren't truly generic across the commands or, better yet, have a way 
for different commands to specify which options apply, perhaps extending it so 
that the usage output cleanly differentiates subcommands from generic options. We 
also need to remove the general usage of bin/hadoop command [genericOptions] 
[commandOptions]

Note that this came about from discussion on YARN-126.

Note that even subcommands within particular CLIs are inconsistent. For 
example, with yarn:
$ yarn rmadmin
Usage: java RMAdmin
   [-refreshQueues]
   [-refreshNodes]
   [-refreshUserToGroupsMappings]
   [-refreshSuperUserGroupsConfiguration]
   [-refreshAdminAcls]
   [-refreshServiceAcl]
   [-getGroups [username]]
   [-help [cmd]]

$ yarn application
usage: application
 -kill <arg>     Kills the application.
 -list           Lists all the Applications from RM.
 -status <arg>   Prints the status of the application.


Another example is hdfs dfsadmin, and then look at hdfs fsck (which happens to 
print the generic usage options twice on trunk/branch-2).


 Hadoop CLI's have inconsistent usages
 -

 Key: HADOOP-9514
 URL: https://issues.apache.org/jira/browse/HADOOP-9514
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Thomas Graves
  Labels: usability

 Many of the hadoop command line interfaces (yarn/mapred/hdfs/hadoop) and 
 subcommands have inconsistent usages, in many cases have options that don't 
 apply (-archives/-files/-jt), and due to the usage of GenericOptionsParser 
 print the usage as bin/hadoop command [genericOptions] [commandOptions] 
 even though you were running yarn or hdfs commands.
 This makes for a bad user experience and its confusing as to what options are 
 really available and how to use them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9514) Hadoop CLI's have inconsistent usages

2013-04-26 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13643034#comment-13643034
 ] 

Thomas Graves commented on HADOOP-9514:
---

I agree.  I am kind of hoping we don't have to break backwards compatibility 
with 0.23, but let's try to do it right and then see what comes up.

 Hadoop CLI's have inconsistent usages
 -

 Key: HADOOP-9514
 URL: https://issues.apache.org/jira/browse/HADOOP-9514
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Thomas Graves
  Labels: usability

 Many of the hadoop command line interfaces (yarn/mapred/hdfs/hadoop) and 
 subcommands have inconsistent usages, in many cases have options that don't 
 apply (-archives/-files/-jt), and due to the usage of GenericOptionsParser 
 print the usage as bin/hadoop command [genericOptions] [commandOptions] 
 even though you were running yarn or hdfs commands.
 This makes for a bad user experience and its confusing as to what options are 
 really available and how to use them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9469) mapreduce/yarn source jars not included in dist tarball

2013-04-19 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13636897#comment-13636897
 ] 

Thomas Graves commented on HADOOP-9469:
---

+1. Thanks Rob!

 mapreduce/yarn source jars not included in dist tarball
 ---

 Key: HADOOP-9469
 URL: https://issues.apache.org/jira/browse/HADOOP-9469
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Thomas Graves
Assignee: Robert Parker
 Attachments: HADOOP-9469-branch-0.23.patch, 
 HADOOP-9469-branch-2.patch, HADOOP-9469.patch, HADOOP-9469.patch


 the mapreduce and yarn sources jars don't get included in the distribution 
 tarball.  It seems they get built by default but just aren't assembled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9469) mapreduce/yarn source jars not included in dist tarball

2013-04-19 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9469:
--

   Resolution: Fixed
Fix Version/s: 0.23.8
   2.0.5-beta
   3.0.0
   Status: Resolved  (was: Patch Available)

 mapreduce/yarn source jars not included in dist tarball
 ---

 Key: HADOOP-9469
 URL: https://issues.apache.org/jira/browse/HADOOP-9469
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Thomas Graves
Assignee: Robert Parker
 Fix For: 3.0.0, 2.0.5-beta, 0.23.8

 Attachments: HADOOP-9469-branch-0.23.patch, 
 HADOOP-9469-branch-2.patch, HADOOP-9469.patch, HADOOP-9469.patch


 the mapreduce and yarn sources jars don't get included into the distribution 
 tarball.  It seems they get built by default just aren't assembled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9469) mapreduce/yarn source jars not included in dist tarball

2013-04-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13628957#comment-13628957
 ] 

Thomas Graves commented on HADOOP-9469:
---

Thanks Rob, a few comments:

In mapreduce it looks like we aren't packaging the sources for:

hadoop-mapreduce-client-hs-plugins-3.0.0-SNAPSHOT.jar
hadoop-mapreduce-client-app-3.0.0-SNAPSHOT.jar

You might also package the hadoop-tools sources jars.

 mapreduce/yarn source jars not included in dist tarball
 ---

 Key: HADOOP-9469
 URL: https://issues.apache.org/jira/browse/HADOOP-9469
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Thomas Graves
Assignee: Robert Parker
 Attachments: HADOOP-9469.patch


 the mapreduce and yarn sources jars don't get included into the distribution 
 tarball.  It seems they get built by default just aren't assembled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9469) mapreduce/yarn source jars not included in dist tarball

2013-04-11 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9469:
--

Target Version/s: 3.0.0, 2.0.5-beta, 0.23.8  (was: 3.0.0, 0.23.7, 
2.0.5-beta)

 mapreduce/yarn source jars not included in dist tarball
 ---

 Key: HADOOP-9469
 URL: https://issues.apache.org/jira/browse/HADOOP-9469
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Thomas Graves
Assignee: Robert Parker
 Attachments: HADOOP-9469.patch


 the mapreduce and yarn sources jars don't get included into the distribution 
 tarball.  It seems they get built by default just aren't assembled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-04-09 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9199:
--

Assignee: Vadim Bondarev

 Cover package org.apache.hadoop.io with unit tests
 --

 Key: HADOOP-9199
 URL: https://issues.apache.org/jira/browse/HADOOP-9199
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
Assignee: Vadim Bondarev
 Attachments: HADOOP-9199-branch-0.23-a.patch, 
 HADOOP-9199-branch-0.23-b.patch, HADOOP-9199-branch-0.23-c.patch, 
 HADOOP-9199-branch-0.23-e.patch, HADOOP-9199-branch-2-a.patch, 
 HADOOP-9199-branch-2-b.patch, HADOOP-9199-branch-2-c.patch, 
 HADOOP-9199-branch-2-e.patch, HADOOP-9199-trunk-a.patch, 
 HADOOP-9199-trunk-b.patch, HADOOP-9199-trunk-c.patch, 
 HADOOP-9199-trunk-e.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9225) Cover package org.apache.hadoop.compress.Snappy

2013-04-09 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9225:
--

Assignee: Vadim Bondarev

 Cover package org.apache.hadoop.compress.Snappy
 ---

 Key: HADOOP-9225
 URL: https://issues.apache.org/jira/browse/HADOOP-9225
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
Assignee: Vadim Bondarev
 Attachments: HADOOP-9225-branch-0.23-a.patch, 
 HADOOP-9225-branch-2-a.patch, HADOOP-9225-branch-2-b.patch, 
 HADOOP-9225-branch-2-c.patch, HADOOP-9225-trunk-a.patch, 
 HADOOP-9225-trunk-b.patch, HADOOP-9225-trunk-c.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9219) coverage fixing for org.apache.hadoop.tools.rumen

2013-04-09 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9219:
--

Assignee: Aleksey Gorshkov

 coverage fixing for org.apache.hadoop.tools.rumen
 -

 Key: HADOOP-9219
 URL: https://issues.apache.org/jira/browse/HADOOP-9219
 Project: Hadoop Common
  Issue Type: Test
  Components: tools
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
 Attachments: HADOOP-9219-trunk-a.patch, HADOOP-9219-trunk-b.patch, 
 HADOOP-9219-trunk.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 coverage fixing for org.apache.hadoop.tools.rumen 
 HADOOP-9219-trunk.patch for trunk, branch-2 and branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9254) Cover packages org.apache.hadoop.util.bloom, org.apache.hadoop.util.hash

2013-04-09 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9254:
--

Assignee: Vadim Bondarev

 Cover packages org.apache.hadoop.util.bloom, org.apache.hadoop.util.hash
 

 Key: HADOOP-9254
 URL: https://issues.apache.org/jira/browse/HADOOP-9254
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
Assignee: Vadim Bondarev
 Attachments: HADOOP-9254-branch-0.23-a.patch, 
 HADOOP-9254-branch-0.23-c.patch, HADOOP-9254-branch-2-a.patch, 
 HADOOP-9254-branch-2-b.patch, HADOOP-9254-trunk-a.patch, 
 HADOOP-9254-trunk-b.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9345) fix coverage org.apache.hadoop.fs.ftp

2013-04-09 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9345:
--

Assignee: Aleksey Gorshkov

 fix coverage  org.apache.hadoop.fs.ftp
 --

 Key: HADOOP-9345
 URL: https://issues.apache.org/jira/browse/HADOOP-9345
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 0.23.7, 2.0.5-beta
 Environment: fix coverage  org.apache.hadoop.fs.ftp
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
 Attachments: YARN-434-trunk.patch


 fix coverage  org.apache.hadoop.fs.ftp
 patch YARN-434-trunk.patch for trunk, branch-2, branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9360) Coverage fix for org.apache.hadoop.fs.s3

2013-04-09 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9360:
--

Assignee: Aleksey Gorshkov

 Coverage fix for org.apache.hadoop.fs.s3
 

 Key: HADOOP-9360
 URL: https://issues.apache.org/jira/browse/HADOOP-9360
 Project: Hadoop Common
  Issue Type: Test
  Components: fs/s3
Affects Versions: 3.0.0, 0.23.7, 2.0.5-beta
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
 Attachments: HADOOP-9360-branch-0.23-a.patch, 
 HADOOP-9360-branch-0.23.patch, HADOOP-9360-trunk-a.patch, 
 HADOOP-9360-trunk.patch


 Coverage fix for org.apache.hadoop.fs.s3
 patch HADOOP-9360-trunk.patch for trunk and branch-2 
 HADOOP-9360-branch-0.23.patch for branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-03-29 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13617419#comment-13617419
 ] 

Thomas Graves commented on HADOOP-9253:
---

Can this jira be moved back to resolved then?  Looks like work was done in 
HADOOP-9379.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 0.23.7, 2.0.5-beta

 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9405) TestGridmixSummary#testExecutionSummarizer is broken

2013-03-26 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9405:
--

Fix Version/s: 2.0.5-beta

This was broken in branch-2 also, so I merged this back.

 TestGridmixSummary#testExecutionSummarizer is broken
 

 Key: HADOOP-9405
 URL: https://issues.apache.org/jira/browse/HADOOP-9405
 Project: Hadoop Common
  Issue Type: Bug
  Components: test, tools
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: hdfs-4599-1.patch


 HADOOP-9252 changed how human readable numbers are printed, and required 
 updating a number of test cases. This one was missed because the Jenkins 
 precommit job apparently isn't running the tests in hadoop-tools.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9406) hadoop-client leaks dependency on JDK tools jar

2013-03-15 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9406:
--

Fix Version/s: 0.23.7

 hadoop-client leaks dependency on JDK tools jar
 ---

 Key: HADOOP-9406
 URL: https://issues.apache.org/jira/browse/HADOOP-9406
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 0.23.7, 2.0.4-alpha

 Attachments: HADOOP-9406.patch


 hadoop-client leaks out the JDK tools jar as a dependency. 
 The JDK tools jar is defined as a system dependency for 
 hadoop-annotation/jdiff/javadocs purposes; if that is not done, javadoc 
 generation fails.
 The problem is that, the way it is defined now, this dependency ends up 
 leaking into hadoop-client, and downstream projects that depend on hadoop-client 
 may end up including/bundling the JDK tools JAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9379) capture the ulimit info after printing the log to the console

2013-03-12 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9379:
--

Fix Version/s: 0.23.7

merged into branch-0.23

 capture the ulimit info after printing the log to the console
 -

 Key: HADOOP-9379
 URL: https://issues.apache.org/jira/browse/HADOOP-9379
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0, 2.0.4-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Trivial
 Fix For: 1.2.0, 0.23.7, 2.0.5-beta

 Attachments: HADOOP-9379.branch-1.patch, HADOOP-9379.patch


 Based on the discussions in HADOOP-9253, people prefer that we don't print the 
 ulimit info to the console but still have it in the logs.
 We just need to move the head statement to before the ulimit capture code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9154) SortedMapWritable#putAll() doesn't add key/value classes to the map

2013-02-14 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9154:
--

Fix Version/s: 0.23.7

 SortedMapWritable#putAll() doesn't add key/value classes to the map
 ---

 Key: HADOOP-9154
 URL: https://issues.apache.org/jira/browse/HADOOP-9154
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 1.2.0, 0.23.7, 2.0.4-beta

 Attachments: HADOOP-9124.patch, hadoop-9154-branch1.patch, 
 hadoop-9154-draft.patch, hadoop-9154-draft.patch, hadoop-9154.patch, 
 hadoop-9154.patch, hadoop-9154.patch, hadoop-9154.patch, hadoop-9154.patch


 In the following code from {{SortedMapWritable}}, #putAll() doesn't add 
 key/value classes to the class-id maps.
 {code}
   @Override
   public Writable put(WritableComparable key, Writable value) {
 addToMap(key.getClass());
 addToMap(value.getClass());
 return instance.put(key, value);
   }
   @Override
   public void putAll(Map<? extends WritableComparable, ? extends Writable> t){
 for (Map.Entry<? extends WritableComparable, ? extends Writable> e:
   t.entrySet()) {
   instance.put(e.getKey(), e.getValue());
 }
   }
 {code}
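 A minimal sketch of the obvious shape of a fix (the attached patches may differ): route 
 putAll() through put() so addToMap() records the key/value classes.
 {code}
   @Override
   public void putAll(Map<? extends WritableComparable, ? extends Writable> t) {
     for (Map.Entry<? extends WritableComparable, ? extends Writable> e :
         t.entrySet()) {
       // put() calls addToMap() for both the key and value classes
       put(e.getKey(), e.getValue());
     }
   }
 {code}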

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-9302) HDFS docs not linked from top level

2013-02-13 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves moved HDFS-4460 to HADOOP-9302:
-

  Component/s: (was: documentation)
   documentation
Affects Version/s: (was: 0.23.7)
   (was: 2.0.3-alpha)
   (was: 3.0.0)
   0.23.7
   2.0.3-alpha
   3.0.0
  Key: HADOOP-9302  (was: HDFS-4460)
  Project: Hadoop Common  (was: Hadoop HDFS)

 HDFS docs not linked from top level
 ---

 Key: HADOOP-9302
 URL: https://issues.apache.org/jira/browse/HADOOP-9302
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Attachments: hdfs4460-1.txt, hdfs4460-2.patch, hdfs4460.txt


 HADOOP-9221 and others converted docs to apt format. After that they aren't 
 linked from the top level menu like: http://hadoop.apache.org/docs/current/
 I only see the hadoop commands manual and the Filesystem shell. It used to be 
 you clicked on, say, the commands manual and you would go to the old style 
 documentation, which had a menu with links to the Superusers, native 
 libraries, etc, but I don't see that any more since the conversion. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9302) HDFS docs not linked from top level

2013-02-13 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577604#comment-13577604
 ] 

Thomas Graves commented on HADOOP-9302:
---

Moved to Hadoop common jira bucket since it touches the top level project.

 HDFS docs not linked from top level
 ---

 Key: HADOOP-9302
 URL: https://issues.apache.org/jira/browse/HADOOP-9302
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Attachments: hdfs4460-1.txt, hdfs4460-2.patch, hdfs4460.txt


 HADOOP-9221 and others converted docs to apt format. After that they aren't 
 linked to the top level menu like: http://hadoop.apache.org/docs/current/
 I only see the hadoop commands manual and the Filesystem shell. It used to be 
 you clicked on say the commands manual and you would go to the old style 
 documentation where it had a menu with links to the Superusers, native 
 libraries, etc, but I don't see that any more since converted. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-9303) command manual dfsadmin missing entry for restoreFailedStorage option

2013-02-13 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves moved HDFS-4459 to HADOOP-9303:
-

Target Version/s: 0.23.7, 2.0.4-beta  (was: 2.0.3-alpha, 0.23.7)
 Key: HADOOP-9303  (was: HDFS-4459)
 Project: Hadoop Common  (was: Hadoop HDFS)

 command manual dfsadmin missing entry for restoreFailedStorage option
 -

 Key: HADOOP-9303
 URL: https://issues.apache.org/jira/browse/HADOOP-9303
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Attachments: hdfs4459.txt


 Generating the latest site docs, the -restoreFailedStorage option doesn't show 
 under the dfsadmin section of commands_manual.html.
 Also, it appears the table header is concatenated with the first row:
 COMMAND_OPTION -report

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9303) command manual dfsadmin missing entry for restoreFailedStorage option

2013-02-13 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577608#comment-13577608
 ] 

Thomas Graves commented on HADOOP-9303:
---

Moved to common since that is where the changes are.  

 Looks good. +1 Thanks Andy!

 command manual dfsadmin missing entry for restoreFailedStorage option
 -

 Key: HADOOP-9303
 URL: https://issues.apache.org/jira/browse/HADOOP-9303
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Attachments: hdfs4459.txt


 Generating the latest site docs it doesn't show the -restoreFailedStorage 
 option under the dfsadmin section of commands_manual.html
 Also it appears the table header is concatenated with the first row:
 COMMAND_OPTION -report

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9302) HDFS docs not linked from top level

2013-02-13 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9302:
--

  Resolution: Fixed
   Fix Version/s: 2.0.4-beta
  0.23.7
  3.0.0
Target Version/s: 0.23.7, 2.0.4-beta
  Status: Resolved  (was: Patch Available)

 HDFS docs not linked from top level
 ---

 Key: HADOOP-9302
 URL: https://issues.apache.org/jira/browse/HADOOP-9302
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: hdfs4460-1.txt, hdfs4460-2.patch, hdfs4460.txt


 HADOOP-9221 and others converted docs to apt format. After that they aren't 
 linked to the top level menu like: http://hadoop.apache.org/docs/current/
 I only see the hadoop commands manual and the Filesystem shell. It used to be 
 you clicked on say the commands manual and you would go to the old style 
 documentation where it had a menu with links to the Superusers, native 
 libraries, etc, but I don't see that any more since converted. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9303) command manual dfsadmin missing entry for restoreFailedStorage option

2013-02-13 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577634#comment-13577634
 ] 

Thomas Graves commented on HADOOP-9303:
---

Note: I made a minor addition at checkin to add -restoreFailedStorage to the usage 
above the Command Options table.

 command manual dfsadmin missing entry for restoreFailedStorage option
 -

 Key: HADOOP-9303
 URL: https://issues.apache.org/jira/browse/HADOOP-9303
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Attachments: hdfs4459.txt


 Generating the latest site docs it doesn't show the -restoreFailedStorage 
 option under the dfsadmin section of commands_manual.html
 Also it appears the table header is concatenated with the first row:
 COMMAND_OPTION -report

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9303) command manual dfsadmin missing entry for restoreFailedStorage option

2013-02-13 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9303:
--

   Resolution: Fixed
Fix Version/s: 2.0.4-beta
   0.23.7
   3.0.0
   Status: Resolved  (was: Patch Available)

 command manual dfsadmin missing entry for restoreFailedStorage option
 -

 Key: HADOOP-9303
 URL: https://issues.apache.org/jira/browse/HADOOP-9303
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: hdfs4459.txt


 Generating the latest site docs it doesn't show the -restoreFailedStorage 
 option under the dfsadmin section of commands_manual.html
 Also it appears the table header is concatenated with the first row:
 COMMAND_OPTION -report

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8075) Lower native-hadoop library log from info to debug

2013-02-11 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8075:
--

Fix Version/s: 0.23.7

 Lower native-hadoop library log from info to debug 
 ---

 Key: HADOOP-8075
 URL: https://issues.apache.org/jira/browse/HADOOP-8075
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Hızır Sefa İrken
  Labels: newbie
 Fix For: 2.0.2-alpha, 0.23.7

 Attachments: HDFS-8075.patch


 The following log shows up in stderr for all commands. We already have a 
 warning if the native library can't be loaded; we don't need to log this every 
 time at info level.
 {noformat}
 [eli@centos6 ~]$ hadoop fs -cat /user/eli/foo
 12/02/12 20:10:20 INFO util.NativeCodeLoader: Loaded the native-hadoop library
 {noformat}
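 The change being asked for amounts to something like the following in NativeCodeLoader, shown 
 as an assumption rather than the committed patch:
 {code}
 // Sketch: emit the successful-load message at DEBUG instead of INFO.
 if (LOG.isDebugEnabled()) {
   LOG.debug("Loaded the native-hadoop library");
 }
 {code}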

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-02-08 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9253:
--

Fix Version/s: 0.23.7

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 0.23.7, 2.0.4-beta

 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9278) HarFileSystem may leak file handle

2013-02-06 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9278:
--

Fix Version/s: 0.23.7

 HarFileSystem may leak file handle
 --

 Key: HADOOP-9278
 URL: https://issues.apache.org/jira/browse/HADOOP-9278
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.0.3-alpha, trunk-win, 0.23.7

 Attachments: HADOOP-9278.1.patch


 TestHarFileSystemBasics fails on Windows due to invalid HAR URI and file 
 handle leak.  We need to change the tests to use valid HAR URIs and fix the 
 file handle leak.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9219) coverage fixing for org.apache.hadoop.tools.rumen

2013-02-06 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9219:
--

Fix Version/s: (was: 0.23.6)
   0.23.7

 coverage fixing for org.apache.hadoop.tools.rumen
 -

 Key: HADOOP-9219
 URL: https://issues.apache.org/jira/browse/HADOOP-9219
 Project: Hadoop Common
  Issue Type: Test
  Components: tools
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Aleksey Gorshkov
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-9219-trunk.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 coverage fixing for org.apache.hadoop.tools.rumen 
 HADOOP-9219-trunk.patch for trunk, branch-2 and branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9219) coverage fixing for org.apache.hadoop.tools.rumen

2013-02-06 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9219:
--

Target Version/s: 3.0.0, 2.0.3-alpha, 0.23.7  (was: 3.0.0, 2.0.3-alpha, 
0.23.6)
   Fix Version/s: (was: 0.23.7)
  (was: 2.0.3-alpha)
  (was: 3.0.0)

 coverage fixing for org.apache.hadoop.tools.rumen
 -

 Key: HADOOP-9219
 URL: https://issues.apache.org/jira/browse/HADOOP-9219
 Project: Hadoop Common
  Issue Type: Test
  Components: tools
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Aleksey Gorshkov
 Attachments: HADOOP-9219-trunk.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 coverage fixing for org.apache.hadoop.tools.rumen 
 HADOOP-9219-trunk.patch for trunk, branch-2 and branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9282) Java 7 support

2013-02-05 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13571363#comment-13571363
 ] 

Thomas Graves commented on HADOOP-9282:
---

Many people have been testing and possibly running with java7 so I think it's 
safe to say it's supported.  As you say we should update the docs and, as Eli 
mentioned in one of the email threads, create a jenkins build to run with java7.

 Java 7 support
 --

 Key: HADOOP-9282
 URL: https://issues.apache.org/jira/browse/HADOOP-9282
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Kevin Lyda

 The Hadoop Java Versions page makes no mention of Java 7.
 http://wiki.apache.org/hadoop/HadoopJavaVersions
 Java 6 is EOL as of this month ( 
 http://www.java.com/en/download/faq/java_6.xml ) and that's after extending 
 the date twice: https://blogs.oracle.com/henrik/entry/java_6_eol_h_h While 
 Oracle has recently released a number of security patches, chances are more 
 security issues will come up and we'll be left running clusters we can't 
 patch if we stay with Java 6.
 Does Hadoop support Java 7 and if so could the docs be changed to indicate 
 that?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9063) enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil

2013-02-05 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9063:
--

Attachment: HADOOP-9063-trunk--c.patch

attaching same patch to kick jenkins


 enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil
 -

 Key: HADOOP-9063
 URL: https://issues.apache.org/jira/browse/HADOOP-9063
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9063--b.patch, HADOOP-9063-branch-0.23--b.patch, 
 HADOOP-9063-branch-0.23--c.patch, HADOOP-9063.patch, 
 HADOOP-9063-trunk--c.patch, HADOOP-9063-trunk--c.patch


 Some methods of class org.apache.hadoop.fs.FileUtil are covered by unit-tests 
 poorly or not covered at all. Enhance the coverage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9282) Java 7 support

2013-02-05 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13571621#comment-13571621
 ] 

Thomas Graves commented on HADOOP-9282:
---

I personally haven't run them recently.  Probably.  We had most cleaned up at 
one point but some probably broke again - that's why we need jenkins with it.

 Java 7 support
 --

 Key: HADOOP-9282
 URL: https://issues.apache.org/jira/browse/HADOOP-9282
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Kevin Lyda

 The Hadoop Java Versions page makes no mention of Java 7.
 http://wiki.apache.org/hadoop/HadoopJavaVersions
 Java 6 is EOL as of this month ( 
 http://www.java.com/en/download/faq/java_6.xml ) and that's after extending 
 the date twice: https://blogs.oracle.com/henrik/entry/java_6_eol_h_h While 
 Oracle has recently released a number of security patches, chances are more 
 security issues will come up and we'll be left running clusters we can't 
 patch if we stay with Java 6.
 Does Hadoop support Java 7 and if so could the docs be changed to indicate 
 that?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9193) hadoop script can inadvertently expand wildcard arguments when delegating to hdfs script

2013-02-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9193:
--

Fix Version/s: 0.23.7

 hadoop script can inadvertently expand wildcard arguments when delegating to 
 hdfs script
 

 Key: HADOOP-9193
 URL: https://issues.apache.org/jira/browse/HADOOP-9193
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.2-alpha, 0.23.5
Reporter: Jason Lowe
Assignee: Andy Isaacson
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.7

 Attachments: hadoop9193.diff


 The hadoop front-end script will print a deprecation warning and defer to the 
 hdfs front-end script for certain commands, like fsck, dfs.  If a wildcard 
 appears as an argument then it can be inadvertently expanded by the shell to 
 match a local filesystem path before being sent to the hdfs script, which can 
 be very confusing to the end user.
 For example, the following two commands usually perform very different 
 things, even though they should be equivalent:
 {code}
 hadoop fs -ls /tmp/\*
 hadoop dfs -ls /tmp/\*
 {code}
 The former lists everything in the default filesystem under /tmp, while the 
 latter expands /tmp/\* into everything in the *local* filesystem under /tmp 
 and passes those as arguments to try to list in the default filesystem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2013-02-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8214:
--

Fix Version/s: 0.23.7

 make hadoop script recognize a full set of deprecated commands
 --

 Key: HADOOP-8214
 URL: https://issues.apache.org/jira/browse/HADOOP-8214
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0-alpha, 0.23.7

 Attachments: HADOOP-8214.patch.txt


 bin/hadoop launcher script does a nice job of recognizing deprecated usage 
 and vectoring users towards the proper command line tools (hdfs, mapred). It 
 would be nice if we can take care of the following deprecated commands that 
 don't get the same special treatment:
 {noformat}
   oiv  apply the offline fsimage viewer to an fsimage
   dfsgroupsget the groups which users belong to on the Name Node
   mrgroups get the groups which users belong to on the Job Tracker
   mradmin  run a Map-Reduce admin client
   jobtracker   run the MapReduce job Tracker node
   tasktracker  run a MapReduce task Tracker node
 {noformat}
 Here's what I propose to do with them:
   # oiv-- issue DEPRECATED warning and run hdfs oiv
   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
   # mrgroups   -- issue DEPRECATED warning and run mapred groups
   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
   # jobtracker -- issue DEPRECATED warning and do nothing
   # tasktracker-- issue DEPRECATED warning and do nothing
 Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8857) hadoop.http.authentication.signature.secret.file docs should not state that secret is randomly generated

2013-02-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8857:
--

Fix Version/s: 0.23.7

 hadoop.http.authentication.signature.secret.file docs should not state that 
 secret is randomly generated
 

 Key: HADOOP-8857
 URL: https://issues.apache.org/jira/browse/HADOOP-8857
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
Priority: Minor
 Fix For: 0.23.7

 Attachments: HADOOP-8857.patch


 The docs and default.xml state that the secret is randomly generated if the 
 secret.file is not present; this is incorrect, as the secret must be shared 
 across all nodes in the cluster because it is used to verify the signature of 
 the hadoop.auth cookie. If randomly generated it would be different on each node.
 ORIGINAL DESCRIPTION:
 AuthenticationFilterInitializer#initFilter fails if the configured 
 {{hadoop.http.authentication.signature.secret.file}} does not exist, eg:
 {noformat}
 java.lang.RuntimeException: Could not read HTTP signature secret file: 
 /var/lib/hadoop-hdfs/hadoop-http-auth-signature-secret
 {noformat}
 Creating /var/lib/hadoop-hdfs/hadoop-http-auth-signature-secret (populated 
 with a string) fixes the issue. Per the auth docs, "If a secret is not 
 provided a random secret is generated at start up time.", which sounds like 
 it means the file should be generated at startup with a random secret, which 
 doesn't seem to be the case. Also the instructions in the docs should be more 
 clear in this regard.
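
For illustration only, a minimal sketch of reading a shared signature secret file
(hypothetical class name, not the actual AuthenticationFilterInitializer code),
assuming the same file contents are deployed to every node:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SignatureSecretLoader {
  // The secret signs the hadoop.auth cookie, so every node must read the
  // same value; a per-node random secret would break cookie verification.
  public static String loadSecret(String secretFile) throws IOException {
    Path p = Paths.get(secretFile);
    if (!Files.exists(p)) {
      throw new IOException("Could not read HTTP signature secret file: " + secretFile);
    }
    return new String(Files.readAllBytes(p), StandardCharsets.UTF_8).trim();
  }
}
{code}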

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9231) Parametrize staging URL for the uniformity of distributionManagement

2013-02-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9231:
--

Fix Version/s: 0.23.7

 Parametrize staging URL for the uniformity of distributionManagement
 

 Key: HADOOP-9231
 URL: https://issues.apache.org/jira/browse/HADOOP-9231
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-9231.patch, HADOOP-9231.patch


 The build's {{distributionManagement}} section currently uses parametrization 
 for the snapshot repository. It is convenient and allows overriding the 
 value from a developer's custom profile.
 The same isn't available for release artifacts; it should be, to make the 
 parametrization symmetric for both types.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9124) SortedMapWritable violates contract of Map interface for equals() and hashCode()

2013-02-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9124:
--

Fix Version/s: 0.23.7

I merged this into branch-0.23

 SortedMapWritable violates contract of Map interface for equals() and 
 hashCode()
 

 Key: HADOOP-9124
 URL: https://issues.apache.org/jira/browse/HADOOP-9124
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Patrick Hunt
Assignee: Surenkumar Nihalani
Priority: Minor
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: hadoop-9124-branch1.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, HADOOP-9124.patch, 
 HADOOP-9124.patch


 This issue is similar to HADOOP-7153. It was found when using MRUnit - see 
 MRUNIT-158, specifically 
 https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985
 --
 o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it 
 does not define an implementation of the equals() or hashCode() methods; 
 instead the default implementations in java.lang.Object are used.
 This violates the contract of the Map interface which defines different 
 behaviour for equals() and hashCode() than Object does. More information 
 here: 
 http://download.oracle.com/javase/6/docs/api/java/util/Map.html#equals(java.lang.Object)
 The practical consequence is that SortedMapWritables containing equal entries 
 cannot be compared properly. We were bitten by this when trying to write an 
 MRUnit test for a Mapper that outputs MapWritables; the MRUnit driver cannot 
 test the equality of the expected and actual MapWritable objects.
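
For reference, a minimal sketch of what the Map contract expects, with a
hypothetical class delegating to an internal map rather than the actual
SortedMapWritable patch:

{code}
import java.util.TreeMap;

public class MapContractSketch {
  // java.util.Map defines equals() over the entry set and hashCode() as the
  // sum of entry hash codes; delegating to an internal map provides both.
  private final TreeMap<String, String> instance = new TreeMap<String, String>();

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (!(obj instanceof MapContractSketch)) {
      return false;
    }
    return instance.equals(((MapContractSketch) obj).instance);
  }

  @Override
  public int hashCode() {
    return instance.hashCode();
  }
}
{code}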

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-6941) Support non-SUN JREs in UserGroupInformation

2013-01-31 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-6941:
--

Fix Version/s: 0.23.7

 Support non-SUN JREs in UserGroupInformation
 

 Key: HADOOP-6941
 URL: https://issues.apache.org/jira/browse/HADOOP-6941
 Project: Hadoop Common
  Issue Type: Bug
 Environment: SLES 11, Apache Harmony 6 and SLES 11, IBM Java 6
Reporter: Stephen Watt
Assignee: Devaraj Das
 Fix For: 1.0.3, 2.0.0-alpha, 0.23.7

 Attachments: 6941-1.patch, 6941-branch1.patch, hadoop-6941.patch, 
 HADOOP-6941.patch


 Attempting to format the namenode or attempting to start Hadoop using Apache 
 Harmony or the IBM Java JREs results in the following exception:
 10/09/07 16:35:05 ERROR namenode.NameNode: java.lang.NoClassDefFoundError: 
 com.sun.security.auth.UnixPrincipal
   at 
 org.apache.hadoop.security.UserGroupInformation.clinit(UserGroupInformation.java:223)
   at java.lang.J9VMInternals.initializeImpl(Native Method)
   at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setConfigurationParameters(FSNamesystem.java:420)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:391)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1240)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1348)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
 Caused by: java.lang.ClassNotFoundException: 
 com.sun.security.auth.UnixPrincipal
   at java.net.URLClassLoader.findClass(URLClassLoader.java:421)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:652)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:346)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:618)
   ... 8 more
 This is a regression, as previous versions of Hadoop worked with 
 these JREs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8251) SecurityUtil.fetchServiceTicket broken after HADOOP-6941

2013-01-31 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8251:
--

Target Version/s: 2.0.0-alpha, 1.0.3  (was: 1.0.3, 2.0.0-alpha)
   Fix Version/s: 0.23.7

 SecurityUtil.fetchServiceTicket broken after HADOOP-6941
 

 Key: HADOOP-8251
 URL: https://issues.apache.org/jira/browse/HADOOP-8251
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.0, 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 1.0.3, 2.0.0-alpha, 0.23.7

 Attachments: hadoop-8251-b1.txt, hadoop-8251.txt


 HADOOP-6941 replaced direct references to some classes with reflective access 
 so as to support other JDKs. Unfortunately there was a mistake in the name of 
 the Krb5Util class, which broke fetchServiceTicket. This manifests itself as 
 the inability to run checkpoints or other krb5-SSL HTTP-based transfers:
 java.lang.ClassNotFoundException: sun.security.jgss.krb5
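
As an aside, a small sketch of why this class of bug only shows up at runtime:
reflective lookups take a string, so a misspelled name compiles cleanly and fails
later with ClassNotFoundException (the class name below is illustrative):

{code}
public class ReflectiveLookupSketch {
  public static void main(String[] args) {
    // A direct reference is checked by the compiler; a reflective one is
    // only checked when this code path actually runs.
    try {
      Class<?> clazz = Class.forName("sun.security.jgss.krb5.Krb5Util");
      System.out.println("Loaded " + clazz.getName());
    } catch (ClassNotFoundException e) {
      System.err.println("Misspelled name or unsupported JDK: " + e.getMessage());
    }
  }
}
{code}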

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2013-01-31 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8878:
--

Fix Version/s: 0.23.7

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.1.1, 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.branch-1.patch, HADOOP-8878.patch, HADOOP-8878.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for the kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.
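
A minimal sketch of the lowercasing being proposed, with a hypothetical helper
rather than the actual Hadoop security code:

{code}
import java.util.Locale;

public class PrincipalNameSketch {
  // Kerberos principals are case sensitive, so the hostname taken from a URI
  // must be normalized before building the HTTP/<host>@<REALM> principal.
  public static String httpPrincipal(String host, String realm) {
    return "HTTP/" + host.toLowerCase(Locale.US) + "@" + realm;
  }

  public static void main(String[] args) {
    // Prints HTTP/nn.example.com@EXAMPLE.COM
    System.out.println(httpPrincipal("NN.EXAMPLE.COM", "EXAMPLE.COM"));
  }
}
{code}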

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-30 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13566702#comment-13566702
 ] 

Thomas Graves commented on HADOOP-9221:
---

Are all the new docs linked from somewhere? 

I expected to see them linked from the top level menu like 
http://hadoop.apache.org/docs/current/ but I only see the hadoop commands 
manual and the Filesystem shell.  It used to be you clicked on say the commands 
manual and you would go to the old style documentation where it had a menu with 
links to the Superusers, native libraries, etc., but I don't see that any more 
since the conversion.  Is there perhaps a new menu I'm missing?

 Convert remaining xdocs to APT
 --

 Key: HADOOP-9221
 URL: https://issues.apache.org/jira/browse/HADOOP-9221
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
 Fix For: 2.0.3-alpha

 Attachments: hadoop9221-1.txt, hadoop9221-2.txt, hadoop9221.txt


 The following Forrest XML documents are still present in trunk:
 {noformat}
 hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/Superusers.xml
 hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/deployment_layout.xml
 hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/native_libraries.xml
 hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
 hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/single_node_setup.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/SLG_user_guide.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/faultinject_framework.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_editsviewer.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hftp.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
 {noformat}
 Several of them are leftover cruft, and all of them are out of date to one 
 degree or another, but it's easiest to simply convert them all to APT and 
 move forward with editing thereafter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-30 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9221:
--

Fix Version/s: 0.23.7

 Convert remaining xdocs to APT
 --

 Key: HADOOP-9221
 URL: https://issues.apache.org/jira/browse/HADOOP-9221
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: hadoop9221-1.txt, hadoop9221-2.txt, hadoop9221.txt


 The following Forrest XML documents are still present in trunk:
 {noformat}
 hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/Superusers.xml
 hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/deployment_layout.xml
 hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/native_libraries.xml
 hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
 hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/single_node_setup.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/SLG_user_guide.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/faultinject_framework.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_editsviewer.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hftp.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
 hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
 {noformat}
 Several of them are leftover cruft, and all of them are out of date to one 
 degree or another, but it's easiest to simply convert them all to APT and 
 move forward with editing thereafter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9190) packaging docs is broken

2013-01-29 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9190:
--

Fix Version/s: 0.23.7
   2.0.3-alpha

I merged this to branch-2 and branch-0.23

 packaging docs is broken
 

 Key: HADOOP-9190
 URL: https://issues.apache.org/jira/browse/HADOOP-9190
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.7

 Attachments: hadoop9190-1.txt, hadoop9190.txt


 It looks like after the docs got converted to apt format in HADOOP-8427, mvn 
 site package -Pdist,docs no longer works.   If you run mvn site or mvn 
 site:stage by itself they work fine, it's when you go to package it that it 
 breaks.
 The error is with broken links, here is one of them:
 <broken-links>
   <link 
 message="hadoop-common-project/hadoop-common/target/docs-src/src/documentation/content/xdocs/HttpAuthentication.xml
  (No such file or directory)" uri="HttpAuthentication.html">
 <referrer uri="linkmap.html"/>
 <referrer uri="index.html"/>
 <referrer uri="single_node_setup.html"/>
 <referrer uri="native_libraries.html"/>
 <referrer uri="Superusers.html"/>
 <referrer uri="service_level_auth.html"/>
 <referrer uri="deployment_layout.html"/>
   </link>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-9255:
-

 Summary: relnotes.py missing last jira
 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical


generating the release notes for 0.23.6 via  python ./dev-support/relnotes.py 
-v 0.23.6  misses the last jira that was committed.  In this case it was 
YARN-354.




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564398#comment-13564398
 ] 

Thomas Graves commented on HADOOP-9255:
---

This might be due to the query line:
"project in (YARN) and fixVersion in ('"+"' , '".join(versions)+"') and 
resolution = Fixed", 'startAt':at+1, 'maxResults':count}

For some reason it starts at: at+1.  If I remove the +1 then YARN-354 shows up. 

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical

 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9255:
--

Attachment: HADOOP-9255.patch

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-9255.patch


 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9255:
--

Status: Patch Available  (was: Open)

Fix by simply removing the +1 from at+1.  Now it gets entries 0 up to max 100, 
then 100, up to max 200, etc.

I tested by generating the notes for 0.23.6 which is < 100 and then I generated 
them for 0.23.3 which has 266 jiras.  I compared it to manually doing the query 
through jira.

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-9255.patch


 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9255:
--

   Resolution: Fixed
Fix Version/s: 0.23.7
   0.23.6
   2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6, 0.23.7

 Attachments: HADOOP-9255.patch


 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7886) Add toString to FileStatus

2013-01-17 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-7886:
--

Fix Version/s: 0.23.7
   2.0.3-alpha

I merged this to branch-0.23, it also previously got pulled into branch-2 with 
HADOOP-9147.

 Add toString to FileStatus
 --

 Key: HADOOP-7886
 URL: https://issues.apache.org/jira/browse/HADOOP-7886
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: SreeHari
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.7

 Attachments: HDFS-2215_common.patch, HDFS-2215.patch


 It would be nice if FileStatus had a reasonable toString, for debugging 
 purposes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9147) Add missing fields to FIleStatus.toString

2013-01-17 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9147:
--

Fix Version/s: 0.23.7

 Add missing fields to FIleStatus.toString
 -

 Key: HADOOP-9147
 URL: https://issues.apache.org/jira/browse/HADOOP-9147
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Jonathan Allen
Assignee: Jonathan Allen
Priority: Trivial
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-9147.patch, HADOOP-9147.patch, HADOOP-9147.patch, 
 HADOOP-9147.patch


 The FileStatus.toString method is missing the following fields:
 - modification_time
 - access_time
 - symlink
 These should be added in to aid debugging.
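
A sketch of the kind of toString() output being asked for, using hypothetical
fields rather than the real FileStatus members:

{code}
public class StatusToStringSketch {
  private long modificationTime;
  private long accessTime;
  private String symlink;

  @Override
  public String toString() {
    // Including the timestamps and symlink target makes debug logging of
    // file status objects self-describing.
    StringBuilder sb = new StringBuilder();
    sb.append("{modification_time=").append(modificationTime);
    sb.append("; access_time=").append(accessTime);
    sb.append("; symlink=").append(symlink);
    sb.append("}");
    return sb.toString();
  }
}
{code}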

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9155) FsPermission should have different default value, 777 for directory and 666 for file

2013-01-17 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9155:
--

Fix Version/s: 0.23.7

merged to branch-0.23

 FsPermission should have different default value, 777 for directory and 666 
 for file
 

 Key: HADOOP-9155
 URL: https://issues.apache.org/jira/browse/HADOOP-9155
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-9155.patch, HADOOP-9155.v2.patch, 
 HADOOP-9155.v3.patch, HADOOP-9155.v3.patch, HADOOP-9155.v3.patch


 The default permission for {{FileSystem#create}} is the same default as for 
 {{FileSystem#mkdirs}}, namely {{0777}}. It would make more sense for the 
 default to be {{0666}} for files and {{0777}} for directories.  The current 
 default leads to a lot of files being created with the executable bit that 
 really should not be.  One example is anything created with FsShell's 
 copyToLocal.
 For reference, {{fopen}} creates files with a mode of {{0666}} (minus 
 whatever bits are set in the umask; usually {{0022}}.  This seems to be the 
 standard behavior and we should follow it.  This is also a regression since 
 branch-1.
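
For illustration, a minimal sketch of creating a file with an explicit 0666
permission so the executable bit is never set; the path, replication, and block
size below are placeholder values:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class CreateWithoutExecBit {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // 0666 (minus the umask) for a plain data file: no executable bit.
    FsPermission filePerm = new FsPermission((short) 0666);
    FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"), filePerm,
        true, 4096, (short) 3, 128 * 1024 * 1024L, null);
    out.writeUTF("hello");
    out.close();
  }
}
{code}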

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-17 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9212:
--

Fix Version/s: 0.23.7

merged to branch-0.23

 Potential deadlock in FileSystem.Cache/IPC/UGI
 --

 Key: HADOOP-9212
 URL: https://issues.apache.org/jira/browse/HADOOP-9212
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch, 
 HADOOP-9212.patch


 jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9216) CompressionCodecFactory#getCodecClasses should trim the result of parsing by Configuration.

2013-01-17 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9216:
--

Fix Version/s: 0.23.7

merged to branch-0.23

 CompressionCodecFactory#getCodecClasses should trim the result of parsing by 
 Configuration.
 ---

 Key: HADOOP-9216
 URL: https://issues.apache.org/jira/browse/HADOOP-9216
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-9216.patch


 CompressionCodecFactory#getCodecClasses doesn't trim its input.
 This can confuse users of CompressionCodecFactory. For example, the following 
 setting can cause an error because of spaces in the values.
 {quote}
  conf.set("io.compression.codecs", 
   " org.apache.hadoop.io.compress.GzipCodec , " +
  " org.apache.hadoop.io.compress.DefaultCodec , " +
 " org.apache.hadoop.io.compress.BZip2Codec   ");
 {quote}
 This ticket deals with this problem.
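
A minimal sketch of the trimming behaviour being requested, written against plain
Java rather than the actual CompressionCodecFactory change:

{code}
import java.util.ArrayList;
import java.util.List;

public class CodecListParsingSketch {
  // Splits a comma-separated class list and trims each entry so stray
  // whitespace around class names does not break later Class.forName lookups.
  public static List<String> parseClassNames(String value) {
    List<String> names = new ArrayList<String>();
    if (value == null) {
      return names;
    }
    for (String raw : value.split(",")) {
      String trimmed = raw.trim();
      if (!trimmed.isEmpty()) {
        names.add(trimmed);
      }
    }
    return names;
  }
}
{code}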

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-16 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13555124#comment-13555124
 ] 

Thomas Graves commented on HADOOP-9215:
---

Thanks Colin for taking this on.  So I now see the libhadoop.so and libhdfs.so 
in:

./hadoop-common-project/hadoop-common/target/native/libhadoop.so
./hadoop-hdfs-project/hadoop-hdfs/target/native/libhdfs.so

However looking more I don't see any *.so* in the tarball or hadoop-dist that 
is generated.  I'm pretty sure without this change I saw these:
./hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
./hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libhdfs.so.0.0.0


 libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
 

 Key: HADOOP-9215
 URL: https://issues.apache.org/jira/browse/HADOOP-9215
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Thomas Graves
Assignee: Colin Patrick McCabe
Priority: Blocker
 Attachments: HADOOP-9215.001.patch


 Looks like none of the .so files are being built. They all have .so.1.0.0 but 
 no plain .so file.  branch-0.23 works fine but trunk and branch-2 are broken.
 This actually applies to libhadoop.so and libhdfs.so

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8139) Path does not allow metachars to be escaped

2013-01-16 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8139:
--

 Target Version/s:   (was: 0.23.3, 0.24.0)
Affects Version/s: (was: 0.24.0)
   (was: 0.23.0)
   3.0.0
   0.23.3

 Path does not allow metachars to be escaped
 ---

 Key: HADOOP-8139
 URL: https://issues.apache.org/jira/browse/HADOOP-8139
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.3, 3.0.0
Reporter: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-8139-2.patch, HADOOP-8139-3.patch, 
 HADOOP-8139-4.patch, HADOOP-8139-5.patch, HADOOP-8139-6.patch, 
 HADOOP-8139.patch, HADOOP-8139.patch


 Path converts \ into /, probably for windows support?  This means it's 
 impossible for the user to escape metachars in a path name.  Glob expansion 
 can have deadly results.
 Here are the most egregious examples. A user accidentally creates a path like 
 /user/me/*/file.  Now they want to remove it.
 {noformat}hadoop fs -rmr -skipTrash '/user/me/\*' becomes...
 hadoop fs -rmr -skipTrash /user/me/*{noformat}
 * User/Admin: Nuked their home directory or any given directory
 {noformat}hadoop fs -rmr -skipTrash '\*' becomes...
 hadoop fs -rmr -skipTrash /*{noformat}
 * User:  Deleted _everything_ they have access to on the cluster
 * Admin: *Nukes the entire cluster*
 Note: FsShell is shown for illustrative purposes, however the problem is in 
 the Path object, not FsShell.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) when using cmake-2.6, libhadoop.so doesn't get created (only libhadoop.so.1.0.0)

2013-01-16 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13555661#comment-13555661
 ] 

Thomas Graves commented on HADOOP-9215:
---

I now see all the *.so files I expect, so I'm +1 if Todd's good with it.  
Thanks Colin!

 when using cmake-2.6, libhadoop.so doesn't get created (only 
 libhadoop.so.1.0.0)
 

 Key: HADOOP-9215
 URL: https://issues.apache.org/jira/browse/HADOOP-9215
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Thomas Graves
Assignee: Colin Patrick McCabe
Priority: Blocker
 Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch, 
 HADOOP-9215.003.patch


 Looks like none of the .so files are being built. They all have .so.1.0.0 but 
 no plain .so file.  branch-0.23 works fine but trunk and branch-2 are broken.
 This actually applies to libhadoop.so and libhdfs.so

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553839#comment-13553839
 ] 

Thomas Graves commented on HADOOP-9097:
---

Todd, I filed HDFS-4399 to handle it.  I would be grateful for a review if you 
have time.



 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
 HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9208) Fix release audit warnings

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553851#comment-13553851
 ] 

Thomas Graves commented on HADOOP-9208:
---

Where are you seeing these?  The precommit builds were complaining about the 
hdfs*.odg files and HDFS-4399 is taking care of those.  If run manually, what 
version, what OS, etc.?

 Fix release audit warnings
 --

 Key: HADOOP-9208
 URL: https://issues.apache.org/jira/browse/HADOOP-9208
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu

 The following files should be excluded from rat check:
 ./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.odg
 ./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.odg
 ./hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/FI-framework.odg
 ./hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsarchitecture.odg
 ./hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsdatanodes.odg

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553909#comment-13553909
 ] 

Thomas Graves commented on HADOOP-9205:
---

I ran a quick job using jdk1.7.0_10 and it loads the native libraries fine. 
This was using jdk1.7.0_10 for execution; jars were still built with jdk1.6.

Also I tried to reproduce with the method you stated.  On trunk I wasn't able to 
reproduce.  Note I built all source code with jdk1.7.0_10 and then ran the 
test.  I did have to create a symlink from 
hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
 to 
hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so.1.0.0.
 

I'm not sure what happened to libhadoop.so; I'll have to investigate.  I need to 
look at the jdk release notes in more detail but at a glance it says "Java 
applications invoking JDK 7 from a legacy JDK must be careful to clean up the 
LD_LIBRARY_PATH environment variable before executing JDK 7", which makes me 
wonder if it applies. 

 Java7: path to native libraries should be passed to tests via 
 -Djava.library.path rather than env.LD_LIBRARY_PATH
 -

 Key: HADOOP-9205
 URL: https://issues.apache.org/jira/browse/HADOOP-9205
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9205.patch


 Currently the path to native libraries is passed to unit tests via 
  environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
 work for Java7, since Java7 ignores this environment variable.
 So, to run the tests with native implementation on Java7 one needs to pass 
 the paths to native libs via -Djava.library.path system property rather than 
 the LD_LIBRARY_PATH env variable.
 The suggested patch fixes the problem via setting the paths to native libs 
 using both LD_LIBRARY_PATH and -Djava.library.path property. This way the 
 tests work equally on both Java6 and Java7.
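
A small sketch for checking which setting a given JVM actually honours; the
library name "hadoop" resolves libhadoop.so on Linux, and the printed values make
it easy to compare LD_LIBRARY_PATH with -Djava.library.path:

{code}
public class NativePathCheck {
  public static void main(String[] args) {
    // On Java 7 the test JVM may need -Djava.library.path=<dir of libhadoop.so>;
    // the environment variable alone is not necessarily reflected here.
    System.out.println("java.library.path = " + System.getProperty("java.library.path"));
    System.out.println("LD_LIBRARY_PATH   = " + System.getenv("LD_LIBRARY_PATH"));
    try {
      System.loadLibrary("hadoop");
      System.out.println("native hadoop library loaded");
    } catch (UnsatisfiedLinkError e) {
      System.out.println("native hadoop library not found: " + e.getMessage());
    }
  }
}
{code}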

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-15 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-9215:
-

 Summary: libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
 Key: HADOOP-9215
 URL: https://issues.apache.org/jira/browse/HADOOP-9215
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Thomas Graves
Priority: Blocker


Looks like none of the .so files are being built. They all have .so.1.0.0 but 
no plain .so file.  branch-0.23 works fine but trunk and branch-2 are broken.

This actually applies to libhadoop.so and libhdfs.so

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553993#comment-13553993
 ] 

Thomas Graves commented on HADOOP-9205:
---

I'm running on rhel5.6 with maven 3.0.3 and cmake version 2.6-patch 4. 

 Java7: path to native libraries should be passed to tests via 
 -Djava.library.path rather than env.LD_LIBRARY_PATH
 -

 Key: HADOOP-9205
 URL: https://issues.apache.org/jira/browse/HADOOP-9205
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9205.patch


 Currently the path to native libraries is passed to unit tests via 
  environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
 work for Java7, since Java7 ignores this environment variable.
 So, to run the tests with native implementation on Java7 one needs to pass 
 the paths to native libs via -Djava.library.path system property rather than 
 the LD_LIBRARY_PATH env variable.
 The suggested patch fixes the problem via setting the paths to native libs 
 using both LD_LIBRARY_PATH and -Djava.library.path property. This way the 
 tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13553996#comment-13553996
 ] 

Thomas Graves commented on HADOOP-9215:
---

Note I'm using cmake version 2.6-patch 4.  Someone on a different jira mentioned 
that using 2.8 fixes this issue, but I can't easily install that to test.

 libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
 

 Key: HADOOP-9215
 URL: https://issues.apache.org/jira/browse/HADOOP-9215
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Thomas Graves
Priority: Blocker

 Looks like none of the .so files are being built. They all have .so.1.0.0 but 
 no plain .so file.  branch-0.23 works fine but trunk and branch-2 are broken.
 This actually applies to libhadoop.so and libhdfs.so

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13554205#comment-13554205
 ] 

Thomas Graves commented on HADOOP-9215:
---

I also have rhel6 boxes which have cmake 2.6 on them so it's not just CentOS 5. 
Also taking a quick look at centos 6.3 I see cmake-2.6.4-5.el6.src.rpm.  (from 
here http://vault.centos.org/6.3/os/Source/SPackages/)  What version of CentOS 
has cmake 2.8?  

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)
# rpm -qa | grep cmake
cmake-2.6.4-5.el6.x86_64


What Jira introduced this dependency?  Personally I don't think we should be 
mandating cmake 2.8 if it's not included or easily available for rhel5 or 
rhel6/CentOS6. I'll go look some more to see if there is an easier way for me to 
get it but it doesn't currently easily come up in a yum list for me.

At the very least I think we should have it fail as Charles mentioned.



 libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
 

 Key: HADOOP-9215
 URL: https://issues.apache.org/jira/browse/HADOOP-9215
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Thomas Graves
Priority: Blocker

 Looks like none of the .so files are being built. They all have .so.1.0.0 but 
 no plain .so file.  branch-0.23 works fine but trunk and branch-2 are broken.
 This actually applies to libhadoop.so and libhdfs.so

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-14 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
 HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9181) Set daemon flag for HttpServer's QueuedThreadPool

2013-01-14 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9181:
--

Fix Version/s: 0.23.6

merged to branch-0.23

 Set daemon flag for HttpServer's QueuedThreadPool
 -

 Key: HADOOP-9181
 URL: https://issues.apache.org/jira/browse/HADOOP-9181
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: liang xie
Assignee: liang xie
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9181.txt


 We hit HBASE-6031 again. After looking into the thread dump, it turned out the 
 threads from QueuedThreadPool are user threads, not daemon threads, so the 
 HBase shutdown hook was never called and the HBase instance hung. 
 Furthermore, I saw the daemon flag being set in the fb-20 branch; let's set it 
 in the trunk codebase as well, it should be safe :)
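
For illustration, a minimal sketch of the change being described, assuming the Jetty 6 org.mortbay.thread.QueuedThreadPool used by Hadoop's HttpServer at the time; the class name and thread-pool sizing below are illustrative, not the committed patch.

{code}
import org.mortbay.jetty.Server;
import org.mortbay.thread.QueuedThreadPool;

public class DaemonHttpThreadPoolSketch {
  public static void main(String[] args) throws Exception {
    QueuedThreadPool pool = new QueuedThreadPool();
    pool.setDaemon(true);      // worker threads no longer block JVM shutdown hooks
    pool.setMaxThreads(250);
    Server server = new Server();
    server.setThreadPool(pool);
    // connectors and handlers would be configured here before server.start()
  }
}
{code}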

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551176#comment-13551176
 ] 

Thomas Graves commented on HADOOP-9097:
---

The test has been failing on other builds and isn't related to this. The 
release audit warnings are due to needing the other 3 jiras.

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13551257#comment-13551257
 ] 

Thomas Graves commented on HADOOP-9097:
---

Thanks Tom. The empty files are removed via the HADOOP-9097-remove.sh script I 
attached. I could probably make those scripts a bit better, as they just do an 
svn rm on each file.

I'll add the tree.h license into hdfs. I'll also add the .git and .idea 
excludes at the top level.

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

Attachment: HADOOP-9097.patch
HADOOP-9097-entire.patch

Updated the common pom.xml to exclude .idea/** and .git/**.

Also uploaded the entire patch, which includes that common change plus adding 
the tree.h license to the hdfs LICENSE.txt.
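
For reference, a minimal sketch of the kind of apache-rat-plugin excludes described above; the surrounding plugin configuration in the actual hadoop-common pom.xml may differ.

{code}
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>.idea/**</exclude>
      <exclude>.git/**</exclude>
    </excludes>
  </configuration>
</plugin>
{code}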

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
 HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-11 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

Attachment: HADOOP-9097-branch-0.23-entire.patch
HADOOP-9097-branch-0.23.patch

Uploaded the corresponding branch-0.23 patches.

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
 HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550526#comment-13550526
 ] 

Thomas Graves commented on HADOOP-9097:
---

There are a couple of files that I'm not sure about for this; they have 
existing copyrights/licenses. Does anyone with Apache license experience know?

hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataXceiverAspects.aj
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c




 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.6


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

Attachment: HADOOP-9097-remove-entire.sh
HADOOP-9097-remove-branch23.sh
HADOOP-9097-remove-branch2.sh
HADOOP-9097-entire.patch
HADOOP-9097-branch-0.23.patch
HADOOP-9097-branch-0.23-entire.patch

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

Attachment: HADOOP-9097.patch

I've attached 2 entire patches, which are the combination of all 4 jiras.

The committer should run the remove script first, then apply the appropriate 
patch. The trunk patch works on branch-2 also, but there is a separate remove 
script.

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

Status: Patch Available  (was: Open)

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.5, 2.0.3-alpha
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9097-branch-0.23-entire.patch, 
 HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
 HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
 HADOOP-9097-remove-entire.sh


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-09 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves reassigned HADOOP-9097:
-

Assignee: Thomas Graves

 Maven RAT plugin is not checking all source files
 -

 Key: HADOOP-9097
 URL: https://issues.apache.org/jira/browse/HADOOP-9097
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Tom White
Assignee: Thomas Graves
Priority: Critical
 Fix For: 2.0.3-alpha, 0.23.6


 Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
 downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9190) packaging docs is broken

2013-01-08 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-9190:
-

 Summary: packaging docs is broken
 Key: HADOOP-9190
 URL: https://issues.apache.org/jira/browse/HADOOP-9190
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Thomas Graves


It looks like after the docs got converted to apt format in HADOOP-8427, mvn 
site package -Pdist,docs no longer works. If you run mvn site or mvn 
site:stage by itself, they work fine; it's when you go to package it that it 
breaks.

The error is about broken links; here is one of them:

<broken-links>
  <link
    message="hadoop-common-project/hadoop-common/target/docs-src/src/documentation/content/xdocs/HttpAuthentication.xml (No such file or directory)"
    uri="HttpAuthentication.html">
    <referrer uri="linkmap.html"/>
    <referrer uri="index.html"/>
    <referrer uri="single_node_setup.html"/>
    <referrer uri="native_libraries.html"/>
    <referrer uri="Superusers.html"/>
    <referrer uri="service_level_auth.html"/>
    <referrer uri="deployment_layout.html"/>
  </link>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8427) Convert Forrest docs to APT, incremental

2013-01-04 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8427:
--

Fix Version/s: 0.23.6

I merged this to branch-0.23

 Convert Forrest docs to APT, incremental
 

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: hadoop8427-1.txt, hadoop8427-3.txt, hadoop8427-4.txt, 
 hadoop8427-5.txt, HADOOP-8427.sh, hadoop8427.txt


 Some of the forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all forrest dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9152) HDFS can report negative DFS Used on clusters with very small amounts of data

2012-12-21 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9152:
--

Fix Version/s: 0.23.6

 HDFS can report negative DFS Used on clusters with very small amounts of data
 -

 Key: HADOOP-9152
 URL: https://issues.apache.org/jira/browse/HADOOP-9152
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Minor
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: HDFS-4229-0.patch.txt


 I had a near empty HDFS instance where I was creating a file and deleting it 
 very quickly. I noticed that HDFS sometimes reported a negative DFS used.
 {noformat}
 root@brock0-1 ~]# sudo -u hdfs -i hdfs dfsadmin -report
 Configured Capacity: 97233235968 (90.56 GB)
 Present Capacity: 84289609707 (78.5 GB)
 DFS Remaining: 84426645504 (78.63 GB)
 DFS Used: -137035797 (-133824.02 KB)
 DFS Used%: -0.16%
 Under replicated blocks: 0
 Blocks with corrupt replicas: 0
 Missing blocks: 0
 -
 Datanodes available: 1 (1 total, 0 dead)
 Live datanodes:
 Name: 127.0.0.1:50010 (localhost)
 Hostname: brock0-1.ent.cloudera.com
 Decommission Status : Normal
 Configured Capacity: 97233235968 (90.56 GB)
 DFS Used: -137035797 (-133824.02 KB)
 Non DFS Used: 12943626261 (12.05 GB)
 DFS Remaining: 84426645504 (78.63 GB)
 DFS Used%: -0.14%
 DFS Remaining%: 86.83%
 Last contact: Thu Nov 22 18:25:37 PST 2012
 [root@brock0-1 ~]# sudo -u hdfs -i hdfs dfsadmin -report
 Configured Capacity: 97233235968 (90.56 GB)
 Present Capacity: 84426973184 (78.63 GB)
 DFS Remaining: 84426629120 (78.63 GB)
 DFS Used: 344064 (336 KB)
 DFS Used%: 0%
 Under replicated blocks: 0
 Blocks with corrupt replicas: 0
 Missing blocks: 0
 -
 Datanodes available: 1 (1 total, 0 dead)
 Live datanodes:
 Name: 127.0.0.1:50010 (localhost)
 Hostname: brock0-1.ent.cloudera.com
 Decommission Status : Normal
 Configured Capacity: 97233235968 (90.56 GB)
 DFS Used: 344064 (336 KB)
 Non DFS Used: 12806262784 (11.93 GB)
 DFS Remaining: 84426629120 (78.63 GB)
 DFS Used%: 0%
 DFS Remaining%: 86.83%
 Last contact: Thu Nov 22 18:28:47 PST 2012
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8957) AbstractFileSystem#IsValidName should be overridden for embedded file systems like ViewFs

2012-12-20 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537226#comment-13537226
 ] 

Thomas Graves commented on HADOOP-8957:
---

Can you please update the fix versions on this?

 AbstractFileSystem#IsValidName should be overridden for embedded file systems 
 like ViewFs
 -

 Key: HADOOP-8957
 URL: https://issues.apache.org/jira/browse/HADOOP-8957
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8957-branch-trunk-win.2.patch, 
 HADOOP-8957-branch-trunk-win.3.patch, HADOOP-8957-branch-trunk-win.4.patch, 
 HADOOP-8957.patch, HADOOP-8957.patch, HADOOP-8957-trunk.4.patch


 This appears to be a problem with parsing a Windows-specific path, ultimately 
 throwing InvocationTargetException from AbstractFileSystem.newInstance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8561) Introduce HADOOP_PROXY_USER for secure impersonation in child hadoop client processes

2012-12-20 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8561:
--

Fix Version/s: 0.23.6

 Introduce HADOOP_PROXY_USER for secure impersonation in child hadoop client 
 processes
 -

 Key: HADOOP-8561
 URL: https://issues.apache.org/jira/browse/HADOOP-8561
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Luke Lu
Assignee: Yu Gao
 Fix For: 1.2.0, 3.0.0, 2.0.3-alpha, 0.23.6, 1.1.2

 Attachments: hadoop-8561-branch-1.patch, hadoop-8561-branch-2.patch, 
 hadoop-8561.patch, hadoop-8561-v2.patch


 To solve the problem of an authenticated user typing hadoop shell commands in 
 a web console, we can introduce a HADOOP_PROXY_USER environment variable to 
 allow proper impersonation in the child hadoop client processes.
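
For context, a rough sketch (not the committed patch) of how a child client process could honor such a HADOOP_PROXY_USER variable with Hadoop's existing proxy-user API; the class name ProxyUserSketch and the sample filesystem call are illustrative only.

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserSketch {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();
    String proxyUser = System.getenv("HADOOP_PROXY_USER");
    if (proxyUser != null && !proxyUser.isEmpty()) {
      // Impersonate the end user; the login user must be allowed to proxy
      // (hadoop.proxyuser.<user>.hosts/groups) on the cluster side.
      ugi = UserGroupInformation.createProxyUser(proxyUser, ugi);
    }
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() throws Exception {
        // All filesystem access below happens as the proxied user.
        FileSystem fs = FileSystem.get(conf);
        fs.listStatus(new Path("/"));
        return null;
      }
    });
  }
}
{code}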

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9139) improve script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh

2012-12-14 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13532553#comment-13532553
 ] 

Thomas Graves commented on HADOOP-9139:
---

Can you look at the test failure please? 

 improve script 
 hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh
 

 Key: HADOOP-9139
 URL: https://issues.apache.org/jira/browse/HADOOP-9139
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9139--b.patch, HADOOP-9139.patch


 The script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh 
 is used in internal Kerberos tests to kill the started apacheds server.
 There are 2 problems in the script:
 1) it invokes kill even if there are no running apacheds servers;
 2) it does not work correctly on all Linux platforms, since the cut -f4 -d ' ' 
 command relies upon the exact number of spaces in the ps output, but this 
 number can be different.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9020) Add a SASL PLAIN server

2012-12-03 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9020:
--

Fix Version/s: 0.23.6

Pulled this into branch-0.23 since HADOOP-9083 pulled it into branch-1.

 Add a SASL PLAIN server
 ---

 Key: HADOOP-9020
 URL: https://issues.apache.org/jira/browse/HADOOP-9020
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9020.patch


 Java includes a SASL PLAIN client but not a server.
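
For context, a rough sketch of what a SASL PLAIN server can look like; this is not the attached HADOOP-9020 patch, PlainSaslServerSketch and PasswordVerifier are illustrative names, and the SaslServerFactory/Provider registration a real implementation needs is omitted.

{code}
import java.nio.charset.StandardCharsets;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

public class PlainSaslServerSketch implements SaslServer {
  /** Hypothetical hook for checking a user/password pair. */
  public interface PasswordVerifier {
    boolean verify(String user, String password);
  }

  private final PasswordVerifier verifier;
  private boolean completed;
  private String authorizationId;

  public PlainSaslServerSketch(PasswordVerifier verifier) {
    this.verifier = verifier;
  }

  @Override
  public String getMechanismName() {
    return "PLAIN";
  }

  @Override
  public byte[] evaluateResponse(byte[] response) throws SaslException {
    // PLAIN (RFC 4616): a single client message of the form
    //   [authzid] NUL authcid NUL passwd
    String message = new String(response, StandardCharsets.UTF_8);
    String[] parts = message.split("\u0000", 3);
    if (parts.length != 3) {
      throw new SaslException("Invalid SASL PLAIN response");
    }
    String authzid = parts[0];
    String authcid = parts[1];
    if (!verifier.verify(authcid, parts[2])) {
      throw new SaslException("Authentication failed for " + authcid);
    }
    authorizationId = authzid.isEmpty() ? authcid : authzid;
    completed = true;
    return null; // PLAIN has no server challenge
  }

  @Override
  public boolean isComplete() {
    return completed;
  }

  @Override
  public String getAuthorizationID() {
    return authorizationId;
  }

  @Override
  public byte[] wrap(byte[] outgoing, int offset, int len) throws SaslException {
    throw new SaslException("PLAIN supports authentication only (no wrap)");
  }

  @Override
  public byte[] unwrap(byte[] incoming, int offset, int len) throws SaslException {
    throw new SaslException("PLAIN supports authentication only (no unwrap)");
  }

  @Override
  public Object getNegotiatedProperty(String propName) {
    return Sasl.QOP.equals(propName) ? "auth" : null;
  }

  @Override
  public void dispose() {
  }
}
{code}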

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9108) Add a method to clear terminateCalled to ExitUtil for test cases

2012-11-29 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9108:
--

   Resolution: Fixed
Fix Version/s: 0.23.6
   Status: Resolved  (was: Patch Available)

This patch only applies to branch-0.23, hence the jenkins failures.  I 
committed it only to branch-0.23 since trunk and branch-2 already have similar 
functionality.

 Add a method to clear terminateCalled to ExitUtil for test cases
 

 Key: HADOOP-9108
 URL: https://issues.apache.org/jira/browse/HADOOP-9108
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 0.23.5
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.6

 Attachments: hadoop-9108.branch-0.23.patch


 Currently, once terminateCalled is set, it stays set since it's a class 
 static variable. This can break test cases when multiple test cases run in 
 one JVM. In MiniDFSCluster, it should be cleared during shutdown so the next 
 test case can run properly.
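
For illustration, a minimal sketch (assuming a heavily simplified ExitUtil) of the kind of test-only reset method being added; the class, method, and field names here are illustrative, not necessarily those in the committed branch-0.23 change.

{code}
public final class ExitUtilSketch {
  private static volatile boolean terminateCalled = false;

  private ExitUtilSketch() {}

  public static boolean terminateCalled() {
    return terminateCalled;
  }

  public static void terminate(int status, String msg) {
    terminateCalled = true;
    // The real ExitUtil would log and call System.exit(status) here unless
    // exits are suppressed for tests.
  }

  /** Clear the flag so the next test case in the same JVM starts clean. */
  public static void clearTerminateCalled() {
    terminateCalled = false;
  }
}
{code}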

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

