[jira] [Commented] (HADOOP-11195) Move Id-Name mapping in NFS to the hadoop-common area for better maintenance

2014-10-25 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184406#comment-14184406
 ] 

Yongjun Zhang commented on HADOOP-11195:


Hi [~brandonli],

Sorry for the delay in working on the new rev. I just uploaded rev 002 to 
address your comments. Would you please take a look when you have a chance? 
Thanks for all your good suggestions; they made the work here easier.

  




> Move Id-Name mapping in NFS to the hadoop-common area for better maintenance
> -----------------------------------------------------------------------------
>
> Key: HADOOP-11195
> URL: https://issues.apache.org/jira/browse/HADOOP-11195
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11195.001.patch, HADOOP-11195.002.patch
>
>
> Per [~aw]'s suggestion in HDFS-7146, creating this jira to move the id-name 
> mapping implementation (IdUserGroup.java) into the framework that caches user 
> and group info in the hadoop-common area 
> (hadoop-common/src/main/java/org/apache/hadoop/security). 
> Thanks [~brandonli] and [~aw] for the review and discussion in HDFS-7146.
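The id-name caching being moved can be pictured with a minimal sketch (hypothetical Python, not the actual IdUserGroup.java code): a map from numeric id to name that is reloaded from a supplied source after a timeout.

```python
import time

class IdNameCache:
    """Hypothetical sketch of an id -> name cache with periodic refresh,
    in the spirit of NFS's IdUserGroup (not the actual Hadoop code)."""

    def __init__(self, loader, timeout_secs=300.0):
        self._loader = loader          # callable returning {id: name}
        self._timeout = timeout_secs
        self._map = {}
        self._last_update = None       # None means "never loaded"

    def _maybe_refresh(self):
        # Reload the whole mapping when the cache is stale.
        now = time.monotonic()
        if self._last_update is None or now - self._last_update > self._timeout:
            self._map = dict(self._loader())
            self._last_update = now

    def get_name(self, ident, default="nobody"):
        self._maybe_refresh()
        return self._map.get(ident, default)

# Example: a static loader standing in for reading the system user database.
cache = IdNameCache(lambda: {0: "root", 1000: "hdfs"}, timeout_secs=60)
print(cache.get_name(1000))  # hdfs
print(cache.get_name(42))    # nobody (unknown id falls back to default)
```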



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11195) Move Id-Name mapping in NFS to the hadoop-common area for better maintenance

2014-10-25 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-11195:
-----------------------------------
Attachment: HADOOP-11195.002.patch

> Move Id-Name mapping in NFS to the hadoop-common area for better maintenance
> -----------------------------------------------------------------------------
>
> Key: HADOOP-11195
> URL: https://issues.apache.org/jira/browse/HADOOP-11195
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11195.001.patch, HADOOP-11195.002.patch
>





[jira] [Assigned] (HADOOP-11232) jersey-core-1.9 has a faulty glassfish-repo setting

2014-10-25 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze reassigned HADOOP-11232:


Assignee: Sushanth Sowmyan  (was: Tsz Wo Nicholas Sze)

> jersey-core-1.9 has a faulty glassfish-repo setting
> ----------------------------------------------------
>
> Key: HADOOP-11232
> URL: https://issues.apache.org/jira/browse/HADOOP-11232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Sushanth Sowmyan
>
> The following was reported by [~sushanth].
> hadoop-common brings in jersey-core-1.9 as a dependency by default.
> This is problematic, since the pom file for jersey 1.9 hard-codes 
> glassfish-repo as the repository for further transitive dependencies. That 
> site now serves a static "this has moved" page instead of a 404, which 
> results in faulty parent resolutions: requests for a pom file get 
> erroneous results.
> The only way around this seems to be to add a series of exclusions for 
> jersey-core, jersey-json, jersey-server and a bunch of others to 
> hadoop-common, then to hadoop-hdfs, then to hadoop-mapreduce-client-core. I 
> don't know how many more excludes are necessary before I can get this to work.
> If you update your jersey.version to 1.14, this faulty pom goes away. Please 
> either update that, or work with build infra to update our nexus pom for 
> jersey-1.9 so that it does not include the faulty glassfish repo.
> Another interesting note about this is that something changed yesterday 
> evening to cause this break in behaviour. We have not had this particular 
> problem in about 9+ months.
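The two workarounds described above can be sketched as pom.xml fragments (a sketch only; the property name and the exact exclusion list are assumptions, not the actual Hadoop build change):

```xml
<!-- Option 1: override the Jersey version so the faulty jersey-core-1.9
     pom (with its hard-coded glassfish-repo) is never consulted. Assumes
     the parent pom defines a jersey.version property. -->
<properties>
  <jersey.version>1.14</jersey.version>
</properties>

<!-- Option 2: exclude the offending transitive artifacts per dependency. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-json</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-server</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

As the report notes, the exclusion route cascades through hadoop-hdfs and hadoop-mapreduce-client-core as well, which is why bumping the version is the simpler fix.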





[jira] [Created] (HADOOP-11232) jersey-core-1.9 has a faulty glassfish-repo setting

2014-10-25 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-11232:


 Summary: jersey-core-1.9 has a faulty glassfish-repo setting
 Key: HADOOP-11232
 URL: https://issues.apache.org/jira/browse/HADOOP-11232
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze







[jira] [Updated] (HADOOP-11232) jersey-core-1.9 has a faulty glassfish-repo setting

2014-10-25 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-11232:
-----------------------------------------
Component/s: build

> jersey-core-1.9 has a faulty glassfish-repo setting
> ----------------------------------------------------
>
> Key: HADOOP-11232
> URL: https://issues.apache.org/jira/browse/HADOOP-11232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>





[jira] [Updated] (HADOOP-6857) FsShell should report raw disk usage including replication factor

2014-10-25 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-6857:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this. Congratulations Byron.

> FsShell should report raw disk usage including replication factor
> ------------------------------------------------------------------
>
> Key: HADOOP-6857
> URL: https://issues.apache.org/jira/browse/HADOOP-6857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Alex Kozlov
>Assignee: Byron Wong
> Fix For: 2.7.0
>
> Attachments: HADOOP-6857.patch, HADOOP-6857.patch, HADOOP-6857.patch, 
> show-space-consumed.txt
>
>
> Currently FsShell reports HDFS usage with the "hadoop fs -dus " command.  
> Since the replication factor is set per file, it would be nice to also report 
> raw disk usage including the replication factor (maybe "hadoop fs -dus -raw "?). 
>  This will allow assessing resource usage more accurately.  -- Alex K
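For illustration, such raw reporting might look like the following (a hypothetical transcript with made-up paths and sizes; the eventual option name and output format may differ from this sketch):

```
$ hadoop fs -dus -raw /user/alex/data
/user/alex/data  3072    # 1024 logical bytes x replication factor 3
```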





[jira] [Commented] (HADOOP-6857) FsShell should report raw disk usage including replication factor

2014-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184234#comment-14184234
 ] 

Hudson commented on HADOOP-6857:


FAILURE: Integrated in Hadoop-trunk-Commit #6345 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6345/])
HADOOP-6857. FsShell should report raw disk usage including replication factor. 
Contributed by Byron Wong. (shv: rev 28051e415591b8e33dbe954f65230ede23b11683)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/Snapshot.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DirectoryWithQuotaFeature.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java


> FsShell should report raw disk usage including replication factor
> ------------------------------------------------------------------
>
> Key: HADOOP-6857
> URL: https://issues.apache.org/jira/browse/HADOOP-6857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Alex Kozlov
>Assignee: Byron Wong
> Attachments: HADOOP-6857.patch, HADOOP-6857.patch, HADOOP-6857.patch, 
> show-space-consumed.txt
>
>


