[jira] [Commented] (HDFS-6658) Namenode memory optimization - Block replicas list

2014-07-13 Thread Amir Langer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060023#comment-14060023
 ] 

Amir Langer commented on HDFS-6658:
---

Hi [~kihwal] - In response to the scenario of massive block deletes with no 
block adds following them, which leaves a lot of empty array references:
Yes - you're right, and there is currently nothing in the code that takes care 
of it.
We could introduce a check that removes a whole chunk when it is empty, or 
copies some references around in order to use less memory (an algorithm 
similar to defragmentation).
However, this would either add a lot of latency (if done as part of a client 
call), or require a monitor thread, which would force us to make everything 
thread-safe and so add latency to all calls again. In short, the cost of any 
solution is high.

The reason I was reluctant to pay that cost is the scenario in which it 
happens: once we have deleted a lot of blocks, we shouldn't really have a big 
memory shortage (even if the arrays remain, we have cleared all those block 
instances, which is far more memory).
We're actually fine as long as we don't need to add blocks (i.e. there isn't 
much benefit to reclaiming the space).
And once we do need to add blocks, the problem of sparse arrays goes away 
anyway.
In short, yes, the issue is there - but I believe the cost does not justify 
the benefit.
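
For illustration, a minimal sketch of the "remove a whole chunk if it is 
empty" check discussed above, assuming a hypothetical chunked layout where 
each DatanodeStorageInfo keeps its replica indexes in fixed-size int chunks 
(all names here are illustrative, not from the attached design doc):
{noformat}
// Illustrative only: chunked list of block indexes per storage.
class ChunkedReplicaList {
  private static final int EMPTY = -1;           // slot holds no block index
  private final java.util.List<int[]> chunks = new java.util.ArrayList<>();

  void remove(int chunkIdx, int slot) {
    chunks.get(chunkIdx)[slot] = EMPTY;
    releaseIfEmpty(chunkIdx);                    // cheap check on the delete path
  }

  // Drop a chunk only when every slot is EMPTY; avoids full defragmentation
  // and the monitor thread / thread-safety cost discussed above.
  private void releaseIfEmpty(int chunkIdx) {
    for (int v : chunks.get(chunkIdx)) {
      if (v != EMPTY) {
        return;                                  // still has live entries
      }
    }
    chunks.set(chunkIdx, null);                  // release the array; a later add
  }                                              // can reallocate this slot
}
{noformat}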


 Namenode memory optimization - Block replicas list 
 ---

 Key: HDFS-6658
 URL: https://issues.apache.org/jira/browse/HDFS-6658
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.4.1
Reporter: Amir Langer
Assignee: Amir Langer
 Attachments: Namenode Memory Optimizations - Block replicas list.docx


 Part of the memory consumed by every BlockInfo object in the Namenode is a 
 linked list of block references for every DatanodeStorageInfo (called 
 triplets). 
 We propose to change the way we store this list in memory. 
 Using primitive integer indexes instead of object references will reduce the 
 memory needed for every block replica (when compressed oops is disabled), and 
 in our new design the list overhead will be per DatanodeStorageInfo and not 
 per block replica.
 See the attached design doc for details and evaluation results.
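 A minimal sketch of the direction described, assuming (hypothetically) that 
 each replica entry becomes a primitive int index into a per-storage array 
 instead of an object reference; the class and field names are illustrative, 
 not taken from the design doc:
 {noformat}
 // Illustrative only: per-storage replica list backed by primitive ints.
 class StorageReplicaList {
   private int[] blockIndexes = new int[16];  // indexes into a global block table
   private int size;

   void add(int blockIndex) {
     if (size == blockIndexes.length) {       // amortized growth: one array per
       blockIndexes = java.util.Arrays.copyOf(blockIndexes, size * 2);
     }                                        // DatanodeStorageInfo, not per replica
     blockIndexes[size++] = blockIndex;       // 4-byte ints vs. 8-byte references
   }                                          // when compressed oops is disabled
 }
 {noformat}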



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6456) NFS: NFS server should throw error for invalid entry in dfs.nfs.exports.allowed.hosts

2014-07-13 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6456:
-

Attachment: HDFS-6456.patch

 NFS: NFS server should throw error for invalid entry in 
 dfs.nfs.exports.allowed.hosts
 -

 Key: HDFS-6456
 URL: https://issues.apache.org/jira/browse/HDFS-6456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
Assignee: Abhiraj Butala
 Attachments: HDFS-6456.patch


 Pass an invalid entry in dfs.nfs.exports.allowed.hosts, using '-' as the 
 separator between hostname and access permission: 
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1-rw</value></property>
 {noformat}
 This misconfiguration is not detected by the NFS server. It does not print 
 any error message. The host passed in this configuration is also not able to 
 mount NFS. In conclusion, no node can mount NFS with this value. A format 
 check is required for this property. If the value of this property does not 
 follow the format, an error should be thrown.
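 A sketch of the kind of format check being requested, assuming 
 (hypothetically) that valid entries look like "<host> <ro|rw>" and multiple 
 entries are separated by ';'; the method name and regex are illustrative, not 
 the eventual patch:
 {noformat}
 // Illustrative validation for dfs.nfs.exports.allowed.hosts entries.
 static void validateExports(String value) {
   for (String entry : value.split(";")) {
     String[] parts = entry.trim().split("\\s+");
     if (parts.length != 2 || !parts[1].matches("(?i)ro|rw")) {
       // Fail fast with a clear message instead of silently denying mounts.
       throw new IllegalArgumentException(
           "Invalid dfs.nfs.exports.allowed.hosts entry: '" + entry.trim()
           + "'; expected '<host> <ro|rw>'");
     }
   }
 }
 {noformat}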



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6456) NFS: NFS server should throw error for invalid entry in dfs.nfs.exports.allowed.hosts

2014-07-13 Thread Abhiraj Butala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhiraj Butala updated HDFS-6456:
-

Status: Patch Available  (was: Open)

 NFS: NFS server should throw error for invalid entry in 
 dfs.nfs.exports.allowed.hosts
 -

 Key: HDFS-6456
 URL: https://issues.apache.org/jira/browse/HDFS-6456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
Assignee: Abhiraj Butala
 Attachments: HDFS-6456.patch


 Pass an invalid entry in dfs.nfs.exports.allowed.hosts, using '-' as the 
 separator between hostname and access permission: 
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1-rw</value></property>
 {noformat}
 This misconfiguration is not detected by the NFS server. It does not print 
 any error message. The host passed in this configuration is also not able to 
 mount NFS. In conclusion, no node can mount NFS with this value. A format 
 check is required for this property. If the value of this property does not 
 follow the format, an error should be thrown.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6456) NFS: NFS server should throw error for invalid entry in dfs.nfs.exports.allowed.hosts

2014-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060054#comment-14060054
 ] 

Hadoop QA commented on HDFS-6456:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12655441/HDFS-6456.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7332//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7332//console

This message is automatically generated.

 NFS: NFS server should throw error for invalid entry in 
 dfs.nfs.exports.allowed.hosts
 -

 Key: HDFS-6456
 URL: https://issues.apache.org/jira/browse/HDFS-6456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
Assignee: Abhiraj Butala
 Attachments: HDFS-6456.patch


 Pass an invalid entry in dfs.nfs.exports.allowed.hosts, using '-' as the 
 separator between hostname and access permission: 
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1-rw</value></property>
 {noformat}
 This misconfiguration is not detected by the NFS server. It does not print 
 any error message. The host passed in this configuration is also not able to 
 mount NFS. In conclusion, no node can mount NFS with this value. A format 
 check is required for this property. If the value of this property does not 
 follow the format, an error should be thrown.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-3851) Make DFSOutputStream$Packet default constructor reuse the other constructor

2014-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060077#comment-14060077
 ] 

Hudson commented on HDFS-3851:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #611 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/611/])
HDFS-3851. Move attribution to release 2.6.0 section in CHANGES.txt. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1609858)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Make DFSOutputStream$Packet default constructor reuse the other constructor
 --

 Key: HDFS-3851
 URL: https://issues.apache.org/jira/browse/HDFS-3851
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Trivial
 Fix For: 3.0.0, 2.6.0

 Attachments: HDFS-3851.patch, HDFS-3851.patch


 The default constructor of DFSOutputStream$Packet can be made clearer by 
 reusing the other constructor. Also, two members of DFSOutputStream$Packet 
 (offsetInBlock and maxChunks) can be declared final.
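 A minimal sketch of the constructor-chaining pattern being suggested (the 
 class is abbreviated here for illustration; see the attached patch for the 
 actual change):
 {noformat}
 // Illustrative: the no-arg constructor delegates to the main one via
 // this(...), so field initialization lives in a single place and the
 // two members can be declared final.
 class Packet {
   private final long offsetInBlock;
   private final int maxChunks;

   Packet(long offsetInBlock, int maxChunks) {
     this.offsetInBlock = offsetInBlock;
     this.maxChunks = maxChunks;
   }

   Packet() {
     this(0L, 0);   // reuse the other constructor
   }
 }
 {noformat}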



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-2976) Remove unnecessary method (tokenRefetchNeeded) in DFSClient

2014-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060078#comment-14060078
 ] 

Hudson commented on HDFS-2976:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #611 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/611/])
HDFS-2976. Move attribution to release 2.6.0 section in CHANGES.txt. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1609849)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove unnecessary method (tokenRefetchNeeded) in DFSClient
 ---

 Key: HDFS-2976
 URL: https://issues.apache.org/jira/browse/HDFS-2976
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 0.24.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Priority: Trivial
 Fix For: 3.0.0, 2.6.0

 Attachments: HDFS-2976.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6509) distcp vs Data At Rest Encryption

2014-07-13 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6509:
---

Attachment: HDFS-6509distcpandDataatRestEncryption.pdf

I've attached a short document describing a proposed design for how distcp and 
Data at Rest Encryption can work together.

 distcp vs Data At Rest Encryption
 -

 Key: HDFS-6509
 URL: https://issues.apache.org/jira/browse/HDFS-6509
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Reporter: Charles Lamb
Assignee: Charles Lamb
 Attachments: HDFS-6509distcpandDataatRestEncryption.pdf


 distcp needs to work with Data At Rest Encryption



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5624) Add tests for ACLs in combination with viewfs.

2014-07-13 Thread Stephen Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060124#comment-14060124
 ] 

Stephen Chu commented on HDFS-5624:
---

[~cnauroth], thanks a lot for the review and comments! I'll work on updating 
the patch to address your comments.

# Oops, missed that. Will update the patch to follow the pattern. Thanks for 
catching.
# Ditto.
# I'll take a shot at implementing the ViewFs ACL methods in this patch. 
Because the code is similar, it seems it'll be nice to get that into one patch.
# Agreed. Will rename the test, and will also add another test suite to go with 
the added ViewFs ACL implementation.

 Add tests for ACLs in combination with viewfs.
 --

 Key: HDFS-5624
 URL: https://issues.apache.org/jira/browse/HDFS-5624
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.4.0
Reporter: Chris Nauroth
Assignee: Stephen Chu
 Attachments: HDFS-5624.001.patch


 Add tests verifying that in a federated deployment, a viewfs wrapped over 
 multiple federated NameNodes will dispatch the ACL operations to the correct 
 NameNode.
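 A rough sketch of such a test, assuming a hypothetical two-NameNode mount 
 table; the mount points, hosts, and paths are illustrative placeholders, not 
 the attached patch:
 {noformat}
 // Illustrative: verify a viewfs ACL call is dispatched to the right NameNode.
 import static org.junit.Assert.assertTrue;
 import java.net.URI;
 import java.util.Collections;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.AclEntryScope;
 import org.apache.hadoop.fs.permission.AclEntryType;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.junit.Test;

 public class TestViewFsAclDispatchSketch {
   @Test
   public void aclGoesToCorrectNameNode() throws Exception {
     Configuration conf = new Configuration();
     // Hypothetical mount table: /nn0 and /nn1 map to two federated NameNodes.
     conf.set("fs.viewfs.mounttable.default.link./nn0", "hdfs://nn0-host:8020/data");
     conf.set("fs.viewfs.mounttable.default.link./nn1", "hdfs://nn1-host:8020/data");
     FileSystem viewFs = FileSystem.get(URI.create("viewfs:///"), conf);

     AclEntry entry = new AclEntry.Builder()
         .setScope(AclEntryScope.ACCESS)
         .setType(AclEntryType.USER)
         .setName("bruce")
         .setPermission(FsAction.READ_WRITE)
         .build();
     viewFs.modifyAclEntries(new Path("/nn0/file"), Collections.singletonList(entry));

     // The ACL change must land on the first NameNode (and only there).
     FileSystem nn0 = FileSystem.get(URI.create("hdfs://nn0-host:8020/"), conf);
     assertTrue(nn0.getAclStatus(new Path("/data/file")).getEntries().contains(entry));
   }
 }
 {noformat}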



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-07-13 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HDFS-6376:
--

Attachment: HDFS-6376-7-trunk.patch

Backed out changes to DFSUtil.

 Distcp data between two HA clusters requires another configuration
 --

 Key: HDFS-6376
 URL: https://issues.apache.org/jira/browse/HDFS-6376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, federation, hdfs-client
Affects Versions: 2.3.0, 2.4.0
 Environment: Hadoop 2.3.0
Reporter: Dave Marion
 Fix For: 3.0.0

 Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
 HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
 HDFS-6376-6-trunk.patch, HDFS-6376-7-trunk.patch, HDFS-6376-branch-2.4.patch, 
 HDFS-6376-patch-1.patch


 User has to create a third set of configuration files for distcp when 
 transferring data between two HA clusters.
 Consider the scenario in [1]. You cannot put all of the required properties 
 in core-site.xml and hdfs-site.xml for the client to resolve the location of 
 both active namenodes. If you do, then the datanodes from cluster A may join 
 cluster B. I can not find a configuration option that tells the datanodes to 
 federate blocks for only one of the clusters in the configuration.
 [1] 
 http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E
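 To make the conflict concrete, a hedged sketch of the client-side settings 
 this scenario needs (nameservice and host names are hypothetical): listing 
 both HA nameservices lets a distcp client resolve both active namenodes, but 
 a datanode reading the same configuration would try to register with both 
 clusters.
 {noformat}
 // Illustrative only; nameservice and host names are placeholders.
 Configuration conf = new Configuration();
 conf.set("dfs.nameservices", "clusterA,clusterB");   // both clusters visible
 conf.set("dfs.ha.namenodes.clusterA", "nn1,nn2");
 conf.set("dfs.namenode.rpc-address.clusterA.nn1", "a-nn1.example.com:8020");
 conf.set("dfs.namenode.rpc-address.clusterA.nn2", "a-nn2.example.com:8020");
 conf.set("dfs.ha.namenodes.clusterB", "nn1,nn2");
 conf.set("dfs.namenode.rpc-address.clusterB.nn1", "b-nn1.example.com:8020");
 conf.set("dfs.namenode.rpc-address.clusterB.nn2", "b-nn2.example.com:8020");
 // A distcp client needs all of the above; a datanode must see only its own
 // cluster's nameservice, hence the third set of configuration files.
 {noformat}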



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-07-13 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HDFS-6376:
--

Status: Patch Available  (was: Open)

 Distcp data between two HA clusters requires another configuration
 --

 Key: HDFS-6376
 URL: https://issues.apache.org/jira/browse/HDFS-6376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, federation, hdfs-client
Affects Versions: 2.4.0, 2.3.0
 Environment: Hadoop 2.3.0
Reporter: Dave Marion
 Fix For: 3.0.0

 Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
 HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
 HDFS-6376-6-trunk.patch, HDFS-6376-7-trunk.patch, HDFS-6376-branch-2.4.patch, 
 HDFS-6376-patch-1.patch


 User has to create a third set of configuration files for distcp when 
 transferring data between two HA clusters.
 Consider the scenario in [1]. You cannot put all of the required properties 
 in core-site.xml and hdfs-site.xml for the client to resolve the location of 
 both active namenodes. If you do, then the datanodes from cluster A may join 
 cluster B. I can not find a configuration option that tells the datanodes to 
 federate blocks for only one of the clusters in the configuration.
 [1] 
 http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-07-13 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HDFS-6376:
--

Status: Open  (was: Patch Available)

 Distcp data between two HA clusters requires another configuration
 --

 Key: HDFS-6376
 URL: https://issues.apache.org/jira/browse/HDFS-6376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, federation, hdfs-client
Affects Versions: 2.4.0, 2.3.0
 Environment: Hadoop 2.3.0
Reporter: Dave Marion
 Fix For: 3.0.0

 Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
 HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
 HDFS-6376-6-trunk.patch, HDFS-6376-7-trunk.patch, HDFS-6376-branch-2.4.patch, 
 HDFS-6376-patch-1.patch


 User has to create a third set of configuration files for distcp when 
 transferring data between two HA clusters.
 Consider the scenario in [1]. You cannot put all of the required properties 
 in core-site.xml and hdfs-site.xml for the client to resolve the location of 
 both active namenodes. If you do, then the datanodes from cluster A may join 
 cluster B. I can not find a configuration option that tells the datanodes to 
 federate blocks for only one of the clusters in the configuration.
 [1] 
 http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060196#comment-14060196
 ] 

Hadoop QA commented on HDFS-6376:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12655460/HDFS-6376-7-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7333//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7333//console

This message is automatically generated.

 Distcp data between two HA clusters requires another configuration
 --

 Key: HDFS-6376
 URL: https://issues.apache.org/jira/browse/HDFS-6376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, federation, hdfs-client
Affects Versions: 2.3.0, 2.4.0
 Environment: Hadoop 2.3.0
Reporter: Dave Marion
 Fix For: 3.0.0

 Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
 HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
 HDFS-6376-6-trunk.patch, HDFS-6376-7-trunk.patch, HDFS-6376-branch-2.4.patch, 
 HDFS-6376-patch-1.patch


 User has to create a third set of configuration files for distcp when 
 transferring data between two HA clusters.
 Consider the scenario in [1]. You cannot put all of the required properties 
 in core-site.xml and hdfs-site.xml for the client to resolve the location of 
 both active namenodes. If you do, then the datanodes from cluster A may join 
 cluster B. I can not find a configuration option that tells the datanodes to 
 federate blocks for only one of the clusters in the configuration.
 [1] 
 http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6674) UserGroupInformation.loginUserFromKeytab will hang forever if keytab file length is less than 6 bytes.

2014-07-13 Thread liuyang (JIRA)
liuyang created HDFS-6674:
-

 Summary: UserGroupInformation.loginUserFromKeytab will hang 
forever if keytab file length is less than 6 bytes.
 Key: HDFS-6674
 URL: https://issues.apache.org/jira/browse/HDFS-6674
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.1-alpha
Reporter: liuyang
Priority: Minor


The jstack is as follows:
{noformat}
   java.lang.Thread.State: RUNNABLE
	at java.io.FileInputStream.available(Native Method)
	at java.io.BufferedInputStream.available(BufferedInputStream.java:399)
	- locked <0x000745585330> (a sun.security.krb5.internal.ktab.KeyTabInputStream)
	at sun.security.krb5.internal.ktab.KeyTab.load(KeyTab.java:257)
	at sun.security.krb5.internal.ktab.KeyTab.<init>(KeyTab.java:97)
	at sun.security.krb5.internal.ktab.KeyTab.getInstance0(KeyTab.java:124)
	- locked <0x000745586560> (a java.lang.Class for sun.security.krb5.internal.ktab.KeyTab)
	at sun.security.krb5.internal.ktab.KeyTab.getInstance(KeyTab.java:157)
	at javax.security.auth.kerberos.KeyTab.takeSnapshot(KeyTab.java:119)
	at javax.security.auth.kerberos.KeyTab.getEncryptionKeys(KeyTab.java:192)
	at javax.security.auth.kerberos.JavaxSecurityAuthKerberosAccessImpl.keyTabGetEncryptionKeys(JavaxSecurityAuthKerberosAccessImpl.java:36)
	at sun.security.jgss.krb5.Krb5Util.keysFromJavaxKeyTab(Krb5Util.java:381)
	at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:701)
	at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at javax.security.auth.login.LoginContext.invoke(LoginContext.java:784)
	at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
	at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
	at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:718)
	at javax.security.auth.login.LoginContext.login(LoginContext.java:590)
	at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:679)
{noformat}
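
A hedged sketch of a fail-fast guard an application could add before the 
login call (the 6-byte threshold comes from the summary above; the class and 
method names are hypothetical):
{noformat}
// Illustrative guard: reject obviously truncated keytabs up front, since
// loginUserFromKeytab can hang on a keytab file shorter than 6 bytes.
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabGuard {
  static void loginChecked(String principal, String keytabPath) throws IOException {
    File keytab = new File(keytabPath);
    if (!keytab.isFile() || keytab.length() < 6) {
      throw new IOException("Keytab missing or truncated: " + keytabPath);
    }
    UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
  }
}
{noformat}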



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6673) Add Delimited format supports for PB OIV tool

2014-07-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6673:


Attachment: HDFS-6673.000.patch

This patch adds the Delimited format to the new protobuf-based OIV tool.
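
For reference, the processor is selected with the tool's -p flag; a typical 
invocation would look roughly like this (the file paths are placeholders):
{noformat}
hdfs oiv -p Delimited -i /path/to/fsimage -o /tmp/fsimage.txt
{noformat}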

 Add Delimited format supports for PB OIV tool
 -

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-6673.000.patch


 The new oiv tool, which is designed for the Protobuf fsimage, lacks a few 
 features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6673) Add Delimited format supports for PB OIV tool

2014-07-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu reassigned HDFS-6673:
---

Assignee: Lei (Eddy) Xu

 Add Delimited format supports for PB OIV tool
 -

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-6673.000.patch


 The new oiv tool, which is designed for the Protobuf fsimage, lacks a few 
 features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6641) [ HDFS- File Concat ] Concat will fail when block is not full

2014-07-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060325#comment-14060325
 ] 

Brahma Reddy Battula commented on HDFS-6641:


Hi [~cnauroth] 

{quote}
The concat destination file must still maintain the invariant that all blocks 
have the same length, except for possibly the last block, which may be 
partially filled. If this invariant were not maintained, then it could cause 
unpredictable behavior later when a client attempts to read that file.
I'm resolving this issue as Not a Problem, because I believe this is all 
working as designed.
{quote}

You mean the last block should be full when we go for concat (like a 
precondition)? I feel this can be addressed; otherwise we need to provide the 
reason, and then we can close this JIRA. Please correct me if I am wrong.

 [ HDFS- File Concat ] Concat will fail when block is not full
 -

 Key: HDFS-6641
 URL: https://issues.apache.org/jira/browse/HDFS-6641
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula

 Usually we can't ensure the last block is always full... please let me know 
 the purpose of the following check:
 {noformat}
 long blockSize = trgInode.getPreferredBlockSize();
 // check the end block to be full
 final BlockInfo last = trgInode.getLastBlock();
 if (blockSize != last.getNumBytes()) {
   throw new HadoopIllegalArgumentException("The last block in " + target
       + " is not full; last block size = " + last.getNumBytes()
       + " but file block size = " + blockSize);
 }
 {noformat}
 If it is an issue, I'll file a JIRA.
 Following is the trace:
 {noformat}
 Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.HadoopIllegalArgumentException):
 The last block in /Test.txt is not full; last block size = 14 but file block size = 134217728
 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInternal(FSNamesystem.java:1887)
 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInt(FSNamesystem.java:1833)
 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concat(FSNamesystem.java:1795)
 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.concat(NameNodeRpcServer.java:704)
 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.concat(ClientNamenodeProtocolServerSideTranslatorPB.java:512)
 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)