[jira] [Reopened] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu reopened HDFS-6299:
---


I reopened this issue because I found more than two problems during my review.

 Protobuf for XAttr and client-side implementation 
 --

 Key: HDFS-6299
 URL: https://issues.apache.org/jira/browse/HDFS-6299
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Uma Maheswara Rao G
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6299.patch


 This JIRA tracks the protobuf changes for XAttrs and the implementation of 
 the XAttr interfaces in DistributedFileSystem and DFSClient. 
 With this JIRA we may just keep the dummy implementation of the XAttr API of 
 ClientProtocol in NameNodeRpcServer.
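
As a rough illustration of the client-side surface this work exposes (a sketch 
only, not code from the attached patch; the FileSystem-level 
setXAttr/getXAttr/removeXAttr calls, the XAttrSetFlag enum, and the demo path 
are assumptions here):

import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.XAttrSetFlag;

public class XAttrClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // With fs.defaultFS pointing at HDFS this returns a DistributedFileSystem.
    FileSystem fs = FileSystem.get(conf);
    Path p = new Path("/tmp/xattr-demo");
    fs.mkdirs(p);

    // Create a user-namespace extended attribute on the directory.
    fs.setXAttr(p, "user.checksum", "abc123".getBytes("UTF-8"),
        EnumSet.of(XAttrSetFlag.CREATE));

    // Read the raw value back.
    byte[] value = fs.getXAttr(p, "user.checksum");
    System.out.println(new String(value, "UTF-8"));

    // Remove the attribute again.
    fs.removeXAttr(p, "user.checksum");
  }
}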



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6299.
---

Resolution: Fixed

 Protobuf for XAttr and client-side implementation 
 --

 Key: HDFS-6299
 URL: https://issues.apache.org/jira/browse/HDFS-6299
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Uma Maheswara Rao G
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6299.patch


 This JIRA tracks the protobuf changes for XAttrs and the implementation of 
 the XAttr interfaces in DistributedFileSystem and DFSClient. 
 With this JIRA we may just keep the dummy implementation of the XAttr API of 
 ClientProtocol in NameNodeRpcServer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu reopened HDFS-6299:
---


This cannot be closed yet, so I am reopening it. The commit should be reverted, 
and these review comments should be addressed here.

 Protobuf for XAttr and client-side implementation 
 --

 Key: HDFS-6299
 URL: https://issues.apache.org/jira/browse/HDFS-6299
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Uma Maheswara Rao G
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6299.patch


 This JIRA tracks the protobuf changes for XAttrs and the implementation of 
 the XAttr interfaces in DistributedFileSystem and DFSClient. 
 With this JIRA we may just keep the dummy implementation of the XAttr API of 
 ClientProtocol in NameNodeRpcServer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6299.
---

Resolution: Fixed

I have created a JIRA; let's discuss your comments there.

 Protobuf for XAttr and client-side implementation 
 --

 Key: HDFS-6299
 URL: https://issues.apache.org/jira/browse/HDFS-6299
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Uma Maheswara Rao G
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6299.patch


 This JIRA tracks the protobuf changes for XAttrs and the implementation of 
 the XAttr interfaces in DistributedFileSystem and DFSClient. 
 With this JIRA we may just keep the dummy implementation of the XAttr API of 
 ClientProtocol in NameNodeRpcServer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6309) Javadocs for XAttr APIs in DFSClient and other minor fixups

2014-04-30 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-6309:
-

 Summary: Javadocs for XAttr APIs in DFSClient and other minor fixups
 Key: HDFS-6309
 URL: https://issues.apache.org/jira/browse/HDFS-6309
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Uma Maheswara Rao G


Some javadoc improvements and minor comment fixups from HDFS-6299
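
For illustration, the kind of javadoc this could add to the XAttr client APIs 
might look like the sketch below; the method signature is assumed for the 
example and is not quoted from the DFSClient patch.

import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.fs.XAttrSetFlag;

/** Illustrative only: a javadoc style the XAttr client methods could adopt. */
interface XAttrClientOps {
  /**
   * Set an extended attribute on a file or directory.
   *
   * @param src   path of the file or directory
   * @param name  xattr name, prefixed with a namespace such as "user."
   * @param value xattr value; may be null to create the name without a value
   * @param flag  CREATE to add a new xattr, REPLACE to overwrite an existing one
   * @throws IOException if the NameNode rejects the request
   */
  void setXAttr(String src, String name, byte[] value, EnumSet<XAttrSetFlag> flag)
      throws IOException;
}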



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2014-04-30 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6310:
---

 Summary: PBImageXmlWriter should output information about 
Delegation Tokens
 Key: HDFS-6310
 URL: https://issues.apache.org/jira/browse/HDFS-6310
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.4.0
Reporter: Akira AJISAKA


Separated from HDFS-6293.
The 2.4.0 protobuf fsimage does contain delegation tokens, but 
OfflineImageViewer with the -XML option does not show any of them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6311) TestLargeBlock#testLargeBlockSize : File /tmp/TestLargeBlock/2147484160.dat could only be replicated to 0 nodes instead of minReplication (=1)

2014-04-30 Thread Tony Reix (JIRA)
Tony Reix created HDFS-6311:
---

 Summary: TestLargeBlock#testLargeBlockSize : File 
/tmp/TestLargeBlock/2147484160.dat could only be replicated to 0 nodes instead 
of minReplication (=1)
 Key: HDFS-6311
 URL: https://issues.apache.org/jira/browse/HDFS-6311
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
 Environment: Virtual Box - Ubuntu 14.04 - x86_64
Reporter: Tony Reix


I'm testing HDFS 2.4.0.

Apache Hadoop HDFS: Tests run: 2650, Failures: 2, Errors: 
2, Skipped: 99

I have the following error each time I launch my tests (3 tries).

Forking command line: /bin/sh -c cd 
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs  
/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Xmx1024m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter2355654085353142996.jar
 
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire983005167523288650tmp
 
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4328161716955453811297tmp

Running org.apache.hadoop.hdfs.TestLargeBlock

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 16.011 sec  
FAILURE! - in org.apache.hadoop.hdfs.TestLargeBlock
testLargeBlockSize(org.apache.hadoop.hdfs.TestLargeBlock)  Time elapsed: 15.549 
sec   ERROR!

org.apache.hadoop.ipc.RemoteException: File /tmp/TestLargeBlock/2147484160.dat 
could only be replicated to 0 nodes instead of minReplication (=1).  There are 
1 datanode(s) running and no node(s) are excluded in this operation.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1430)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2684)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2008)

at org.apache.hadoop.ipc.Client.call(Client.java:1410)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)




--
This message was sent by Atlassian JIRA
(v6.2#6252)


Build failed in Jenkins: Hadoop-Hdfs-trunk #1747

2014-04-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1747/changes

Changes:

[atm] HADOOP-10543. RemoteException's unwrapRemoteException method failed for 
PathIOException. Contributed by Yongjun Zhang.

[jeagles] HDFS-6269. NameNode Audit Log should differentiate between webHDFS 
open and HDFS open. (Eric Payne via jeagles)

[jeagles] MAPREDUCE-5638. Port Hadoop Archives document to trunk (Akira AJISAKA 
via jeagles)

[arp] HADOOP-10547. Fix CHANGES.txt

[arp] HADOOP-10547. Give SaslPropertiesResolver.getDefaultProperties() public 
scope. (Contributed by Benoy Antony)

[vinodkv] YARN-1929. Fixed a deadlock in ResourceManager that occurs when 
failover happens right at the time of shutdown. Contributed by Karthik Kambatla.

[jlowe] YARN-738. TestClientRMTokens is failing irregularly while running all 
yarn tests. Contributed by Ming Ma

--
[...truncated 12736 lines...]
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.906 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestClusterId
Running org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.092 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
Running org.apache.hadoop.hdfs.server.namenode.TestClusterJspHelper
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.849 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestClusterJspHelper
Running org.apache.hadoop.hdfs.server.namenode.TestNameCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameCache
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.311 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Running org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.827 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.963 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
Running org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.001 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.136 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.27 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Running org.apache.hadoop.hdfs.server.namenode.TestFSPermissionChecker
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.672 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSPermissionChecker
Running org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.006 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Running org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.327 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.323 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.122 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Running org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.063 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Running org.apache.hadoop.hdfs.server.namenode.TestCorruptFilesJsp
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.976 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestCorruptFilesJsp
Running org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.49 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.661 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.765 sec - in 

Hadoop-Hdfs-trunk - Build # 1747 - Still Failing

2014-04-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1747/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 12929 lines...]
main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE 
[2:11:07.338s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [2.688s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 2:11:11.750s
[INFO] Finished at: Wed Apr 30 13:52:34 UTC 2014
[INFO] Final Memory: 33M/336M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-10543
Updating YARN-738
Updating YARN-1929
Updating HDFS-6269
Updating HADOOP-10547
Updating MAPREDUCE-5638
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.testBalancerWithRackLocality

Error Message:
expected:<1800> but was:<1810>

Stack Trace:
java.lang.AssertionError: expected:<1800> but was:<1810>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:253)




[jira] [Resolved] (HDFS-6302) Implement XAttr as an INode feature.

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6302.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

Thanks a lot, Yi, for the patch. 
Also thanks a lot, Charles, for the review!

I have just committed this patch to the branch!
Please note that I reformatted the code pasted above while committing.


 Implement XAttr as an INode feature.
 ---

 Key: HDFS-6302
 URL: https://issues.apache.org/jira/browse/HDFS-6302
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6302.1.patch, HDFS-6302.2.patch, HDFS-6302.patch


 XAttr is based on the INode feature work (HDFS-5284).
 Persisting XAttrs in the fsimage and edit log is handled by HDFS-6301.
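
 A simplified, hypothetical sketch of the idea (not the committed HDFS-6302 
 code): the xattrs live in an immutable feature object that an INode carries 
 only when it actually has xattrs.

import java.util.Collections;
import java.util.List;

class XAttrFeatureSketch {

  /** Stand-in for the NameNode's INode feature marker interface (HDFS-5284). */
  interface Feature {}

  /** Minimal xattr: a namespace-qualified name plus raw value bytes. */
  static class XAttr {
    final String name;   // e.g. "user.checksum"
    final byte[] value;
    XAttr(String name, byte[] value) { this.name = name; this.value = value; }
  }

  /** The feature itself: an immutable list of xattrs attached to one INode. */
  static class XAttrFeature implements Feature {
    private final List<XAttr> xattrs;
    XAttrFeature(List<XAttr> xattrs) {
      this.xattrs = Collections.unmodifiableList(xattrs);
    }
    List<XAttr> getXAttrs() { return xattrs; }
  }
}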



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6312) WebHdfs HA failover is broken on secure clusters

2014-04-30 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6312:
-

 Summary: WebHdfs HA failover is broken on secure clusters
 Key: HDFS-6312
 URL: https://issues.apache.org/jira/browse/HDFS-6312
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.4.0, 3.0.0
Reporter: Daryn Sharp
Priority: Blocker


When WebHdfs does a failover, it blanks out the delegation token, so subsequent 
operations against the other NN must acquire a new token. Tasks cannot acquire 
a token (they have no Kerberos credentials), so jobs will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6313) WebHdfs may use the wrong NN when configured for multiple HA NNs

2014-04-30 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6313:
-

 Summary: WebHdfs may use the wrong NN when configured for multiple 
HA NNs
 Key: HDFS-6313
 URL: https://issues.apache.org/jira/browse/HDFS-6313
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.4.0, 3.0.0
Reporter: Daryn Sharp


WebHdfs resolveNNAddr returns the union of the addresses of all HA-configured 
NNs, so the client may access the wrong NN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6314) Testcases for XAttrs

2014-04-30 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6314:


 Summary: Testcases for XAttrs
 Key: HDFS-6314
 URL: https://issues.apache.org/jira/browse/HDFS-6314
 Project: Hadoop HDFS
  Issue Type: Task
  Components: test
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)


Tests NameNode interaction for all XAttr APIs, covering NameNode restart and 
saving a new checkpoint.
Tests XAttrs with snapshots and symlinks.
Tests XAttrs across HA failover.
And more...
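
A minimal sketch of one such test (assuming the MiniDFSCluster test harness, 
the FileSystem-level XAttr API, and that HDFS-6301 persistence is in place for 
the restart case):

import static org.junit.Assert.assertArrayEquals;

import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.XAttrSetFlag;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestXAttrRoundTripSketch {

  @Test
  public void xattrSurvivesNameNodeRestart() throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();
      Path dir = new Path("/xattr-test");
      fs.mkdirs(dir);

      byte[] value = "v1".getBytes("UTF-8");
      fs.setXAttr(dir, "user.a1", value, EnumSet.of(XAttrSetFlag.CREATE));

      // Restart the NameNode and check the xattr is still there.
      cluster.restartNameNode();
      fs = cluster.getFileSystem();
      assertArrayEquals(value, fs.getXAttr(dir, "user.a1"));
    } finally {
      cluster.shutdown();
    }
  }
}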



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6315) Decouple recording edit logs from FSDirectory

2014-04-30 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-6315:


 Summary: Decouple recording edit logs from FSDirectory
 Key: HDFS-6315
 URL: https://issues.apache.org/jira/browse/HDFS-6315
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6315.000.patch

Currently both FSNamesystem and FSDirectory record edit logs. This design 
requires FSNamesystem and FSDirectory to be tightly coupled in order to 
implement a durable namespace.

This JIRA proposes to separate the responsibility of implementing the 
namespace from that of providing durability with edit logs. Specifically, 
FSDirectory implements the namespace (with no edit log operations), and 
FSNamesystem implements durability by recording the edit logs.
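
A simplified illustration of the proposed split (not the attached patch): the 
directory layer only mutates the in-memory namespace and reports what changed, 
and the namesystem layer is the single place that records edits.

import java.util.ArrayList;
import java.util.List;

class EditLogDecouplingSketch {

  /** Describes a completed namespace change, e.g. a directory creation. */
  static class MkdirOp {
    final String path;
    MkdirOp(String path) { this.path = path; }
  }

  /** FSDirectory-like layer: namespace only, no edit logging here. */
  static class Directory {
    MkdirOp mkdir(String path) {
      // ... update the in-memory tree ...
      return new MkdirOp(path);        // report what changed
    }
  }

  /** FSNamesystem-like layer: the single place that records edits. */
  static class Namesystem {
    private final Directory dir = new Directory();
    private final List<MkdirOp> editLog = new ArrayList<MkdirOp>();

    void mkdirs(String path) {
      MkdirOp op = dir.mkdir(path);    // 1) mutate the namespace
      editLog.add(op);                 // 2) record the edit
    }
  }
}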



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6316) Only one replica for each block in single-rack cluster

2014-04-30 Thread Wenwu Peng (JIRA)
Wenwu Peng created HDFS-6316:


 Summary: Only one replica for each block in single-rack cluster
 Key: HDFS-6316
 URL: https://issues.apache.org/jira/browse/HDFS-6316
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 1.2.1
Reporter: Wenwu Peng


In HDFS 1, only one replica is placed for each block in a single-rack cluster 
when nodegroup awareness is enabled. In my environment there is a single rack 
with four nodegroups, and each block gets only one replica even though 
replication=3.

Steps:
1. Set up a Hadoop cluster with all physical machines in the same rack
2. Run hadoop fs -copyFromLocal /data /test
3. Run hadoop fsck /test -files -locations -blocks -racks

Topology:
192.168.0.1   /default-rack/nodegroup1
192.168.0.2   /default-rack/nodegroup1
192.168.0.3   /default-rack/nodegroup1
192.168.0.4   /default-rack/nodegroup2
192.168.0.5  /default-rack/nodegroup2
192.168.0.6  /default-rack/nodegroup2
192.168.0.7   /default-rack/nodegroup3
192.168.0.8   /default-rack/nodegroup3
192.168.0.9  /default-rack/nodegroup4
192.168.0.19   /default-rack/nodegroup4

NameNode log:
2014-01-14 00:51:19,884 WARN 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough 
replicas, still in need of 2 to reach 3
Not able to place enough replicas

Note: 
1. This does not occur in Hadoop 2.2.0.
2. This does not occur when there is more than one rack.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6317) Add snapshot quota

2014-04-30 Thread Alex Shafer (JIRA)
Alex Shafer created HDFS-6317:
-

 Summary: Add snapshot quota
 Key: HDFS-6317
 URL: https://issues.apache.org/jira/browse/HDFS-6317
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Alex Shafer


Either allow the 65k snapshot limit to be set with a configuration option, or 
add a per-directory snapshot quota settable with the `hdfs dfsadmin` CLI and 
viewable by appending fields to the `hdfs dfs -count -q` output.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6318) refreshServiceAcl cannot affect both active NN and standby NN

2014-04-30 Thread Fengdong Yu (JIRA)
Fengdong Yu created HDFS-6318:
-

 Summary: refreshServiceAcl cannot affect both active NN and 
standby NN
 Key: HDFS-6318
 URL: https://issues.apache.org/jira/browse/HDFS-6318
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, namenode
Affects Versions: 2.4.0
Reporter: Fengdong Yu
Assignee: Fengdong Yu


refreshServiceAcl does not affect both the active NN and the standby NN; it 
only selects one NN on which to reload the ACL configuration. The ACL should 
be reloaded on both the active and the standby NN.
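
A rough sketch of the requested behaviour (not a patch; both helper methods 
below are hypothetical placeholders for DFSAdmin's real address-resolution and 
RPC-proxy plumbing):

import java.net.InetSocketAddress;
import java.util.List;

class RefreshServiceAclOnAllNNsSketch {

  /** Refresh the service ACL on every configured NameNode, not just one. */
  void refreshServiceAclOnAllNameNodes() throws Exception {
    for (InetSocketAddress nnAddr : resolveAllNameNodeAddresses()) {
      refreshServiceAclOn(nnAddr);   // hypothetical per-NN refresh call
    }
  }

  /** Hypothetical: resolve the RPC addresses of the active and standby NNs. */
  List<InetSocketAddress> resolveAllNameNodeAddresses() {
    throw new UnsupportedOperationException("placeholder");
  }

  /** Hypothetical: issue refreshServiceAcl against one NameNode address. */
  void refreshServiceAclOn(InetSocketAddress nnAddr) {
    throw new UnsupportedOperationException("placeholder");
  }
}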



--
This message was sent by Atlassian JIRA
(v6.2#6252)