[jira] [Created] (HDFS-4406) Read file failure when the file is not closed in secure mode

2013-01-15 Thread liaowenrui (JIRA)
liaowenrui created HDFS-4406:


 Summary: Read file failure when the file is not closed in secure 
mode
 Key: HDFS-4406
 URL: https://issues.apache.org/jira/browse/HDFS-4406
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.2-alpha, 3.0.0
Reporter: liaowenrui
Priority: Critical


2013-01-14 18:27:06,216 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth 
failed for 160.172.0.11:45176:null
2013-01-14 18:27:06,217 INFO org.apache.hadoop.ipc.Server: IPC Server listener 
on 50020: readAndProcess threw exception javax.security.sasl.SaslException: 
DIGEST-MD5: IO error acquiring password [Caused by 
org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't re-compute 
password for block_token_identifier (expiryDate=1358195226206, 
keyId=1639335405, userId=hbase, blockPoolId=BP-myhacluster-25656, 
blockId=-6489888518203477527, access modes=[READ]), since the required block 
key (keyID=1639335405) doesn't exist.] from client 160.172.0.11. Count of bytes 
read: 0
javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password 
[Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't 
re-compute password for block_token_identifier (expiryDate=1358195226206, 
keyId=1639335405, userId=hbase, blockPoolId=BP-myhacluster-25656, 
blockId=-6489888518203477527, access modes=[READ]), since the required block 
key (keyID=1639335405) doesn't exist.]
at 
com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:577)
at 
com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java:226)
at 
org.apache.hadoop.ipc.Server$Connection.saslReadAndProcess(Server.java:1199)
at 
org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1393)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:710)
at 
org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:509)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:484)
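
For context on the failure above: a block token embeds the keyId of the block 
key it was signed with, and the DataNode recomputes the expected SASL password 
by looking that key up and HMAC-ing the token identifier. If a file stays open 
long enough for the NameNode to roll block keys past that keyId, the lookup 
fails and the DIGEST-MD5 handshake is rejected exactly as logged. Below is a 
minimal sketch of that check; the class and field names are hypothetical, and 
the real logic lives in HDFS's BlockTokenSecretManager.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.security.GeneralSecurityException;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only: names here are hypothetical; the real code is
    // org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.
    public class BlockKeyLookupSketch {

      static class InvalidTokenException extends Exception {
        InvalidTokenException(String msg) { super(msg); }
      }

      // keyId -> secret key material; old entries are evicted as keys roll.
      private final Map<Integer, byte[]> allKeys = new HashMap<Integer, byte[]>();

      // Recompute the SASL password for a block token identifier.
      byte[] retrievePassword(int keyId, byte[] tokenIdentifier)
          throws InvalidTokenException {
        byte[] key = allKeys.get(keyId);
        if (key == null) {
          // The path reported in the log above: the token references a block
          // key this DataNode no longer holds, so SASL authentication fails.
          throw new InvalidTokenException("Can't re-compute password, since"
              + " the required block key (keyID=" + keyId + ") doesn't exist");
        }
        try {
          Mac mac = Mac.getInstance("HmacSHA1");        // password = HMAC(key, id)
          mac.init(new SecretKeySpec(key, "HmacSHA1"));
          return mac.doFinal(tokenIdentifier);
        } catch (GeneralSecurityException e) {
          throw new IllegalStateException(e);
        }
      }
    }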


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #495

2013-01-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/495/changes

Changes:

[tgraves] HADOOP-9181. Set daemon flag for HttpServer's QueuedThreadPool (Liang 
Xie via tgraves)

[tgraves] YARN-170. NodeManager stop() gets called twice on shutdown (Sandy 
Ryza via tgraves)

[tgraves] HADOOP-9097. Maven RAT plugin is not checking all source files 
(tgraves)

[tgraves] HDFS-4385. Maven RAT plugin is not checking all source files (tgraves)

[tgraves] MAPREDUCE-4934. Maven RAT plugin is not checking all source files 
(tgraves)

[tgraves] YARN-334. Maven RAT plugin is not checking all source files (tgraves)

------------------------------------------
[...truncated 9014 lines...]
 [exec] 
 [exec] validate-skins:
 [exec] 
 [exec] validate-skinchoice:
 [exec] ...validated existence of skin 'pelt'
 [exec] 
 [exec] validate-stylesheets:
 [exec] 
 [exec] validate:
 [exec] 
 [exec] site:
 [exec] 
 [exec] Copying the various non-generated resources to site.
 [exec] Warnings will be issued if the optional project resources are not 
found.
 [exec] This is often the case, because they are optional and so may not be 
available.
 [exec] Copying project resources and images to site ...
 [exec] Copied 1 empty directory to 1 empty directory under 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
 [exec] Copying main skin images to site ...
 [exec] Created dir: 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images
 [exec] Copying 20 files to 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images
 [exec] Copying 14 files to 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin/images
 [exec] Warning: 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common/images
 not found.
 [exec] Warning: 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt/images
 not found.
 [exec] Warning: 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/common
 not found.
 [exec] Warning: 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/src/documentation/skins/pelt
 not found.
 [exec] Copying project skin images to site ...
 [exec] Copying main skin css and js files to site ...
 [exec] Copying 11 files to 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin
 [exec] Copied 4 empty directories to 3 empty directories under 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin
 [exec] Copying 4 files to 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/skin
 [exec] Copying project skin css and js files to site ...
 [exec] 
 [exec] Finished copying the non-generated resources.
 [exec] Now Cocoon will generate the rest.
 [exec]   
 [exec] 
 [exec] Static site will be generated at:
 [exec] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
 [exec] 
 [exec] Cocoon will report the status of each document:
 [exec]   - in column 1: *=okay X=brokenLink ^=pageSkipped (see FAQ).
 [exec]   
 [exec] 
 
 [exec] cocoon 2.1.12-dev
 [exec] Copyright (c) 1999-2007 Apache Software Foundation. All rights 
reserved.
 [exec] 
 
 [exec] 
 [exec] 
 [exec] ^api/index.html
 [exec] ^jdiff/changes.html
 [exec] ^releasenotes.html
 [exec] ^changes.html
 [exec] * [1/26][26/30]   2.557s 8.6Kb   linkmap.html
 [exec] ^api/index.html
 [exec] ^jdiff/changes.html
 [exec] ^releasenotes.html
 [exec] ^changes.html
 [exec] * [2/26][1/29]0.845s 19.4Kb  hdfs_permissions_guide.html
 [exec] ^api/index.html
 [exec] ^jdiff/changes.html
 [exec] ^  

Hadoop-Hdfs-0.23-Build - Build # 495 - Still Failing

2013-01-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/495/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 9207 lines...]
 [exec] * Even if only one link is broken, you will still get failed.
 [exec] * Your site would still be generated, but some pages would be 
broken.
 [exec]   - See 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site/broken-links.xml
 [exec] 
 [exec] Total time: 18 seconds
 [exec] 
 [exec]   Copying broken links file to site root.
 [exec]   
 [exec] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/docs-src/build/site
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:58.467s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:59.083s
[INFO] Finished at: Tue Jan 15 11:35:42 UTC 2013
[INFO] Final Memory: 37M/468M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project 
hadoop-hdfs: An Ant BuildException has occured: exec returned: 1 - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Publishing Javadoc
ERROR: Publisher hudson.tasks.JavadocArchiver aborted due to exception
java.lang.IllegalStateException: basedir 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/site/api
 does not exist.
at org.apache.tools.ant.DirectoryScanner.scan(DirectoryScanner.java:879)
at hudson.FilePath$37.hasMatch(FilePath.java:2109)
at hudson.FilePath$37.invoke(FilePath.java:2006)
at hudson.FilePath$37.invoke(FilePath.java:1996)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2309)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Recording fingerprints
Updating HADOOP-9181
Updating YARN-334
Updating HADOOP-9097
Updating YARN-170
Updating HDFS-4385
Updating MAPREDUCE-4934
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
All tests passed

Hadoop-Hdfs-trunk - Build # 1286 - Still Failing

2013-01-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 10854 lines...]
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.264 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.408 sec
Running org.apache.hadoop.fs.TestVolumeId
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec

Results :

Failed tests:   
testBalancerEndInNoMoveProgress(org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup)

Tests in error: 
  
testBalancerWithNodeGroup(org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup):
 test timed out after 6 milliseconds

Tests run: 1662, Failures: 1, Errors: 1, Skipped: 6

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:20:25.131s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:20:25.899s
[INFO] Finished at: Tue Jan 15 12:54:21 UTC 2013
[INFO] Final Memory: 16M/671M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating YARN-334
Updating HADOOP-9097
Updating HDFS-4364
Updating HADOOP-9203
Updating MAPREDUCE-4938
Updating HDFS-4375
Updating HDFS-3429
Updating HADOOP-9178
Updating HDFS-4385
Updating HADOOP-9202
Updating MAPREDUCE-4934
Updating YARN-330
Updating HDFS-4369
Updating YARN-328
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1286

2013-01-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/changes

Changes:

[eli] Add missing file from previous commit.

[eli] HADOOP-9178. src/main/conf is missing hadoop-policy.xml. Contributed by 
Sandy Ryza

[suresh] HDFS-4375. Use token request messages defined in hadoop common. 
Contributed by Suresh Srinivas.

[suresh] YARN-328. Use token request messages defined in hadoop common. 
Contributed by Suresh Srinivas.

[suresh] MAPREDUCE-4938. Use token request messages defined in hadoop common. 
Contributed by Suresh Srinivas.

[suresh] HADOOP-9203. RPCCallBenchmark should find a random available port. 
Contributed by Andrew Purtell.

[suresh] HDFS-4369. GetBlockKeysResponseProto does not handle null response. 
Contributed by Suresh Srinivas.

[suresh] HDFS-4364. GetLinkTargetResponseProto does not handle null path. 
Contributed by Suresh Srinivas.

[hitesh] YARN-330. Fix flakey test: 
TestNodeManagerShutdown#testKillContainersOnShutdown. Contributed by Sandy Ryza

[todd] HDFS-3429. DataNode reads checksums even if client does not need them. 
Contributed by Todd Lipcon.

[bobby] HADOOP-9202. test-patch.sh fails during mvn eclipse:eclipse if patch 
adds a new module to the build (Chris Nauroth via bobby)

[tgraves] HADOOP-9097. Maven RAT plugin is not checking all source files 
(tgraves)

[tgraves] HDFS-4385. Maven RAT plugin is not checking all source files (tgraves)

[tgraves] MAPREDUCE-4934. Maven RAT plugin is not checking all source files 
(tgraves)

[tgraves] YARN-334. Maven RAT plugin is not checking all source files (tgraves)

------------------------------------------
[...truncated 10661 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.467 sec
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.497 sec
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 138.584 sec
Running org.apache.hadoop.hdfs.TestLeaseRecovery
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.485 sec
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.959 sec
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.35 sec
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.056 sec
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.655 sec
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.508 sec
Running org.apache.hadoop.hdfs.TestDFSMkdirs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.763 sec
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.798 sec
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.889 sec
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.163 sec
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.526 sec
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.177 sec
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.698 sec
Running org.apache.hadoop.hdfs.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.022 sec
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.782 sec
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.224 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.663 sec
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.425 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.504 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.348 sec
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.893 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 2, 

[jira] [Created] (HDFS-4407) In INodeDirectoryWithSnapshot, change combinePostDiff to be merge-sort-like

2013-01-15 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HDFS-4407:


 Summary: In INodeDirectoryWithSnapshot, change combinePostDiff to be 
merge-sort-like
 Key: HDFS-4407
 URL: https://issues.apache.org/jira/browse/HDFS-4407
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE


Change combinePostDiff to a merge-sort-like algorithm so that it is more 
efficient.  Also, it should not modify the postDiff parameter.
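
To illustrate the proposed approach, here is a hedged sketch with generic 
types, not the actual INodeDirectoryWithSnapshot code: when both diff lists 
are kept sorted, they can be combined in a single O(m+n) merge pass, the way 
merge sort merges two runs, and writing the result into a fresh list leaves 
the postDiff argument unmodified.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Illustrative merge-sort-style combine; names and types are hypothetical.
    public final class CombineDiffSketch {

      // Merge two lists, each sorted by cmp, into a new sorted list in O(m+n).
      // Neither input is modified, so the postDiff parameter stays intact.
      static <T> List<T> combine(List<T> earlier, List<T> postDiff,
                                 Comparator<? super T> cmp) {
        List<T> merged = new ArrayList<T>(earlier.size() + postDiff.size());
        int i = 0, j = 0;
        while (i < earlier.size() && j < postDiff.size()) {
          // A real diff combine would fold entries with equal keys into one
          // entry instead of keeping both; that is elided here for brevity.
          if (cmp.compare(earlier.get(i), postDiff.get(j)) <= 0) {
            merged.add(earlier.get(i++));
          } else {
            merged.add(postDiff.get(j++));
          }
        }
        while (i < earlier.size()) merged.add(earlier.get(i++));
        while (j < postDiff.size()) merged.add(postDiff.get(j++));
        return merged;
      }
    }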



[jira] [Created] (HDFS-4408) Balancer for DataNode's Volumes

2013-01-15 Thread Li Junjun (JIRA)
Li Junjun created HDFS-4408:
---------------------------

 Summary: Balancer for DataNode's Volumes
 Key: HDFS-4408
 URL: https://issues.apache.org/jira/browse/HDFS-4408
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Li Junjun


After running for a long time, I found some DataNodes where volumes a, b, c, 
and d have the same total space and configuration, but volume a is 100% used 
while b, c, and d are not. This may be caused by writing and deleting files 
many times.

So it would be better to write a balancer for a DataNode's volumes, to improve 
the cluster's write performance:

before balancing: 4 write requests are handled by 3 volumes.
after balancing:  4 write requests are handled by 4 volumes.

Also, since computation usually follows the writes, it would improve read 
performance as well.
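
One way to picture the idea, as a rough sketch under assumed interfaces (the 
real DataNode volume APIs differ, and block moves would have to go through the 
dataset layer rather than plain file moves): repeatedly shift one block 
replica from the fullest volume to the emptiest until the usage ratios 
converge, after which new writes can again land on all volumes.

    import java.util.List;

    // Hypothetical sketch of intra-DataNode volume balancing; not a real
    // DataNode API.
    public final class VolumeBalancerSketch {

      interface Volume {
        long getCapacity();                 // total bytes on this volume
        long getUsed();                     // bytes currently used
        void moveOneBlockTo(Volume target); // relocate one block replica
      }

      // Move blocks from the fullest to the emptiest volume until all usage
      // ratios are within `threshold` of each other, or the move budget runs out.
      static void balance(List<Volume> volumes, double threshold, int maxMoves) {
        for (int moves = 0; moves < maxMoves; moves++) {
          Volume fullest = null, emptiest = null;
          for (Volume v : volumes) {
            if (fullest == null || ratio(v) > ratio(fullest)) fullest = v;
            if (emptiest == null || ratio(v) < ratio(emptiest)) emptiest = v;
          }
          // Stop once the spread between volumes is within the threshold.
          if (fullest == null || ratio(fullest) - ratio(emptiest) <= threshold) {
            return;
          }
          fullest.moveOneBlockTo(emptiest); // shift one replica per iteration
        }
      }

      private static double ratio(Volume v) {
        return (double) v.getUsed() / v.getCapacity();
      }
    }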



