[jira] [Created] (HDFS-8463) Calling DFSInputStream.seekToNewSource just after stream creation causes NullPointerException

2015-05-22 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HDFS-8463:
--

 Summary: Calling DFSInputStream.seekToNewSource just after stream 
creation causes  NullPointerException
 Key: HDFS-8463
 URL: https://issues.apache.org/jira/browse/HDFS-8463
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
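The report body is empty; from the title, one plausible shape of the bug is that seekToNewSource() dereferences state (such as the current datanode) that is only set by the first read. A toy model of that pattern and a null guard, purely illustrative and not DFSInputStream's actual code:

```java
public class SeekToNewSourceToy {
    private String currentNode;            // set lazily by the first read

    void read() {
        currentNode = "dn1";               // stands in for datanode selection
    }

    // Guarded variant: before any read there is no "current" source to move
    // away from, so return false instead of dereferencing null.
    boolean seekToNewSource() {
        if (currentNode == null) {
            return false;                  // nothing read yet
        }
        return !currentNode.isEmpty();     // pretend another node was chosen
    }
}
```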






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8459) Question: Why doesn't the NameNode check the status of replicas when converting block status from committed to complete?

2015-05-22 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-8459.
-
Resolution: Invalid

Apache JIRA is for reporting bugs or filing proposed enhancements or features, 
not for end-user questions. I recommend e-mailing u...@hadoop.apache.org 
with this question.

 Question: Why doesn't the NameNode check the status of replicas when 
 converting block status from committed to complete? 
 -

 Key: HDFS-8459
 URL: https://issues.apache.org/jira/browse/HDFS-8459
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: cuiyang

   Why doesn't the NameNode check the status of replicas when converting block 
 status from committed to complete?
   When the client finishes writing a block and calls namenode::complete(), the 
 NameNode does the following
   (in BlockManager::commitOrCompleteLastBlock):
final boolean b = commitBlock((BlockInfoUnderConstruction)lastBlock, 
 commitBlock);
   if (countNodes(lastBlock).liveReplicas() >= minReplication)
 completeBlock(bc, bc.numBlocks()-1, false);
   return b;
  
   But the NameNode doesn't care how many replicas of this block are in the 
 finalized state! 
   It should be: if no replica's status is finalized, the block should not be 
 converted to complete status!
   Because, according to appendDesign3.pdf 
 (https://issues.apache.org/jira/secure/attachment/12445209/appendDesign3.pdf):
 Complete: A complete block is a block whose length and GS are finalized and 
 NameNode has seen a GS/len matched finalized replica of the block.
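A hedged sketch of the proposed condition (the names and types are illustrative, not BlockManager's actual API): in addition to the existing live-replica count, require at least one FINALIZED replica whose GS/length match the committed block.

```java
import java.util.List;

public class CompleteCheck {
    enum ReplicaState { FINALIZED, RBW }

    static class Replica {
        final ReplicaState state; final long gs; final long len;
        Replica(ReplicaState state, long gs, long len) {
            this.state = state; this.gs = gs; this.len = len;
        }
    }

    // Proposed rule: besides liveReplicas >= minReplication, require a
    // FINALIZED replica with matching GS/length before converting the
    // block to COMPLETE, per the appendDesign3 definition of "complete".
    static boolean canComplete(List<Replica> replicas, long gs, long len,
                               int liveReplicas, int minReplication) {
        boolean sawFinalized = false;
        for (Replica r : replicas) {
            if (r.state == ReplicaState.FINALIZED && r.gs == gs && r.len == len) {
                sawFinalized = true;
            }
        }
        return liveReplicas >= minReplication && sawFinalized;
    }
}
```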





[jira] [Created] (HDFS-8464) hdfs namenode UI shows Max Non Heap Memory is -1 B

2015-05-22 Thread tongshiquan (JIRA)
tongshiquan created HDFS-8464:
-

 Summary: hdfs namenode UI shows Max Non Heap Memory is -1 B
 Key: HDFS-8464
 URL: https://issues.apache.org/jira/browse/HDFS-8464
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
 Environment: suse11.3
Reporter: tongshiquan
Priority: Minor
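The report body is empty, but the value is consistent with how the JVM reports memory: MemoryUsage.getMax() returns -1 when the maximum is undefined (e.g. no non-heap limit is configured), and the UI apparently renders that raw value. A small demonstration; the render helper is hypothetical, not the NameNode UI's actual code:

```java
import java.lang.management.ManagementFactory;

public class NonHeapMax {
    // A max of -1 means "undefined" per the java.lang.management.MemoryUsage
    // javadoc, so a UI should render it as such rather than as "-1 B".
    static String render(long max) {
        return max < 0 ? "undefined" : max + " B";
    }

    public static void main(String[] args) {
        long max = ManagementFactory.getMemoryMXBean()
                .getNonHeapMemoryUsage().getMax();
        // On many JVMs with no non-heap limit set, the raw value here is -1.
        System.out.println("raw max = " + max + ", rendered = " + render(max));
    }
}
```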








[jira] [Created] (HDFS-8460) Erasure Coding: stateful result doesn't match data occasionally

2015-05-22 Thread Yi Liu (JIRA)
Yi Liu created HDFS-8460:


 Summary: Erasure Coding: stateful result doesn't match data 
occasionally
 Key: HDFS-8460
 URL: https://issues.apache.org/jira/browse/HDFS-8460
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu


I found this issue in TestDFSStripedInputStream, {{testStatefulRead}} failed 
occasionally.





[jira] [Created] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operation for WebImageViewer

2015-05-22 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-8462:
---

 Summary: Implement GETXATTRS and LISTXATTRS operation for 
WebImageViewer
 Key: HDFS-8462
 URL: https://issues.apache.org/jira/browse/HDFS-8462
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Akira AJISAKA


In Hadoop 2.7.0, WebImageViewer supports the following operations:
* {{GETFILESTATUS}}
* {{LISTSTATUS}}
* {{GETACLSTATUS}}

I think it would be helpful for administrators if {{GETXATTRS}} and 
{{LISTXATTRS}} were supported.
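WebImageViewer serves a WebHDFS-compatible read-only endpoint (default port 5978), so the proposed operations would presumably be queried the same way as the existing ones. A sketch of the request URLs; the op names follow the WebHDFS REST API, and their availability in WebImageViewer is exactly what this issue proposes:

```java
public class XAttrUrls {
    // Builds a WebHDFS-style request URL against the WebImageViewer endpoint.
    static String opUrl(String host, int port, String path, String op) {
        return "http://" + host + ":" + port + "/webhdfs/v1" + path + "?op=" + op;
    }

    public static void main(String[] args) {
        // Existing op for comparison, then the two proposed ops.
        System.out.println(opUrl("localhost", 5978, "/dir", "GETACLSTATUS"));
        System.out.println(opUrl("localhost", 5978, "/dir", "LISTXATTRS"));
        System.out.println(opUrl("localhost", 5978, "/dir", "GETXATTRS"));
    }
}
```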





[jira] [Created] (HDFS-8461) Erasure coding: fix priority level of UnderReplicatedBlocks for striped block

2015-05-22 Thread Walter Su (JIRA)
Walter Su created HDFS-8461:
---

 Summary: Erasure coding: fix priority level of 
UnderReplicatedBlocks for striped block
 Key: HDFS-8461
 URL: https://issues.apache.org/jira/browse/HDFS-8461
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


{code:title=UnderReplicatedBlocks.java}
  private int getPriority(int curReplicas,
  ...
    } else if (curReplicas == 1) {
      // only one replica - risk of loss
      // highest priority
      return QUEUE_HIGHEST_PRIORITY;
  ...
{code}
For striped blocks, we should return QUEUE_HIGHEST_PRIORITY when curReplicas == 
6 (assuming a 6+3 schema).

That's important, because
{code:title=BlockManager.java}
DatanodeDescriptor[] chooseSourceDatanodes(BlockInfo block,
  ...
     if (priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
         && !node.isDecommissionInProgress()
         && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
      {
        continue; // already reached replication limit
      }
  ...
{code}
Otherwise it may not return enough source DNs (maybe only 5), and recovery fails.
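A hedged sketch of the proposed rule (the real getPriority signature and queue constants differ, and dataBlocks would come from the EC schema): a striped 6+3 block group down to 6 live internal blocks has zero remaining redundancy, so it deserves the same priority as a contiguous block at 1 replica.

```java
public class StripedPriority {
    static final int QUEUE_HIGHEST_PRIORITY = 0;
    static final int QUEUE_UNDER_REPLICATED = 2;

    // Hypothetical: treat "curReplicas == dataBlocks" for striped groups the
    // way "curReplicas == 1" is treated for contiguous blocks, since any
    // further loss in either case means data loss.
    static int getPriority(boolean striped, int curReplicas, int dataBlocks) {
        int minLive = striped ? dataBlocks : 1;   // 6 for RS-6-3, 1 otherwise
        if (curReplicas <= minLive) {
            return QUEUE_HIGHEST_PRIORITY;        // no redundancy left
        }
        return QUEUE_UNDER_REPLICATED;
    }

    public static void main(String[] args) {
        System.out.println(getPriority(true, 6, 6));  // striped, no redundancy
        System.out.println(getPriority(true, 8, 6));  // striped, some redundancy
    }
}
```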





[jira] [Created] (HDFS-8467) [HDFS-Quota]Quota is getting updated after storage policy is modified even before mover command is executed.

2015-05-22 Thread Jagadesh Kiran N (JIRA)
Jagadesh Kiran N created HDFS-8467:
--

 Summary: [HDFS-Quota]Quota is getting updated after storage policy 
is modified even before mover command is executed.
 Key: HDFS-8467
 URL: https://issues.apache.org/jira/browse/HDFS-8467
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jagadesh Kiran N
Assignee: surendra singh lilhore


a. Create a directory 
{code}
./hdfs dfs -mkdir /d1
{code}
b. Set storage policy HOT on /d1
{code}
./hdfs storagepolicies -setStoragePolicy -path /d1 -policy HOT
{code}

c. Set space quota to disk on /d1
{code}
  ./hdfs dfsadmin -setSpaceQuota 1 -storageType DISK /d1
{code}

{code}
./hdfs dfs -count -v -q -h -t /d1
   DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
        9.8 K           9.8 K       none            inf           none                inf  /d1
{code}

d. Insert 2 files of 1000 B each
{code}
./hdfs dfs -count -v -q -h -t /d1
   DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
        9.8 K           3.9 K       none            inf           none                inf  /d1
{code}

e. Set ARCHIVE quota on /d1
{code}
./hdfs dfsadmin -setSpaceQuota 1 -storageType ARCHIVE /d1
./hdfs dfs -count -v -q -h -t /d1
   DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
        9.8 K           3.9 K       none            inf          9.8 K              9.8 K  /d1
{code}

f. Change the storage policy to COLD
{code}
./hdfs storagepolicies -setStoragePolicy -path /d1 -policy COLD
{code}

g. Check the REM_ARCHIVE_QUOTA value
{code}
./hdfs dfs -count -v -q -h -t /d1
   DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
        9.8 K           9.8 K       none            inf          9.8 K              3.9 K  /d1
{code}

Here, even though the 'Mover' command has not been run, REM_ARCHIVE_QUOTA is 
reduced and REM_DISK_QUOTA is increased.

Expected: the quota values should change only after the Mover has succeeded.
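For reference, the step d numbers are consistent with how a space quota counts raw replicated bytes (a sketch; replication factor 3 is an assumption, the usual default):

```java
public class QuotaMath {
    // Space quota accounting charges fileBytes * replication per file.
    static long consumed(long fileBytes, int files, int replication) {
        return fileBytes * files * replication;
    }

    public static void main(String[] args) {
        // 2 files of 1000 B at replication 3 -> 6000 B charged, which matches
        // REM_DISK_QUOTA dropping from 9.8 K to roughly 3.9 K in step d.
        System.out.println(consumed(1000, 2, 3));
    }
}
```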





[jira] [Created] (HDFS-8466) Refactor BlockInfoContiguous and fix NPE in TestBlockInfo#testCopyConstructor()

2015-05-22 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-8466:
---

 Summary: Refactor BlockInfoContiguous and fix NPE in 
TestBlockInfo#testCopyConstructor()
 Key: HDFS-8466
 URL: https://issues.apache.org/jira/browse/HDFS-8466
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Vinayakumar B
Assignee: Vinayakumar B


HDFS-7716 refactored BlockInfoContiguous.java
Since then TestBlockInfo#testCopyConstructor(..) fails with NPE.

Along with fixing the test failure, some of the code can be refactored to reuse 
code from BlockInfo.java





[jira] [Created] (HDFS-8465) Mover succeeds even when space exceeds the storage quota.

2015-05-22 Thread Archana T (JIRA)
Archana T created HDFS-8465:
---

 Summary: Mover succeeds even when space exceeds the storage quota.
 Key: HDFS-8465
 URL: https://issues.apache.org/jira/browse/HDFS-8465
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Reporter: Archana T
Assignee: surendra singh lilhore



*Steps :*
1. Create directory /dir 
2. Set its storage policy to HOT --
hdfs storagepolicies -setStoragePolicy -path /dir -policy HOT

3. Insert files of total size 10,000 B into /dir.
4. Set an ARCHIVE quota of 5,000 B on /dir --
hdfs dfsadmin -setSpaceQuota 5000 -storageType ARCHIVE /dir
{code}
hdfs dfs -count -v -q -h -t /dir
   DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
         none             inf       none            inf          4.9 K              4.9 K  /dir
{code}
5. Now change policy of '/dir' to COLD
6. Execute Mover command

*Observations:*
1. Mover succeeds, moving all 10,000 B to the ARCHIVE data path.

2. The count command displays a negative value '-59.4 K' --
{code}
hdfs dfs -count -v -q -h -t /dir
   DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
         none             inf       none            inf          4.9 K            -59.4 K  /dir
{code}
*Expected:*
Mover should not succeed, as the ARCHIVE quota is only 5,000 B.
A negative value should not be displayed for the quota output.





[jira] [Created] (HDFS-8468) 2 RPC calls for every file read in DFSClient#open(..) resulting in double Audit log entries

2015-05-22 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-8468:
---

 Summary: 2 RPC calls for every file read in DFSClient#open(..) 
resulting in double Audit log entries
 Key: HDFS-8468
 URL: https://issues.apache.org/jira/browse/HDFS-8468
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B


In the HDFS-7285 branch, 
to determine whether a file is striped and to get its schema, 2 RPCs are made 
to the NameNode.
This results in double audit log entries for every file read, for both 
striped and non-striped files.

This will have a major impact on the size of the audit logs.





Hadoop-Hdfs-trunk-Java8 - Build # 193 - Still Failing

2015-05-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/193/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8598 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [ 48.247 s]
[INFO] Apache Hadoop HDFS  FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.045 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:51 h
[INFO] Finished at: 2015-05-22T14:30:18+00:00
[INFO] Final Memory: 55M/253M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project 
hadoop-hdfs: An Ant BuildException has occured: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/src/main/docs
 does not exist.
[ERROR] around Ant part ...copy 
todir=/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/docs-src...
 @ 5:127 in 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 797637 bytes
Compression is 0.0%
Took 28 sec
Recording test results
Updating YARN-3594
Updating HADOOP-11955
Updating HADOOP-11594
Updating HADOOP-12014
Updating HDFS-8421
Updating HADOOP-12016
Updating HDFS-8268
Updating HDFS-8451
Updating HDFS-8454
Updating YARN-3684
Updating HADOOP-11743
Updating YARN-3646
Updating YARN-3694
Updating YARN-3675
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

Hadoop-Hdfs-trunk - Build # 2133 - Still Failing

2015-05-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2133/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6833 lines...]
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [ 47.837 s]
[INFO] Apache Hadoop HDFS  FAILURE [  02:42 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.056 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-05-22T14:20:00+00:00
[INFO] Final Memory: 61M/678M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 363209 bytes
Compression is 0.0%
Took 23 sec
Recording test results
Updating YARN-3594
Updating HADOOP-11955
Updating HADOOP-11594
Updating HADOOP-12014
Updating HDFS-8421
Updating HADOOP-12016
Updating HDFS-8268
Updating HDFS-8451
Updating HDFS-8454
Updating YARN-3684
Updating HADOOP-11743
Updating YARN-3646
Updating YARN-3694
Updating YARN-3675
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateFailure

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good 
datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4b6c,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4b6c,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4b6c,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:42515,DS-44c37427-cddc-4c16-9321-a6a2f58e97c4,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:54909,DS-53f6d722-a15f-4293-8c37-185c4b6c,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.
at 
org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1145)
at 
org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1211)
at 
org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1375)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1289)
at 

Build failed in Jenkins: Hadoop-Hdfs-trunk #2133

2015-05-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2133/changes

Changes:

[aajisaka] YARN-3694. Fix dead link for TimelineServer REST API. Contributed by 
Jagadesh Kiran N.

[devaraj] YARN-3646. Applications are getting stuck some times in case of retry

[wheat9] HDFS-8421. Move startFile() and related functions into 
FSDirWriteFileOp. Contributed by Haohui Mai.

[xyao] HDFS-8451. DFSClient probe for encryption testing interprets empty URI 
property for enabled. Contributed by Steve Loughran.

[kasha] YARN-3675. FairScheduler: RM quits when node removal races with 
continuous-scheduling on the same node. (Anubhav Dhoot via kasha)

[jghoman] HADOOP-12016. Typo in FileSystem::listStatusIterator. Contributed by 
Arthur Vigil.

[vinodkv] YARN-3684. Changed ContainerExecutor's primary lifecycle methods to 
use a more extensible mechanism of context objects. Contributed by Sidharta 
Seethana.

[arp] HDFS-8454. Remove unnecessary throttling in TestDatanodeDeath. (Arpit 
Agarwal)

[aajisaka] HADOOP-12014. hadoop-config.cmd displays a wrong error message. 
Contributed by Kengo Seki.

[aajisaka] HADOOP-11955. Fix a typo in the cluster setup doc. Contributed by 
Yanjun Wang.

[aajisaka] HADOOP-11594. Improve the readability of site index of 
documentation. Contributed by Masatake Iwasaki.

[vinayakumarb] HDFS-8268. Port conflict log for data node server is not 
sufficient (Contributed by Mohammad Shahid Khan)

[junping_du] YARN-3594. WintuilsProcessStubExecutor.startStreamReader leaks 
streams. Contributed by Lars Francke.

[vinayakumarb] HADOOP-11743. maven doesn't clean all the site files 
(Contributed by ramtin)

--
[...truncated 6640 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.763 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationToken
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.374 sec - in 
org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.962 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.819 sec - in 
org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.594 sec - in 
org.apache.hadoop.hdfs.TestFileCorruption
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.581 sec - in 
org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.71 sec - in 
org.apache.hadoop.hdfs.TestFileAppend4
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.965 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.439 sec - in 
org.apache.hadoop.hdfs.client.impl.TestLeaseRenewer
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.248 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.889 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestIsMethodSupported
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.16 sec - in 
org.apache.hadoop.hdfs.TestIsMethodSupported
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.957 sec - in 
org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.752 sec - in 
org.apache.hadoop.hdfs.TestGetFileChecksum
Running org.apache.hadoop.hdfs.TestFileConcurrentReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.229 sec - in 
org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in 
org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.888 sec - in 
org.apache.hadoop.hdfs.util.TestByteArrayManager
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.227 sec - in 

[jira] [Created] (HDFS-8469) Lockfiles are not being created for datanode storage directories

2015-05-22 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-8469:
--

 Summary: Lockfiles are not being created for datanode storage 
directories
 Key: HDFS-8469
 URL: https://issues.apache.org/jira/browse/HDFS-8469
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Lockfiles are not being created for datanode storage directories.  Due to a 
mixup, we are initializing the StorageDirectory class with shared=true (an 
option which was only intended for NFS directories used to implement NameNode 
HA).  Setting shared=true disables lockfile generation and prints a log message 
like this:

{code}
2015-05-22 11:45:16,367 INFO  common.Storage (Storage.java:lock(675)) - Locking 
is disabled for /home/cmccabe/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/  
test/data/dfs/data/data5/current/BP-122766180-127.0.0.1-1432320314834
{code}

Without lock files, we could accidentally spawn two datanode processes using 
the same directories without realizing it.
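For context, the lock a storage directory relies on is a plain exclusive file lock on its in_use.lock file; a second process attempting the same lock fails instead of silently sharing the directory. A minimal sketch of that pattern (not Storage.java's actual code):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class LockFileDemo {
    // Tries to take an exclusive OS-level lock on the given file, then
    // releases it. Returns false if another holder already has the lock.
    static boolean tryExclusiveLock(File f) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(f, "rws");
        FileLock lock = raf.getChannel().tryLock();
        if (lock == null) {            // held by another process
            raf.close();
            return false;
        }
        lock.release();
        raf.close();
        return true;
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("in_use", ".lock");
        f.deleteOnExit();
        System.out.println("locked: " + tryExclusiveLock(f));
    }
}
```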





[jira] [Resolved] (HDFS-8010) Erasure coding: extend UnderReplicatedBlocks to accurately handle striped blocks

2015-05-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8010.
-
Resolution: Duplicate

 Erasure coding: extend UnderReplicatedBlocks to accurately handle striped 
 blocks
 

 Key: HDFS-8010
 URL: https://issues.apache.org/jira/browse/HDFS-8010
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8010-000.patch


 This JIRA tracks efforts to accurately assess the _risk level_ of striped 
 block groups with missing blocks when they are added to {{UnderReplicatedBlocks}}


