[ 
https://issues.apache.org/jira/browse/HDFS-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14085395#comment-14085395
 ] 

Yongjun Zhang commented on HDFS-6694:
-------------------------------------

My last test run indicated that the ulimit for the number of open files is 1024 on the 
machine {{Slave H5 (Build slave for Hadoop project builds : 
asf905.gq1.ygridcore.net)}}. However, when I ran the test locally, the number of 
open files is 4096. 

{code} 
YJD ulimit -a contents: 
time(seconds) unlimited 
file(blocks) unlimited 
data(kbytes) unlimited 
stack(kbytes) 8192 
coredump(blocks) 0 
memory(kbytes) unlimited 
locked memory(kbytes) 64 
process 386178 
nofiles 1024 <========== 
vmemory(kbytes) unlimited 
locks unlimited 
{code} 
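As a quick sanity check before running the stress test, something like the following could compare the soft nofiles limit against the value seen locally (4096 here is an assumption based on my local run, not a documented requirement of the test):

```shell
#!/bin/sh
# Hypothetical helper: warn if the soft open-file limit is below the
# value observed on a passing local run (4096 is an assumption).
REQUIRED=4096
CURRENT=$(ulimit -Sn)
if [ "$CURRENT" -lt "$REQUIRED" ] 2>/dev/null; then
  echo "nofiles is $CURRENT, below $REQUIRED; consider: ulimit -n $REQUIRED"
else
  echo "nofiles is $CURRENT, OK"
fi
```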


> TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently 
> with various symptoms
> ------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-6694
>                 URL: https://issues.apache.org/jira/browse/HDFS-6694
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>         Attachments: HDFS-6694.001.dbg.patch, 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover-output.txt, 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.txt
>
>
> TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently 
> with various symptoms. Typical failures are described in first comment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)