[ https://issues.apache.org/jira/browse/ACCUMULO-49?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13130920#comment-13130920 ]

Keith Turner commented on ACCUMULO-49:
--------------------------------------

We may not have to swap to satisfy stack growth: without mlock we could 
create a lot of threads with shallow stacks and still would not need to 
swap.
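
For example, a minimal sketch of requesting shallow stacks (the 64 KB 
figure and the class name are illustrative only; the JVM treats the 
stackSize argument purely as a hint and may round it up to the platform 
minimum):

    public class ShallowStacks {
        public static void main(String[] args) throws InterruptedException {
            Runnable idle = new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(Long.MAX_VALUE);
                    } catch (InterruptedException e) {
                        // fall through and let the thread exit
                    }
                }
            };
            for (int i = 0; i < 1000; i++) {
                // Thread(ThreadGroup, Runnable, name, stackSize) lets the
                // caller ask for a per-thread stack smaller than the default
                new Thread(null, idle, "shallow-" + i, 64 * 1024).start();
            }
            Thread.sleep(Long.MAX_VALUE); // keep the threads parked
        }
    }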

Ideally we would just like the java process to die when a situation like 
this occurs (can't start a thread).  Not related to this mlock issue, but 
an OOME can basically kill random threads.  If it's an important thread 
like the zookeeper one, then a process could lose its lock and not kill 
itself.  We added the java option -XX:OnOutOfMemoryError="kill -9 %p".  
However, not all OOMEs seem to trigger this.  I think the mlock error did 
not, so it left the process running but in a screwy state: I think thrift 
was failing to create threads to process connections.
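
For reference, a minimal sketch of a case where the handler does fire 
(HeapExhaust is a hypothetical class, not Accumulo code). Run it as

    java -XX:OnOutOfMemoryError="kill -9 %p" HeapExhaust

and the JVM invokes the handler when the heap OutOfMemoryError is thrown; 
as noted above, OOMEs raised on other paths (like failing to create a 
native thread) may not.

    import java.util.ArrayList;
    import java.util.List;

    public class HeapExhaust {
        public static void main(String[] args) {
            List<byte[]> hog = new ArrayList<byte[]>();
            while (true) {
                // allocate 1 MB at a time until the heap is exhausted,
                // which raises the OutOfMemoryError that triggers the hook
                hog.add(new byte[1 << 20]);
            }
        }
    }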
                
> optionally monitor swappiness on every server
> ---------------------------------------------
>
>                 Key: ACCUMULO-49
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-49
>             Project: Accumulo
>          Issue Type: New Feature
>          Components: logger, master, trace, tserver
>         Environment: idle tablet server is swapped out on an otherwise busy 
> system
>            Reporter: Eric Newton
>            Assignee: Eric Newton
>            Priority: Minor
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> The Linux kernel will eagerly swap out idle memory (such as the tablet 
> server's) for disk cache unless the /proc/sys/vm/swappiness setting is 0. 
> A swapped-out server is sluggish enough that it loses its zookeeper lock.
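
A minimal sketch of the kind of check the ticket proposes (the class name 
and message are illustrative, not Accumulo's actual implementation): read 
/proc/sys/vm/swappiness and complain when it is nonzero.

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SwappinessCheck {
        public static void main(String[] args) throws Exception {
            // /proc/sys/vm/swappiness holds a single integer in 0-100
            String raw = new String(
                Files.readAllBytes(Paths.get("/proc/sys/vm/swappiness"))).trim();
            int swappiness = Integer.parseInt(raw);
            if (swappiness != 0) {
                System.err.println("WARN: vm.swappiness is " + swappiness
                    + "; idle server processes may be swapped out and lose"
                    + " their zookeeper lock");
            }
        }
    }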
