[ 
https://issues.apache.org/jira/browse/SOLR-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13195592#comment-13195592
 ] 

Mark Miller edited comment on SOLR-3066 at 1/28/12 6:28 PM:
------------------------------------------------------------

*edited - changed an 'of' to 'on'

This looks similar to an issue I have occasionally but rarely seen with the 
replication handler test - a running snap pull keeps the core open just a 
little while even after core container shutdown. We have a similar situation in 
some of these tests because the recoveries that are attempted as nodes go down 
use replication. What I'm still not sure about is why, even though we wait 
quite a while after core container shutdown for any index searchers to be 
closed, we still don't see the searcher get closed.
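
Roughly the kind of wait loop I mean - this is just a sketch with made-up counter 
names, not our actual test code: after core container shutdown, poll the 
open/close counters until they balance or we give up.

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

class SearcherTracker {
    // Hypothetical counters bumped wherever a searcher is opened/closed.
    static final AtomicLong numOpens = new AtomicLong();
    static final AtomicLong numCloses = new AtomicLong();

    // After CoreContainer shutdown, wait for the counts to balance; a lingering
    // snap pull can close its searcher noticeably later than the shutdown call.
    static void waitForBalancedCounts(long timeoutMs) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (numOpens.get() != numCloses.get()) {
            if (System.nanoTime() > deadline) {
                throw new AssertionError("searcher open/close imbalance: opens="
                    + numOpens.get() + " closes=" + numCloses.get());
            }
            Thread.sleep(250);
        }
    }
}
{code}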

I've seen that symptom in tests before - it turned out to be an executor 
rejecting a task during shutdown, so a searcher was never cleaned up - but that 
doesn't look like it's a problem anymore.
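
For reference, that failure mode was roughly this (hypothetical names, just a 
sketch): if the executor is already shutting down, submit() throws 
RejectedExecutionException and the close the task would have done never runs, 
unless you also close in the rejection path.

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;

class SearcherRelease {
    // Hand the close off to the executor, but don't let a rejection during
    // shutdown leak the searcher - close it inline instead.
    static void releaseAsync(ExecutorService executor, Closeable searcher) {
        try {
            executor.submit(() -> closeQuietly(searcher));
        } catch (RejectedExecutionException e) {
            // Executor is shutting down and refused the task; without this
            // branch the searcher would never be closed.
            closeQuietly(searcher);
        }
    }

    private static void closeQuietly(Closeable c) {
        try {
            c.close();
        } catch (IOException ignored) {
        }
    }
}
{code}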

I've tried to replicate a slow environment by running tons of tests at the same 
time on my Windows VM - I've only seen a similar result once in over 100 runs 
though.

Perhaps we just need to wait even longer - it seems strange that it could be 
that slow - but it does almost look like the searcher is released at some point 
during later tests, based on how it can throw off the open/close numbers for a 
later test (more closes than opens).
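
To illustrate what I mean by the imbalance bleeding into a later test (again, 
made-up counter names, not the real accounting): the counts are snapshotted per 
test, so a searcher opened during one test but only closed during the next 
inflates the later test's close count.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

class PerTestSearcherCounts {
    // Same hypothetical global counters as in the sketch above.
    static final AtomicLong numOpens = new AtomicLong();
    static final AtomicLong numCloses = new AtomicLong();

    private long opensAtStart;
    private long closesAtStart;

    void startTest() {
        opensAtStart = numOpens.get();
        closesAtStart = numCloses.get();
    }

    void endTest() {
        long opens = numOpens.get() - opensAtStart;
        long closes = numCloses.get() - closesAtStart;
        // A searcher opened by an earlier test but closed during this one
        // shows up here as closes > opens - the imbalance described above.
        if (opens != closes) {
            System.err.println("open/close imbalance: opens=" + opens
                + " closes=" + closes);
        }
    }
}
{code}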

Taking a break, but I'll be back on this later today.
                
> SolrIndexSearcher open/close imbalance in some of the new SolrCloud tests.
> --------------------------------------------------------------------------
>
>                 Key: SOLR-3066
>                 URL: https://issues.apache.org/jira/browse/SOLR-3066
>             Project: Solr
>          Issue Type: Test
>            Reporter: Mark Miller
>
> I have not been able to duplicate this test issue on my systems yet, but on 
> Jenkins, some tests that start and stop Jetty instances during the test are 
> having trouble cleaning up and can bleed into other tests. I'm working on 
> isolating the reason for this - I seem to have been IP-banned from Jenkins at 
> the moment, but once I can ssh in there, I will be able to speed up the 
> try/feedback loop some. I've spent a lot of time trying to duplicate this 
> across 3 other systems, but so far I don't see the same issue anywhere but 
> our Jenkins server.
