[ https://issues.apache.org/jira/browse/SOLR-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788398#comment-16788398 ]

Hoss Man commented on SOLR-13310:
---------------------------------

{quote}No clue how a thread pool executor is treated special due to error handling.
{quote}
I was thinking of this comment regarding SOLR-11880, but I guess it would just complicate/confuse stack traces in the event of exceptions...

{code}
  private ExecutorService updateExecutor = new ExecutorUtil.MDCAwareThreadPoolExecutor(0, Integer.MAX_VALUE,
      60L, TimeUnit.SECONDS,
      new SynchronousQueue<>(),
      new SolrjNamedThreadFactory("updateExecutor"),
      // the Runnable added to this executor handles all exceptions so we disable stack trace collection as an optimization
      // see SOLR-11880 for more details
      false);
{code}
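
For reference, the general idea behind that boolean is roughly the following. This is a very rough sketch using plain JDK classes, not the actual ExecutorUtil implementation: capture the submitter's stack trace at submit time, and attach it to anything that blows up while the task runs in the pool.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SubmitterTraceDemo {
  // Wrap a task so the submitting thread's stack trace is attached to
  // any exception thrown while the task runs inside the pool.
  static Runnable withSubmitterTrace(Runnable task) {
    // captured here, on the caller's thread, at submit time
    final Exception submitterTrace = new Exception("submitter stack trace");
    return () -> {
      try {
        task.run();
      } catch (RuntimeException e) {
        e.addSuppressed(submitterTrace); // links the failure back to the submit site
        throw e;
      }
    };
  }

  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    pool.execute(withSubmitterTrace(() -> {
      throw new RuntimeException("task failed");
    }));
    pool.shutdown();
  }
}
{code}

Capturing that trace on every submit is the overhead SOLR-11880 turned off for the updateExecutor, since its Runnables already handle all exceptions themselves.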

bq. Executors like this should live on the core container ...

Maybe, but there is really no reason why there needs to be a "long-lived" facetExecutor ... the most straightforward way for {{facet.threads=N}} to work would be to create an Executor of size N on demand (sketched below) ... it's really just a shared _threadpool_ that we should re-use to prevent creating lots of short-lived threads.
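
To be concrete, the on-demand approach would look something like this. A plain JDK sketch, with illustrative names and a request-scoped lifecycle, not Solr's actual code; re-using a shared pool would just be an optimization on top of it:

{code}
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OnDemandFacetThreads {
  // Build a pool of size N for this request's facet work, run the
  // per-field tasks, then tear the pool down when the request ends.
  static void runFacetTasks(int facetThreads, List<Callable<Void>> tasks) throws InterruptedException {
    ExecutorService facetExecutor = Executors.newFixedThreadPool(facetThreads);
    try {
      facetExecutor.invokeAll(tasks); // blocks until every facet task finishes
    } finally {
      facetExecutor.shutdown(); // nothing long-lived, nothing shared with updates
    }
  }
}
{code}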

> facet.threads is using the updateExecutor
> -----------------------------------------
>
>                 Key: SOLR-13310
>                 URL: https://issues.apache.org/jira/browse/SOLR-13310
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>            Reporter: Hoss Man
>            Priority: Major
>
> Had a WTF moment skimming some SimpleFacets code today...
> {code}
>     this.facetExecutor = req.getCore().getCoreContainer().getUpdateShardHandler().getUpdateExecutor();
> {code}


