[ https://issues.apache.org/jira/browse/SOLR-12755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608446#comment-16608446 ]

Daniel Lowe commented on SOLR-12755:
------------------------------------

Yes, I am seeing all shards being optimized simultaneously, and yes, it does 
cause the I/O problems one would expect from doing that! It sounds like, prior 
to SOLR-6264, it may have worked in the way you describe.

I hadn't seen SOLR-10740, and I agree that this is basically a duplicate, but 
hopefully this issue spells out the problems with sending the optimize request 
to all replicas of every shard simultaneously.
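
For reference, a rough SolrJ sketch of the kind of per-core request this issue 
is asking to have honoured (the host, core name and maxSegments value below are 
just placeholders, and as of 7.4 the distrib=false parameter is still ignored 
for optimize, so this only illustrates the intended usage):

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;

public class PerCoreOptimize {
    public static void main(String[] args) throws Exception {
        // Point at one specific core (placeholder URL), not the collection,
        // so the request is handled by a single shard replica.
        try (HttpSolrClient core = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection_shard1_replica_n1").build()) {

            UpdateRequest optimize = new UpdateRequest();
            // waitFlush=true, waitSearcher=true, merge down to maxSegments=1
            optimize.setAction(AbstractUpdateRequest.ACTION.OPTIMIZE, true, true, 1);
            // The parameter this issue asks force merge to respect: do not fan
            // the optimize out to every shard and replica of the collection.
            optimize.setParam("distrib", "false");
            optimize.process(core);
        }
    }
}
{code}

Iterating over the cores on a node one at a time like this, rather than 
optimizing the whole collection in one call, is what would keep the free-space 
requirement at roughly 1-2x the largest shard instead of 1-2x everything on the 
machine.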

> Force merge (optimize) should respect distrib=false
> ---------------------------------------------------
>
>                 Key: SOLR-12755
>                 URL: https://issues.apache.org/jira/browse/SOLR-12755
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: SolrCloud, update
>    Affects Versions: 7.4
>            Reporter: Daniel Lowe
>            Priority: Major
>
> It would be desirable in a SolrCloud configuration for a request like:
> update?optimize=true&distrib=false
> to be executed only on the shard that received the request.
>  
> As is well known, force merging is a very expensive, disk-space-hungry 
> operation, so this increased control would address the following issues:
> Free disk space requirements: 1-2x the size of ALL shards on the machine vs 
> 1-2x the size of the largest shard
> I/O: high disk contention when a machine holds multiple shards, as all shards 
> are rewritten simultaneously
> Availability: all replicas will have impaired performance at the same time
>  
> Relevant previous issue: SOLR-6264



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
