[ https://issues.apache.org/jira/browse/SOLR-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15404292#comment-15404292 ]

Erick Erickson commented on SOLR-9320:
--------------------------------------

If we're going to take advantage of multithreading, we need to take some care to 
throttle replication. If we try to copy 500 cores' indexes to some other node 
all at once with 500 separate threads, I'd worry about network bandwidth issues. 
I've seen saturated I/O cause nodes to go into recovery and the like, not to 
mention beating up the disks on both machines.

Throttling the number of replace operations carried out in parallel is probably 
the easiest. I can think of two tuning parameters: max # of simultaneous threads 
and max bandwidth consumption.
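
To make the thread-cap idea concrete, here's a rough sketch (plain Java, not 
actual Solr code; maxParallelReplaceOps and copyReplica are made-up names):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThrottledReplaceSketch {
  // Cap the number of concurrent index copies so a node hosting hundreds of
  // cores doesn't kick off hundreds of simultaneous transfers.
  static void replaceNode(List<String> cores, int maxParallelReplaceOps)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(maxParallelReplaceOps);
    for (String core : cores) {
      pool.submit(() -> copyReplica(core));  // queued; at most N run at once
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);  // wait for all copies to finish
  }

  static void copyReplica(String core) {
    // Placeholder for the real work: ADDREPLICA on the target plus recovery/index fetch.
    System.out.println("copying " + core);
  }
}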

Thinking about it for a bit, though, the bandwidth parameter seems difficult to 
do well. There'd have to be some kind of cross-copy communication, I should 
think, and other ugliness. I suppose one could get this behavior on a 
case-by-case basis by 
1> specifying the max # of replace ops, and
2> specifying <str name="maxWriteMBPerSec">${maxWriteMBPerSec:100000}</str> in 
the replication handler and overriding it with a sys var when doing a 
REPLACENODE (see the config sketch below)....
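
For 2>, the config side might look something like this (just a sketch; the exact 
placement of the rate limit inside the replication handler config should be 
double-checked against the ref guide):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <!-- effectively unlimited unless the sys prop is set at startup -->
  <str name="maxWriteMBPerSec">${maxWriteMBPerSec:100000}</str>
</requestHandler>

Then the node could be started with something like -DmaxWriteMBPerSec=50 before 
issuing the REPLACENODE, and left at the effectively-unlimited default the rest 
of the time.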



> A REPLACENODE command to decommission an existing node with another new node
> ----------------------------------------------------------------------------
>
>                 Key: SOLR-9320
>                 URL: https://issues.apache.org/jira/browse/SOLR-9320
>             Project: Solr
>          Issue Type: Sub-task
>          Components: SolrCloud
>            Reporter: Noble Paul
>             Fix For: 6.1
>
>         Attachments: DELETENODE.jpeg, REPLACENODE_After.jpeg, 
> REPLACENODE_Before.jpeg, REPLACENODE_call_response.jpeg, SOLR-9320.patch, 
> SOLR-9320.patch
>
>
> The command should accept a source node and a target node, recreate the 
> replicas of the source node on the target, and do a DELETENODE of the source node.


