@Erick  Thanks for the reply. The upgrade is planned in two phases: the initial 
phase brings up the cluster on the new Solr version binaries; the next phase 
moves off deprecated functionality and takes advantage of the new features.
Let me explain the problem a little differently.
Say the environment has 8 hosts, 4 leaders and 4 replicas, for a 4-shard, 
single-core cluster. On each host, Solr is started from the command line with 
something like the command shown further below (not using the Solr control 
script). Before starting Solr on any node, the collection configuration is 
uploaded to and linked in ZooKeeper using the zkcli.sh from the Solr bundle.
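The upload and link steps look roughly like this (the collection and config 
names here are placeholders, not the real ones):

/Users/ravi/solr-7.6.0/server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir /path/to/myconf -confname myconf
/Users/ravi/solr-7.6.0/server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd linkconfig -collection mycollection -confname myconf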
Once the first 4 hosts, one for each shard, are started, the cluster is in a 
functional state. When the Solr process on the fifth host is started as a 
replica of shard1, the expectation is that it will join shard1 as a replica 
alongside host1. Instead, host1 is dropped from the admin console (Cloud > 
Graph), only host5 is displayed, and the core is not loaded on host5.
The question is how to make a Solr process that joins later for a particular 
shard become part of that shard's leader/replica group. Which node ends up as 
leader and which as replica is less of a concern, as long as the 
replicationFactor is achieved.
Should the Collections API be used to create the replicas, or could this be 
achieved with settings in some properties file or a startup parameter in Solr? 
Any pointers on this would be helpful.
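If the Collections API is the way to go, I am assuming something along these 
lines once the fifth host's Solr is up (collection name, host names, and ports 
below are placeholders, not verified against our setup):

curl "http://host1:8080/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=host5:8080_solr&type=nrt"

For reference, the startup command currently used on each host is: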
" /usr/bin/java -server -Xms512m -Xmx512m -XX:NewRatio=3 -XX:SurvivorRatio=4 
-XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark 
-XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly 
-XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000 
-XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled 
-XX:-OmitStackTraceInFastThrow -verbose:gc -XX:+PrintHeapAtGC 
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
-XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
-Xloggc:/Users/ravi/solr-7.6.0/server/logs/solr_gc.log 
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
-DzkClientTimeout=15000 -DzkHost=localhost:2181 
-Dsolr.log.dir=/Users/ravi/solr-7.6.0/server/logs -Djetty.port=8080 
-DSTOP.PORT=7080 -DSTOP.KEY=solrrocks -Duser.timezone=UTC 
-Djetty.home=/Users/ravi/solr-7.6.0/server 
-Dsolr.solr.home=/Users/ravi/solr-7.6.0/server/solr -Dsolr.data.home= 
-Dsolr.install.dir=/Users/ravi/solr-7.6.0 
-Dsolr.default.confdir=/Users/ravi/solr-7.6.0/server/solr/configsets/_default/conf
 -Xss256k -Dsolr.jetty.https.port=8080 -Dsolr.log.muteconsole 
-XX:OnOutOfMemoryError=/Users/ravi/solr-7.6.0/bin/oom_solr.sh 8080 
/Users/ravi/solr-7.6.0/server/logs -jar start.jar --module=http"

'Ravi' Raveendra
 

On Sunday, April 21, 2019, 11:36:07 AM EDT, Erick Erickson <erickerick...@gmail.com> wrote:
My first question would be “why do you think this is important?”. The additional 
burden a leader has is quite small and is only present when indexing. Leaders 
have no role in processing queries.

So unless you have, say, on the order of 100 leaders on a single Solr instance 
_and_ are indexing heavily, I’d be surprised if you notice any difference 
between that and having leaders evenly distributed.

I would _strongly_ urge you, BTW, to have legacyCloud set to false. That 
setting will totally go away at some point, so adjusting sooner rather than 
later is probably a good idea.
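
(For reference, that cluster property can be changed through the Collections 
API; a rough sketch, with host and port assumed:

curl "http://localhost:8080/solr/admin/collections?action=CLUSTERPROP&name=legacyCloud&val=false"
)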

Bottom line. I’d recommend just using the defaults (including legacyCloud) and 
not worrying about distributing the leader role unless and until you can prove 
that having unbalanced leaders is really having a performance impact. In my 
experience, 95% of the time that people spend trying to manage which nodes are 
leaders, the effort is wasted.

Best,
Erick

> On Apr 20, 2019, at 2:45 PM, Raveendra Yerraguntla 
> <raveend...@yahoo.com.INVALID> wrote:
> 
> All,
> We are upgrading from Solr 5.4 to Solr 7.6. In 5.4, each Solr process, based on 
> its core.properties (the assigned shard value), joins as either leader or 
> replica depending on the order in which it is started.
> Following the same procedure in 7.6, the initial leader node's Solr process is 
> replaced by the later-started Solr process. Also, in the process, the core is 
> not loaded in the later Solr process (which is supposed to be the replica).
> For compatibility, the legacyCloud property is set to true in clusterprops.json.
> Question: with the replica type as NRT to keep the transition smooth, how do we 
> add later-started Solr processes as replicas in 7.6 without interfering with 
> the leader election process?
> Appreciate any pointers in understanding and resolving the issue.
> Thanks,
> Ravi
