[ https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16221766#comment-16221766 ]

Varun Thacker commented on SOLR-11003:
--------------------------------------

Hi Amrit,

Patch looks great!

{{isTargetCluster}} uses the entry list size to see whether we have the extra 
information in the tlog. This works and handles back-compat correctly. 
However, I'm more inclined to do this: encode a CDCR tlog version in each 
entry and use that for the back-compat check. It potentially opens the door 
for new features down the road to be handled differently, makes it clear when 
we can stop caring about old back-compat checks (the next major version), etc. 
Lucene has done this since 7.0 (LUCENE-7703).

We could argue, however, that this improvement could be implemented for the 
regular transaction log as well, so let's track that in another Jira. A rough 
sketch of the idea is below.
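
To illustrate the shape of what I mean (the class, constant and method names 
here are hypothetical, not from the patch or from TransactionLog itself):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: stamp each CDCR tlog entry with a version marker
// so readers can branch on an explicit version instead of inferring it from
// the entry list size. Not the actual TransactionLog code.
public class CdcrTlogVersionSketch {
  // Bump this whenever the CDCR entry layout changes.
  public static final int CDCR_TLOG_VERSION = 1;

  List<Object> writeEntry(long updateVersion, Object doc) {
    List<Object> entry = new ArrayList<>();
    entry.add(CDCR_TLOG_VERSION); // explicit marker instead of relying on size
    entry.add(updateVersion);
    entry.add(doc);
    return entry;
  }

  int readEntryVersion(List<Object> entry) {
    // Entries written before this change carry no marker; callers would treat
    // a missing/unexpected first element as "version 0" (the old layout).
    Object first = entry.get(0);
    return (first instanceof Integer) ? (Integer) first : 0;
  }
}
{code}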

- In CDCRBiDirectionalTest can we have one try block instead of the current 
two? In the finally we could do a null check before shutting down (see the 
test skeleton sketch after this list).
- Methods like {{cdcrStart}} are the same as in CdcrBootstrapTest. Can we put 
them in a util class and reuse them? We also have 
{{BaseCdcrDistributedZkTest#invokeCdcrAction}}; let's refactor that usage too.
- {{waitForClusterToSync}} is the same as 
{{CdcrBootstrapTest#waitForTargetToSync}}.
- Could {{CdcrBootstrapTest#indexDocs}} also be factored into the util class 
and reused here?
- Can we use the logger instead of System.out.println?
- After we add docs to cluster 1, can we assert that numFound is not 0? Just a 
sanity check to make sure we indexed at least some docs.
- Are we printing the queue responses just for debugging? Can we change that 
to an assert?
- For {{/get?getVersions=X}} we normally need distrib=false, but since this is 
a one-shard collection I guess we don't have to. Can you add fingerprint=true 
and use the maxVersionEncountered key instead of calculating the max in code 
(see the util sketch after this list)?
- After a DBQ / delete-by-id / atomic update we do a 2s thread wait. Can we 
not rely on this and use waitForClusterToSync (see the polling sketch after 
this list)? In general we should not be doing any thread sleeps in the test.
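
For the test-structure comments, roughly the shape I have in mind; 
{{CdcrTestUtil}} and its methods are placeholders for the shared util class 
suggested above, not existing code:

{code:java}
import java.lang.invoke.MethodHandles;
import org.apache.solr.SolrTestCaseJ4;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.cloud.MiniSolrCloudCluster;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: one try block, null-checked shutdown in a single finally,
// a logger instead of System.out.println, and a numFound sanity assert.
public class CdcrBiDirectionalTestSketch extends SolrTestCaseJ4 {
  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  @Test
  public void testBiDirectionalSketch() throws Exception {
    MiniSolrCloudCluster cluster1 = null;
    MiniSolrCloudCluster cluster2 = null;
    try {
      cluster1 = CdcrTestUtil.startCluster("cluster1");        // hypothetical util
      cluster2 = CdcrTestUtil.startCluster("cluster2");        // hypothetical util
      CdcrTestUtil.indexDocs(cluster1, "cdcr-cluster1", 100);  // hypothetical util

      QueryResponse rsp = cluster1.getSolrClient()
          .query("cdcr-cluster1", new SolrQuery("*:*"));
      assertTrue("expected docs on cluster 1", rsp.getResults().getNumFound() > 0);
      log.info("cluster 1 numFound = {}", rsp.getResults().getNumFound());
    } finally {
      // one finally with null checks, instead of two nested try blocks
      if (cluster1 != null) cluster1.shutdown();
      if (cluster2 != null) cluster2.shutdown();
    }
  }
}
{code}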
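For the getVersions/fingerprint point, something along these lines could also 
live in that util class; the response keys ({{fingerprint}}, 
{{maxVersionEncountered}}) and the deserialized type are assumptions from 
memory, so please double-check them:

{code:java}
import java.util.Map;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.ModifiableSolrParams;

// Sketch only: read maxVersionEncountered from the index fingerprint instead
// of computing max(getVersions) in test code.
public class CdcrVersionUtilSketch {
  public static long maxVersionViaFingerprint(SolrClient client, String collection) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("qt", "/get");
    params.set("getVersions", "1000");
    params.set("fingerprint", "true");
    // distrib=false would normally be needed here, but with a single shard it's optional
    QueryResponse rsp = client.query(collection, params);
    Map<?, ?> fingerprint = (Map<?, ?>) rsp.getResponse().get("fingerprint");
    return ((Number) fingerprint.get("maxVersionEncountered")).longValue();
  }
}
{code}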
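And for the sleep point, the idea is a bounded polling loop in the spirit of 
{{waitForTargetToSync}} rather than a fixed 2s wait; it still uses a short 
poll interval, so treat it only as a starting point:

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import static org.junit.Assert.fail;

// Sketch only: after a DBQ / delete-by-id / atomic update, poll until both
// clusters report the same numFound (or time out) instead of a fixed sleep.
public class CdcrSyncUtilSketch {
  public static void assertClustersInSync(SolrClient source, SolrClient target,
                                          String collection, long timeoutMs) throws Exception {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    long sourceCount = -1, targetCount = -2;
    while (System.nanoTime() < deadline) {
      sourceCount = source.query(collection, new SolrQuery("*:*")).getResults().getNumFound();
      targetCount = target.query(collection, new SolrQuery("*:*")).getResults().getNumFound();
      if (sourceCount == targetCount) return;
      Thread.sleep(100); // short poll interval bounded by the deadline, not a bare fixed wait
    }
    fail("clusters did not sync: source=" + sourceCount + " target=" + targetCount);
  }
}
{code}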

> Enabling bi-directional CDCR on cluster for better failover
> -----------------------------------------------------------
>
>                 Key: SOLR-11003
>                 URL: https://issues.apache.org/jira/browse/SOLR-11003
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: CDCR
>            Reporter: Amrit Sarkar
>            Assignee: Varun Thacker
>         Attachments: SOLR-11003-tlogutils.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, sample-configs.zip
>
>
> The latest version of Solr CDCR across collections / clusters is in an 
> active-passive format, where we can index into the source collection and the 
> updates get forwarded to the passive one; the reverse direction is not 
> supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have 
> the updates reflected across the collections in real time (subject to the 
> backlog of updates to be replicated to the other data center): 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The STRONGLY RECOMMENDED way is to keep indexing into ClusterACollectionA, 
> which forwards the updates to ClusterBCollectionB. If ClusterACollectionA 
> goes down, we point the indexer and searcher applications to 
> ClusterBCollectionB. Once ClusterACollectionA is back up, depending on the 
> update count, the updates will be bootstrapped or forwarded to 
> ClusterACollectionA from ClusterBCollectionB, and we keep indexing into 
> ClusterBCollectionB.


