This is all quite strange. Optimize shouldn't matter here (BTW, despite
its name, an optimize is rarely necessary or desirable on an index that
changes). CDCR forwards the raw documents to the target cluster.

Ample time indeed. With a soft commit of 15 seconds, that's your
window (with some slop for how long CDCR takes).

If you do a search and sort by your timestamp descending, what do you
see on the target cluster? And while you are indexing and CDCR is
running, your target cluster's Solr logs should show updates coming in.
Mostly I'm checking whether the data is even getting to the target
cluster here.
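
For example, something like this (just a sketch; substitute whatever
your date field is actually called for "timestamp"):

     http://TARGET_NODE:port/solr/<collection>/query?q=*:*&sort=timestamp+desc&rows=5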

Also check the tlogs on the source cluster. By "check" here I just mean
"are they a reasonable size", and "reasonable" should be very small.
The tlogs are the "queue" that CDCR uses to store docs before
forwarding them to the target cluster, so this is just a sanity check.
If they're huge, then CDCR is not forwarding anything to the target
cluster.
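
A quick way to eyeball them, assuming a stock directory layout (adjust
the path to your install; the core name is a placeholder):

     du -sh $SOLR_HOME/<collection>_shard1_replica1/data/tlog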

It's also vaguely possible that
IgnoreCommitOptimizeUpdateProcessorFactory is interfering; if so, that's
a bug and should be reported in a JIRA. If you remove that processor on
the target cluster, does the behavior change?
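
That is, on the target, try the chain you posted (quoted below) with
just that one processor commented out, e.g.:

     <updateRequestProcessorChain name="cleanup">
       <!--
       <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">
         <int name="statusCode">200</int>
       </processor>
       -->
       ... rest of the chain unchanged ...
     </updateRequestProcessorChain>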

I'm mystified here as you can tell.

Best,
Erick

On Tue, May 23, 2017 at 10:12 AM, Webster Homer <webster.ho...@sial.com> wrote:
> We see a pretty consistent issue where the replicas show in the admin
> console as not current, indicating that our auto commit isn't committing.
> In one case we loaded the data to the source, CDCR replicated it to the
> targets, and we see both the source and the target as having current =
> false. The data is searchable, so the soft commits are happening. We
> turned off data loading to investigate this issue, and the replicas are
> still not current after 3 days, so there should have been ample time to
> catch up.
> This is our autoCommit:
>      <autoCommit>
>        <maxDocs>25000</maxDocs>
>        <maxTime>${solr.autoCommit.maxTime:300000}</maxTime>
>        <openSearcher>false</openSearcher>
>      </autoCommit>
>
> This is our autoSoftCommit:
>      <autoSoftCommit>
>        <maxTime>${solr.autoSoftCommit.maxTime:15000}</maxTime>
>      </autoSoftCommit>
> Neither property (solr.autoCommit.maxTime nor solr.autoSoftCommit.maxTime)
> is set, so the defaults above apply: a 300000 ms (5 minute) hard commit
> with openSearcher=false, and a 15000 ms (15 second) soft commit.
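>
> (If we wanted to override those we could pass the system properties at
> startup, e.g.
>
>      bin/solr start -c -Dsolr.autoCommit.maxTime=300000 -Dsolr.autoSoftCommit.maxTime=15000
>
> but as noted, we just rely on the defaults.)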
>
> We also have an updateChain that calls
> solr.IgnoreCommitOptimizeUpdateProcessorFactory to ignore client commits.
> Could that be the cause of our problem? This is the chain:
>       <updateRequestProcessorChain name="cleanup">
>      <!-- Ignore commits from clients, telling them all's OK -->
>        <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">
>          <int name="statusCode">200</int>
>        </processor>
>
>        <processor class="TrimFieldUpdateProcessorFactory" />
>        <processor class="RemoveBlankFieldUpdateProcessorFactory" />
>
>        <processor class="solr.LogUpdateProcessorFactory" />
>        <processor class="solr.RunUpdateProcessorFactory" />
>      </updateRequestProcessorChain>
>
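> (For context: a chain like this is either marked default="true" in
> solrconfig.xml or selected per update request with the update.chain
> parameter; the node and collection names below are placeholders.)
>
>      curl 'http://SOLR_NODE:port/solr/<collection>/update?update.chain=cleanup' \
>        -H 'Content-Type: application/json' -d '[{"id":"example-doc"}]'
>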
> We did add a date field to all our collections that defaults to NOW, so
> I can see that no new data was added, but the replicas still don't seem
> to get the commit. I assume this is something in our configuration (see
> above).
>
> Is there a way to determine when the last commit occurred?
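>
> (A possible place to look, if I understand the Luke handler right:
>
>      http://SOLR_NODE:port/solr/<core>/admin/luke?numTerms=0
>
> returns an "index" section that includes a lastModified entry.)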
>
> I believe that the one replica got out of sync because an admin ran an
> optimize while CDCR was still running. That was one collection, but it
> looks like we are missing commits on most of our collections.
>
> Any help would be greatly appreciated!
>
> Thanks,
> Webster Homer
>
> On Mon, May 22, 2017 at 4:12 PM, Erick Erickson <erickerick...@gmail.com>
> wrote:
>
>> You can ping individual replicas by addressing a specific core
>> directly and setting distrib=false, something like:
>>
>>      http://SOLR_NODE:port/solr/collection1_shard1_replica1/query?distrib=false&q=...
>>
>> But one thing to check first is that you've committed. I'd:
>> 1> turn off indexing on the source cluster.
>> 2> wait until CDCR has caught up (if necessary).
>> 3> issue a hard commit on the target (example below).
>> 4> _then_ see if the counts are what you expect.
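>>
>> For step 3, any node in the target cluster will do; something like
>> this (the collection name is a placeholder):
>>
>>      curl 'http://TARGET_NODE:port/solr/<collection>/update?commit=true'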
>>
>> Because autocommit settings can fire at different clock times even for
>> replicas of the same shard, replicas can disagree transiently; the
>> steps above make it easier to tell whether that's all you're seeing.
>> The other thing I've seen people do is have a timestamp on the docs
>> set to NOW (there's an update processor that can do this; sketch
>> below). Then when you check for consistency you can use
>> fq=timestamp:[* TO NOW - (some interval significantly longer than your
>> autocommit interval)].
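>>
>> The processor I mean is TimestampUpdateProcessorFactory; a minimal
>> sketch, with "timestamp" as a stand-in field name:
>>
>>      <processor class="solr.TimestampUpdateProcessorFactory">
>>        <str name="fieldName">timestamp</str>
>>      </processor>
>>
>> and a concrete filter using Solr date math would look like:
>>
>>      fq=timestamp:[* TO NOW-10MINUTES]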
>>
>> bq: Is there a way to recover when a shard has inconsistent replicas.
>> If I use the delete replica API call to delete one of them and then use add
>> replica to create it from scratch will it auto-populate from the other
>> replica in the shard?
>>
>> Yes. Whenever you ADDREPLICA it'll catch itself up from the leader
>> before becoming active. It'll have to copy the _entire_ index from the
>> leader, so you'll see network traffic spike.
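>>
>> A sketch of the two Collections API calls (collection and replica
>> names are placeholders):
>>
>>      http://SOLR_NODE:port/solr/admin/collections?action=DELETEREPLICA&collection=<coll>&shard=shard1&replica=core_node3
>>      http://SOLR_NODE:port/solr/admin/collections?action=ADDREPLICA&collection=<coll>&shard=shard1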
>>
>> Best,
>> Erick
>>
>> On Mon, May 22, 2017 at 1:41 PM, Webster Homer <webster.ho...@sial.com>
>> wrote:
>> > I have a SolrCloud collection with 2 shards and 4 replicas. The
>> > replicas for shard 1 have different numbers of records, so the same
>> > query can return different counts depending on which replica serves it.
>> >
>> > I am not certain how this occurred; it happened in a collection that
>> > was a CDCR target.
>> >
>> > Is there a way to limit a search to a specific replica of a shard? We
>> > want to understand the differences.
>> >
>> > Is there a way to recover when a shard has inconsistent replicas?
>> > If I use the delete replica API call to delete one of them and then
>> > use add replica to create it from scratch, will it auto-populate from
>> > the other replica in the shard?
>> >
>> > Thanks,
>> > Webster
>> >
>
