There may be other ways, but the easiest is to write a script that gets the
cluster status; for each replica of each collection you will have details like:
"collections":{
  "collection1":{
    "pullReplicas":"0",
    "replicationFactor":"1",
    "shards":{
      "shard1":{
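A minimal sketch of such a script, assuming the Collections API CLUSTERSTATUS endpoint; the host, port, and replica names below are placeholders, and the parsing function is demonstrated against a hypothetical response fragment so it runs standalone:

```python
import json
from urllib.request import urlopen

def replica_states(cluster_status):
    """Flatten CLUSTERSTATUS JSON into (collection, shard, replica, state) tuples."""
    out = []
    for coll, cdata in cluster_status["cluster"]["collections"].items():
        for shard, sdata in cdata["shards"].items():
            for replica, rdata in sdata["replicas"].items():
                out.append((coll, shard, replica, rdata["state"]))
    return out

# Live call (hypothetical host/port):
# with urlopen("http://localhost:8983/solr/admin/collections"
#              "?action=CLUSTERSTATUS&wt=json") as r:
#     status = json.load(r)

# Hypothetical response fragment mirroring the structure quoted above:
status = {"cluster": {"collections": {
    "collection1": {
        "pullReplicas": "0",
        "replicationFactor": "1",
        "shards": {"shard1": {"replicas": {
            "core_node1": {"state": "active"},
            "core_node2": {"state": "recovering"},
        }}},
    }
}}}

for coll, shard, replica, state in replica_states(status):
    print(coll, shard, replica, state)
```

Any replica not in the "active" state is a candidate for being out of sync.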
Glad you nailed the out-of-sync one :)
> On Aug 12, 2020, at 4:38 PM, Jae Joo wrote:
>
> I found the root cause. I have 3 collections assigned to an alias and one
> of them is NOT synched by the alias.
>
> Collection 1
> Collection 2
> Collection 3
I found the root cause. I have 3 collections assigned to an alias and one
of them is NOT synched by the alias.
Collection 1
Collection 2
Collection 3
On Wed, Aug 12, 2020 at 7:29 PM Jae Joo wrote:
> Good question. How can I validate if the replicas are all synched?
Different absolute scores from different collections are OK, because
the exact values depend on the number of deleted documents.
For the set of documents that are in different orders from different
collections, are the scores of that set identical? If they are, then it
is normal to have a different order.
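That check can be sketched with hypothetical (id, score) result lists: if both collections assign the same scores to the same documents, and any reordering stays inside runs of equal scores, the difference is just tie-breaking and is normal. The doc ids and scores below are made up:

```python
def same_scores(results_a, results_b):
    """True if both result lists assign identical scores to identical doc ids."""
    return dict(results_a) == dict(results_b)

def reorder_only_within_ties(results_a, results_b):
    """True if any order difference is confined to runs of equal scores.

    Assumes both lists are already sorted by descending score: then equal
    score sequences plus an identical id->score mapping mean the lists can
    differ only inside equal-score groups, i.e. only by tie-breaking.
    """
    if not same_scores(results_a, results_b):
        return False
    scores_a = [s for _, s in results_a]
    scores_b = [s for _, s in results_b]
    return scores_a == scores_b

a = [("d1", 3.2), ("d2", 1.5), ("d3", 1.5)]
b = [("d1", 3.2), ("d3", 1.5), ("d2", 1.5)]  # d2/d3 swapped, same scores
print(reorder_only_within_ties(a, b))  # True -- harmless tie-break reordering
```

If this returns False, the scores themselves differ, which points back at differing deleted-document counts or an out-of-sync replica.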
Good question. How can I validate if the replicas are all synched?
On Wed, Aug 12, 2020 at 7:28 PM Jae Joo wrote:
> numFound is same but different score.
>
> On Wed, Aug 12, 2020 at 6:01 PM Aroop Ganguly wrote:
>
>> Try a simple test of querying each collection 5 times
numFound is the same but the scores are different.
On Wed, Aug 12, 2020 at 6:01 PM Aroop Ganguly
wrote:
> Try a simple test of querying each collection 5 times in a row; if the
> numFound are different for a single collection within those 5 calls then you
> have it.
> Please try it, what you may think is sync’d may actually not be.
Try a simple test of querying each collection 5 times in a row; if the numFound
values differ for a single collection within those 5 calls then you have it.
Please try it, what you may think is sync’d may actually not be. How do you
validate correct sync?
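That repeated-query test can be sketched as below; the base URL, collection name, and counts are placeholders, and the live query is left commented out so the comparison logic runs standalone:

```python
import json
from urllib.request import urlopen

def num_found(base_url, collection, q="*:*"):
    """Return numFound for one query against one collection (hypothetical URL layout)."""
    url = f"{base_url}/solr/{collection}/select?q={q}&rows=0&wt=json"
    with urlopen(url) as r:
        return json.load(r)["response"]["numFound"]

def out_of_sync(counts):
    """True if repeated numFound samples for one collection disagree,
    meaning different replicas served different views of the index."""
    return len(set(counts)) > 1

# Live sampling (hypothetical host):
# counts = [num_found("http://localhost:8983", "collection1") for _ in range(5)]

print(out_of_sync([100, 100, 99, 100, 100]))   # True  -- replicas disagree
print(out_of_sync([100, 100, 100, 100, 100]))  # False -- consistent
```

Note this only catches divergence when requests happen to land on different replicas, so more samples give a stronger signal.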
> On Aug 12, 2020, at 10:55 AM, Jae Joo wrote:
Are the scores the same for the documents that are ordered differently?
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Aug 12, 2020, at 10:55 AM, Jae Joo wrote:
>
> The replications are all synched and there are no updates while I was
> testing.
>
The replicas are all synched and there were no updates while I was
testing.
On Wed, Aug 12, 2020 at 1:49 PM Aroop Ganguly
wrote:
> Most likely you have 1 or more collections behind the alias that have
> replicas out of sync :)
>
> Try querying each collection to find the one out of sync.
>
>
Most likely you have 1 or more collections behind the alias that have replicas
out of sync :)
Try querying each collection to find the one out of sync.
> On Aug 12, 2020, at 10:47 AM, Jae Joo wrote:
>
> I have 10 collections in single alias and having different result sets for
> every time wi
We are actually very close to doing what Shawn has suggested.
Emir has a good point about new collections failing on deletes/updates of
older documents which were not present in the new collection. But even if this
feature can be implemented for an append-only log, it would make a good
feature IMO.
This approach could work only if it is an append-only index. In case you have
updates/deletes, you have to process them in order, otherwise you will get
incorrect results. I am thinking that is one of the reasons why it might not be
supported, since it is not too useful.
Emir
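Emir's ordering point can be illustrated with a toy replay of update/delete events; the operation names and doc ids here are made up, not Solr's transaction-log format:

```python
def apply_in_order(ops):
    """Replay (op, id, value) events; later ops win, as in an index."""
    docs = {}
    for op, doc_id, value in ops:
        if op == "update":
            docs[doc_id] = value
        elif op == "delete":
            docs.pop(doc_id, None)
    return docs

ops = [("update", "d1", "v1"), ("update", "d1", "v2"), ("delete", "d1", None)]
print(apply_in_order(ops))                  # {} -- d1 correctly gone
print(apply_in_order(list(reversed(ops))))  # {'d1': 'v1'} -- stale doc resurrected
```

Replaying the same events out of order resurrects a deleted document with a stale value, which is exactly why anything but an append-only stream must be processed in order.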
On 11/9/2017 11:09 AM, S G wrote:
> However, re-ingestion takes several hours to complete and during that time,
> the customer has to write to both the collections - previous collection and
> the one being bootstrapped.
> This dual-write is harder to do from the client side (because client needs
>
Aliases can already point to multiple collections, have you just tried that?
I'm not totally sure what the behavior would be, but nothing you've written
indicates you tried so I thought I'd point it out.
It's not clear to me how useful this is though, or what failure messages
are returned. Or how
Thanks for the great advice Erick. I will experiment with your suggestions
and see how it goes!
Chris
On Sun, May 7, 2017 at 12:34 AM, Erick Erickson
wrote:
> Well, you've been doing your homework ;).
>
> bq: I am a little confused on this statement you made:
>
> > Plus you can't commit
> > ind
Well, you've been doing your homework ;).
bq: I am a little confused on this statement you made:
> Plus you can't commit
> individually, a commit on one will _still_ commit on all so you're
> right back where you started.
Never mind. autocommit kicks off on a per replica basis. IOW, when a
new d
Hi Erick,
Thanks for the reply, I really appreciate it.
To answer your questions, we have a little over 300 tenants, and a couple
of different collections, the largest of which has ~11 million documents
(so not terribly large). We are currently running standard Solr with simple
master/slave replication.
Well, it's not either/or. And you haven't said how many tenants we're
talking about here. Solr startup times for a single _instance_ of Solr
when there are thousands of collections can be slow.
But note what I am talking about here: A single Solr on a single node
where there are hundreds and hundreds of collections.
You want to create both under different root nodes in zk, so that you would have
/cluster1
and
/cluster2
Then you start up with addresses of:
> zookeeper:{port1},zookeeper:{port2}/cluster1
> zookeeper:{port1},zookeeper:{port2}/cluster2
If you are using one of the bootstrap calls on startup, it
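The chroot pattern above can be sketched as a small helper; the host names and ports are placeholders. The key detail is that the chroot is appended exactly once, after the last host:port pair, because it applies to the whole ensemble rather than to each host:

```python
def zk_connect_string(hosts, chroot):
    """Build a ZooKeeper connection string with a chroot suffix.

    The chroot applies to the whole ensemble, so it is appended once,
    after the last host:port pair -- not once per host.
    """
    return ",".join(hosts) + chroot

hosts = ["zookeeper:2181", "zookeeper:2182"]  # hypothetical ports
print(zk_connect_string(hosts, "/cluster1"))  # zookeeper:2181,zookeeper:2182/cluster1
print(zk_connect_string(hosts, "/cluster2"))  # zookeeper:2181,zookeeper:2182/cluster2
```

The same ensemble can then serve both clusters, each isolated under its own root node.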
Ok, I'm a little confused.
I had originally bootstrapped zookeeper using a solr.xml file which
specified the following cores:
cats
dogs
birds
In my /solr/#/cloud?view=tree view I see that I have
/collections
/cats
/dogs
/birds
/configs
/cats
/dogs
/birds
When I launch a new server and co
Yes, but you'll need to append a sub path on to the zookeeper path for your
second cluster. For ex:
zookeeper1.example.com,zookeeper2.example.com,zookeeper3.example.com/subpath
On Mar 8, 2013 6:46 PM, "jimtronic" wrote:
> Hi,
>
> I have a solrcloud cluster running several cores and pointing at o
: I have an application that manages documents in real-time into
: collections where a given document can live in more than one collection
: and multiple users can create collections on the fly.
: I get from reading that it's better to have a single index over all
: documents than to have one per
: different field sets under solr. Would I have to have multiple
: implementations of solr running, or can I have more than one schema.xml
: file per "collection" ?
currently the only supported way to do this is to run multiple instances of
the solr.war ... if you look at the various container spec