If you're talking about the core admin API, it operates on a purely local
basis; that code is completely unaware of anything having to do with
collections.

So it works, but the chance of fouling up one core and then having
questions is pretty high....

Best,
Erick

On Tue, Mar 15, 2016 at 12:05 PM, Nick Vasilyev
<nick.vasily...@gmail.com> wrote:
> I had another collection I was running into this issue with, so I decided
> to play around with it. This one had active indexing going on, so I was
> able to confirm how the counts get updated. Basically, it looks like
> clicking the reload button will only send a commit to that one core; it
> will not be propagated to the other shards or to the same shard on the
> other replica. A full commit, update?commit=true&openSearcher=true, works
> fine. I know that the reload button was not intended to issue commits,
> but it's quicker than typing out the command.
>
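If the goal is just a quick reload everywhere, the Collections API RELOAD
action applies to every replica of every shard, unlike reloading a single
core from the admin UI. A minimal sketch over HTTP, assuming Python's
requests library and a node reachable at localhost:8983 (host and port are
assumptions):

    import requests

    # Reload the whole "products" collection; this is applied to all
    # replicas of all shards, and each reloaded core opens a new searcher.
    resp = requests.get(
        "http://localhost:8983/solr/admin/collections",
        params={"action": "RELOAD", "name": "products", "wt": "json"},
    )
    resp.raise_for_status()
    print(resp.json())
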
> On Tue, Mar 15, 2016 at 12:24 PM, Nick Vasilyev <nick.vasily...@gmail.com>
> wrote:
>
>> Yea, the code sends actual commits, but I hate typing so usually just
>> click the reload button unless it's production.
>> On Mar 15, 2016 12:22 PM, "Erick Erickson" <erickerick...@gmail.com>
>> wrote:
>>
>>> bq: Not sure what the issue was, in previous versions of Solr,
>>> clicking reload would send a commit to all replicas, right
>>>
>>> Reloading doesn't really have anything to do with commits. Reloading
>>> would certainly cause a new searcher to be opened and thus would pick
>>> up any changes that had been hard-committed (openSearcher=false), but
>>> that's a complete side effect. Simply issuing a commit on the URL to
>>> the _collection_ will cause commits to happen on all replicas, as:
>>>
>>> blah/solr/collection/update?commit=true
>>>
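A minimal sketch of that collection-level commit over HTTP, assuming
Python's requests library and a node at localhost:8983 (host and port are
assumptions); the commit is forwarded to every replica of every shard:

    import requests

    # Commit against the collection (not an individual core) and open a
    # new searcher so the committed documents become visible everywhere.
    resp = requests.get(
        "http://localhost:8983/solr/products/update",
        params={"commit": "true", "openSearcher": "true", "wt": "json"},
    )
    resp.raise_for_status()
    print(resp.json())
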
>>> Best,
>>> Erick
>>>
>>> On Tue, Mar 15, 2016 at 9:11 AM, Nick Vasilyev <nick.vasily...@gmail.com>
>>> wrote:
>>> > I reloaded the collection and ran a distrib=false query for several
>>> > shards on both replicas. The counts matched exactly.
>>> >
>>> > I then reloaded the second replica (through the UI) and now it seems
>>> > like it is working fine; I am getting consistent matches.
>>> >
>>> > Not sure what the issue was. In previous versions of Solr, clicking
>>> > reload would send a commit to all replicas, right? Is that still the
>>> > case?
>>> >
>>> >
>>> >
>>> > On Tue, Mar 15, 2016 at 11:53 AM, Erick Erickson <erickerick...@gmail.com>
>>> > wrote:
>>> >
>>> >> This is very strange. What are the results you get when
>>> >> you compare replicas in the _same_ shard? It doesn't really
>>> >> mean anything when you say
>>> >> "shard1 has X docs, shard2 has Y docs". The only way
>>> >> you should be getting different results from
>>> >> the match-all-docs query is if different replicas within the
>>> >> _same_ shard have different counts.
>>> >>
>>> >> And just as a sanity check, issue a commit. It's highly unlikely
>>> >> that you have uncommitted changes, but it never hurts to try.
>>> >>
>>> >> All distributed queries should have a sub-query sent to one
>>> >> replica of each shard; is that what you're seeing? And I'd ping
>>> >> the cores directly rather than provide shards parameters,
>>> >> something like:
>>> >>
>>> >> blah blah blah/products/query/shard1_core3/query?q=*:*. That
>>> >> addresses the specific core rather than relying on any internal
>>> >> query routing logic.
>>> >>
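To compare replicas of the same shard that way, a sketch of the per-core
distrib=false check; the host, port, and core name are assumptions taken
from the logs later in the thread:

    import requests

    # Ask one specific core for its own count; distrib=false prevents the
    # request from fanning out to the rest of the collection.
    resp = requests.get(
        "http://192.168.1.211:9000/solr/products_shard2_replica2/select",
        params={"q": "*:*", "distrib": "false", "rows": 0, "wt": "json"},
    )
    print(resp.json()["response"]["numFound"])
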
>>> >> Best,
>>> >> Erick
>>> >>
>>> >> On Tue, Mar 15, 2016 at 8:43 AM, Nick Vasilyev <nick.vasily...@gmail.com>
>>> >> wrote:
>>> >> > Hello,
>>> >> >
>>> >> > I have a brand new installation of Solr 5.4.1 and I am running
>>> >> > into a strange problem with one of my collections. Collection
>>> >> > *products* has 5 shards and a replication factor of two. Both
>>> >> > replicas are up and show green status on the Cloud page in the UI.
>>> >> >
>>> >> > When I run a default search on the query page (q=*:*), I always
>>> >> > get a different numFound, although there is no active indexing and
>>> >> > everything is committed. I checked the logs, and it looks like
>>> >> > every search is sent to a different set of shards. Below, search 1
>>> >> > went to shards 5, 2, and 4; search 2 went to shards 5, 3, and 1;
>>> >> > and search 3 went to shards 3, 4, 1, and 5.
>>> >> >
>>> >> > To confirm this, I ran a &distrib=false query on shard 5 and got
>>> >> > 8,928,379 items; 8,917,318 for shard 2; and 9,005,295 for shard 4.
>>> >> > The count from the shard 2 distrib=false query (8,917,318) did not
>>> >> > match what the distributed queries reported in the logs. Here is
>>> >> > the log entry for the query.
>>> >> >
>>> >> > 214467874 INFO  (qtp1013423070-21019) [c:products s:shard2 r:core_node7 x:products_shard2_replica2] o.a.s.c.S.Request [products_shard2_replica2] webapp=/solr path=/select params={q=*:*&distrib=false&indent=true&wt=json&_=1458056340020} hits=8917318 status=0 QTime=0
>>> >> >
>>> >> >
>>> >> > Here are the logs from other queries.
>>> >> >
>>> >> > Search 1 - numFound 18309764
>>> >> >
>>> >> > 213941984 INFO  (qtp1013423070-21046) [c:products s:shard5 r:core_node4 x:products_shard5_replica2] o.a.s.c.S.Request [products_shard5_replica2] webapp=/solr path=/select params={df=text&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://192.168.1.211:9000/solr/products_shard5_replica2/|http://192.168.1.212:9000/solr/products_shard5_replica1/&rows=10&version=2&q=*:*&NOW=1458055805759&isShard=true&wt=javabin&_=1458055814096} hits=8928379 status=0 QTime=3
>>> >> > 213941985 INFO  (qtp1013423070-21028) [c:products s:shard4 r:core_node6 x:products_shard4_replica2] o.a.s.c.S.Request [products_shard4_replica2] webapp=/solr path=/select params={df=text&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://192.168.1.212:9000/solr/products_shard4_replica1/|http://192.168.1.211:9000/solr/products_shard4_replica2/&rows=10&version=2&q=*:*&NOW=1458055805759&isShard=true&wt=javabin&_=1458055814096} hits=9005295 status=0 QTime=3
>>> >> > 213942045 INFO  (qtp1013423070-21042) [c:products s:shard2 r:core_node7 x:products_shard2_replica2] o.a.s.c.S.Request [products_shard2_replica2] webapp=/solr path=/select params={q=*:*&indent=true&wt=json&_=1458055814096} hits=18309764 status=0 QTime=81
>>> >> >
>>> >> >
>>> >> > Search 2 - numFound 27072144
>>> >> > 213995779 INFO  (qtp1013423070-21046) [c:products s:shard5 r:core_node4 x:products_shard5_replica2] o.a.s.c.S.Request [products_shard5_replica2] webapp=/solr path=/select params={df=text&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://192.168.1.211:9000/solr/products_shard5_replica2/|http://192.168.1.212:9000/solr/products_shard5_replica1/&rows=10&version=2&q=*:*&NOW=1458055859563&isShard=true&wt=javabin&_=1458055867894} hits=8928379 status=0 QTime=1
>>> >> > 213995781 INFO  (qtp1013423070-20985) [c:products s:shard3 r:core_node10 x:products_shard3_replica2] o.a.s.c.S.Request [products_shard3_replica2] webapp=/solr path=/select params={df=text&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://192.168.1.212:9000/solr/products_shard3_replica1/|http://192.168.1.211:9000/solr/products_shard3_replica2/&rows=10&version=2&q=*:*&NOW=1458055859563&isShard=true&wt=javabin&_=1458055867894} hits=8980542 status=0 QTime=3
>>> >> > 213995785 INFO  (qtp1013423070-21042) [c:products s:shard1 r:core_node9 x:products_shard1_replica2] o.a.s.c.S.Request [products_shard1_replica2] webapp=/solr path=/select params={df=text&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://192.168.1.212:9000/solr/products_shard1_replica1/|http://192.168.1.211:9000/solr/products_shard1_replica2/&rows=10&version=2&q=*:*&NOW=1458055859563&isShard=true&wt=javabin&_=1458055867894} hits=8914801 status=0 QTime=3
>>> >> > 213995798 INFO  (qtp1013423070-21028) [c:products s:shard2 r:core_node7 x:products_shard2_replica2] o.a.s.c.S.Request [products_shard2_replica2] webapp=/solr path=/select params={q=*:*&indent=true&wt=json&_=1458055867894} hits=27072144 status=0 QTime=30
>>> >> >
>>> >> >
>>> >> > Search 3 - numFound 35953734
>>> >> >
>>> >> > 214022457 INFO  (qtp1013423070-21019) [c:products s:shard3 r:core_node10 x:products_shard3_replica2] o.a.s.c.S.Request [products_shard3_replica2] webapp=/solr path=/select params={df=text&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://192.168.1.212:9000/solr/products_shard3_replica1/|http://192.168.1.211:9000/solr/products_shard3_replica2/&rows=10&version=2&q=*:*&NOW=1458055886247&isShard=true&wt=javabin&_=1458055894580} hits=8980542 status=0 QTime=0
>>> >> > 214022458 INFO  (qtp1013423070-21036) [c:products s:shard4 r:core_node6 x:products_shard4_replica2] o.a.s.c.S.Request [products_shard4_replica2] webapp=/solr path=/select params={df=text&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://192.168.1.212:9000/solr/products_shard4_replica1/|http://192.168.1.211:9000/solr/products_shard4_replica2/&rows=10&version=2&q=*:*&NOW=1458055886247&isShard=true&wt=javabin&_=1458055894580} hits=9005295 status=0 QTime=1
>>> >> > 214022459 INFO  (qtp1013423070-21046) [c:products s:shard1 r:core_node9 x:products_shard1_replica2] o.a.s.c.S.Request [products_shard1_replica2] webapp=/solr path=/select params={df=text&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://192.168.1.212:9000/solr/products_shard1_replica1/|http://192.168.1.211:9000/solr/products_shard1_replica2/&rows=10&version=2&q=*:*&NOW=1458055886247&isShard=true&wt=javabin&_=1458055894580} hits=8914801 status=0 QTime=0
>>> >> > 214022460 INFO  (qtp1013423070-20985) [c:products s:shard5 r:core_node4 x:products_shard5_replica2] o.a.s.c.S.Request [products_shard5_replica2] webapp=/solr path=/select params={df=text&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=http://192.168.1.211:9000/solr/products_shard5_replica2/|http://192.168.1.212:9000/solr/products_shard5_replica1/&rows=10&version=2&q=*:*&NOW=1458055886247&isShard=true&wt=javabin&_=1458055894580} hits=8928379 status=0 QTime=1
>>> >> > 214022471 INFO  (qtp1013423070-21043) [c:products s:shard2 r:core_node7 x:products_shard2_replica2] o.a.s.c.S.Request [products_shard2_replica2] webapp=/solr path=/select params={q=*:*&indent=true&wt=json&_=1458055894580} hits=35953734 status=0 QTime=20
>>> >> >
>>> >> >
>>> >> > I would really like to avoid re-indexing if possible. Can someone
>>> >> > provide a bit of info on what is happening?
>>> >>
>>>
>>
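For anyone hitting the same symptom, a rough diagnostic sketch along the
lines Erick suggests: query every replica of every shard with
distrib=false and flag shards whose replicas disagree. The hosts and the
core-naming pattern are assumptions inferred from the logs above:

    import requests

    # In the logs above, replica1 cores answer on .212 and replica2 cores
    # on .211; both hosts and core names are assumptions from this thread.
    REPLICA_HOSTS = {1: "http://192.168.1.212:9000", 2: "http://192.168.1.211:9000"}

    for shard in range(1, 6):  # collection "products" has 5 shards
        counts = {}
        for replica, host in REPLICA_HOSTS.items():
            core = "products_shard%d_replica%d" % (shard, replica)
            resp = requests.get(
                "%s/solr/%s/select" % (host, core),
                params={"q": "*:*", "distrib": "false", "rows": 0, "wt": "json"},
            )
            counts[core] = resp.json()["response"]["numFound"]
        if len(set(counts.values())) > 1:
            print("MISMATCH in shard%d: %s" % (shard, counts))
        else:
            print("shard%d OK: %d docs" % (shard, list(counts.values())[0]))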
