devices.
-Original Message-
From: Ere Maijala
Sent: 01 March 2021 12:53
To: solr-user@lucene.apache.org
Subject: Re: Potential Slow searching for unified highlighting on Solr 8.8.0/8.8.1
Hi,
Whoa, thanks for the heads-up! You
--
Ere Maijala
Kansalliskirjasto / The National Library of Finland
0.212.305:8983_solr into
leader-initiated recovery.
So the node in question isn't responding quickly enough to HTTP requests
and gets put into recovery. The log for the recovering node starts too
late, so I can't say anything about what happened before 14:42:43.943
that led to recovery.
--Ere
>
>>>> Number of documents / Size(GB), per shard and replica:
>>>> collection1: 26913364 / 201, 26913379 / 202, 26913380 / 198, 26913379 / 198
>>>> collection2: 13934360 / 310, 13934367 / 310, 13934368 / 219, 13934367 / 219
>>>> collection3: 351539689 / 73.5, 351540040 / 73.5, 351540136 / 75.2, 351539722 / 75.2
>>>>
>>>> My server configurations are below:
>>>>
>>>>                                         Server1          Server2
>>>> CPU (both servers): Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz, 2301 MHz,
>>>>                     10 core(s), 20 logical processor(s)
>>>> Hard disk (GB):                         3845 (3.84 TB)   3485 (3.48 TB)
>>>> Total memory (GB):                      320              320
>>>> Shard1 allocated memory (GB):           55
>>>> Shard2 replica allocated memory (GB):   55
>>>> Shard2 allocated memory (GB):           55
>>>> Shard1 replica allocated memory (GB):   55
>>>> Other applications' memory (GB):        60               22
>>>> Number of other applications:           11               7
>>>>
>>>>
>>>> Sometimes one of the replicas goes into recovery mode. Why does a replica
>>>> go into recovery? Is it due to heavy search load, heavy update/insert load,
>>>> or long GC pauses? Depending on the cause, what should we change in the
>>>> configuration? Should we increase the number of shards to address the
>>>> recovery issue?
>>>>
>>>> Regards,
>>>> Vishal Patel
>>>>
>>>
>>
>
>
ve document. I would
>>>> like a way so that the document matches if and only if the query is also
>>>> "united states of america" (case-insensitive).
>>>>
>>>> Document field type: TextField
>>>> Index Analyzer: TokenizerChain
>>>> Index Tokenizer: StandardTokenizerFactory
>>>> Index Token Filters: StopFilterFactory, LowerCaseFilterFactory,
>>>> SnowballPorterFilterFactory
>>>> The Query Analyzer / Tokenizer / Token Filters are the same as the Index
>>>> ones above.
>>>>
>>>> FYI I'm relatively novice at Solr / Lucene / Search.
>>>>
>>>> Much appreciated
>>>> Omer
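A sketch of the problem, in plain Python rather than real Lucene code: the toy "analyzers" below are crude stand-ins for the StandardTokenizer + Stop + LowerCase + SnowballPorter chain described above, and the stopword list is an assumption. The point is that a stemming/stopword chain can't distinguish the full phrase from partial or variant queries, whereas a sibling field analyzed with only KeywordTokenizer + LowerCaseFilter (a common approach, populated e.g. via copyField) matches only the complete value, case-insensitively:

```python
# Toy stand-ins for the analysis chains (assumptions, not the real filters).
STOPWORDS = {"of", "the", "a"}  # assume "of" is in stopwords.txt

def stemming_analyze(text):
    tokens = [t.lower() for t in text.split()]
    tokens = [t for t in tokens if t not in STOPWORDS]   # StopFilter drops "of"
    return [t.rstrip("s") for t in tokens]               # crude stand-in for stemming

def exact_analyze(text):
    # KeywordTokenizer + LowerCaseFilter: the whole value is one token
    return [text.lower()]

doc = "United States of America"

# The stemming chain can't tell the full phrase from a variant query:
assert stemming_analyze(doc) == stemming_analyze("united state americas")

# A lowercase keyword field matches only the complete value, case-insensitively:
assert exact_analyze(doc) == exact_analyze("UNITED STATES OF AMERICA")
assert exact_analyze(doc) != exact_analyze("united states")
```

Querying the exact-match field with the whole (quoted) value then gives the "only and only if" behavior asked for.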
>>>
>>
>>
>
ant to avoid the indexing server handling queries.
It can also be used to prefer local replicas to minimize network access.
--Ere
>>> Heap [ -> 32GB]
>>> G1GC settings
>>>
>>> * Solr:
>>> (Jetty) MaxThreads [ -> 2]
>>>
>>>
>>> And the other info is as follows.
>>>
>>> CPU : 16 cores
>>> RAM : 128 GB
>>> Disk : SSD 500GB
>>> NIC : 10Gbps(maybe)
>>> OS : Ubuntu 14.04
>>> JVM : OpenJDK 1.8.0u191
>>> Solr : 6.2.1
>>> Index size : about 60GB
>>>
>>> Any insights will be appreciated.
>>>
>>> Thanks and regards,
>>> Yasufumi.
>>>
>>
>
ete the data directory, but later on stop replicating at some random time,
> which is different for each shard.
>
> Is there something i should change in state.json, and can it just be
> reuploaded to ZK?
>
> Thanks,
> Markus
>
little more than four minutes.
>>
>> https://www.dropbox.com/s/vkxfoihh12brbnr/gc-analysis-kojo2.png?dl=0
>>
>> What happened at approximately 10:55:15 PM on the day that the smaller
>> log was produced? Whatever happened caused Solr's heap usage to
>> skyrocket and require more than 6GB.
>>
>> Thanks,
>> Shawn
>>
>
me reason.
>
> On 7/24/19, 2:06 AM, "Ere Maijala" wrote:
>
> I think you might be missing the d parameter in geofilt. I'm not sure if
> geofilt actually does anything useful without it.
>
> Regards,
> Ere
>
> Sanders, Marsh
42704 -84.4412613))"],
> "ID": "284600596",
> "geodist()": 20015.115
> }
> ]
>
>
> If anyone has experience in this area, can you point us in the right
> direction about what we’re doing incorrectly with either how we are indexing
> the data and/or how we are querying against the polygons?
>
> Thank you,
>
>
> --
> Marshall Sanders
> Principal Software Engineer
> Autotrader.com
> marshall.sande...@coxautoinc.com<mailto:marshall.sande...@coxautoinc.com>
>
>
r just
> simpler to use?
>
> Le lun. 11 févr. 2019 à 09:58, Ere Maijala a
> écrit :
>
>> Please note that mapping characters works well for a small set of
>> characters, but if you want full UNICODE normalization, take a look at
>> the ICUFoldingFilter:
>>
cause mm=757 looks really wrong. From the docs:
>>>> Defines the minimum number of clauses that must match, regardless of
>>>> how many clauses there are in total.
>>>>
>>>> edismax is used much more than dismax as it's more flexible, but
>>>> that's not germane here.
>>>>
>>>> finally, try adding &debug=query to the URL to see exactly how the
>>>> query is parsed.
>>>>
>>>> Best,
>>>> Erick
>>>>
>>>> On Mon, Feb 4, 2019 at 9:09 AM SAUNIER Maxence
>> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> How can I ignore accents in the query results?
>>>>>
>>>>> Request :
>>>>> http://*:8983/solr/***/select?defType=dismax&q=je+suis+avarié&qf=title%5e20+subject%5e15+category%5e1+content%5e0.5&mm=757
>>>>>
>>>>> I want to get documents matching both avarié and avarie.
>>>>>
>>>>> I have add this in my schema :
>>>>>
>>>>> {
>>>>>   "name": "string",
>>>>>   "positionIncrementGap": "100",
>>>>>   "analyzer": {
>>>>>     "filters": [
>>>>>       { "class": "solr.LowerCaseFilterFactory" },
>>>>>       { "class": "solr.ASCIIFoldingFilterFactory" },
>>>>>       {
>>>>>         "class": "solr.EdgeNGramFilterFactory",
>>>>>         "minGramSize": "3",
>>>>>         "maxGramSize": "50"
>>>>>       }
>>>>>     ],
>>>>>     "tokenizer": {
>>>>>       "class": "solr.KeywordTokenizerFactory"
>>>>>     }
>>>>>   },
>>>>>   "stored": true,
>>>>>   "indexed": true,
>>>>>   "sortMissingLast": true,
>>>>>   "class": "solr.TextField"
>>>>> },
>>>>>
>>>>> But it is not working.
>>>>>
>>>>> Thanks.
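The effect ASCIIFoldingFilterFactory should have can be approximated in plain Python with Unicode decomposition (this is an illustration, not the filter's actual implementation). The usual catches here: the folding filter must be in both the index-time and query-time analyzers of the fields actually searched via qf (title, subject, etc.), and the collection must be reindexed after the schema change:

```python
import unicodedata

def fold(text):
    # Rough approximation of ASCIIFoldingFilter for accented Latin characters:
    # decompose to NFD, then drop the combining accent marks.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# With folding applied at both index and query time, the two forms unify:
assert fold("avarié") == "avarie"
assert fold("Avarié".lower()) == "avarie"
```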
>>>>
>>
>
" indexed="true" required="true"
stored="true") but the admin UI shows DocValues as enabled, so I guess
this is by default. Is the solution to add "docValues=false" in the schema?
On 12.11.18 10:43, Ere Maijala wrote:
Sofiya,
Do you have docVa
Could it be related to reloading a collection? I need to do some
testing, but it just occurred to me that reload was done at least once
during the period the cluster had been up.
Regards,
Ere
Ere Maijala kirjoitti 30.10.2018 klo 12.03:
Hi,
We had the same happen with PULL replicas with Solr
ions?
--
Best regards, Vadim
Hi,
In addition to what others wrote already, there are a couple of things
that might trigger sudden memory allocation surge that you can't really
account for:
1. Deep paging, especially in a sharded index. Don't allow it and you'll
be much happier.
2. Faceting without docValues
This was used with a very specific usage pattern, so it probably doesn't
apply in most situations.
Regards,
Ere
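On point 2, the usual remedy is to enable docValues on the faceted fields so that faceting reads the column-oriented on-disk structures instead of building a large on-heap field cache; this requires reindexing. A schema sketch (the field name is illustrative):

```xml
<!-- Illustrative faceted field; docValues moves faceting data off-heap -->
<field name="building" type="string" indexed="true" stored="true" docValues="true"/>
```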
d the term offset as payload for every term and boost on
that, but this should not be necessary since offset info is already part
of
the index?
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
ay also work) or use
something else than Solr. Cursor mark or streaming are great options,
but only if you want to process all the records. Often the deep paging
need is practically the need to see the last results, and that can also
be achieved by allowing reverse sorting.
Regards,
Ere
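The reverse-sorting workaround above can be sketched as follows (plain Python building standard Solr request parameters; the helper itself is hypothetical): to show the "last" page, flip the sort direction and read from the start of the result set instead of using a huge start offset:

```python
def last_page_params(sort_field, direction, rows):
    """Serve a request for the final page of results by inverting the sort
    and reading the first page, instead of start=<huge offset> deep paging."""
    flipped = "desc" if direction == "asc" else "asc"
    return {"sort": f"{sort_field} {flipped}", "start": 0, "rows": rows}

params = last_page_params("publishDate", "asc", 20)
assert params == {"sort": "publishDate desc", "start": 0, "rows": 20}
```

Note that the documents on that page arrive in reversed order, so the client may need to reverse them again before display.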
to design
--
Ere Maijala
Kansalliskirjasto / The National Library of Finland
On 4 June 2018 at 19:43, Ere Maijala wrote:
Hi,
Well, SOLR-11982 adds server-side support for part of what SOLR-8146 aims
to do (shards.preference=replica.location:[something]). It doesn't do
regular expressions or snitches at the moment, though it would be easy to
add. So, it looks to me like
preferred replica for
query/read
I am looking at setting one of the replicas to be the preferred replica for
query/read, and another replica to be used for indexing.
I am using Solr 7.3.1 currently.
Regards,
Edwin
:04 AM, Ere Maijala <ere.maij...@helsinki.fi> wrote:
Server load alone doesn't always indicate the server's ability to serve
queries. Memory and cache state are important too, and they're not as easy to
monitor. Additionally, server load at any single point in time or a short term
a
hile in state=recovering until the switch to
state=active. Is my understanding correct?
2. Is it all worth it? Is there any advantage to running a cluster of 3
TLOGs + 10 PULL replicas vs running 13 TLOG replicas?
On 12 February 2018 at 19:25, Ere Maijala <ere.maij...@helsinki.fi> wrote:
You
any advantage to running a cluster of 3
TLOGs + 10 PULL replicas vs running 13 TLOG replicas?
On 12 February 2018 at 19:25, Ere Maijala <ere.maij...@helsinki.fi> wrote:
Your question about directing queries to PULL replicas only has been
discussed on the list. Look for topic "Limit sear
ur PULL replicas for read queries and TLOG replicas for
writes?
Thanks
Greg
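For later readers: Solr eventually grew a server-side knob for this (the shards.preference parameter, added in Solr 7.4 as far as I know), which can be sent per request or set as a default in solrconfig.xml. A sketch of building such a request in Python:

```python
from urllib.parse import urlencode

# Route the query to PULL replicas when available, falling back to others.
params = {"q": "*:*", "shards.preference": "replica.type:PULL"}
query_string = urlencode(params)
assert "shards.preference=replica.type%3APULL" in query_string
```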
OR+allowedRoles:(6342201+172408+6336860)+OR+
combinationUsers:(4858120))+AND+-blockedUsers:(4858120
+OR+(isLatestRevPrivate:(true)+AND+allowedUsersForPvtRev:(
4858120)+AND+-folderadmin_list:(4858120)))=true=
1516786982952=true=javabin} hits=0 status=0 QTime=83309
R
und) is still 15. I have confirmed this in the code.
Looks like SolrCloud doesn't log the values it's using for things like
zkClientTimeout. I think it should.
https://issues.apache.org/jira/browse/SOLR-11915
Thanks,
Shawn
remove it - could not find explanation why.
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
On 5 Jan 2018, at 15:22, Ere Maijala <ere.maij...@helsinki.fi> wrote:
Hi,
It would be really nice
//sematext.com/
On 5 Jan 2018, at 15:22, Ere Maijala <ere.maij...@helsinki.fi> wrote:
Hi,
It would be really nice to have a server-side option, though. Not everyone uses SolrJ,
and a typical fairly dumb client just queries the server without any understanding of
shards etc. Solr co
re the possible ways to limit search requests to PULL-type replicas only?
At the moment the only solution I found is to append the shards parameter to
each query, but if new shards are added later it requires changing solrconfig.
Is this the only way to do it?
Thank you
Regards
Stanislav
Hi Everyone,
This is a followup on the discussion from September 2017. Since then
I've spent a lot of time gathering a better understanding on docValues
compared to UIF and other stuff related to Solr performance. Here's a
summary of the results based on my real-world experience:
1. Making
iple times without waiting for the
overseer queue to empty could also cause the issue.
Thanks,
Shawn
patches.. :)
--Ere
t specified
- optimized: ~17ms with facet.method=uif
--Ere
Toke Eskildsen kirjoitti 4.9.2017 klo 13.38:
On Mon, 2017-09-04 at 13:21 +0300, Ere Maijala wrote:
Thanks for the insight, Yonik. I can confirm that #2 is true. I ran
and after it completed I was able to retrieve 2000 values in 17ms.
Very interesting. Is this on spinning disks or SSD
Yonik Seeley kirjoitti 1.9.2017 klo 17.03:> On Fri, Sep 1, 2017 at 9:17
AM, Ere Maijala <ere.maij...@helsinki.fi> wrote:
>> I spoke a bit too soon. Now I see why I didn't see any improvement from
>> facet.method=uif before: its performance seems to depend heavily o
facet.limit=2000 takes ~1300ms
So, for some uses it provides a nice boost, but if you need to fetch
more than a few top items, it doesn't perform properly.
Query used was:
q=*:*&rows=0&facet=true&facet.field=building&facet.mincount=1&facet.limit=2000&indent=true&facet.method=uif
--Ere
Ere Maijala kirjoitti 1.9.2017 klo 13.10:
I can confirm that we're seeing
From the fact that someone has tried to access the /etc/passwd file via
your Solr (see all those WARN messages), it seems you have it exposed to
the world, unless of course it's a security scanner you use internally.
The Internet is a hostile place, and the very first thing I would do is
shield Solr
}thematicgroupid
stats.field={!min=true max=true}price
stats.field={!min=true max=true}yearout
I'm running a three-node SolrCloud (tested with versions 6.4.2 and
6.5.0) with 1 shard and 3 replicas, and I'm having trouble getting the
collection backup API to actually do the job. This is the request I use
to initiate the backup:
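For reference, a Collections API backup request has roughly this shape (the collection name and path below are placeholders). One common pitfall on a multi-node cluster is that location must point to a filesystem that is shared by, and writable from, all nodes hosting the collection; otherwise the per-shard backup pieces end up on different machines:

```python
from urllib.parse import urlencode

# Placeholder names: 'mycollection' and the location path are illustrative.
params = {
    "action": "BACKUP",
    "collection": "mycollection",
    "name": "mycollection-backup",
    "location": "/mnt/shared/solr-backups",  # must be visible to all nodes
}
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
assert "action=BACKUP" in url
```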
e been looking around, but cannot really find a reason for the
performance degradation. Does any of you have an idea?
Cheers,
Henrik
Hi all,
this is just a quick heads-up that we've stumbled on serious performance
issues after upgrading to Solr 6.4.1 apparently due to the new metrics
collection causing a major slowdown. I've filed an issue
(https://issues.apache.org/jira/browse/SOLR-10130) about it, but decided
to post
about New York, but it needs to be the best match for the query
“new york new york”.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
On Feb 9, 2017, at 5:18 AM, Ere Maijala <ere.maij...@helsinki.fi> wrote:
Thanks Emir.
I was thinking of somethin
. Implementing such a filter would
require looking backward, which violates the streaming approach of token
filters and implies unpredictable memory usage.
I would do it as part of query preprocessor and not necessarily as part
of Solr.
HTH,
Emir
On 09.02.2017 12:24, Ere Maijala wrote:
Hi,
I just noticed
Hi,
I just noticed that while we use RemoveDuplicatesTokenFilter during
query time, it considers term positions and so doesn't really do anything
if, e.g., the query is 'term term term'. As far as I can see, the term
positions make no difference in a simple non-phrase search. Is there a
built-in way
Hi,
on startup Solr 6.4.0 logs the following warning:
o.a.s.c.SolrResourceLoader Solr loaded a deprecated plugin/analysis
class [solr.SynonymFilterFactory]. Please consult documentation how to
replace it accordingly.
What documentation? As far as I can see, there's nothing at
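For what it's worth, the replacement the deprecation warning alludes to is SynonymGraphFilterFactory; at index time it needs to be followed by FlattenGraphFilterFactory, while at query time it is used alone. A schema sketch (analyzer fragment only; the tokenizer and synonyms.txt are illustrative):

```xml
<analyzer type="index">
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"/>
  <filter class="solr.FlattenGraphFilterFactory"/>
</analyzer>
<analyzer type="query">
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"/>
</analyzer>
```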
i.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
The reason in a nutshell: Disks are EXTREMELY slow. Memory is very fast.
Thanks,
Shawn
Then again, if the load characteristics on the Solr instance differ e.g.
by time of day, G1GC, in my experience, may have trouble adapting. For
instance if your query load reduces drastically during the night, it may
take a while for G1GC to catch up in the morning. What I've found useful
from
days, please make friendly reminders." but doesn't tell how to make
a friendly reminder. Could someone point me to the right direction? Thanks!
--Ere
boosts to the MLT similarity fields instead of
the defaults set in the config when making a MLT query?
I'm currently using a query parser and attempting /select?q={!mlt
qf=foo^10,bar^20,upc^50}/id /etc but it's taking the boost to be part of
the field name.
I've now attached a proposed patch to a pre-existing issue
https://issues.apache.org/jira/browse/SOLR-9267.
--Ere
13.10.2016, 2.19, Ere Maijala kirjoitti:
Answering to myself.. I did some digging and found out that boosts work
if qf is repeated in the local params, at least in Solr 6.2, like
fix for this and will post a Jira issue
with a patch when done. There appears to be another problem with
CloudMLTQParser too where it includes extraneous terms in the final
query, and I'll take a stab at fixing that too.
--Ere
1.8.2016, 9.12, Ere Maijala kirjoitti:
Hi All,
I, too, would like
Hi All,
I, too, would like to know the answer to these questions. I saw a
similar question by Nikaash Puri on 22 June with subject "help with
moreLikeThis" go unanswered. Any insight?
Regards,
Ere
11.7.2016, 18.32, Demian Katz kirjoitti:
Hello,
I am currently using field-specific boosts
ike "&" and "#".
{
  "responseHeader": {
    "status": 400,
    "QTime": 0
  },
  "error": {
    "msg": "org.apache.solr.search.SyntaxError: Cannot parse '\"Research ': Lexical error at line 1, column 11. Encountered:  after : \"\\\"Research \"",
    "code": 400
  }
}
I have done the search by putting it in inverted commas, like: q="Research &
Development"
What could be the issue here?
I'm facing this problem in both Solr 5.4.0 and Solr 6.0.1.
Regards,
Edwin
17.6.2016, 11.05, Bernd Fehling kirjoitti:
Am 17.06.2016 um 09:06 schrieb Ere Maijala:
16.6.2016, 1.41, Shawn Heisey kirjoitti:
If you want to continue avoiding G1, you should definitely be using
CMS. My recommendation right now would be to try the G1 settings on my
wiki page under the heading "Current experiments" or the CMS settings
just below that.
For what it's worth, we're
? Ramkumar
is working on it...
Best,
Erick
On Wed, Dec 30, 2015 at 11:07 PM, Ere Maijala <ere.maij...@helsinki.fi> wrote:
Well, for us SOLR-8418 is a major issue. I haven't encountered other issues,
but that one was sort of a show-stopper.
--Ere
31.12.2015, 7.27, William Bell kirjoitti:
How i
on how to
do what you need, is where this requirement comes from. I guess the main
question is: WHY do you need to search for millions of IDs?
--Ere
5.1.2016, 16.12, Mugeesh Husain kirjoitti:
Thanks for your reply @Ere Maijala,
one of my eCommerce based client have a requirement to search
Well, for us SOLR-8418 is a major issue. I haven't encountered other
issues, but that one was sort of a show-stopper.
--Ere
31.12.2015, 7.27, William Bell kirjoitti:
How is SOLR 5.4.0 ? I heard there was a quick 5.4.1 coming out?
Any major issues?
it seems to be a showstopper.
--Ere
useful.
Upayavira
..) and the script doesn't wait any longer than necessary. But if
you find that a necessity, it shouldn't be too difficult to add.
--Ere
of shutdown. Since the script already has the bits for checking
process status, how about checking it once a second for 30 seconds or
until the process has stopped and only kill it if it doesn't shut down
in that time?
Thanks,
Ere
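The polling proposed above can be sketched like this (plain Python rather than the actual bin/solr bash script; the clock/sleep parameters exist only to make the logic testable):

```python
import time

def wait_for_stop(is_running, timeout=30.0, interval=1.0,
                  clock=time.monotonic, sleep=time.sleep):
    """Poll once per interval until the process stops or the timeout passes.
    Returns True if it stopped in time; the caller would kill -9 on False."""
    deadline = clock() + timeout
    while clock() < deadline:
        if not is_running():
            return True
        sleep(interval)
    return not is_running()

# Simulated process that stops after two polls, driven by a fake clock:
state = {"polls": 0, "now": 0.0}
def fake_running():
    state["polls"] += 1
    return state["polls"] < 3
assert wait_for_stop(fake_running,
                     clock=lambda: state["now"],
                     sleep=lambda s: state.update(now=state["now"] + s))
```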
of the issues
at hand:
https://issues.apache.org/jira/browse/SOLR-5855
I think this means they are getting there, but the performance is really
still much worse than 4.10, and it's not obvious why.
On 5/5/15, 2:06 AM, Ere Maijala ere.maij...@helsinki.fi wrote:
I'm seeing the same with Solr 5.1.0 after
: { time: 6551, process: { time: 6549, query: { time: 0 },
highlight: { time: 6524 }, debug: { time: 25 } } }
created an issue : SOLR-7275
https://issues.apache.org/jira/browse/SOLR-7275 and will create more
as I find more of those.
I plan to get to them and fix over the weekend.
On Wed, Apr 15, 2015 at 5:13 AM, Ere Maijala ere.maij...@helsinki.fi
wrote:
Hi,
I'm trying to gather information on how mlt works
/biblio/conf.
--Ere
=32G, but it's
not mentioned in solr.in.sh by default.
--Ere
So, is it possible to configure a ping handler to return quickly with
non-OK status if a search handler is not yet available? This would
allow the load balancer to quickly fail over to another server. I
couldn't find anything like this in the docs, but I'm still hopeful.
I'm aware of the
As far as I can see, when a Solr instance is started (whether standalone
or SolrCloud), a PingRequestHandler will wait until index warmup is
complete before returning (at least with useColdSearcher=false) which
may take a while. This poses a problem in that a load balancer either
needs to wait
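One built-in option that comes close is the stock PingRequestHandler's health-check file (a sketch; the file name is arbitrary): when healthcheckFile is configured, the handler returns an error status whenever the file is absent, and /admin/ping?action=enable or action=disable creates or removes it, so a deploy script can keep the node marked "down" for the load balancer until warmup has finished:

```xml
<requestHandler name="/admin/ping" class="solr.PingRequestHandler">
  <lst name="invariants">
    <str name="q">solrpingquery</str>
  </lst>
  <lst name="defaults">
    <str name="echoParams">all</str>
  </lst>
  <str name="healthcheckFile">server-enabled.txt</str>
</requestHandler>
```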
, Ere Maijala ere.maij...@helsinki.fi wrote:
David,
I made a note about your mentioning the deprecation below to take it into
account in our software, but now that I tried to find out more about this I
ran into some confusion since the Solr documentation regarding spatial
searches is currently
pointers.
Thanks,
Shawn
not notice what I'm missing.
Any pointers or hints on what to do to make this work would be highly
appreciated. Thanks!
Regards,
Ere