Hi,
The following error is very commonly seen in Solr.
Does anybody know why that is?
And is it asking the user to do something about it?
org.apache.solr.common.SolrException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for
bq. Could SolrCloud avoid putting multiple replicas of the same shard
on the same host when there are multiple nodes per host?
Yes, with some fiddling with "placement rules"; start here:
https://lucene.apache.org/solr/guide/6_6/rule-based-replica-placement.html
The idea (IIUC) is that you
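A sketch of the rule syntax from that guide (collection name and shard/replica counts are illustrative): keep fewer than 2 replicas of any one shard on any one host:

```
/admin/collections?action=CREATE&name=mycoll&numShards=2&replicationFactor=2
    &rule=shard:*,replica:<2,host:*
```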
bq. Error opening new searcher. exceeded limit of maxWarmingSearchers=2
Did you make sure that your indexing client isn't issuing commits all
the time? The other possible culprit (although I'd be very surprised)
is if you have your filterCache and queryResultCache autowarm settings
set extremely
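If autowarm is the culprit, those settings live in solrconfig.xml; an illustrative fragment (sizes and counts are placeholders, the classes are the stock Solr 6.x ones):

```xml
<!-- solrconfig.xml: a large autowarmCount makes every new searcher slow
     to open, which compounds the maxWarmingSearchers problem -->
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
```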
What about cursorMark? That's designed to handle repeated calls with
increasing "start" parameters without bogging down.
https://lucene.apache.org/solr/guide/6_6/pagination-of-results.html
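For illustration, the client-side cursorMark loop looks roughly like this (a minimal Python sketch with a stand-in search function rather than a real Solr client; names and the toy data are placeholders):

```python
def fetch_all(search):
    """Page through an entire result set with cursorMark.

    `search(cursor)` stands in for a Solr request with
    sort=id asc&cursorMark=<cursor>; it returns (docs, next_cursor).
    """
    results = []
    cursor = "*"                      # the initial cursorMark is "*"
    while True:
        docs, next_cursor = search(cursor)
        results.extend(docs)
        if next_cursor == cursor:     # Solr signals the end by echoing the mark
            break
        cursor = next_cursor
    return results

def fake_search(cursor, data=list(range(250)), rows=100):
    """Toy stand-in: 250 ids served 100 at a time; cursor = offset as string."""
    start = 0 if cursor == "*" else int(cursor)
    docs = data[start:start + rows]
    nxt = str(start + len(docs)) if docs else cursor
    return docs, nxt
```

The point of cursorMark over increasing `start` values is that each page costs the same, instead of every request re-collecting all preceding rows.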
Best,
Erick
On Fri, Jul 27, 2018 at 9:47 AM, Tom Burton-West wrote:
> Thanks Joel,
>
> My use case is
Hi fellow Solr users,
I am looking for a way to index nested documents in MongoDB using Solr's
DataImportHandler. Are there any recommendations?
I googled around over the last two weeks and found the following posts. I
was able to index the top-level fields without any issue, but had trouble in
That makes sense, the ulimit was too small and I've updated it.
I'm just curious why there are still so many 503 errors being generated
(Error - Rsolr::Error::Http - 503 Service Unavailable - retrying ...)
Is it related to all the "Error opening new searcher. exceeded limit of
Thanks Joel,
My use case is that I have a complex edismax query (example below) and the
user wants to download the set of *all* search results (ids and some small
metadata fields). So they don't need the relevance ranking. However, I
need to somehow get the exact set that the complex edismax
Yes, the command line with -d works.
Thanks,
Chuming
On Jul 27, 2018, at 7:49 AM, Alexandre Rafalovitch wrote:
> For non cloud, the schema is on the filesystem.
>
> At least from command line, you can specify path to it with -d flag when
> creating a new core. It will then be treated as
On 7/25/2018 11:04 AM, Chuming Chen wrote:
> From Solr Admin interface, I have created a collection and added field
> definitions. I can get its managed-schema from the Admin interface.
>
> Can I use this managed-schema to create a new collection? If yes, how?
What Solr version?
The fact that
On 7/26/2018 8:58 AM, solrnoobie wrote:
> We are having problems with zk / solr node recovery and we are encountering
> this issue:
>
> [ ] o.a.z.ClientCnxn Client session timed out, have not heard from server
> in 5003ms
>
> We have set the solr.xml zkClientTimeout to 30 secs.
The internal
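For reference, the setting in question, with a caveat I believe applies here: ZooKeeper negotiates the session timeout with the client and may cap it at the server's maxSessionTimeout (by default 20 * tickTime), so a 30-second request can still come back much lower (values illustrative):

```xml
<!-- solr.xml -->
<solrcloud>
  <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
</solrcloud>
```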
On 7/25/2018 3:49 PM, Oakley, Craig (NIH/NLM/NCBI) [C] wrote:
> I end up with four cores instead of two, as expected. The problem is that
> three of the four cores (col_shard1_0_replica_n5, col_shard1_0_replica0 and
> col_shard1_1_replica_n6) are *all on hostname1*. Only col_shard1_1_replica0
>
Hi,
You have to increase the open-file limit for your Solr user - you can
check it with ulimit -n. It will probably show something like 1024.
To increase it, you have to raise the system limit in
/etc/security/limits.conf.
Add the following lines:
* hard nofile 102400
* soft nofile 102400
root hard
I have a Rails 5 application that uses Solr to index and search our site. The
sunspot gem is used to integrate Ruby and Solr. It's a relatively small
site (no more than 100,000 records) and has moderate usage (except for the
googlebot).
Until recently we regularly received 503 errors; reloading the
Hi Mario, could you please share your settings (e.g. OS, JVM memory,
System memory)?
Andrea
On 27/07/18 11:36, Bisonti Mario wrote:
Hello
I get an error when indexing an 11 MB .xlsm or .xlsx file.
What could I do?
Thanks a lot
Mario
2018-07-27 11:08:25.634 WARN (qtp1521083627-99) [
The lucenenet project is a separate Apache project from Solr (and the
Lucene project as well).
You will have better luck getting helpful information on their mailing list
(https://cwiki.apache.org/confluence/display/LUCENENET/Mailing+Lists).
Best,
Chris
On Fri, Jul 27, 2018 at 8:40 AM - -
Hello
I get an error when indexing an 11 MB .xlsm or .xlsx file.
What could I do?
Thanks a lot
Mario
2018-07-27 11:08:25.634 WARN (qtp1521083627-99) [ x:core_share]
o.e.j.s.HttpChannel /solr/core_share/update/extract
java.lang.OutOfMemoryError
at
I use lucene.net to index the documents. My main aim was to be able to search
and have the line number and the line of text returned in a document.
Here's the code that indexes
using (TextReader contentsReader = new StreamReader(fi.FullName))
{
doc.Add(new StringField("FullFileName",
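One common approach is to index each line as its own document, so that a hit carries its line number directly. A language-agnostic sketch of that idea (in Python for brevity; the dict stands in for a Lucene.NET Document and the naive scan stands in for a real index query, so all names here are illustrative):

```python
def index_lines(filename, text):
    """Emit one 'document' per line so a search hit carries its line number.

    A toy stand-in for the Lucene.NET loop: real code would add a Document
    with StringField/IntPoint/TextField per line instead of a dict.
    """
    docs = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        docs.append({"FullFileName": filename,
                     "LineNumber": lineno,
                     "LineText": line})
    return docs

def search_lines(docs, term):
    """Naive scan standing in for an index query; returns (line number, line)."""
    return [(d["LineNumber"], d["LineText"])
            for d in docs if term in d["LineText"]]
```

The trade-off is a larger index (one document per line) in exchange for not having to re-read the file to locate the matching line.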
On 7/26/2018 1:32 PM, cyndefromva wrote:
At the point it starts failing I see a java exception: "java.io.IOException:
Too many open files" in the solr log file and a SolrException (Error opening
new searcher) is returned to the user.
The operating system where Solr is running needs its open file
On 7/25/2018 10:46 PM, Reem wrote:
The way I found to change the ranking function is by setting the similarity
property of text fields in schema.xml as follows:
``
However, this means we can only set the similarity/ranking function at
indexing time. As Solr is built over Lucene which
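For context, the similarity is set per field type in the schema; a sketch (the field type name is illustrative, the factory class is a real one):

```xml
<fieldType name="text_bm25" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <similarity class="solr.BM25SimilarityFactory"/>
</fieldType>
```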
For non cloud, the schema is on the filesystem.
At least from command line, you can specify path to it with -d flag when
creating a new core. It will then be treated as a template to copy.
That is more of a trick than a production approach, though.
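For example (a sketch assuming a standard install; the core name and path are placeholders):

```
bin/solr create_core -c mycore -d /path/to/configset/conf
```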
Regards,
Alex
On Wed, Jul 25, 2018, 1:04 PM
Ok, I thought that it was somehow expected, but what bothers me is that if
I use min and max = 2 or min and max = 3, it grows linearly, but when I
change to min = 2 and max = 3, the number of tokens explodes.
What I expected it to do was to first make the 2-shingle clauses
and then the 3
Hi there.
Are there any ideas?
From: nc-tech-user
Sent: 19 July 2018 11:09
To: solr-user@lucene.apache.org
Subject: Problem in QueryElevationComponent with solr 7.4.0
Hello.
We are using solr 6.6.2 and want to upgrade it to version 7.4.0.
But we have a
Hi,
It is not possible in general because similarities compute norms at
index time. (
https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/similarities/Similarity.java#L46
)
My understanding is that you should duplicate a field and set different
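A sketch of that duplication in the schema (field and type names are illustrative; the idea is that each copy gets its own norms at index time, so you can query either field for either ranking):

```xml
<field name="body" type="text_bm25" indexed="true" stored="true"/>
<field name="body_classic" type="text_classic" indexed="true" stored="false"/>
<copyField source="body" dest="body_classic"/>
```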