Hi,
Our application has a facet-select admin screen UI that would allow
users to add/update/delete the facets that have to be returned from Solr.
Right now we have the facet fields defined in the defaults of
requestHandler.
So if a user wanted a new facet, I know sending that newly selected
Hi,
This question might have been asked on the solr user mailing list earlier.
Solr has four different types of cache: DocumentCache, QueryResultCache,
FieldValueCache and FilterQueryCache.
Are these caches memory-mapped, or do they reside in the JVM heap?
Which Caches have the maximum impact on the
Yes hoss,
it only converts to a range query when there are exactly two tokens. BTW,
thanks for raising the issue.
On 11-May-2017 5:38 AM, "Chris Hostetter" wrote:
> : I'm facing an issue when I'm querying Solr
> : my query is "xiomi Mi 5 -white [64GB/ 3GB]"
> ...
Hi Alessandro,
I tried the suggestions with the parameters you specified and it is working
fine now. Thanks.
Thanks and Regards,
Arun
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrSpellChecker-returning-suggestions-for-words-present-in-index-tp4334554p4334756.html
Specifically answering the _indexing_ part of the question, in
solrconfig.xml there's a ramBufferSizeMB setting (from memory) that governs how
much RAM is used while indexing before flushing to disk. I think the
default is 100MB or so.
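As a sketch, that setting lives in the indexConfig section of solrconfig.xml; 100 here is the value I recall as the shipped default, so treat it as an example rather than a recommendation:

```xml
<!-- solrconfig.xml: flush the in-memory indexing buffer to disk once it
     reaches this many megabytes; 100 is the commonly cited default. -->
<indexConfig>
  <ramBufferSizeMB>100</ramBufferSizeMB>
</indexConfig>
```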
On Thu, May 11, 2017 at 2:34 PM, Rick Leir wrote:
One additional bit: The *.fdt files contain the stored values (i.e.
stored=true). This is a verbatim, compressed copy of the input for these
fields. This data does not need to reside in any memory. Say you have
rows=10, and numFound is 10,000,000. The stored data is only accessed
for the 10 returned
Please enroll me in the group.
On 5/11/2017 4:59 PM, S G wrote:
> How can 50GB index be handled by a 10GB heap?
> I am a developer myself and would love to know as many details as possible.
> So a long answer would be much appreciated.
Lucene (which is what provides large pieces of Solr's functionality)
does not read the
Thanks Toke. Your answer did help me a lot.
But one part of your answer is something that has always been confusing
to me.
> The JVM heap is not used for caching the index data directly (although it
holds derived data). What you need is free memory on your machine for OS
disk-caching.
>
Hi,
This question might have been asked on the solr user mailing list earlier. Solr
has four different types of cache: DocumentCache, QueryResultCache,
FieldValueCache and FilterQueryCache.
I would like to know which of these caches are off-heap. Which caches
have the maximum impact on the
On 5/11/2017 3:49 PM, Oakley, Craig (NIH/NLM/NCBI) [C] wrote:
> FWIW, we now have a hypothetical suspect. We are getting these errors on
> three CentOS7 hosts, each of which recently had antivirus software installed.
Lucene index files tend to be large binary files where almost any
combination
FWIW, we now have a hypothetical suspect. We are getting these errors on three
CentOS7 hosts, each of which recently had antivirus software installed.
-Original Message-
From: Oakley, Craig (NIH/NLM/NCBI) [C] [mailto:craig.oak...@nih.gov]
Sent: Thursday, May 11, 2017 11:03 AM
To:
You could limit the Java heap, but that is counterproductive. You should have
a look at how much heap it uses, but let Solr use what it needs. My guess is
that your -Xmx or -Xms is too low at the moment.
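For reference, the heap is usually set via the include script shipped with Solr's start scripts (bin/solr.in.sh on Linux); a minimal sketch, with 4g as an arbitrary example value:

```shell
# bin/solr.in.sh -- 4g is an example value, not a recommendation.
# SOLR_HEAP sets both -Xms and -Xmx to the same size:
SOLR_HEAP="4g"
# or set the JVM memory flags directly:
# SOLR_JAVA_MEM="-Xms4g -Xmx4g"
```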
Apart from that, Solr will mmap large files. When there is not enough RAM for
this, any
Not at all. I don't know whether it works or doesn't. There is no
test case proving it. It might be that there is a trick to make it work.
On Thu, May 11, 2017 at 8:01 PM, jotpe wrote:
> Thank you. Okay, so you think, this should work, too.
>
> Best regards Johannes
>
> Am
Hello All,
is there any way to set a memory threshold for the Solr indexing process?
My computer hangs and the indexing process gets killed by the OS.
So I was wondering if there is any way to cap the memory usage of the
Solr indexing process in Linux environments.
Thank you in advance.
Hi,
Just getting up to speed on LTR and have a few questions (most of which are
speculative and exploratory at this point, as I have a couple of talks
coming up on this and other relevance features):
1. Has anyone looked at what's involved with supporting SparkML or other
models (e.g. PMML)?
2.
Hi All,
At the moment RankQueries [1] are not supported when you perform grouping:
if you perform a ReRankQuery and ask for the groups, reranking will be ignored
in the scoring.
In SOLR-8776, I added support for ReRankQueries in grouping and I opened a PR
on github [2].
ReRankQueries are
> For the SessionExpiredException, Solr throws this exception and
> then the shard goes down.
>
> From the following discussion, it seems that Solr is losing its
> connection to ZooKeeper and throwing the exception. In the ZooKeeper
> configuration file, zoo.cfg, is it safe to
It worked. Adding 0 to the search handler did the
trick. Thanks.
On May 11, 2017 3:13 PM, "Atita Arora [via Lucene]" <
ml+s472066n433458...@n3.nabble.com> wrote:
Hi Arun,
Try adding
0
to your
configuration.
It should work!
Thanks,
Atita
On Thu, May 11, 2017 at 6:34 AM, aruninfo100
For the SessionExpiredException, Solr throws this exception and
then the shard goes down.
From the following discussion, it seems that Solr is losing its
connection to ZooKeeper and throwing the exception. In the ZooKeeper
configuration file, zoo.cfg, is it safe to increase the
Thank you. Okay, so you think, this should work, too.
Best regards Johannes
On 11 May 2017 17:14:45 CEST, Mikhail Khludnev wrote:
>Can't say anything. Just raised
>https://issues.apache.org/jira/browse/SOLR-10673.
>
>On Thu, May 11, 2017 at 4:13 PM, jotpe
No, this won't work. You do _not_ want to have core.properties on ZK
anyway, or have it be identical on each replica; certain values in that file
must be unique per replica (e.g. the name).
All your command would do is create an entry in each
core.properties file on each replica like
Impossible to answer as Shawn says. Or even recommend. For instance,
you say "but once we launch our application all across the world it
may give performance issues."
You haven't defined at all what changes when you "launch our
application all across the world". Increasing your query traffic 10
Can't say anything. Just raised
https://issues.apache.org/jira/browse/SOLR-10673.
On Thu, May 11, 2017 at 4:13 PM, jotpe wrote:
> An InvalidShapeException is thrown: Point must be in 'lat, lon' or 'x y'
> format
>
> I can see, after this error, that in /select there is an
>
Thanks, Shawn.
As of now we don't have any performance issues; we are just planning
for the future.
So I was looking for a general architecture that is agreed upon by many
Solr experts.
Thanks,
Venkat.
On Thu, May 11, 2017 at 8:19 PM, Shawn Heisey wrote:
> On
On 5/11/2017 8:38 AM, Webster Homer wrote:
> When I ran the backup and restore of a real collection: which I restored to
> sial-catalog-product-2 I didn't see a new config for sial-catalog-product-2
> in Zookeeper. When I did what you described, I see the config name is
> sial-catalog-product
None of them have dataDir properties: they just use the "data" subdirectory in
the same directory as the core.properties
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Wednesday, May 10, 2017 6:59 PM
To: solr-user
Subject:
It appears that RESTORE pretty much ignores the configuration that was
backed up by the backup command, so why does backup bother?
The documented behavior of RESTORE is not very clear, and a scenario where
we are restoring a collection from a backup after the configuration in the
original
On 5/11/2017 7:39 AM, Venkateswarlu Bommineni wrote:
> In current design we have below configuration: *One collection with
> one shard with 4 replication factor with 4 nodes.* As of now, it is
> working fine, but once we launch our application all across the world
> it may give performance issues.
So, is there any method other than Luke to get the index information of all
the cores?
I have this method: http://localhost:8983/solr/core1/select?wt=csv
This gives all the field names found in the core.
But when I do a shards query like this:
http://localhost:8983/solr/core1/select?wt=csv&shards=localhost:8983/solr/core1,localhost:8983/solr/core2
I get no output. Have you any idea how to get
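For what it's worth, the ampersands in a shards URL are easy to lose in mail clients; a small Python sketch (hosts and cores taken from the message, q=*:* assumed) that builds the request URL with the separators intact:

```python
from urllib.parse import urlencode

# Build a distributed /select request. The "shards" parameter lists the
# host:port/path of every core to query (values here are from the message).
params = {
    "q": "*:*",
    "wt": "csv",
    "shards": "localhost:8983/solr/core1,localhost:8983/solr/core2",
}
url = "http://localhost:8983/solr/core1/select?" + urlencode(params)
print(url)
```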
When I ran the backup and restore of a real collection:
sial-catalog-product which I restored to sial-catalog-product-2 I didn't
see a new config for sial-catalog-product-2 in Zookeeper. When I did what
you described, I see the config name is sial-catalog-product, not
sial-catalog-product-2
So
On 5/10/2017 11:52 AM, S G wrote:
> Is there a recommendation on the size of index that one should host
> per core?
No, there really isn't.
I can list off a bunch of recommendations, but a whole bunch of things
that I don't know about your install could make those recommendations
completely
Hello Guys,
In our current design we have the below configuration:
*One collection with one shard with 4 replication factor on 4 nodes.*
As of now it is working fine, but once we launch our application all across
the world it may give performance issues.
To improve the performance, below is our
Hi,
I have a SolrCloud collection I create using a list of properties within a
core.properties file.
When I create the collection I call the collection API passing the
core.properties using the "property.properties=/localpath/core.properties":
You can load a core.properties using
"property.properties=/localpath/core.properties".
For instance:
http://solrserver:8983/solr/admin/collections?action=CREATE&name=person&numShards=1&replicationFactor=2&property.properties=/localpath/core.properties
An InvalidShapeException is thrown: Point must be in 'lat, lon' or 'x y' format.
I can see, after this error, that in /select there is a pt=50.9,6.9 in the
params{} section.
But the parameter qt stays empty. ...==*:*
On 11 May 2017 14:42:41 CEST, Mikhail Khludnev wrote:
I am curious about this as well. I generally have been using about a third
of available memory for the Java heap, so I keep 50GB of the 150GB available
for the JVM. Do you think this should be reduced?
On Wed, May 10, 2017 at 6:36 PM, Toke Eskildsen wrote:
> S G wrote:
What does appear in the logs? It should log the subquery request param right
after the exception (if there is an exception).
On Thu, May 11, 2017 at 1:36 PM, jotpe wrote:
> Dear list,
>
> I work a lot with subqueries, and it's working fine for me.
> Now I ran into the problem that the
Dear Solr users,
We are upgrading from Solr 4.x to Solr 6.5, and one important part of it is
implementing the JSON API in our application.
Our facet query uses group.facet; to support this, we have used the
facet aggregation function 'unique(myField)' to get the unique count.
After doing
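For illustration, a minimal JSON Facet API request body using unique() as a nested aggregation under a terms facet; `category` is a hypothetical grouping field, while `myField` comes from the message:

```json
{
  "query": "*:*",
  "facet": {
    "byCategory": {
      "type": "terms",
      "field": "category",
      "facet": {
        "uniqueCount": "unique(myField)"
      }
    }
  }
}
```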
On 5/11/2017 2:56 AM, Sabine Forkel wrote:
> Having small files, I set the HDFS block size to 16m in hdfs-site.xml:
>
> dfs.blocksize
> 16m
>
> After restarting the HDFS daemons and the Solr cloud, the block size is
> still 128 MB for new files.
> Can this be due to the block cache slab size
On 5/11/2017 4:32 AM, jawaharsam wrote:
> Since it has been 5 years, is there any way to view the index information
> from all the cores of Solr?
Both the Luke program (separate from Solr) and the Luke support included
inside Solr can only operate on a single core (Lucene index). These
tools are
Since it has been 5 years, is there any way to view the index information
from all the cores of Solr?
Dear list,
I work a lot with subqueries, and it's working fine for me.
Now I ran into the problem that the qt parameter for the geodist() function
refuses to use the document field coordinate as input value for my subquery.
I want something like this
Hi Arun,
Try adding
0
to your
configuration.
It should work!
Thanks,
Atita
On Thu, May 11, 2017 at 6:34 AM, aruninfo100
wrote:
> Hi All,
>
> I am trying to do spell check with Solr. I am able to get suggestions when
> the word is incorrectly spelled.
>
I see at least two request parameters you are passing to the spellchecker
that may cause this:
true
The spellcheck.onlyMorePopular parameter:
If true, Solr will return suggestions that result in more hits for the
query than the existing query. Note that this will return more popular
Hi,
Having small files, I set the HDFS block size to 16m in hdfs-site.xml:
dfs.blocksize
16m
After restarting the HDFS daemons and the Solr cloud, the block size is
still 128 MB for new files.
Can this be due to the block cache slab size which is 128 MB in size?
How can the HDFS block size
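For reference, the stripped property above would normally look like this in hdfs-site.xml; note that dfs.blocksize only applies to files written after the change, and existing files keep their original block size:

```xml
<!-- hdfs-site.xml: default block size for newly created files -->
<property>
  <name>dfs.blocksize</name>
  <value>16m</value>
</property>
```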
Hello,
When a distributed search is requested (SolrCloud), the query component
invokes prepare(), where the query is parsed. But then it's just ignored, I
suppose, because all the work is done by the subordinate shards' requests.
It's fine most of the time because query parsing is cheap. Until we
have