http://solr:port/collection/update?version_field=1234582.0
works for the payload
{"delete":[{"id":"51"},{"id":"5"}]} with multiple ids, but the version
parameter is applied to both deletes.
Is it possible to send a separate version number for each id in the parameter?
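One approach that may answer this (a sketch, not verified against every Solr version): Solr's JSON update syntax lets each object in the "delete" list carry its own `_version_` value, so the version travels in the request body per id instead of in a single URL parameter. The ids and version values below are illustrative, taken from the question; the exact endpoint and whether your Solr version accepts per-document `_version_` in deletes should be checked against the reference guide.

```python
import json

# Hedged sketch: build a delete payload where each id carries its own
# optimistic-concurrency version, instead of one shared URL parameter.
deletes = {
    "delete": [
        {"id": "51", "_version_": 1234582},
        {"id": "5", "_version_": 1234583},
    ]
}
payload = json.dumps(deletes)
print(payload)
# POST this body to http://solr:port/collection/update with
# Content-Type: application/json.
```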
ardQuery is generated.
>
> -- Jack Krupansky
>
> On Thu, Jan 21, 2016 at 8:18 PM, Jian Mou wrote:
>
> > We are using Solr as our search engine, and recently notice some user
> > input wildcard query can lead to Solr dead loop in
> >
> > org.apache.lucene
We are using Solr as our search engine, and recently noticed that some
user-input wildcard queries can put Solr into an infinite loop in
org.apache.lucene.util.automaton.Operations.determinize()
, and it also eats memory and finally causes an OOM.
The wildcard query looks like **?-???o·???è??**.
Although we
I am using SolrCloud 4.10.0 and I have been seeing this for a while now.
Does anyone have similar experience or a clue what's happening?
auto commit error...:org.apache.solr.common.SolrException: Error opening new
searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
Can anyone please confirm whether this is supported in the current version?
I am trying to use a pre-analyzed field for MLT, and when the MLT query is
created it does not get anything from the index.
I think even if I set termVectors=true in the PreAnalyzed field definition,
it is being ignored.
We are getting the following error intermittently (twice in a two-week
interval). The load on the server seems to be usual. I see in the log that
just before the failure (4-5 mins) QTime was very high; normally those
queries are processed within 300 ms, but before the failure they took more
than 100 secs. So
Hi,
If you use a codec which is not the default, you need to download/build the
Lucene codec jars, put them in the solr_home/lib directory, and add the
codecFactory in the Solr config file.
Look here for detailed instructions:
http://wiki.apache.org/solr/SimpleTextCodecExample
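For reference, the solrconfig.xml change described above is small; a sketch along the lines of the wiki example (the SimpleText codec is the example case, your codec factory class will differ):

```xml
<!-- solrconfig.xml: load the codec jars, then register the factory.
     Paths and class name are illustrative. -->
<lib dir="./lib" />
<codecFactory class="solr.SimpleTextCodecFactory"/>
```

Note that SimpleText is a debugging codec and not meant for production indexes.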
Best,
Mou
> -Original Message-
> From: Mou <[hidden email]>
> To: solr-user <[hidden email]>
> Sent: Thu, Feb 14, 2013 2:35 pm
> Subject: Re: long QTime for big index
>
>
> Just to close this discussion, we solved the problem by split
at 9:34 PM, Mou [via Lucene] wrote:
Thank you again.
Unfortunately the index files will not fit in the RAM. I have to try using
the document cache. I am also moving my index to SSD again; we took our
index off when the Fusion-io cards failed twice during indexing and the
index was corrupted. Now with the BIOS upgrade and new driver, it is suppose
Thank you, Shawn, for reading all of my previous entries and for a detailed
answer.
To clarify, the third shard is used to store the recently added/updated
data. The two main big cores take very long to replicate (when a full
replication is required), so the third one helps us to return the newly
indexe
Thanks for your reply.
No, there is no eviction yet.
The time is spent mostly in org.apache.solr.handler.component.QueryComponent
processing the request.
Again, the time varies widely for the same query.
I am running Solr 3.4 on Tomcat 7.
Our index is very big, two cores of 120G each. We are searching the slaves,
which are replicated every 30 min.
I am using the filterCache only, and we have more than 90% cache hits. We
use a lot of filter queries; queries are usually pretty big, with 10-20 fq
parameters. N
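To make the shape of such requests concrete, a query with several fq parameters might look like the sketch below (field names are hypothetical, not from the thread; each fq clause is cached independently in the filterCache, which is why the hit rate matters so much here):

```
http://slave:8983/solr/core1/select?q=name:widget
  &fq=type:product
  &fq=price:[10 TO 100]
  &fq=inStock:true
  &rows=10
```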
Hi,
I think that this totally depends on your requirements and is thus specific
to a user scenario. A score does not have any absolute meaning; it is always
relative to the query. If you want to watch some particular queries and want
to show results with a score above a previously set threshold, you can
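One way such a score threshold is sometimes expressed in Solr (a sketch only; the message above is truncated, so this may not be what the author had in mind) is a function range filter over the score of the main query, using the `frange` parser and the `query()` function:

```
q=some query
&fq={!frange l=5.0}query($q)
```

The caveat from the text still applies: because scores are only relative to the query, a fixed lower bound like `l=5.0` is fragile and only makes sense for specific, well-understood queries.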
The SLES kernel version is different in production; it's a 3.0.*, the test
was a 2.6.*, but I do not think that can cause a problem.
Thank you again,
Mou
On Wed, Jul 18, 2012 at 6:26 PM, Erick Erickson [via Lucene] <
ml-node+s472066n3995861...@n3.nabble.com> wrote:
> Replication will indeed be in
Hi Erick,
I totally agree. That's what I also figured ultimately. One thing I am not
clear about: replication is supposed to be incremental, but it looks like it
is trying to replicate the whole index. Maybe I am changing the index so
frequently that it is triggering an auto merge and a full replication?
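For context, the replication in question is configured on the slave side in solrconfig.xml, roughly as below (host name and interval are illustrative; the 30-minute pollInterval matches what was described earlier in the thread). Legacy replication copies only changed segment files, so a merge that rewrites most segments can indeed make a poll look like a full copy:

```xml
<!-- Slave-side replication config (Solr 3.x/4.x style); values are a sketch. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master:8983/solr/core1/replication</str>
    <str name="pollInterval">00:30:00</str>
  </lst>
</requestHandler>
```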
for many minutes, while a Resource
> Monitor session reports that that same Tomcat process is frantically
> reading from the page file the whole time. So there is something besides
> plausibility to the idea.
>
> -- Bryan
>
> > -Original Message-
> > From: M
ou better than a monster 70GB JVM.
>
> -- Bryan
>
> > -Original Message-
> > From: Mou [mailto:[hidden email]]
>
> > Sent: Monday, July 16, 2012 7:43 PM
> > To: [hidden email]
Hi,
Our index is divided into two shards, and each of them has 120M docs, total
size 75G in each core.
The server is a pretty good one; the JVM is given 70G of memory and about
the same is left for the OS (SLES 11).
We use all dynamic fields except the unique id and are using long queries,
but almost all of
We are using a VeloDrive (SSD) to store and search our Solr index.
The system is running on SLES 11.
Right now we are using ext3, but wondering if anyone has any experience
using XFS/ext3 on SSD or FusionIO for Solr.
Does Solr have any preference for the underlying file system?
Our index will b