(and maybe the StopFilterFactory from the index section as well)?
Thanks again,
Jeremy

From: Michael Gibney
Sent: Monday, January 11, 2021 8:30 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr using all available CPU and becoming unresponsive

Hi Jeremy,
Can you share your analysis chain configs? (SOLR-13336 can manifest in a
similar way, and would affect 7.3.1 with a susceptible config, given the
right (wrong?) input ...)
Michael

On Mon, Jan 11, 2021 at 5:27 PM Jeremy Smith wrote:
Hello all,
We have been struggling with an issue where solr will intermittently use
all available CPU and become unresponsive. It will remain in this state until
we restart. Solr will remain stable for some time, usually a few hours to a
few days, before this happens again. We've tried
I recommend _against_ issuing explicit commits from the client; let
your solrconfig.xml autocommit settings take care of it. Make sure
either your soft or hard commits open a new searcher for the docs
to be searchable.
I’ll bend a little bit if you can _guarantee_ that you only ever have one
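A minimal sketch of what that advice looks like in solrconfig.xml (the interval values here are illustrative, not recommendations):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: flushes the transaction log, does not open a searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: opens a new searcher so recent docs become visible -->
  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>
</updateHandler>
```

With a setup like this, documents become searchable within roughly the soft-commit interval, without any client-side commits.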
Hi everyone,
I'm trying to figure out when and how I should handle failures that may
occur during indexing. In the sample code below, look at my comment and
let me know what state my index is in when things fail:
SolrClient solrClient = new HttpSolrClient.Builder(url).build();
with
strategies like the caching you describe.
Charlie
On 16/07/2020 18:14, harjag...@gmail.com wrote:
Hi All,
Below are questions regarding querying Solr using many QueryParsers in one
call.
We need to do a search by keyword and also include a few specific
documents in the result. We don't want to use the elevator component as that
would put those mandatory documents at the top of the result. We would
Have a look at "invariants" for your requestHandler in solrconfig.xml.
It might be an option for you.
Regards
Bernd
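For example (the handler name and parameter values here are illustrative), "invariants" silently override whatever the client sends for those parameters:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="invariants">
    <!-- clients can no longer pick facet fields; these always win -->
    <str name="facet">true</str>
    <str name="facet.field">category</str>
  </lst>
</requestHandler>
```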
On 22.05.19 at 22:23, RaviTeja wrote:
Hello Solr Expert,
How are you?
I am trying to ignore faceting for some of the fields. Can you please help me
out to ignore faceting using
Just don’t ask for them. Or are you saying that users can specify arbitrary fields
to facet on and you want to prevent certain fields from being possible?
No, there’s no good way to do that in solrconfig.xml. You could write a query
component that strips out certain fields from the facet.field
Hello Solr Expert,
How are you?
I am trying to ignore faceting for some of the fields. Can you please help me
out to ignore faceting using solrconfig.xml?
I tried, but I could only ignore faceting on all of the fields, which is useless. I'm trying
to ignore some specific fields.
Really appreciate your help for the
Hello,
Sometimes we get a "broken pipe" error when indexing 40,000,000 documents
using ConcurrentUpdateSolrClient in Java.
I found on Google that it's probably because of a timeout. But in
ConcurrentUpdateSolrClient we can't configure the timeout.
For example, the last broken pipe was for
Hi,
I am trying to load MIME files of email with multiple parts into Solr using Tika and
Morphline.
I am using Flume to load continuously, but it fails when it finds
attachments inside an email (multiparts).
It does not throw any error but fails to index the email to Solr.
Could you please help me
CVE-2016-6809: Java code execution for serialized objects embedded in
MATLAB files parsed by Apache Solr using Tika
Severity: Important
Vendor:
The Apache Software Foundation
Versions Affected:
Solr 5.0.0 to 5.5.4
Solr 6.0.0 to 6.6.1
Solr 7.0.0 to 7.0.1
Description:
Apache Solr uses Apache
Hi,
Though I see the collection config is uploaded to Zookeeper, I get the below error while
ingesting data to Solr using Flume and Morphline.
Kindly let me know if you need more details.
2017-04-26 18:25:31,767 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG
On 4/20/2017 9:02 AM, Anantharaman, Srinatha (Contractor) wrote:
> Hi all,
>
> I am trying to ingest data to Solr 6.3 using Flume 1.5 on Hortonworks 2.5
> platform. Facing the below issue while sinking the data:
>
> 19 Apr 2017 19:54:26,943 ERROR [lifecycleSupervisor-1-3]
> (org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run:253) -
> Unable to start SinkRunner:
Hi All,
I am trying to index an HBase table into Solr using HBase Indexer and a
morphline conf file.
The issue I'm facing is that one of the columns in the HBase table is a count
field (with integer values), and except for this column all other string-type
HBase columns are getting indexed in Solr.
Hi,
We are seeing transient "Connection reset" errors in our custom Solr client (a
wrapper around SolrJ). We want to add retries to all methods that we are
currently using so that we are able to upload successfully. However,
I'm not sure if there's any relevant documentation on which methods are
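Absent better documentation, one common approach is to wrap each idempotent call in a small retry helper with exponential backoff. A sketch (stdlib only; `withRetries` is a hypothetical helper, not SolrJ API, and real code should catch only transient exceptions such as IOException):

```java
import java.util.concurrent.Callable;

public class RetryDemo {
    // Retries op up to maxAttempts times, doubling the delay after each failure.
    static <T> T withRetries(Callable<T> op, int maxAttempts, long baseDelayMs) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) { // in practice: catch only transient errors
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(baseDelayMs * (1L << (attempt - 1))); // exponential backoff
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated flaky upload: fails twice with "Connection reset", then succeeds.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new java.io.IOException("Connection reset");
            return "indexed";
        }, 5, 1);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Only retry operations that are safe to repeat (adds with unique keys, deletes by id); blind retries of non-idempotent calls can duplicate work.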
re GC in php engine.
That's all for now. Rest will be your own homework.
Scott Chu,scott@udngroup.com
2016/6/2 (週四)
- Original Message -
From: Shawn Heisey
To: solr-user
CC:
Date: 2016/5/31 (週二) 02:57
Subject: Re: Recommended api/lib to search Solr using PHP
On 5/30/2016 12
I followed it and installed and used Solarium correctly.
scott.chu,scott@udngroup.com
2016/5/31 (週二)
- Original Message -
From: GW
To: solr-user ; scott(自己)
CC:
Date: 2016/5/31 (週二) 02:32
Subject: Re: Recommended api/lib to search Solr using PHP
On 5/30/2016 12:32 PM, GW wrote:
> I would say look at the urls for searches you build in the query tool
>
> In my case
>
> http://172.16.0.1:8983/solr/#/products/query
>
> When you build queries with the Query tool, for example an edismax query,
> the URL is there for you to copy.
> Use the url structure with curl in your
we also use Solarium. the documentation is pretty spotty in some cases (tho
they've recently updated it, or at least the formatting, which seems to be
a move in the right direction), but overall pretty simple to use. some good
plugins at hand to help extend the base power, too. i'd say give it a
On 5/30/2016 1:29 AM, scott.chu wrote:
> We have two legacy in-house applications written in PHP 5.2.6 and 5.5.3. Our
> engineers currently just use fopen with a url to search Solr, but it's kind of
> limited when we want to do more advanced, complex queries. We've tried to
> use something called
We've had good experiences with Solarium, so it's probably worth spending
some time in getting it to run.
scott.chu wrote on Mon, 30 May 2016 at
09:30:
>
> We have two legacy in-house applications written in PHP 5.2.6 and 5.5.3.
> Our engineers currently just use
We have two legacy in-house applications written in PHP 5.2.6 and 5.5.3. Our
engineers currently just use fopen with a url to search Solr, but it's kind of
limited when we want to do more advanced, complex queries. We've tried to use
something called 'Solarium' but its installation steps have
Subject: Re: Teiid with Solr - using any other engine except the
SolrDefaultQueryEngine
Are you trying to do federated search? What about Carrot? Not the one that
ships with Solr, the parent project.
Regards,
Alex
On 31 Dec 2015 12:21 am, "Mark Horninger" <mhornin...@grayhairsoftware.com> wrote:
sing a join operator.
>>> 5. Cache possible matches in SQL Server for a given record in order for a
>>> human to disposition them.
>>>
>>> From what I read, Carrot is great for Solr clustering, but once you get
>>> into RDBMS, you're out of luck.
>>>
> -Original Message-
> From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
> Sent: Thursday, December 31, 2015 12:44 AM
> To: solr-user <solr-user@lucene.apache.org>
> Subject: Re: Teiid with Solr - using any other engine except the
> SolrDefaultQueryEngine
> I have gotten Teiid and Solr wired up, but it seems like the only way to
> query
I have gotten Teiid and Solr wired up, but it seems like the only way to query
is with the default Solr Query Engine, and nothing else. In asking Dr. Google,
this is a data black hole. The more I look at it, the more I think I'm going
to end up having to write a custom translator. Is there
Hi Adrian,
I'd probably start with the expect command and "echo ruok | nc "
for a simple script. You might also want to try the Netflix Exhibitor REST
interface:
https://github.com/Netflix/exhibitor/wiki/REST-Cluster
k/r,
Scott
On Thu, Oct 15, 2015 at 2:01 AM, Adrian Liew
Hi,
I am trying to implement some scripting to detect if all Zookeepers have
started in a cluster, then restart the solr servers. Has anyone achieved this
yet through scripting?
I also saw there is the ZookeeperClient that is available in .NET via a nuget
package. Not sure if this could be
Hi,
Using cursorMark we overcome deep paging, so far so good. As far
as I understand, the cursorMark is unique for each query, depending on
the sort values (other than the unique id) and also on the number of rows.
But my concern is whether solr internally creates a different set for each
Apologies in advance for hijacking the thread, but somewhat related, does
anyone have experience with using cursorMark and elevations at the same
time? When I tried this, either passing elevatedIds via solrJ or specifying
them in elevate.xml, I got an AIOOBE if a cursorMark was also specified.
On Tue, Jan 27, 2015 at 3:29 AM, CKReddy Bhimavarapu
chaitu...@gmail.com wrote:
: But my concern is if solr internally creates a different set for each
: and every different queries upon sort values and they lasts for ever I
: think.
https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results
Cursors in Solr are a logical concept, that doesn't involve
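The cursor protocol described on that page can be sketched with a stand-in fetch function (`fetchPage` below is a canned stub, not a real /select request; against a live Solr it would send `cursorMark=...` plus a stable sort): keep resending the returned nextCursorMark until it equals the mark you just sent.

```java
import java.util.*;
import java.util.function.Function;

public class CursorMarkDemo {
    // One page of results plus the cursor for the next request.
    record Page(List<String> docs, String nextCursorMark) {}

    // Drains all pages; stop when nextCursorMark equals the mark we sent.
    static List<String> drain(Function<String, Page> fetchPage) {
        List<String> all = new ArrayList<>();
        String mark = "*"; // initial cursorMark
        while (true) {
            Page p = fetchPage.apply(mark);
            all.addAll(p.docs());
            if (p.nextCursorMark().equals(mark)) break; // unchanged mark => done
            mark = p.nextCursorMark();
        }
        return all;
    }

    public static void main(String[] args) {
        // Canned pages simulating a 5-doc index, 2 docs per page.
        Map<String, Page> canned = Map.of(
            "*",  new Page(List.of("d1", "d2"), "c1"),
            "c1", new Page(List.of("d3", "d4"), "c2"),
            "c2", new Page(List.of("d5"), "c3"),
            "c3", new Page(List.of(), "c3")); // same mark back => done
        System.out.println(drain(canned::get));
    }
}
```

The mark encodes the sort position of the last document returned, which is why the server holds no per-cursor state between requests.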
[7] = file name doc
[8] = file name docu (always is 14 characters)
)
---
when the result is 14 characters, it stops and shows the result "file name docu".
--
View this message in context:
http://lucene.472066.n3.nabble.com/auto-completion-search-with-solr-using-NGrams-in-SOLR-tp3998559p4162063.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
You have the wrong varname in your sub query.
select favouritedby from filefav where id=
'${filemetadata.id}'
should be
select favouritedby from filefav where id=
'${restaurant.id}'
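In other words, the variable must be named after the enclosing parent entity. A data-config skeleton (queries and column lists abbreviated; only the entity and column names from the thread are real):

```xml
<dataConfig>
  <document>
    <entity name="restaurant" query="select id from restaurant">
      <!-- sub-entity: reference the parent row via the parent entity's name -->
      <entity name="filefav"
              query="select favouritedby from filefav where id='${restaurant.id}'"/>
    </entity>
  </document>
</dataConfig>
```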
Can anyone suggest best practices for doing SpellCheck and
AutoSuggest in Solarium?
Can anyone give me an example of that?
--
Regards,
*Sohan Kalsariya*
, 2014 8:14 PM
To: solr-user@lucene.apache.org
Subject: AutoSuggest like Google in Solr using Solarium Client.
I think it's best to use one of the many autosuggesters Lucene/Solr provide?
E.g. AnalyzingInfixSuggester is running here:
http://jirasearch.mikemccandless.com
But that's just one suggester... there are many more.
Mike McCandless
http://blog.mikemccandless.com
On Mon, Mar 17, 2014 at 10:44
Not sure if you have already seen this one..
http://www.solarium-project.org/2012/01/suggester-query-support/
You can also use edge N gram filter to implement typeahead auto suggest.
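A sketch of the edge n-gram approach in schema.xml (the type name and gram sizes are illustrative): grams on the index side plus a plain query-side analyzer give prefix-style matching.

```xml
<fieldType name="text_autosuggest" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- "solr" is indexed as: so, sol, solr -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```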
, there's a lot. RAID 10 (15000RPM rapid hdd).
--
View this message in context:
http://lucene.472066.n3.nabble.com/SOLR-USING-100-percent-CPU-and-not-responding-after-a-while-tp4021359p4114026.html
Sent from the Solr - User mailing list archive at Nabble.com.
using solr 4.4. Does the suggester component still work in this version?
--
View this message in context:
http://lucene.472066.n3.nabble.com/auto-completion-search-with-solr-using-NGrams-in-SOLR-tp3998559p4098032.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
I want to import HBase (0.90.4) data into Solr 4.3.0 using DIH (Data Import
Handler). For this I used the https://code.google.com/p/hbase-solr-dataimport/
project. Whenever I run the data import handler at
http://localhost:8080/solr/#/collection1/dataimport it throws the following
error in the log.
*Jar:*
http://code.google.com/p/hbase-solr-dataimport/downloads/detail?name=hbase-solr-dataimport-0.0.1.jar&can=2&q=
Hi Biva,
Any luck on this?
We are facing the same issue with exactly the same configuration and setup.
Any inputs will help a lot.
--
View this message in context:
http://lucene.472066.n3.nabble.com/SOLR-USING-100-percent-CPU-and-not-responding-after-a-while-tp4021359p4083234.html
Sent from
Thanks.
On Mon, Aug 5, 2013 at 4:57 PM, Shawn Heisey s...@elyograg.org wrote:
On 8/5/2013 2:35 AM, Mysurf Mail wrote:
When I query using
http://localhost:8983/solr/vault/select?q=*:*
I get results including the following:
<doc>
...
<int name="VersionNumber">7</int>
...
</doc>
Now I try to get only that row, so I add fq=VersionNumber:7 to my query.
Is VersionNumber an indexed field, or just stored?
-- Jack Krupansky
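If the field turns out to be stored-only, the schema.xml fix is to index it as well and re-index (the type name here is an assumption about the schema; only the field name comes from the thread):

```xml
<!-- fq=VersionNumber:7 can only match if the field is indexed -->
<field name="VersionNumber" type="int" indexed="true" stored="true"/>
```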
-Original Message-
From: Mysurf Mail
Sent: Monday, August 05, 2013 4:35 AM
To: solr-user@lucene.apache.org
Subject: solr - using fq parameter does not retrieve an answer
When I query using
http://localhost:8983
On 8/5/2013 2:35 AM, Mysurf Mail wrote:
When I query using
http://localhost:8983/solr/vault/select?q=*:*
I get results including the following:
<doc>
...
<int name="VersionNumber">7</int>
...
</doc>
Now I try to get only that row so I add to my query fq=VersionNumber:7
=specifications /
</entity>
</document>
</dataConfig>
Why is this happening? How do I solve it? How does giving an alias affect the
indexing process? Thanks in advance
--
View this message in context:
http://lucene.472066.n3.nabble.com/Indexing-Oracle-Database-in-Solr-using-Data-Import-Handler-tp4079649.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 11 July 2013 11:13, archit2112 archit2...@gmail.com wrote:
Im trying to index MySql database using Data Import Handler in solr.
[...]
Everything is working but the favouritedby1 field is not getting indexed
,
ie, that field does not exist when i run the *:* query. Can you please
help
me
</dataConfig>
Everything is working, but the favouritedby1 field is not getting indexed,
i.e., that field does not exist when I run the *:* query. Can you please help
me out?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Indexing-database-in-Solr-using-Data-Import-Handler
John:
If you'd like to add your experience to the Wiki, create
an ID and let us know what it is and we'll add you to the
contributors list. Unfortunately we had problems with
spam pages so we added this step.
Make sure you include your logon in the request.
Thanks,
Erick
On Fri, Jun 14, 2013
a lot.
Not sure if it's related, but you might want to check that out.
Thanks.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-using-a-ridiculous-amount-of-memory-tp4050840p4070803.html
Sent from the Solr - User mailing list archive at Nabble.com.
AM
To: solr-user@lucene.apache.org
Subject: Re: Solr using a ridiculous amount of memory
It was interesting to read this post. I had a similar issue on Solr v4.2.1.
The nature of our documents is that they have huge multiValued fields, and we
were able to knock out our server in about 30 mins.
We
Sorry for not getting back to the list sooner. It seems like I finally
solved the memory problems by following Toke's instruction of splitting the
cores up into smaller chunks.
After some major refactoring, our 15 cores have now turned into ~500 cores
and our memory consumption has dropped
On Fri, 2013-06-14 at 14:55 +0200, John Nielsen wrote:
Sorry for not getting back to the list sooner.
Time not important, only feedback important (apologies to Fifth
Element).
After some major refactoring, our 15 cores have now turned into ~500 cores
and our memory consumption has dropped
Hmmm. There has been quite a bit of work lately to support a couple of
things that might be of interest (4.3, which Simon cut today, probably
available to all mid next week at the latest). Basically, you can
choose to pre-define all the cores in solr.xml (so-called old style)
_or_ use the
That was strange. As you are using a multi-valued field with the new
setup, they should appear there.
Yes, the new field we use for faceting is a multi valued field.
Can you find the facet fields in any of the other caches?
Yes, here it is, in the field cache:
On Thu, 2013-04-18 at 08:34 +0200, John Nielsen wrote:
[Toke: Can you find the facet fields in any of the other caches?]
Yes, here it is, in the field cache:
http://screencast.com/t/mAwEnA21yL
Ah yes, mystery solved, my mistake.
http://172.22.51.111:8000/solr/default1_Danish/search
[...]
fq=site_guid%3a(10217)
This constraints to hits to a specific customer, right? Any search will
only be in a single customer's data?
Yes, thats right. No search from any given client ever returns anything
from another client.
On Thu, 2013-04-18 at 11:59 +0200, John Nielsen wrote:
Yes, thats right. No search from any given client ever returns
anything from another client.
Great. That makes the 1 core/client solution feasible.
[No sort facet warmup is performed]
[Suggestion 1: Reduce the number of sort fields by
You are missing an essential part: Both the facet and the sort
structures needs to hold one reference for each document
_in_the_full_index_, even when the document does not have any values in
the fields.
Wow, thank you for this awesome explanation! This is where the penny
dropped for me.
I
I managed to get this done. The facet queries now facets on a multivalue
field as opposed to the dynamic field names.
Unfortunately it doesn't seem to have done much difference, if any at all.
Some more information that might help:
The JVM memory seem to be eaten up slowly. I dont think that
John Nielsen [j...@mcb.dk] wrote:
I managed to get this done. The facet queries now facets on a multivalue
field as opposed to the dynamic field names.
Unfortunately it doesn't seem to have done much difference, if any at all.
I am sorry to hear that.
documents = ~1.400.000
references
I am surprised about the lack of UnInverted from your logs as it is
logged on INFO level.
Nope, no trace of it. No mention either in Logging - Level from the admin
interface.
It should also be available from the admin interface under
collection/Plugin / Stats/CACHE/fieldValueCache.
I never
John Nielsen [j...@mcb.dk]:
I never seriously looked at my fieldValueCache. It never seemed to get used:
http://screencast.com/t/YtKw7UQfU
That was strange. As you are using a multi-valued field with the new setup,
they should appear there. Can you find the facet fields in any of the other
Whopps. I made some mistakes in the previous post.
Toke Eskildsen [t...@statsbiblioteket.dk]:
Extrapolating from 1.4M documents and 180 clients, let's say that
there are 1.4M/180/5 unique terms for each sort-field and that their
average length is 10. We thus have
1.4M*log2(1500*10*8) +
On Sun, 2013-03-24 at 09:19 +0100, John Nielsen wrote:
Our memory requirements are running amok. We have less than a quarter of
our customers running now and even though we have allocated 25GB to the JVM
already, we are still seeing daily OOM crashes.
Out of curiosity: Did you manage to
Yes and no,
The FieldCache is the big culprit. We do a huge amount of faceting so it
seems right. Unfortunately I am super swamped at work so I have precious
little time to work on this, which is what explains my silence.
Out of desperation, I added another 32G of memory to each server and
On Mon, 2013-04-15 at 10:25 +0200, John Nielsen wrote:
The FieldCache is the big culprit. We do a huge amount of faceting so
it seems right.
Yes, you wrote that earlier. The mystery is that the math does not check
out with the description you have given us.
Unfortunately I am super swamped
I did a search. I have no occurrence of UnInverted in the solr logs.
Another explanation for the large amount of memory presents itself if
you use a single index: If each of your clients facet on at least one
fields specific to the client (client123_persons or something like
that), then your
Might be obvious, but just in case - remember that you'll need to
re-index your content once you've added docValues to your schema, in
order to get the on-disk files to be created.
Upayavira
On Mon, Mar 25, 2013, at 03:16 PM, John Nielsen wrote:
I apologize for the slow reply. Today has been
I apologize for the slow reply. Today has been killer. I will reply to
everyone as soon as I get the time.
I am having difficulties understanding how docValues work.
Should I only add docValues to the fields that I actually use for sorting
and faceting or on all fields?
Will the docValues magic
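A sketch of the schema side of this (field and type names invented for illustration): docValues only needs to go on the fields actually used for sorting or faceting, and as noted above the index must be rebuilt afterwards.

```xml
<!-- facet field: multiValued, docValues-backed -->
<field name="item_facets" type="string" indexed="true" stored="false"
       multiValued="true" docValues="true"/>
<!-- sort field: single-valued, docValues-backed -->
<field name="sort_title" type="string" indexed="true" stored="false"
       docValues="true"/>
```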
Hello all,
We are running a solr cluster which is now running solr-4.2.
The index is about 35GB on disk with each register between 15k and 30k.
(This is simply the size of a full xml reply of one register. I'm not sure
how to measure it otherwise.)
Our memory requirements are running amok. We
Subject: Solr using a ridiculous amount of memory
On Sun, Mar 24, 2013 at 4:19 AM, John Nielsen j...@mcb.dk wrote:
Schema with DocValues attempt at solving problem:
http://pastebin.com/Ne23NnW4
Config: http://pastebin.com/x1qykyXW
This schema isn't using docvalues, due to a typo in your config.
it should not be DocValues=true but
From: John Nielsen [j...@mcb.dk]:
The index is about 35GB on disk with each register between 15k and 30k.
(This is simply the size of a full xml reply of one register. I'm not sure
how to measure it otherwise.)
Our memory requirements are running amok. We have less than a quarter of
our
Toke Eskildsen [t...@statsbiblioteket.dk]:
If your whole index has 10M documents, which each has 100 values
for each field, with each field having 50M unique values, then the
memory requirement would be more than
10M*log2(100*10M) + 100*10M*log2(50M) bit ~= 340MB/field ~=
1.6GB for faceting
, March 24, 2013 2:00 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr using a ridiculous amount of memory
Just to get started, do you hit OOM quickly with a few expensive queries, or
is it after a number of hours and lots of queries?
Does Java heap usage seem to be growing linearly as queries
this?
Thanks in advance
--
View this message in context:
http://lucene.472066.n3.nabble.com/auto-completion-search-with-solr-using-NGrams-in-SOLR-tp3998559p4035931.html
Sent from the Solr - User mailing list archive at Nabble.com.
--
Regards
Naresh
with urlencoded + Chinese.
any suggestion?
thanks
jie
--
View this message in context:
http://lucene.472066.n3.nabble.com/POST-query-with-non-ASCII-to-solr-using-httpclient-wont-work-tp4032957p4033262.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Jie,
Maybe there is a simple solution. When we used Tomcat as the servlet
container for Solr I noticed similar problems. Even with the hints from
the Solr wiki about Unicode and Tomcat, I wasn't able to fix this.
So we switched back to Jetty; queries like q=allfields2%3A能力 are
reliable now.
(there is no
way, it seems to me, to make it send out the request in POST without urlencoding the
body).
Am I missing something, or do you have any suggestion what I am doing wrong?
thanks
Jie
--
View this message in context:
http://lucene.472066.n3.nabble.com/POST-query-with-non-ASCII-to-solr-using-httpclient-wont-work-tp4032957.html
Sent from the Solr - User mailing list archive at Nabble.com.
:-) Otis, I also looked at solrJ source code, seems exactly what I am doing
here... but I probably will do what you suggested ... thanks
Jie
--
View this message in context:
http://lucene.472066.n3.nabble.com/POST-query-with-non-ASCII-to-solr-using-httpclient-wont-work-tp4032957p4032973.html