Hello, I am beginning with Solr and I have a problem with delete-by-query.
If I run the query against Solr, it gives me the results I expect, but when the
same query is sent by XML as a delete, Solr doesn't erase the matching documents from the index.
Hi,
I'm currently in the middle of converting my index from the old
spellchecker request handler to the spellcheck component. My index has a
category field and my frontend only allows searching in one category at a
time, so I have a spellchecker request handler for each category in order
to present
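For what it's worth, the spellcheck component supports multiple named dictionaries, which maps naturally onto one dictionary per category. A minimal solrconfig.xml sketch (the dictionary names, fields, and directories below are made up):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <!-- one spellchecker per category -->
  <lst name="spellchecker">
    <str name="name">books</str>
    <str name="field">spell_books</str>
    <str name="spellcheckIndexDir">./spellchecker_books</str>
  </lst>
  <lst name="spellchecker">
    <str name="name">movies</str>
    <str name="field">spell_movies</str>
    <str name="spellcheckIndexDir">./spellchecker_movies</str>
  </lst>
</searchComponent>
```

A request can then select the dictionary for the current category with spellcheck.dictionary=books.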
are you sure you committed after the 'delete' ?
On Wed, Sep 10, 2008 at 2:26 PM, Athok [EMAIL PROTECTED] wrote:
Yes, when the file is indexed, I send a commit.
Noble Paul നോബിള് नोब्ळ् wrote:
are you sure you committed after the 'delete' ?
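For the record, a delete only becomes visible after a commit. A sketch of the full sequence, each POSTed to the update handler (the field and value below are made up):

```xml
<!-- first POST to /solr/update: remove everything matching the query -->
<delete>
  <query>category:discontinued</query>
</delete>

<!-- second POST to /solr/update: commit so searchers see the deletes -->
<commit/>
```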
The only thing I can suggest is that each and every Query in Solr/Lucene
is an example of custom scoring. You might be better off
starting with TermQuery and working through PhraseQuery, BooleanQuery,
and on up. At the point you get to DisjunctionMaxQuery, then ask questions
about that specific query.
On Sep 5, 2008, at 6:27 PM, Ravindra Sharma wrote:
Hi Folks,
I have a somewhat complex scoring/boosting requirement.
Say I have 3 text fields A, B, C and a numeric field called D.
Say my query is testrank.
Scoring should be based on the following:
Query matches
1. text fields A, B and C,
Hi All,
We have a cluster of 4 servers for the application and just one
server for Solr. We have just about 2 million docs to index and we never
bothered to make the Solr environment clustered, as Solr was delivering
performance with the current setup itself. Of late we just discovered
We do both #2 and #4 from the Wiki page. If the schemas have a lot of
overlap and you don't foresee the need to scale to multiple machines (either
due to index size or amount of traffic), it may be best to put all the data
in a single index with different type fields (#4); this certainly
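A sketch of what #4 can look like in practice (the field and value names here are made up): a single string field acts as the discriminator between document types, and each search filters on it.

```xml
<!-- schema.xml: discriminator field shared by all document types -->
<field name="doctype" type="string" indexed="true" stored="true"/>
```

Queries for one type then add a filter query such as fq=doctype:book on top of the normal query.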
Have you tried performing an optimize? Solr doesn't seem to fully
integrate all updates into a single index until an optimize is performed.
Jason
On Wed, Sep 10, 2008 at 1:05 PM, sundar shankar [EMAIL PROTECTED] wrote:
I had an optimize earlier, but removed it as it was too grueling and very time
consuming. Is there a way to configure auto-optimize in Solr? A setting that
would optimize the index after some time or after some number of records, similar to what we
have for commit?
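As far as I know there is no auto-optimize setting analogous to autoCommit; one common workaround is to post an optimize command to the update handler on a schedule (e.g. from a nightly cron job):

```xml
<!-- POST to /solr/update on a schedule -->
<optimize/>
```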
Optimize solved it. Thanks Jason. I am surprised why Solr does this?
Date: Wed, 10 Sep 2008 14:37:11 -0400
From: [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Subject: Re: Question on how index works - runs out of disk space!
: I need to implement a Query similar to DisjunctionMaxQuery, the only
: difference would
: be it should score based on sum of score of sub queries' scores instead of
: max.
BooleanQuery computes scores that are the sum of the subscores -- you just
need to disable the coordFactor (there is a BooleanQuery constructor that takes a disableCoord flag).
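To make the arithmetic concrete, here is a plain-Java illustration (not actual Lucene code) of the two combination rules being discussed: DisjunctionMaxQuery keeps the maximum sub-score plus a tiebreaker times the rest, while a coord-disabled BooleanQuery simply sums the sub-scores.

```java
// Plain-Java sketch of the two score-combination rules (not the Lucene API).
class ScoreCombiner {

    // DisjunctionMaxQuery-style: max sub-score, plus tieBreaker * the others.
    static float disjunctionMax(float[] subScores, float tieBreaker) {
        float max = 0f, sum = 0f;
        for (float s : subScores) {
            if (s > max) max = s;
            sum += s;
        }
        return max + tieBreaker * (sum - max);
    }

    // BooleanQuery-with-coord-disabled-style: just the sum of the sub-scores.
    static float booleanSum(float[] subScores) {
        float sum = 0f;
        for (float s : subScores) sum += s;
        return sum;
    }
}
```

With sub-scores {1, 2, 3}, the max rule (tiebreaker 0) yields 3 while the sum rule yields 6, which is exactly the difference the original question is after.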
On 5-Sep-08, at 5:01 PM, Ravindra Sharma wrote:
I am looking for an example if anyone has done any custom scoring with
Solr/Lucene.
I need to implement a Query similar to DisjunctionMaxQuery; the only
difference would be it should score based on the sum of the sub-queries' scores
instead of max.
: OPtimize solved it . Thanks Jason. I am surprised on why solr does this?
this gets into some complicated discussions about the underlying Lucene
index format; this is discussed at a very low level in the Lucene docs...
http://lucene.apache.org/java/2_3_2/fileformats.html
...but at a
That's brilliant. I am just starting to wonder if there is anything at all
that you guys haven't thought about ;) Thanks, that setting should be
really useful.
I created a JIRA issue for this and attached a patch:
https://issues.apache.org/jira/browse/SOLR-768
wojtekpia wrote:
I would like to use (abuse?) the dataimporter.last_index_time variable in
my full-import query, but it looks like that variable is only set when
running a delta-import.
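For context, the place where ${dataimporter.last_index_time} is normally available is a deltaQuery in data-config.xml; a minimal sketch (the table and column names below are made up):

```xml
<entity name="item"
        query="SELECT * FROM item"
        deltaQuery="SELECT id FROM item
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"/>
```

The patch attached to SOLR-768 is about making that same variable usable in the full-import query as well.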