I guess this is the best idea. Let us have a new BatchHttpSolrServer
which can help achieve this.
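Client-side, the batching idea can be sketched as a small buffer that collects documents and flushes them one request per batch. This is a rough Python sketch only; the class name, flush size, and callback are hypothetical, not an existing SolrJ API:

```python
class BatchBuffer:
    """Buffer documents and send them in batches - a sketch of the
    proposed BatchHttpSolrServer idea (names are hypothetical)."""

    def __init__(self, flush_fn, batch_size=100):
        self.flush_fn = flush_fn      # e.g. one HTTP POST per batch
        self.batch_size = batch_size
        self.docs = []

    def add(self, doc):
        self.docs.append(doc)
        if len(self.docs) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.docs:
            self.flush_fn(self.docs)
            self.docs = []            # start a fresh batch
```

add() buffers until batch_size documents accumulate, then ships them in one call; a final flush() sends any remainder.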
--Noble
On Thu, Dec 4, 2008 at 7:14 PM, Yonik Seeley [EMAIL PROTECTED] wrote:
On Thu, Dec 4, 2008 at 8:39 AM, Mark Miller [EMAIL PROTECTED] wrote:
Kick off some indexing more than once - e.g., post a
Hi All,
I am searching for a term on Solr using the wildcard character * like this:
http://delpearsonwebapps:8080/apache-solr-1.3.0/core51043/select/?q=alle*
Here the search term (word) is: alle*
This query gives me the proper result, but as soon as I give dismaxrequest as a parameter
in the query, no
payalsharma wrote:
Hi All,
I am searching for a term on Solr using the wildcard character * like this:
http://delpearsonwebapps:8080/apache-solr-1.3.0/core51043/select/?q=alle*
Here the search term (word) is: alle*
This query gives me the proper result, but as soon as I give dismaxrequest as
Hi Joshua,
I'm having the same problem as yours.
Just curious, have you found any fix for this?
Thanks
Joshua Reedy wrote:
I have been using a stable dev version of 1.3 for a few months.
Today, I began testing the final release version, and I encountered a
strange problem.
The only thing
We have been having this problem also, and have resorted to just
stripping control characters before sending the text for indexing:
preg_replace('@[\x00-\x08\x0B\x0C\x0E-\x1F]@', '', $text);
-Peter
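For reference, a minimal Python equivalent of that PHP strip (same character ranges; XML 1.0 allows tab \x09, newline \x0A, and carriage return \x0D, which is why those are excluded):

```python
import re

# Strip control characters that are illegal in XML 1.0, keeping
# tab (\x09), newline (\x0A), and carriage return (\x0D).
CONTROL_CHARS = re.compile(r'[\x00-\x08\x0B\x0C\x0E-\x1F]')

def strip_control_chars(text: str) -> str:
    """Remove XML-illegal control characters before indexing."""
    return CONTROL_CHARS.sub('', text)
```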
On Tue, Dec 9, 2008 at 7:59 AM, knietzie [EMAIL PROTECTED] wrote:
Hi Joshua,
I'm having the
Hi folks,
I'm working on creating a schema which will accommodate the following
(likely common) scenario and was hoping for some best practices:
We have stories which are objects culled from various fields in our
database. We currently index them with a bunch of meta-data for faceting,
sorting,
Only a few control characters are legal in XML. Removing everything
but newlines, space, and tab is the right thing to do. --wunder
On 12/9/08 5:45 AM, Peter Wolanin [EMAIL PROTECTED] wrote:
We have been having this problem also, and have resorted to just
stripping
control characters before
Hello,
I think I have not fully understood Solr... :(
I have data in a database and I use DataImportHandler to index it, but
I have a question about the next step. What is the best solution?
1. Have a light index, which does not store the data, and query the
database to get it.
Or
2. Have
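The trade-off between the two options shows up in schema.xml as the stored attribute (field names here are hypothetical): option 1 keeps the index light and fetches the full record from the database by id; option 2 stores the data in Solr so results need no database round-trip:

```xml
<!-- Option 1: light index. body is searchable but not stored;
     after a search, fetch the full record from the database by id. -->
<field name="id"   type="string" indexed="true" stored="true"/>
<field name="body" type="text"   indexed="true" stored="false"/>

<!-- Option 2: also store the content in Solr, so search results
     can be rendered without querying the database. -->
<field name="body" type="text"   indexed="true" stored="true"/>
```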
On Tue, Dec 9, 2008 at 10:36 AM, BenDede [EMAIL PROTECTED] wrote:
Hello,
I think I have not fully understood Solr... :(
I have data in a database and I use DataImportHandler to index it, but
I have a question about the next step. What is the best solution?
1. Have a light index, which
I'm pretty sure * isn't supported by DisMax.
From the Solr Wiki on DisMaxRequestHandler overview
http://wiki.apache.org/solr/DisMaxRequestHandler?highlight=(dismax)#head-ce5517b6c702a55af5cc14a2c284dbd9f18a18c2
This query handler supports an extremely simplified subset of the
Lucene
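One practical workaround is to route wildcard queries to the standard request handler, which does parse full Lucene syntax, and reserve dismax for plain-text queries. A Python sketch of building the two request URLs (the host and core path are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical Solr 1.3 select endpoint.
SOLR_SELECT = "http://localhost:8080/apache-solr-1.3.0/core51043/select/"

def query_url(term: str, use_dismax: bool = False) -> str:
    """Build a select URL; wildcard terms should avoid dismax,
    since dismax does not expand * into a prefix query."""
    params = {"q": term}
    if use_dismax:
        params["qt"] = "dismax"   # dismax ignores wildcard syntax
    return SOLR_SELECT + "?" + urlencode(params)

def smart_query_url(term: str) -> str:
    # Fall back to the standard handler whenever the term
    # contains wildcard characters.
    return query_url(term, use_dismax="*" not in term and "?" not in term)
```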
Hi Jacob,
One option is that if you later index additional content, you also index the
old content you previously added to Solr. In other words, keep both the
meta-data and the other (e.g. PDF) data in a same record/document in 1 index.
If you can't do that, no, there is no join across
Tracy,
I think Iván de Prado's patch is the latest. Porting to 1.4-dev would be good,
too.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: SOLR lists [EMAIL PROTECTED]
To: Solr Users List solr-user@lucene.apache.org
Sent: Tuesday,
I have not looked at Field Collapsing in a long time. If someone made
an effort to bring it up-to-date, I'll review it.
It would be great to get Field Collapsing in 1.4
ryan
On Dec 9, 2008, at 12:46 PM, Otis Gospodnetic wrote:
Tracy,
I think Iván de Prado's patch is the latest. Porting
Thanks for the reply. I figured there is no simple solution here. I am
parsing the query in my code, separating out negations, assertions, and such,
and building the final Solr query to issue. I simply use the boost as given
by the user. If none is given, I use a default boost for title/URL matches.
-
Hi,
i have this config in my solrconfig.xml
<requestHandler name="dismax" class="solr.DisMaxRequestHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <float name="tie">0</float>
    <str name="qf">
      field1^1.1 field2^1.2 field3^1.3 field4^1.4 field5^1.5
    </str>
    <str
Hi,
A while ago, we had a field called "word" which was used as a spelling
field. We switched this to "spell". When querying our Solr instance with
just q=*:*, we get back the expected results. When querying our Solr
instance with q=*:*&wt=json, we get this (below). When setting the qt to
dismax, the
Hi,
I'm trying to use field collapsing with our SOLR but I just can't seem
to get it to do anything.
I've downloaded a dist copy of solr 1.3 and applied Ivan de Prado's
patch - reading through the source code, the patch definitely was
applied successfully (all the changes are in the
Hi Matt,
You need to edit your solrconfig.xml, look for the word "word" in the dismax
section of the config, and change it to "spell".
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Matt Mitchell [EMAIL PROTECTED]
To:
Actually, the dismax thing was a bad example. So, forget about the qt param
for now. I did, however, search the schema and didn't find a reference to
"word". The problem comes in when I switch the wt param from xml to json (or
ruby).
q=*:*&wt=xml == success
q=*:*&wt=json == error
q=*:*&wt=ruby == error
There is probably a document in your index with the field "word".
The JSON writers may be less tolerant when encountering a field that
is not known.
We should perhaps change the json/text based writers to handle this
case gracefully also.
-Yonik
On Tue, Dec 9, 2008 at 5:18 PM, Matt Mitchell
Thanks Yonik. Should I submit this as a bug ticket? Currently it's not a deal
breaker, as we're setting fl manually anyway.
Matt
On Tue, Dec 9, 2008 at 5:38 PM, Yonik Seeley [EMAIL PROTECTED] wrote:
There is probably a document in your index with the field "word".
The JSON writers may be less
Steve,
I need this too. As my previous posting said, I adapted the 1.2 field
collapsing back at the beginning of the year, so I'm somewhat familiar.
I'll try to get a look this weekend. It's the earliest I'm likely to
get spare cycles. I'll post any results.
Tracy
On Dec 9, 2008, at
Otis,
If I get it working in 1.3, I'll be happy to take a shot at a patch
for 1.4.
Tracy
On Dec 9, 2008, at 12:46 PM, Otis Gospodnetic wrote:
Tracy,
I think Iván de Prado's patch is the latest. Porting to 1.4-dev
would be good, too.
Otis
--
Sematext -- http://sematext.com/ --
On Tue, Dec 9, 2008 at 5:45 PM, Matt Mitchell [EMAIL PROTECTED] wrote:
Thanks Yonik. Should I submit this as a bug ticket? Currently it's not a deal
breaker, as we're setting fl manually anyway.
Yes, please do.
-Yonik
Thanks a lot Ken for your inputs.
Regards,
Sourav
-Original Message-
From: Ken Krugler [mailto:[EMAIL PROTECTED]
Sent: Monday, December 08, 2008 12:41 PM
To: solr-user@lucene.apache.org
Subject: RE: Limitations of Distributed Search
Any inputs on this would be really helpful.
Hi Tracy,
Well, I managed to get it working (I think), but the weird thing is, in
the XML output it gives both recordsets (the filtered and unfiltered -
filtered second). In the JSON (the one I actually use anyway, at
least) I only get the filtered results (as expected).
In my core's
Hi,
We are seeing a strange behavior with snappuller.
We have 2 cores: Hotel and Location.
Here are the steps we perform:
1. index hotel on master server
2. index location on master server
3. execute snapshooter for hotel core on master server
4. execute snapshooter