: I'm just looking into transitioning from solr 1.2 to 1.3 (trunk). I
: have some legacy handler code (called "AdvancedRequestHandler") that
: used to work with 1.2 but now throws an exception using 1.3 (latest
: nightly build). The exception is this:
The short answer is: right after you call "s
hi Julio,
Disregard my previous response. In your schema, 'id' is the uniqueKey.
Make 'comboid' the uniqueKey, because that is the target field name
coming out of the entity 'owners'.
--Noble
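For readers following along, a hedged sketch of the schema.xml change being suggested (the field names come from this thread; the types and attributes are assumptions):

```xml
<!-- Illustrative schema.xml fragment; field types are guesses. -->
<field name="id"      type="string" indexed="true" stored="true"/>
<field name="comboid" type="string" indexed="true" stored="true"/>

<!-- Point uniqueKey at the field the 'owners' entity actually produces. -->
<uniqueKey>comboid</uniqueKey>
```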
On Tue, Jun 3, 2008 at 9:46 AM, Noble Paul നോബിള് नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> The field 'id' is r
The field 'id' is repeated for 'pet' as well; rename it to something else,
say
--Noble
On Tue, Jun 3, 2008 at 3:28 AM, Julio Castillo <[EMAIL PROTECTED]> wrote:
> Shalin,
> I experimented with it, and the null pointer exception has been taken care
> of. Thank you.
>
> I have a different p
: city in the Republic of Georgia). I've already written an efficient lookup
: function. I just don't know how to call it during analysis because I don't
: know how to access an instance of SolrCore from within a token filter
: object.
There were reasons why we didn't make analysis factories
S
I am using SolrJ to get results from Solr.
It is hard to deal with the different types of SolrServerException since there
is no error code.
I think it is necessary to add an error code to SolrJ.
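Pending such a feature, one hedged workaround is to wrap the low-level failure in an application exception that carries a coarse code parsed from the cause. The class and method names below (IndexingError, classify) are hypothetical, not SolrJ API, and the message patterns are illustrative:

```java
// Hypothetical wrapper; not part of SolrJ. A sketch of one way to get a
// switchable error code out of opaque exception messages.
class IndexingError extends Exception {
    private final int code;

    IndexingError(String message, int code) {
        super(message);
        this.code = code;
    }

    int getCode() {
        return code;
    }

    // Map a low-level failure onto a coarse, switchable code.
    // The matched substrings are illustrative assumptions.
    static IndexingError classify(Throwable cause) {
        String msg = String.valueOf(cause.getMessage());
        if (msg.contains("Connection refused")) {
            return new IndexingError(msg, 503); // server unreachable
        }
        if (msg.contains("Bad Request")) {
            return new IndexingError(msg, 400); // malformed request
        }
        return new IndexingError(msg, 500);     // unclassified failure
    }
}
```

In practice you would call classify() in the catch block around your SolrJ request, then switch on getCode() instead of string-matching at every call site.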
On 2-Jun-08, at 2:09 PM, Norskog, Lance wrote:
Solr 1.2 ignores the 'number of documents' attribute. It honors the
"every 30 minutes" attribute.
Only if you specify both, I think. There was a bug in the
implementation.
-Mike
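For reference, the two attributes under discussion correspond to the autoCommit block in solrconfig.xml; a hedged sketch (the numbers are examples only):

```xml
<!-- Illustrative solrconfig.xml fragment; values are examples. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs>    <!-- the 'number of documents' trigger -->
    <maxTime>1800000</maxTime>  <!-- 'every 30 minutes', in milliseconds -->
  </autoCommit>
</updateHandler>
```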
Lance
-Original Message-
From: [EMAIL PROTECTED] [m
On Mon, Jun 02, 2008 at 03:46:49PM -0700, Chris Hostetter wrote:
> : % ruby inspect-solr-result.rb s1.rb
> : result['responseHeader']['params']['rows'] => 50
> : result['responseHeader']['params']['start'] => 1
> :
> : result['response']['numFound'] => 36
> : result['respon
: % ruby inspect-solr-result.rb s1.rb
: result['responseHeader']['params']['rows'] => 50
: result['responseHeader']['params']['start'] => 1
:
: result['response']['numFound'] => 36
: result['response']['start'] => 1
: result['response']['docs'].size
: 2 - The error message (I don't have that right now) was about encoding being
: declared twice in header.jsp, or something like that.
: 3 - The @page entry in header.jsp is:
: <%@ page contentType="text/html; charset=utf-8" pageEncoding="UTF-8"%>
: 4 - It won't work on Weblogic 10 (didn't test in
We are testing some items out with our data, and using the 1.3 dev builds
In this case:
Solr Specification Version: 1.2.2008.05.10.02.06.17
Solr Implementation Version: 1.3-dev ${svnversion} - mockbuild - 2008-05-10
02:06:17
We are using wt=ruby and with the results we are seeing a discrepe
Shalin,
I experimented with it, and the null pointer exception has been taken care
of. Thank you.
I have a different problem now. I believe it is a syntax/specification
problem.
When importing data, I got the following exceptions:
SEVERE: Exception while adding:
SolrInputDocument[{comboId=comboId(
Ok,
Sorry. When I sent the email I was not on my work PC.
1 - I am using solr trunk.
2 - The error message (I don't have that right now) was about encoding being
declared twice in header.jsp, or something like that.
3 - The @page entry in header.jsp is:
<%@ page contentType="text/html; charset=utf
:
: Apparently there was a problem at header.jsp in the solr war. After I removed
: the content attribute from the first @page entry on the file, everything
: began to work on the admin interface.
can you elaborate on which version of Solr you are using (1.2, or a
nightly snapshot?) and what exat
On 30-May-08, at 9:51 PM, Dallan Quass wrote:
One more clarification -- I don't need to do this for every token in
the
text; just for "place" fields in the document. Each document has
1-3 place
fields that need to be converted to standard form when the document is
indexed.
There is a spe
It seems that the DisMaxRequestHandler tries hard to handle any query
that the user can throw at it.
From http://wiki.apache.org/solr/DisMaxRequestHandler:
"Quotes can be used to group phrases, and +/- can be used to denote
mandatory and optional clauses ... but all other Lucene query parser
s
How often does your collection change or get updated?
You could also take a slightly different approach: create a very
small and simple Lucene index that contains your translations and then
do the lookup pre-indexing. The code for such a searcher is quite
simple, although it isn't Solr.
Otherwi
Solr 1.2 ignores the 'number of documents' attribute. It honors the
"every 30 minutes" attribute.
Lance
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonik
Seeley
Sent: Sunday, June 01, 2008 6:47 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr ind
What are you looking to do? Lucene inherently uses RAMDirectory under
the covers during indexing, but I'm not sure if that is your interest.
-Grant
On Jun 1, 2008, at 5:09 PM, s d wrote:
Can I use RAMDirectory in Solr? Thanks,
S
--
Grant Ingersoll
http://www.lucidimagin
See http://wiki.apache.org/solr/DisMaxRequestHandler
Namely, "-" is the prohibited operator; thus "--" really is
meaningless. You either need to escape them or remove them.
-Grant
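A minimal sketch of the escaping option: a hand-rolled escaper, kept self-contained rather than relying on any SolrJ helper. The set of special characters below is the usual Lucene query-parser list and is an assumption; check it against your Lucene/Solr version:

```java
// Sketch: backslash-escape Lucene query-parser special characters in
// user input before handing it to a query parser. The SPECIALS set is
// an assumption for your version of Lucene/Solr.
class QueryEscaper {
    private static final String SPECIALS = "\\+-!():^[]\"{}~*?|&;";

    static String escape(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (SPECIALS.indexOf(c) >= 0) {
                out.append('\\'); // prefix each special with a backslash
            }
            out.append(c);
        }
        return out.toString();
    }
}
```

Run over the crashing query, QueryEscaper.escape("apple -- pear") turns each bare "-" into "\-" so the parser treats it as a literal character rather than the prohibit operator.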
On Jun 2, 2008, at 7:14 AM, Bram de Jong wrote:
hello all,
just a small note to say that the dismax query par
> Use smaller caches unless you need bigger ones for some reason.
>
> <filterCache
>   class="solr.search.LRUCache"
>   size="65536"
>   initialSize="16384"
>   autowarmCount="16384"/>
>
> If you aren't faceting, I'd make this smaller.
I'm faceting quite heavily... My plan is to give the user a ta
Apparently there was a problem at header.jsp in the solr war. After I removed
the content attribute from the first @page entry on the file, everything
began to work on the admin interface.
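For anyone hitting the same thing, a hedged before/after of the kind of @page conflict described; the exact directives in the Solr war may differ from this sketch:

```jsp
<%-- Before (assumed): the directive fixes the response encoding twice --%>
<%@ page contentType="text/html; charset=utf-8" pageEncoding="UTF-8" %>

<%-- After: drop the contentType attribute and let pageEncoding stand --%>
<%@ page pageEncoding="UTF-8" %>
```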
2008/6/2 Alexander Ramos Jardim <[EMAIL PROTECTED]>:
> Hello,
>
> Is there any special concerns about deployi
Yonik Seeley wrote:
There is your issue: type "string" indexes the whole field value as a
single token.
You want type "text" like you have on the name field.
yep, i noticed that right after i hit send. things are working now.
sorry, i did say i was a newbie!
-jsd-
On Mon, Jun 2, 2008 at 2:55 PM, Jon Drukman <[EMAIL PROTECTED]> wrote:
> Yonik Seeley wrote:
>>
>> Verify all the fields you want to search on are indexed
>> Verify that the query is being correctly built by adding
>> debugQuery=true to the request
>
> here is the schema.xml extract:
>
>required="t
Yonik Seeley wrote:
Verify all the fields you want to search on are indexed
Verify that the query is being correctly built by adding
debugQuery=true to the request
here is the schema.xml extract:
required="true" />
here is the debugQuery output. i have no idea how to read
Jon,
As a near ex-newbie, you are experiencing some of the same things I did.
If you are using the default set-up of Solr, make sure in your
schema.xml you are indexing the fields you want to search, at least
for now, as text fields. One way you can scale this easily for
example if your sche
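A hedged schema.xml sketch of that advice, using the field names from this thread ('text' is the analyzed type shipped in the example schema; the attributes are assumptions):

```xml
<!-- Illustrative fragment: index searchable fields as the analyzed
     'text' type, not 'string', which indexes the whole value as one token. -->
<field name="name"        type="text" indexed="true" stored="true"/>
<field name="description" type="text" indexed="true" stored="true"/>
<field name="tags"        type="text" indexed="true" stored="true" multiValued="true"/>
```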
On Mon, Jun 2, 2008 at 1:49 PM, Bram de Jong <[EMAIL PROTECTED]> wrote:
> On Mon, Jun 2, 2008 at 10:13 AM, Erick Erickson <[EMAIL PROTECTED]> wrote:
>> But are you sure you're not just masking the problem? That is, your limit
>> may now be 90,000 queries...
>>
>> I always assume this kind of thing
Verify all the fields you want to search on are indexed
Verify that the query is being correctly built by adding
debugQuery=true to the request
-Yonik
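For anyone reproducing this: debugQuery is just a request parameter appended to the select URL; a hedged example (host, port, and query values are assumptions):

```text
http://localhost:8983/solr/select?q=video&debugQuery=true&wt=ruby
```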
On Mon, Jun 2, 2008 at 1:53 PM, Jon Drukman <[EMAIL PROTECTED]> wrote:
> I am brand new to Solr. I am trying to get a very simple setup running.
>
I am brand new to Solr. I am trying to get a very simple setup running.
I've got just a few fields: name, description, tags. I am only able
to search on the default field (name) however. I tried to set up the
dismax config to search all the fields, but I never get any results on
the other f
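For the dismax setup, a hedged solrconfig.xml sketch of searching all three fields; the handler name and boosts are illustrative, not a known-good config for this schema:

```xml
<!-- Illustrative dismax handler fragment; boosts are examples only. -->
<requestHandler name="dismax" class="solr.DisMaxRequestHandler">
  <lst name="defaults">
    <str name="qf">name^2.0 description^1.0 tags^1.0</str>
  </lst>
</requestHandler>
```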
On Mon, Jun 2, 2008 at 10:13 AM, Erick Erickson <[EMAIL PROTECTED]> wrote:
> But are you sure you're not just masking the problem? That is, your limit
> may now be 90,000 queries...
>
> I always assume this kind of thing is a memory leak somewhere, have you
> any tools to monitor your memory consum
But are you sure you're not just masking the problem? That is, your limit
may now be 90,000 queries...
I always assume this kind of thing is a memory leak somewhere, have you
any tools to monitor your memory consumption and see if that's ever-rising?
Best
Erick
On Mon, Jun 2, 2008 at 10:38 AM, B
Ai! That was exactly it. I increased the VM mem, and all is running
fine (34K queries and rising!)!
Thanks a lot,
- Bram
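For reference, with the example Jetty container the heap is raised on the java command line; a hedged example (the sizes are illustrative):

```shell
# Illustrative: give the example Jetty container a larger heap.
java -Xms256m -Xmx1024m -jar start.jar
```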
On Mon, Jun 2, 2008 at 3:32 PM, Chris <[EMAIL PROTECTED]> wrote:
> Maybe your Jetty needs tuning.
> How much memory does your system have?
> Can you show the process informa
Hello,
Are there any special concerns about deploying Solr in WebLogic 10?
I tried to do that, but I get JSP errors complaining about the encoding
being declared twice in the header.jsp file.
--
Alexander Ramos Jardim
I see, thanks for the fast response,
Stefan Oestreicher
--
Dr. Maté GmbH
Stefan Oestreicher / Entwicklung
[EMAIL PROTECTED]
http://www.netdoktor.at
Tel Buero: + 43 1 405 55 75 24
Fax Buero: + 43 1 405 55 75 55
Alser Str. 4 1090 Wien Altes AKH Hof 1 1.6.6
-Original Message-
From:
On Jun 2, 2008, at 9:51 AM, Stefan Oestreicher wrote:
(http://www.nabble.com/-jira--Commented:-(SOLR-553)-Highlighter-does-not-match-phrase-queries-correctly-p17234014.html)
It deployed without problems and the info section in the admin panel
correctly (?) reports the solr version as "Solr
Hi,
in order to use phrase highlighting I built a war from the current svn
trunk.
(http://www.nabble.com/-jira--Commented:-(SOLR-553)-Highlighter-does-not-match-phrase-queries-correctly-p17234014.html)
It deployed without problems and
the info section in the admin panel correctly (?) reports the
Maybe your Jetty needs tuning.
How much memory does your system have?
Can you show the process information for the Java processes above?
Chris
2008/6/2 Bram de Jong <[EMAIL PROTECTED]>:
> Hello all,
>
>
> Still running tests on solr using the example jetty container. I've
> been g
Hello all,
Still running tests on solr using the example jetty container. I've
been getting nice performance. However, suddenly between 15400 and
15600 queries, I get a very serious drop in performance, and this happens
every time I run my test, independent of what I'm searching for. The
performance STAY
hello all,
just a small note to say that the dismax query parser crashes on:
q = "apple -- pear"
I'm running through a stored batch of my users' searches and it went
down on the double dash :)
- Bram
--
http://freesound.iua.upf.edu
http://www.smartelectronix.com
http://www.musicdsp.org
On Sat, May 31, 2008 at 7:03 AM, Chris Hostetter
<[EMAIL PROTECTED]> wrote:
>
> : For date faceting, count missing the order doesn't matter either, and
> : there it's given as a comma-separated list.
>
> Either you are mistaken, or i don't understand your statement. date
> faceting works just like