This kind of thing is what I was getting at in SOLR-344
(https://issues.apache.org/jira/browse/SOLR-344). There I said I'd post a
prototype Java API - but for now, I've had to give up and go back to my
home-grown Lucene-based code.
-Original Message-
From: Ravish Bhagdev [mailto:[EMAIL PROTECTED]
Only if you think the rest of Solr would be better written in JRuby too!
-Original Message-
From: Erik Hatcher [mailto:[EMAIL PROTECTED]
Sent: 31 August 2007 02:57
To: solr-user@lucene.apache.org
Subject: Re: performance questions
On Aug 30, 2007, at 6:31 PM, Mike Klaas wrote:
Or you could write your own Analyzer and Tokenizer to produce single values
corresponding, say, to the start of each range.
Jon
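The bucketing such a custom Tokenizer would perform can be sketched in plain Java (this is not the Lucene TokenStream API, and the bucket width of 50 is purely an illustrative assumption): reduce each raw value to the start of the fixed-width range containing it, and index only that single token.

```java
// Sketch of the bucketing such a Tokenizer would perform: reduce each raw
// numeric value to the start of the fixed-width range containing it, and
// index only that single token. Plain Java rather than the Lucene
// TokenStream API; the bucket width of 50 is purely illustrative.
public class RangeBucket {

    // Start of the range containing value, for non-negative values:
    // 137 with width 50 falls in [100, 150), so the emitted token is "100".
    static String bucketStart(int value, int width) {
        return Integer.toString((value / width) * width);
    }

    public static void main(String[] args) {
        System.out.println(bucketStart(137, 50)); // prints 100
        System.out.println(bucketStart(49, 50));  // prints 0
    }
}
```

A range query then becomes a cheap term query on the bucket token, at the cost of only supporting the predefined range boundaries.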
-Original Message-
From: Jae Joo [mailto:[EMAIL PROTECTED]
Sent: 27 August 2007 16:46
To: solr-user@lucene.apache.org
Subject: Re: range index
I've got a Lucene-based search implementation which searches over documents
in a CMS and weeds out those hits which aren't accessible to the user
carrying out the search. The raw search results are returned as an
iterator, and I wrap another iterator around this to silently consume the
inaccessible hits.
[...] the information that you would use to determine the availability
of the record to any given user, and then construct the
filter based on the current user.
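The wrapping-iterator approach described above can be sketched as a generic filtering iterator; the element type and the accessibility predicate here are stand-ins for real CMS hit objects, not Solr or Lucene API:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

// Sketch of the wrapping iterator described above: it walks the raw hit
// iterator and silently skips anything the accessibility check rejects.
// The generic element type and predicate stand in for real CMS hit objects.
public class FilteringIterator<T> implements Iterator<T> {
    private final Iterator<T> raw;
    private final Predicate<T> accessible;
    private T pending;
    private boolean hasPending;

    FilteringIterator(Iterator<T> raw, Predicate<T> accessible) {
        this.raw = raw;
        this.accessible = accessible;
    }

    @Override
    public boolean hasNext() {
        // Advance past rejected hits until an accessible one is buffered.
        while (!hasPending && raw.hasNext()) {
            T candidate = raw.next();
            if (accessible.test(candidate)) {
                pending = candidate;
                hasPending = true;
            }
        }
        return hasPending;
    }

    @Override
    public T next() {
        if (!hasNext()) throw new NoSuchElementException();
        hasPending = false;
        return pending;
    }

    public static void main(String[] args) {
        List<String> hits = Arrays.asList("public-1", "secret-2", "public-3");
        Iterator<String> it =
            new FilteringIterator<>(hits.iterator(), h -> h.startsWith("public"));
        while (it.hasNext()) System.out.println(it.next()); // public-1, public-3
    }
}
```

Note the drawback this thread is circling around: post-filtering like this breaks result counts and pagination, which is why indexing permission information and filtering at query time is being suggested instead.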
-Original Message-
From: Jonathan Woods [mailto:[EMAIL PROTECTED]
Sent: Monday, August 27, 2007 10:00 AM
To: solr-user@lucene.apache.org
Subject: Re: range index
Is any sample code or howto for writing an Analyzer and Tokenizer available?
Jae
On 8/27/07, Jonathan Woods [EMAIL PROTECTED] wrote:
Or you could write your own Analyzer and Tokenizer to
produce single
values corresponding, say, to the start of each range.
Don't index which users have permission, index which type of
user has permission. Then _filter_ based on that.
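The advice above can be sketched as follows; `Doc` is a stand-in for an indexed document, and the role names are invented for illustration (in Solr the intersection would be expressed as a filter query on a roles field rather than Java post-processing):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the advice above: each document is indexed with the *types* of
// user allowed to see it, and results are filtered against the current
// user's roles. Doc is a stand-in for an indexed document; role names are
// invented for illustration.
public class RoleFilter {

    static class Doc {
        final String id;
        final Set<String> allowedRoles;
        Doc(String id, String... roles) {
            this.id = id;
            this.allowedRoles = new HashSet<>(Arrays.asList(roles));
        }
    }

    // Keep only hits whose allowed roles intersect the user's roles.
    static List<Doc> visibleTo(List<Doc> hits, Set<String> userRoles) {
        List<Doc> out = new ArrayList<>();
        for (Doc d : hits) {
            if (!Collections.disjoint(d.allowedRoles, userRoles)) {
                out.add(d);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Doc> hits = Arrays.asList(
            new Doc("a", "staff", "admin"),
            new Doc("b", "admin"));
        Set<String> roles = new HashSet<>(Arrays.asList("staff"));
        for (Doc d : visibleTo(hits, roles)) System.out.println(d.id); // a
    }
}
```

Because user *types* change far less often than user accounts, a filter per role is also a much better fit for a filter cache than a filter per user.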
-Original Message-
From: Jonathan Woods [mailto:[EMAIL PROTECTED]
Sent: Monday, August 27, 2007 10:26 AM
To: solr-user@lucene.apache.org
Subject: RE: Filtering using
I don't think you should apologise for highlighting embedded usage. For
circumstances in which you're at liberty to run a Solr instance in the same
JVM as an app which uses it, I find it very strange that you should have to
use anything _other_ than embedded, and jump through all the unnecessary
hoops?
On 8/15/07, Jonathan Woods [EMAIL PROTECTED] wrote:
I'm trying to understand how best to integrate directly with Solr
(Java-to-Java in the same JVM) to make the most of its query optimisation -
chiefly, its caching of queries which merely filter rather than rank
results.
I notice that SolrIndexSearcher maintains a filter cache and so does
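The benefit of a filter cache can be sketched roughly like this: a non-scoring filter is evaluated once and its matching document-id set reused across later queries. An access-ordered `LinkedHashMap` stands in here for SolrIndexSearcher's actual cache implementation; the key format and sizes are assumptions for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.Supplier;

// Rough sketch of what a filter cache buys: a non-scoring filter is
// evaluated once and its matching doc-id set is reused for later queries.
// An LRU LinkedHashMap stands in for SolrIndexSearcher's cache.
public class FilterCache {
    private final Map<String, Set<Integer>> cache;

    FilterCache(int maxSize) {
        // Access-ordered map that evicts the least recently used entry.
        this.cache = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Set<Integer>> e) {
                return size() > maxSize;
            }
        };
    }

    // Return the cached doc-id set for this filter, computing it on a miss.
    Set<Integer> docSet(String filterQuery, Supplier<Set<Integer>> evaluate) {
        return cache.computeIfAbsent(filterQuery, k -> evaluate.get());
    }

    public static void main(String[] args) {
        FilterCache fc = new FilterCache(100);
        Set<Integer> first = fc.docSet("type:article",
            () -> new TreeSet<>(Set.of(1, 4, 9)));
        Set<Integer> again = fc.docSet("type:article", () -> {
            throw new AssertionError("should be served from cache");
        });
        System.out.println(first == again); // true: second call hit the cache
    }
}
```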
Thanks, Lance.
I recall reading that Lucene is used in a superfast RDF query engine:
http://www.deri.ie/about/press/releases/details/?uid=55&ref=213.
Jon
-Original Message-
From: Lance Norskog [mailto:[EMAIL PROTECTED]
The Protégé project at Stanford has nice tools for editing
You could try committing updates more frequently, or maybe optimising the
index beforehand (and even during!). I imagine you could also change the
Solr config, if you have access to it, to tweak indexing (or index creation)
parameters - http://wiki.apache.org/solr/SolrConfigXml should be of use.
Maybe there's a different way, in which path-like values like this are
treated explicitly.
I use a similar approach to Matthew at www.colfes.com, where all pages are
generated from Lucene searches according to filters on a couple of
hierarchical categories ('spaces'), i.e. subject and
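One explicit treatment of path-like hierarchical values, as floated above, is to index a token for every ancestor prefix, so a filter on a top-level category also matches documents filed deeper down. The `/` separator and category names here are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// One explicit treatment of path-like values: index a token for every
// ancestor prefix, so a filter on "subject" also matches documents filed
// under "subject/maths/algebra". The '/' separator is an assumption.
public class PathTokens {

    static List<String> ancestors(String path) {
        List<String> out = new ArrayList<>();
        StringBuilder prefix = new StringBuilder();
        for (String part : path.split("/")) {
            if (prefix.length() > 0) prefix.append('/');
            prefix.append(part);
            out.add(prefix.toString());
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(ancestors("subject/maths/algebra"));
        // [subject, subject/maths, subject/maths/algebra]
    }
}
```

Each returned prefix would be indexed as a separate token in the category field, making hierarchical filtering a plain term query at any depth.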