Hi all,
how can I search for 'us' and get 'united states'?
Through a synonyms filter?
--steven.li
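For reference, the usual approach is a SynonymFilterFactory in the field type's analyzer, plus a synonyms.txt in the conf dir -- a sketch only, with illustrative attribute values:

```xml
<!-- conf/synonyms.txt would contain a line like:  us, united states -->
<!-- inside the <analyzer> of the field type used for this field: -->
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true" expand="true"/>
```

With expand="true", each term on a synonyms.txt line is expanded to all of the others, so 'us' and 'united states' match each other.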
Hi guys,
do you have any idea where this problem comes from?
I don't get what I missed there.
thanks,
sunnyfr wrote:
Hi
I would like to understand what a snapshot really is. It's obviously a hard link
to the files.
But does it just contain the last update?
My problem is ... I've a cronjob to
Hello,
I am not that experienced but managed to get a Solr index going by
copying the example dir from the distribution (1.3 released version)
and changing the fields in schema.xml to my needs. As I said everything
is working very well so far.
Now I need a second index on the same machine and the
But I have some problems setting this up. As long as I try the multicore
sample everything works, but when I copy my schema.xml into the
multicore/core0/conf dir I only get 404 error messages when I enter the
admin url.
what is the url you are hitting?
Do you see links from the index
Hi,
Has anybody tried the combination of EmbeddedSolrServer only for
indexing and CommonHttpSolrServer only for searching?
So in my architecture with the EmbeddedSolrServer I want to use the
advantage of direct API calls for indexing purpose and for searching I
would rely on HTTP requests.
I
Is Solr 1.4 (and its nice SLF4J logging) in a state ready for intensive
production usage?
While it is not officially recommended, trunk is quite stable.
Of course back up and make sure to test well before deploying anything
real.
ryan
Yes, this works fine.
But make sure only one SolrServer is writing to the index at a time.
Also note that if you use the EmbeddedSolrServer to index and another
one to read, you will need to call <commit/> on the 'read only' server
to refresh the index view (the word commit is a bit
Hi,
I have a few queries regarding this:
1. Does this mean that committing on the indexing (Embedded) server does
not reflect the document changes when we fire a search through another
(HTTP) server?
2. What happens to the commit fired on the indexing server? Can I remove
that and just commit on
Hi
I'd like to use multi-valued dynamic fields.
Example:
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
<!-- a normal dynamic field for strings -->
<dynamicField name="*_sm" type="string" indexed="true" stored="true"
multiValued="true"/> <!-- a dynamic field for multi-valued string
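A document using such a multi-valued dynamic field could then carry several values for the same field name -- a sketch, with illustrative field names:

```xml
<add>
  <doc>
    <field name="id">1</field>
    <field name="colors_sm">red</field>
    <field name="colors_sm">blue</field>
  </doc>
</add>
```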
On 09.02.2009 15:40 Ryan McKinley wrote:
But I have some problems setting this up. As long as I try the multicore
sample everything works, but when I copy my schema.xml into the
multicore/core0/conf dir I only get 404 error messages when I enter the
admin url.
what is the url you are
Keep in mind that the way lucene/solr work is that the results are
constant from when you open the searcher. If new documents are added
(without re-opening the searcher) they will not be seen.
<commit/> tells Solr to re-open the index and see the changes.
1. Does this mean that
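The refresh on the 'read only' server can be issued as a plain update message -- a sketch, assuming the stock XML update handler at /update:

```xml
<!-- POST this to the read-only server's /update handler
     to make it re-open its searcher and see new documents: -->
<commit/>
```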
Hey Renaud - in the future, it's probably best to direct Gaze questions
(unless it directly relates to Solr) to supp...@lucidimagination.com.
Gaze is a tool that stores RequestHandler statistics averages (over small
intervals) for long time ranges, and then
Hi,
I asked the same question a few days ago. Using multiValued dynamic fields
works fine even though the documentation and examples do not mention it.
Cheers,
Bruno
2009/2/9 Ian Sugar iansu...@gmail.com
Hi
I'd like to use multi-valued dynamic fields.
Example:
dynamicField
On Feb 9, 2009, at 10:40 AM, Michael Lackhoff wrote:
On 09.02.2009 15:40 Ryan McKinley wrote:
But I have some problems setting this up. As long as I try the multicore
sample everything works, but when I copy my schema.xml into the
multicore/core0/conf dir I only get 404 error messages when I
On 09.02.2009 17:01 Ryan McKinley wrote:
Check your solrconfig.xml; you probably have something like this:
<!-- Used to specify an alternate directory to hold all index data
other than the default ./data under the Solr home.
If replication is in use, this should match the
Hi Mark,
Mark Miller wrote:
Hey Renaud - in the future, it's probably best to direct Gaze questions
(unless it directly relates to Solr) to supp...@lucidimagination.com.
Right, I was not aware of this mailing list.
Gaze is a tool that stores
(I think I have a horrible subject line but I wasn't sure how to
properly explain myself.)
I have a text field that I store last names in (and everything is
lowercased prior to insertion, not sure if that matters).
The field is described as:
<field name="last_name" type="text" indexed="true"
Otis Gospodnetic wrote:
I'd say: Make sure you don't commit more frequently than the time it takes for your
searcher to warm up, or else you risk searcher overlap and pile-up.
Cool. I found a place in our code where we were committing the same
thing twice in very rapid succession. Fingers
Rupert,
Try using string field type instead of text and test it out with some
unusual/rare last name patterns. For example, try it with last names that
consist of more than one word and see if you are happy with those results.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
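Otis's suggestion would amount to something like this in schema.xml (a sketch; only the field name is taken from the thread):

```xml
<field name="last_name" type="string" indexed="true" stored="true"/>
```

The string type keeps the whole value as a single token, so a multi-word last name matches only as an exact value rather than word by word.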
The default highlighter output is bogus if you're trying to use the
snippets in a web browser. With the default <em></em> delimiters, the
temptation is to just stick the snippets in an innerHTML property, but
the problem is that other HTML special characters (< and >) are not
escaped. For example, a
On Mon, Feb 9, 2009 at 2:59 PM, Jeffrey Baker jwba...@gmail.com wrote:
The default highlighter output is bogus if you're trying to use the
snippets in a web browser. With the default <em></em> delimiters, the
temptation is to just stick the snippets in an innerHTML property, but
the problem is
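A minimal sketch of such escaping in plain Java (not a Solr API; class and method names are illustrative, and it assumes the default <em> delimiters):

```java
// Escape &, <, > in a highlighter snippet, then restore only the
// <em>/</em> delimiters that the highlighter itself inserted.
// Note: a literal "<em>" occurring in the indexed text would be
// un-escaped too -- a known limitation of this post-processing approach.
public class SnippetEscaper {
    public static String escape(String snippet) {
        String s = snippet
            .replace("&", "&amp;")
            .replace("<", "&lt;")
            .replace(">", "&gt;");
        // put the highlighter's own markup back
        return s.replace("&lt;em&gt;", "<em>")
                .replace("&lt;/em&gt;", "</em>");
    }

    public static void main(String[] args) {
        // prints: if (a &lt; b) return <em>foo</em> &amp; bar;
        System.out.println(escape("if (a < b) return <em>foo</em> & bar;"));
    }
}
```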
Hello,
I am wondering if the UpdateResponse status codes are documented somewhere?
I haven't been able to find them. I know 0 is success..
Thanks,
Mark
I've been able to reduce these GC outages by:
1) Optimizing my schema. This reduced my index size by more than 50%
2) Smaller cache sizes. I started with filterCache, documentCache and
queryCache sizes of ~10,000. They're now at ~500
3) Reduce heap allocation. I started at 27 GB, now I'm 'only'
I tried sorting using a function query instead of the Lucene sort and found
no change in performance. I wonder if Lance's results are related to
something specific to his deployment?
In my schema I have two copies of my numeric fields: one with the original
value (used for display, sort), and one with a rounded version of the
original value (used for range queries).
When I use my rounded field for numeric range queries (e.g.
q=RoundedValue:[100 TO 1000]), I see very
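The rounded copy described above might be produced at index time with something like this (a sketch; the bucket size and padding width are illustrative, not taken from the thread):

```java
// A sketch of the "rounded copy" scheme: round each value down to a
// bucket and zero-pad it, so lexicographic order on the string field
// matches numeric order and a range query expands to far fewer unique
// terms. Non-negative values are assumed.
public class RoundedValue {
    static final int BUCKET = 100;  // illustrative bucket size

    public static String round(long value) {
        long rounded = (value / BUCKET) * BUCKET;  // round down to bucket
        return String.format("%012d", rounded);    // fixed-width zero pad
    }

    public static void main(String[] args) {
        // prints: 000000001200
        System.out.println(round(1234));
    }
}
```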
: The behavior I would like is identical to 'tagging' each document with the
: list-id/user/order and then using standard faceting to show what lists
: documents are in and what users have put the docs into a list.
:
: But - I would like the main index to be read only. The index needs to be
:
: How can I hack the existing script to support multiple rsync module
you might want to just consult some rsyncd resources to answer this
question, i believe adding a new [modname] block is how you add a
module, with the path/comment keys listed underneath, however...
1) i don't believe it's
: As I understood lucene's boost, if you search for John Le Carre it will
: give a better score to results that contain just the searched string than
: to results that have, for example, 50 words where the search is contained
: in the words.
:
: In Solr, my goal is to give more score to the docs
: I am trying to test relevancy of results with the q.alt field on a Dismax
: Request Handler. Term level boosting based on bq information in
: solrconfig.xml works fine. However field level boosting based on the qf
: information in solrconfig.xml doesn't seem to work.
:
: Query
:
: I have indexed my data as custom123, customer, custom for the
: UserName field. I need to search the records for exact match, when I
: am trying to search with UserName:customer I am finding the records
: where UserName is custom123 and custom.
:
: As per my understanding solr splits the
: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
: SingleInstanceLock: write.lock
: at org.apache.lucene.store.Lock.obtain(Lock.java:85)
: at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
are there any other ERROR messages in your log before
: Subject: Severe errors in solr configuration
It sounds like you solved your problem, but a few things to clarify for
people who might find this thread later...
: java.security.AccessControlException: access denied (java.io.FilePermission
: /var/lib/tomcat6/solr/solr.xml read) at
:
: I would like to understand what a snapshot really is. It's obviously a hard
: link to the files.
: But does it just contain the last update?
the nature of lucene indexes is that files are never modified -- only
created, or deleted.
this makes rsyncing very efficient when updates have been made to an
: OK, so java.util.logging has no way of sending error messages to a separate
: log without writing your own Handler/Filter code.
: If we just skip over the absurdity of that, and the rage it makes me feel,
FWIW: that's a slight mischaracterization of java.util.logging (JUL): the
API framework
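For what it's worth, the Filter-based route that JUL does offer can be sketched in a few lines (pure java.util.logging, nothing Solr-specific; names are illustrative):

```java
// Send only SEVERE records to a separate handler by attaching a Filter
// to it -- no custom Handler subclass required.
import java.util.logging.Filter;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;

public class SevereOnly {
    // Wrap any existing handler so it drops everything below SEVERE.
    public static Handler severeOnly(Handler target) {
        target.setFilter(new Filter() {
            public boolean isLoggable(LogRecord record) {
                return record.getLevel().intValue() >= Level.SEVERE.intValue();
            }
        });
        return target;
    }
}
```

The filtered handler can then be added alongside the normal one, giving a separate error-only log destination.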
: We have a standard solr install that we use across a lot of different uses.
: In that install is a custom search component that loads a lot of data in its
: inform() method. This means the data is initialized on solr boot. Only about
: half of our installs actually ever call this search
Just an update on my own research:
I have discovered the 'ParallelReader' class (subclass of IndexReader) in
lucene, which is designed for searching across multiple indexes.
This appears to suit our needs - and I do not expect it will be too difficult
to integrate into Solr.
: Now all that is left is a more cosmetic change I would like to make:
: I tried to place the solr.xml in the example dir to get rid of the
: -Dsolr.solr.home=multicore for the start and changed the first entry
: from core0 to solr and moved the core1 dir from multicore directly
: under the
Mark,
I'm not a solrj user, but I think you don't need to check the status code.
The Solr server always returns 0 for status on success. If something goes
wrong, the Solr server returns an HTTP 400/500 response, and then you'll
get an Exception.
Koji
Mark Ferguson wrote:
Hello,
I am wondering if the
0 is actually a communication failure (can't connect at all).
200 is good.
Solr returns 400s when it bails. I always thought this was strange,
because I thought 500 is an application error (what I would expect)
and 400 is a general HTTP error.
Best,
J
On Tue, Feb 10, 2009 at 7:22 AM, Koji
One other person has reported this to me off-list, and I just
encountered it myself. ExtractingRequestHandler does not handle plain
text files properly (no text is extracted). Here's an example:
curl
And yes, the file does have textual content :)
And I tried both ext.resource.name and stream.contentType to no avail.
Erik
On Feb 9, 2009, at 10:17 PM, Erik Hatcher wrote:
One other person has reported this to me off-list, and I just
encountered it myself. ExtractingRequestHandler