1: modify your schema.xml, e.g.:
<fieldtype name="text_cn" class="solr.TextField">
  <analyzer class="chineseAnalyzer"/>
</fieldtype>
2: add your field:
<field name="urfield" type="text_cn" indexed="true" stored="true"/>
3: add your analyzer jar to {solr_dir}\lib\
4: rebuild the new Solr war and you will find it in
On Wed, Jun 3, 2009 at 1:59 AM, anuvenk anuvenkat...@hotmail.com wrote:
I have to search over multiple fields, so passing everything in the 'q' might not be neat. Can something be done with the facet.query to accomplish this?
I'm using the facet parameters. I'm not familiar with java so not
Yes, Erick, I did. Actually the course of events was as follows. I started with the example config files (solrconfig.xml, schema.xml) and added my own fields. In my search I have 2 clauses: one for a phrase and one for a set of keywords. And from the very beginning it worked fine. Until on the second day
Hi all:
I want to contribute a memcached implementation of a Solr cache (only the query result cache is tested); patch for Solr 1.3: http://code.google.com/p/solr-side/issues/detail?id=1&can=1
solr-memcache.zip http://solr-side.googlecode.com/files/solr-memcache.zip
solr-memcache.zip http://solr-side.googlecode.com/files/solr-memcache.zip
Please raise this as an issue in Jira:
https://issues.apache.org/jira/browse/SOLR
Let's see what others think about this.
On Wed, Jun 3, 2009 at 1:14 PM, chenl...@yahoo.com.cn wrote:
Hi all:
I want to contribute a memcached implementation of a Solr cache (only the query result cache is tested)
patch for solr 1.3
It's definitely not proper documentation, but maybe it can give you a hand:
http://www.derivante.com/2009/04/27/100x-increase-in-solr-performance-and-throughput/
Martin Davidsson-2 wrote:
I've tried to read up on how to decide, when writing a query, what
criteria goes in the q parameter and
wow! that was a good read!!!
On Wed, Jun 3, 2009 at 2:23 PM, Marc Sturlese marc.sturl...@gmail.com wrote:
It's definitely not proper documentation, but maybe it can give you a hand:
http://www.derivante.com/2009/04/27/100x-increase-in-solr-performance-and-throughput/
Martin Davidsson-2 wrote:
https://issues.apache.org/jira/browse/SOLR-1197
--- On Wed, Jun 3, 2009, chenl...@yahoo.com.cn chenl...@yahoo.com.cn wrote:
From: chenl...@yahoo.com.cn chenl...@yahoo.com.cn
Subject: How contrib for solr memcache query cache
To: solr-user@lucene.apache.org
Date: Wed, Jun 3, 2009, 3:44 PM
Hi all:
I want to
Hi!
To be brief: where should I start with this subject?
Any pointers to some [semi-]functional solutions that crawl the web like a normal crawler, take care of HTML parsing, etc., and feed the crawled content to Solr as documents via add?
regards!
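Not a crawler recommendation, but the final step asked about (feeding crawled pages to Solr "as solr-documents per add") can be sketched with Solr's XML update format. A minimal sketch, assuming field names (id, title, content) that would have to exist in your schema.xml:

```python
# Build the XML body for a POST to Solr's /update handler.
# Field names here (id, title, content) are assumptions -- they must
# match fields defined in your schema.xml.
import xml.etree.ElementTree as ET

def to_solr_add(doc_fields):
    """Wrap one crawled page's fields in Solr's <add><doc> update format."""
    add = ET.Element("add")
    doc = ET.SubElement(add, "doc")
    for name, value in doc_fields.items():
        field = ET.SubElement(doc, "field", name=name)
        field.text = value
    return ET.tostring(add, encoding="unicode")

page = {"id": "http://example.com/", "title": "Example", "content": "Hello world"}
xml_body = to_solr_add(page)
print(xml_body)
```

The resulting string would then be POSTed to the /update handler with Content-Type text/xml, followed by a <commit/>.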
Hello,
My goal is to get an index for alphabetical faceting of titles. For this I'm trying to define a fieldType meant to index the first letter of the text, with stopwords removed. My problem is that without WordDelimiterFilterFactory stopwords are not removed, and with it I end up with 2 tokens (and
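One possible direction (a sketch, untested; the exact chain and attributes are my assumptions) is to avoid WordDelimiterFilterFactory entirely: tokenize, remove stopwords, lowercase, then reduce each remaining token to its first letter with PatternReplaceFilterFactory:

```xml
<fieldType name="first_letter" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- keep only the first character of each remaining token -->
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="^(.).*$" replacement="$1" replace="all"/>
  </analyzer>
</fieldType>
```

This still yields one single-letter token per word; for title faceting you would typically populate such a field via copyField and care only about the first token.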
Gena,
Besides droids (simpler, smaller components you can put together) there is also Nutch, a bigger beast for large-scale crawling that can index crawled pages into Solr - http://lucene.apache.org/nutch .
Otis
- Original Message
From: Gena Batsyan gbat...@gmail.com
To:
Anshuman, thanks for your input. I will try that; I understand what you are suggesting.
Marcus, I did not understand how your KeywordTokenizer works. Do I have to define a separate field like what we have in the example schema and use that field?
This is what I came up with:
fieldType
Yeah, that's the point. Once you have this, you can use copyField as was written above with the string example.
Bny Jo wrote:
Anshuman, thanks for your input. I will try that, I can understand what you are trying.
Marcus, I did not understand how your KeywordTokenizer works. Is that I
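The copyField approach mentioned in the reply above might look like this in schema.xml (the field and type names are only illustrative):

```xml
<field name="title" type="text" indexed="true" stored="true"/>
<field name="title_exact" type="string" indexed="true" stored="false"/>
<copyField source="title" dest="title_exact"/>
```

The analyzed field serves normal searches while the string copy keeps the untokenized value for exact matching or faceting.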
I've been hitting my head against a wall all morning trying to figure
this out and haven't managed to get anywhere and wondered if anybody
here can help.
I have defined a field type
<fieldType name="text_au" class="solr.TextField"
    positionIncrementGap="100">
  <analyzer>
    <tokenizer
So, is there a way to perform the filtering I described?
On Mon, Jun 1, 2009 at 22:24, Alex Shevchenko caeza...@gmail.com wrote:
But I don't need to sort by this value. I need to cut results where this value (for a particular term of the query!) is not in some range.
On Mon, Jun 1, 2009 at
This is fixed in trunk. The next nightly build will have this fix. Thanks!
On Tue, Jun 2, 2009 at 9:49 PM, Steffen B. s.baumg...@fhtw-berlin.de wrote:
Glad to hear that it's not a problem with my setup.
Thanks for taking care of it! :)
Shalin Shekhar Mangar wrote:
On Tue, Jun 2, 2009 at
James,
I don't see the error, but this is exactly what Solr Admin's analysis page will
quickly help you with! :)
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: James Grant james.gr...@semantico.com
To: solr-user@lucene.apache.org
I don't think this is possible without changing Solr.
Or maybe it's possible with a custom Search Component that looks at all hits
and checks the df (document frequency) for a term in each document? Sounds
like a very costly operation...
Otis
--
Sematext -- http://sematext.com/ -- Lucene -
I am working with an index of ~10 million documents. The index does not change often.
I need to apply some external search criteria that will return some number of results -- this search could take up to 5 mins and return anywhere from 0-10M docs.
I would like to use the output of
Hi,
I'm adding the MoreLikeThis functionality to my search.
1. Do I understand it right that the query:
q=id:1&mlt=true&mlt.fl=content
will bring back documents in which the most important terms of the content
field are partly the same as those of the content field of the doc with
id=1?
2. Also,
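For reference, a fuller MoreLikeThis request adds the standard tuning parameters (the values here are only illustrative):

```
q=id:1&mlt=true&mlt.fl=content&mlt.mintf=1&mlt.mindf=1&mlt.count=5
```

mlt.mintf and mlt.mindf set the minimum term and document frequency for a term to count as "interesting", and mlt.count caps the number of similar documents returned per result.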
Hi,
I have an index in which I am always indexing the same documents (re-indexing).
So I need to search for them by their segment number.
When I ask solrj for the documents by their segment [for example: solrj.query("segment:20090603142546");], it doesn't return anything. I checked the
Hey everyone!
I just wanted to give a BIG THANKS for everyone who came. We had over a
dozen people, and a few got lost at UW :) [I would have sent this update
earlier, but I flew to Florida the day after the meeting].
If you didn't come, you missed quite a bit of learning and topics. Such as:
On Wed, Jun 3, 2009 at 1:53 AM, Marc Sturlese marc.sturl...@gmail.com wrote:
It's definitely not proper documentation, but maybe it can give you a hand:
http://www.derivante.com/2009/04/27/100x-increase-in-solr-performance-and-throughput/
Martin Davidsson-2 wrote:
I've tried to read up on
Hi,
Could you please start a new thread?
Thanks,
Otis
- Original Message
From: sunnyfr johanna...@gmail.com
To: solr-user@lucene.apache.org
Sent: Wednesday, June 3, 2009 10:20:06 AM
Subject: Re: Solr vs Sphinx
Hi guys,
I have been working with Solr for several months now, and really you
Hello,
It's ugly, but the first thing that came to mind was ThreadLocal.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: David Giffin da...@giffin.org
To: solr-user@lucene.apache.org
Sent: Wednesday, June 3, 2009 1:57:42 PM
Subject:
On Jun 3, 2009, at 5:09 AM, James Grant wrote:
I've been hitting my head against a wall all morning trying to
figure this out and haven't managed to get anywhere and wondered if
anybody here can help.
I have defined a field type
<fieldType name="text_au" class="solr.TextField"
I should clarify that I am running the nutch-solr integration, and the schema.xml files are the same in Nutch and in Solr.
--
View this message in context:
http://www.nabble.com/Solr-search-by-segment-tp23856569p23859728.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hey there,
Anyone got any advice on which caches (filterCache, queryResultCache, documentCache, fieldValueCache) should be implemented using the solr.FastLRUCache in Solr 1.4, and what are the pros and cons vs the solr.LRUCache?
Thanks, Robert.
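As a point of reference, the implementation is chosen per cache via the class attribute in solrconfig.xml; a sketch (sizes are illustrative, not recommendations):

```xml
<!-- solrconfig.xml: FastLRUCache tends to suit caches with high hit ratios
     (cheaper gets, costlier puts); LRUCache suits caches with more churn. -->
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
```

Swapping the class attribute lets you benchmark one against the other without touching anything else in the config.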
I happened to revisit this post that I had started a long time back. I'm still using the same query-time synonyms. Now I want to be able to map cities to states in the synonyms, and I'm continuing to have this issue with the multi-word synonyms. Could you please explain what you've done to overcome this
I tried adding some city-to-state mappings in the synonyms file. I'm using the dismax handler for phrase matching. So when I add more city-to-state mappings, I end up with zero results for state-based searches.
Eg: ca,california,los angeles
ca,california,san diego
A small addition to my earlier post. I wonder if it's because of the 'mm' param, which requires that for up to 3 words in the search phrase, all the words must match. If I alter this now, I'd get irrelevant results for a lot of popular 1-, 2-, and 3-word search terms. How do I solve this?
anuvenk
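One direction to experiment with (a sketch, not a tested fix; behavior depends on whether synonyms are applied at query or index time) is replacing bare expansion groups with explicit one-way mappings in synonyms.txt, so multi-word city names collapse to a single state token instead of expanding:

```
# expansion group: each term expands to all the others, and the
# multi-word expansions can interact badly with dismax's mm param
ca,california,los angeles

# explicit mapping: left-hand terms are rewritten to the right-hand side
los angeles, san diego, ca => california
```

With the mapping form, a query containing "los angeles" is analyzed to the single token "california", which avoids inflating the clause count that mm is measured against.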
Sunspot: A Solr-Powered Search Engine for Ruby
http://www.linux-mag.com/id/7341
glen
http://zzzoot.blogspot.com/
--
I am implementing Solr on a CentOS server. It involves handling multiple languages. Where is the best place to look for developers experienced in Solr who may be interested in a little consulting work? Mostly to give some guidance, etc. IRC is rather quiet.
Thank you :)