When facet.sort is used, the facet fields are sorted by the count
in the reply string when using python output. However, after calling
eval(), the sort order seems to be lost. Not sure if anyone has come
up with a way to avoid this problem.
Using the JSON output with a JSON parser for Python should …
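If it helps, here is a minimal sketch of the idea (assuming Solr's wt=json writer with the json.nl=arrarr option, which emits facet counts as [name, count] pairs rather than a dict, so eval()/dict ordering never comes into play; the field name and counts below are made up):

```python
import json

# Hypothetical facet section of a Solr response fetched with
# wt=json&json.nl=arrarr, which returns facet counts as [name, count]
# pairs instead of a dict, so the count-sorted order survives parsing.
raw = '''
{"facet_counts": {"facet_fields": {
  "style_id": [["1234", 42], ["5678", 17], ["9012", 3]]
}}}
'''

pairs = json.loads(raw)["facet_counts"]["facet_fields"]["style_id"]
for name, count in pairs:
    print(name, count)   # printed in the original count-sorted order
```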
Right now, each style/size is a separate document.
I agree, it makes far more sense to have each size be associated with
only one style. I think I'll pursue that route at first here, and
then for grouping similar items I may have to do some facet magic
(facets to show links for product_cate…
Ahh, ok.
I'll check out Saxon-B and XSLT templates.
++
| Matthew Runo
| Zappos Development
| [EMAIL PROTECTED]
| 702-943-7833
++
On May 2, 2007, at 3:57 PM, Brian Whitman wrote:
Hi Matthew,
You might be able to get away with just using facets, depending on
whether your goal is to provide a clickable list of styles_ids to the user,
or if you want to only return one search result for each style_id.
For a list of clickable styles, it's basic faceting, and works really …
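As a rough sketch, a basic facet request for that case might look like the following (the query, field names, and host are all hypothetical):

```python
from urllib.parse import urlencode

# Sketch of a facet-only request for clickable style_id links; rows=0
# suppresses the documents themselves so only facet counts come back.
params = {
    "q": "running shoes",        # hypothetical user query
    "rows": 0,                   # facet counts only, no documents
    "facet": "true",
    "facet.field": "style_id",   # hypothetical field name
    "facet.mincount": 1,         # skip styles with no matches
    "facet.limit": 20,
}
url = "http://localhost:8983/solr/select?" + urlencode(params)
print(url)
```

Each returned style_id/count pair can then be rendered as a link that re-queries with fq=style_id:<value>.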
On May 2, 2007, at 6:55 PM, Matthew Runo wrote:
I was wondering - is it possible to search and group the results by
a given field?
For example, I have an index with several million records. Most of
them are different sizes of the same style_id.
I'd love to be able to do.. group.by=style_id or something like that in the results
Hello!
I was wondering - is it possible to search and group the results by a
given field?
For example, I have an index with several million records. Most of
them are different sizes of the same style_id.
I'd love to be able to do.. group.by=style_id or something like that
in the results
: I'm wondering how to delete a range of documents with
: a range filter instead of a query. I want to remove all docs with a
: creation date within two dates.
:
: As far as I remember range filters are much quicker than queries in lucene.
Never fear, the default query parser in Solr does a lot of …
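For what it's worth, delete-by-query should cover this, since a range expression can go straight into the query element posted to the update handler. A sketch, with a hypothetical creation_date field:

```python
# A delete-by-query sketch: Solr's /update handler accepts a <delete>
# element whose <query> can use Lucene range syntax. The field name
# creation_date is hypothetical; Solr date fields use the canonical
# Zulu format shown here.
body = ('<delete><query>'
        'creation_date:[2007-01-01T00:00:00Z TO 2007-02-01T00:00:00Z]'
        '</query></delete>')
print(body)
# POST this to http://localhost:8983/solr/update with
# Content-Type: text/xml; charset=UTF-8, then send <commit/>.
```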
: I feel like I might be missing something, and there is in fact a way to
: use a custom HitCollector and benefit from caching, but I just don't see
: it now.
I can't think of any easy way to do what you describe ... you can always
use the low level IndexSearcher methods with a custom HitCollector …
: For example I have the composite word "wishlist" in my document. I can
: easily find the document by using the search string "wishlist" or "wish*"
: but I don't get any result with "list".
what you are describing is basically a substring search problem ...
sometimes this can be dealt with by using …
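One common way to deal with it is indexing character n-grams of each token in a side field, so that inner substrings like "list" become searchable terms. A plain-Python sketch of the tokens such a filter would emit (the length parameters are illustrative):

```python
def ngrams(term, min_len=3, max_len=8):
    """All substrings of term between min_len and max_len characters,
    the same tokens a character n-gram filter would emit at index time."""
    out = []
    for n in range(min_len, min(max_len, len(term)) + 1):
        for i in range(len(term) - n + 1):
            out.append(term[i:i + n])
    return out

# "list" is among the grams of "wishlist", so a search for "list"
# against the n-gram field would now match the document.
print("list" in ngrams("wishlist"))   # True
```

The trade-off is a much larger index and some noisy matches, so the n-gram field is usually searched alongside, not instead of, the normal field.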
Hi,
I have a situation where I have some external weight information that I'd like
to use for the computation of the final "weighted" ranking and I'm trawling
through Solr sources for a good place to plug this in. What I have is an index
in which each Document has an identifier that I can map …
Hi.
First off, thanks for a nice piece of software.
I'm wondering how to delete a range of documents with
a range filter instead of a query. I want to remove all docs with a
creation date within two dates.
As far as I remember range filters are much quicker than queries in lucene.
/Johan
I tried, but ran into a missing ant file:
lucene-nightly\build.xml:7: Cannot find common-build.xml imported from
C:\download\lucene-nightly\build.xml
I've posted to the lucene dev list as well; will try the lucene user list too.
- mps
Otis Gospodnetic <[EMAIL PROTECTED]> wrote:
Try building your own jar (ant jar-core in lucene's trunk):
strings /home/otis/dev/repos/lucene/java/trunk/build/lucene-core-2.2-dev.jar |
grep -i clover
I'll have a look at the nightly later, but you should also bring up that issue
on [EMAIL PROTECTED] list.
Otis
. . . . . . . . . . . . . .
Otis,
Thanks for the response, that list should be very useful!
Charlie
-----Original Message-----
From: Otis Gospodnetic [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 02, 2007 11:13 AM
To: solr-user@lucene.apache.org
Subject: Re: NullPointerException (not schema related)
Charlie,
There is nothing built into Solr for that. …
Charlie,
There is nothing built into Solr for that. But you can use any of the numerous
free proxies/load balancers. Here is a collection that I've got:
http://www.simpy.com/user/otis/search/load%2Bbalance+OR+proxy
Otis
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simpy -- http://www.simpy.com/ - Tag - Search - Share
Try it on the nightly build, dude:
[EMAIL PROTECTED] tmp]# strings lucene-core-nightly.jar | grep -i clover|more
org/apache/lucene/LucenePackage$__CLOVER_0_0.class
org/apache/lucene/analysis/Analyzer$__CLOVER_1_0.class
org/apache/lucene/analysis/CachingTokenFilter$__CLOVER_2_0.class
org/apache/…
As far as I know, there is no clover dependency, at least not in the trunk
version of Solr. I tried this cheap trick:
$ strings lib/lucene-core-2.1.0.jar | grep -i clover
Otis
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simpy -- http://www.simpy.com/ - Tag - Search - Share
Hi Lutz,
That is because neither Solr nor Lucene (the indexing/searching toolkit that
Solr runs on top of) knows anything about compound words. Nothing there knows
that the English word "wishlist" is a compound word. You'd have to write
your own analyzer and tokenizer that examines each word/…
I am a newbie to Solr and found it very easy to get started!
However, now I am stuck at this issue of dealing with correlated vector fields.
for example
the data on scientific publications. It will have a list of authors and their
respective organizations. Sample data can be represented as:
Towar…
I just downloaded the latest nightly build of Lucene and compiled it with the
Solr 1.1.0 source, and now leading + trailing wildcards work like a charm.
The only issue is, the lucene-core .jar file seems to have a runtime
dependency on clover.jar. Does anyone know if this is intentional, or …
The collection distribution scripts rely on hard links and rsync. It
seems that both may be available on Windows:
hard links:
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/fsutil_hardlink.mspx?mfr=true
rsync:
http://samba.anu.edu.au/rsync/download.html
I say ma…
Hi,
I have a search problem with composite words.
For example I have the composite word "wishlist" in my document. I can
easily find the document by using the search string "wishlist" or "wish*"
but I don't get any result with "list".
I can do a fuzzy search but this gives me too many results.
Hi Christian,
> It is not sufficient to set the encoding in the XML but
> you need an additional HTTP header to set the encoding ("Content-type:
> text/xml; charset=UTF-8")
Thanks, that's what I was missing.
Gereon
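For anyone hitting the same thing, a minimal sketch of an update post that sets the charset in the HTTP header as well as the XML prolog (the stock localhost URL and field names are just examples):

```python
import urllib.request

# Declare the charset both in the XML prolog and in the Content-Type
# HTTP header, so the server does not fall back to a default encoding.
xml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<add><doc>'
       '<field name="id">1</field>'
       '<field name="name">caf\u00e9</field>'
       '</doc></add>')

req = urllib.request.Request(
    "http://localhost:8983/solr/update",
    data=xml.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=UTF-8"},
)
# urllib.request.urlopen(req)  # run only against a live Solr instance
```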
Gereon,
The four bytes do not look like a valid utf-8 encoded character. 4-byte
characters in utf-8 start with the binary sequence "11110". (For reference
see the excellent Wikipedia article on utf-8 encoding.)
Your problem looks like someone interpreted your valid 2-byte utf-8 encoded
character …
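The symptom is easy to reproduce; this sketch shows the "e acute" bytes being mis-read as Latin-1 and re-encoded, which yields exactly four bytes:

```python
# "e acute" (U+00E9) is C3 A9 in UTF-8; if those two bytes are
# mistakenly decoded as Latin-1 and re-encoded as UTF-8, each byte
# becomes a character of its own and four bytes come out.
good = "\u00e9".encode("utf-8")
print(good.hex())                      # c3a9

mangled = good.decode("latin-1").encode("utf-8")
print(mangled.hex())                   # c383c2a9 -- the four bytes
```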
Hi,
I have a question regarding UTF-8 encodings, illustrated by the
utf8-example.xml file. This file contains raw, unescaped UTF8 characters,
for example the "e acute" character, represented as two bytes 0xC3 0xA9.
When this file is added to Solr and retrieved later, the XML output
contains a four-byte sequence …
i know this is a stupid question, but are there any collection
distribution scripts for windows available ?
thanks !