On Fri, Aug 26, 2011 at 10:17 AM, Moore, Gary gary.mo...@ars.usda.gov wrote:
I have a number of chemical names containing commas which I'm mapping in
index_synonyms.txt thusly:
2\,4-D-butotyl=Aqua-Kleen,BRN 1996617,Bladex-B,Brush killer 64,Butoxy-D
3,CCRIS 8562
According to the sample
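For reference, the escaping convention at work in that file (a sketch; the synonym values are from the post, the comments are mine): a backslash before a comma makes the comma part of the term, while bare commas separate synonym entries.

```
# "2,4-D-butotyl" is a single term: the escaped comma is literal.
# Unescaped commas delimit the synonyms on the right-hand side.
2\,4-D-butotyl=Aqua-Kleen,BRN 1996617,Bladex-B
```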
On Fri, Aug 26, 2011 at 11:16 AM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Fri, Aug 26, 2011 at 10:17 AM, Moore, Gary gary.mo...@ars.usda.gov wrote:
I have a number of chemical names containing commas which I'm mapping in
index_synonyms.txt thusly:
2\,4-D-butotyl=Aqua-Kleen,BRN
On Thu, Aug 25, 2011 at 5:19 PM, Michael Ryan mr...@moreover.com wrote:
10,000,000 document index
Internal Document id is 32 bit unsigned int
Max Memory Used by a single cache slot in the filter cache = 32 bits x
10,000,000 docs = 320,000,000 bits or 38 MB
I think it depends on where exactly
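That worst-case arithmetic checks out (a quick sketch; 32 bits per doc is the post's assumption, while a bitset-backed filter entry would need only 1 bit per doc, about 1.2 MB here):

```shell
docs=10000000
bits_per_doc=32
# bits -> bytes -> MiB, integer division
echo $(( docs * bits_per_doc / 8 / 1024 / 1024 ))   # prints 38
```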
On Tue, Aug 23, 2011 at 7:11 AM, Samarendra Pratap samarz...@gmail.com wrote:
We are upgrading solr 1.4 (with collapsing patch solr-236) to solr 3.3. I
was looking for the required changes in query parameters (or parameter
names) if any.
There should be very few (but check CHANGES.txt as
On Tue, Aug 23, 2011 at 2:17 PM, Glenn s...@t2.zazu.com wrote:
Question about batch updates (performing a delete and add in same
request, as described at bottom
of http://wiki.apache.org/solr/UpdateXmlMessages): is the order
guaranteed? If a
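For context, the mixed-operation request being asked about looks roughly like this (a sketch from memory of the UpdateXmlMessages wiki page; the id value is illustrative, and whether the delete runs before the add is exactly what the question is probing):

```xml
<update>
  <delete><id>123</id></delete>
  <add>
    <doc>
      <field name="id">123</field>
    </doc>
  </add>
</update>
```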
On Tue, Aug 23, 2011 at 3:38 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Tue, Aug 23, 2011 at 2:17 PM, Glenn s...@t2.zazu.com wrote:
Question about batch updates (performing a delete and add in same
request, as described at bottom
of http://wiki.apache.org/solr/UpdateXmlMessages
On Fri, Aug 19, 2011 at 10:36 AM, alexander sulz a.s...@digiconcept.net wrote:
using lsof I think I pinned down the problem: too many open files!
I already doubled from 512 to 1024 once but it seems there are many SOCKETS
involved,
which are listed as can't identify protocol, instead of real
On Wed, Aug 17, 2011 at 5:56 PM, Jason Toy jason...@gmail.com wrote:
I've only set minimum memory and have not set maximum memory. I'm doing
more investigation and I see that I have 100+ dynamic fields for my
documents, not the 10 fields I quoted earlier. I also sort against those
On Fri, Aug 12, 2011 at 9:53 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
It turned out that there is a sorting issue with solr 3.3.
As far as I could trace it down currently:
4 docs in the index and a search for *:*
sorting on field dccreator_sort in descending order
On Fri, Aug 12, 2011 at 1:04 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Fri, Aug 12, 2011 at 9:53 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
It turned out that there is a sorting issue with solr 3.3.
As far as I could trace it down currently:
4 docs in the index
On Fri, Aug 12, 2011 at 2:08 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Fri, Aug 12, 2011 at 1:04 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Fri, Aug 12, 2011 at 9:53 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
It turned out that there is a sorting issue
I've checked in an improved TestSort that adds deleted docs and
randomizes things a lot more (and fixes the previous reliance on doc
ids not being reordered).
I still can't reproduce this error though.
Is this stock solr? Can you verify that the documents are in the
wrong order also (and not just
On Wed, Aug 10, 2011 at 5:57 AM, Amit Sawhney sawhney.a...@gmail.com wrote:
Hi All,
I am trying to sort the results on a unix timestamp using this query.
http://localhost:8983/solr/update?commit=true;
Are you saying that the curl command just hung and never returned?
-Yonik
http://www.lucidimagination.com
Also: I managed to get a Thread Dump (attached).
regards
On 05.08.2011 at 15:08, Yonik Seeley wrote:
On Fri, Aug 5, 2011 at 7:33 AM
On Mon, Aug 8, 2011 at 5:12 PM, Erik Hatcher erik.hatc...@gmail.com wrote:
Great question. But how would that get returned in the response?
It is a drag that the header is lost when results are written in CSV, but
there really isn't an obvious spot for that information to be returned.
I
On Sat, Aug 6, 2011 at 11:31 AM, Paul Libbrecht p...@hoplahup.net wrote:
On 6 August 2011 at 02:09, Yonik Seeley wrote:
On Fri, Aug 5, 2011 at 7:30 PM, Paul Libbrecht p...@hoplahup.net wrote:
my solr is coming to slowly reach its memory limits (8Gb) and the stats
displays me a reasonable
On Sat, Aug 6, 2011 at 1:35 PM, Paul Libbrecht p...@hoplahup.net wrote:
On 6 August 2011 at 17:37, Yonik Seeley wrote:
I have a custom query-handler and a custom response writer.
Do you always retrieve the searcher via
SolrQueryRequest.getSearcher()? If so, there should be no problem
On Sat, Aug 6, 2011 at 2:17 PM, Paul Libbrecht p...@hoplahup.net wrote:
This is convincing me... I'd like to experiment and close.
So, how can I be sure this is the right thing?
I would have thought adding a document and committing would have created a
Searcher in my current usage but I do
On Sat, Aug 6, 2011 at 2:30 PM, Paul Libbrecht p...@hoplahup.net wrote:
On 6 August 2011 at 20:21, Yonik Seeley wrote:
It is creating a new searcher, but then closing the old searcher after
all currently running requests are done using it (that's what the
reference counting is for).
After
the stuff in the example folder, the only changes i made was enable
logging and changing the port to 8985.
I'll try getting a thread dump if it happens again!
So far its looking good with having allocated more memory to it.
On 04.08.2011 at 16:08, Yonik Seeley wrote:
On Thu, Aug 4, 2011 at 8:09
On Fri, Aug 5, 2011 at 7:30 PM, Paul Libbrecht p...@hoplahup.net wrote:
my solr is coming to slowly reach its memory limits (8Gb) and the stats
displays me a reasonable fieldCache (1800) but 4820 searchers. That sounds a
bit much to me, each has been opened in its own time since the last
On Thu, Aug 4, 2011 at 8:09 AM, alexander sulz a.s...@digiconcept.net wrote:
Thank you for the many replies!
Like I said, I couldn't find anything in logs created by solr.
I just had a look at the /var/logs/messages and there wasn't anything
either.
What I mean by crash is that the process
On Thu, Aug 4, 2011 at 11:21 AM, matthew.fow...@thomsonreuters.com wrote:
Hi Yonik
So I tested the join using the sample data below and the latest trunk. I
still got the same behaviour.
HOWEVER! In this case it was nothing to do with the patch or solr version. It
was the tokeniser
when only 2 match the criteria.
i.e. docs where G1 is present in multi valued code field. Why should
the last document be included in the result of the join?
Thank you,
Matt
-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik
Seeley
Sent: 01
On Mon, Aug 1, 2011 at 11:16 AM, Mark static.void@gmail.com wrote:
We have around 10million documents that are in our index and about 10% of
them have some extra statistics that are calculated on a daily basis which
are then index and used in our function queries. This reindexing comes at
On Mon, Aug 1, 2011 at 12:58 PM, matthew.fow...@thomsonreuters.com wrote:
I have been using the JOIN patch
https://issues.apache.org/jira/browse/SOLR-2272 with great success.
However I have hit a case where it doesn't seem to be working. It
doesn't seem to work when joining to a multi-valued
On Thu, Jul 28, 2011 at 10:24 AM, Peter Wolanin
peter.wola...@acquia.com wrote:
Thanks for the feedback. I'll have look more at how geohash works.
Looking at the sample schema more closely, I see:
<fieldType name="double" class="solr.TrieDoubleField"
precisionStep="0" omitNorms="true"
On Wed, Jul 27, 2011 at 9:01 AM, Peter Wolanin peter.wola...@acquia.com wrote:
Looking at the example schema:
http://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_3_3/solr/example/solr/conf/schema.xml
the solr.PointType field type uses double (is this just an example
field, or
On Wed, Jul 27, 2011 at 7:17 AM, Tarjei Huse tar...@scanmine.com wrote:
On 06/01/2011 08:22 AM, Jason Rutherglen wrote:
Thanks Shashi, this is oddly coincidental with another issue being put
into Solr (SOLR-2193) to help solve some of the NRT issues, the timing
is impeccable.
Hmm, does anyone
that manipulates scores.
http://wiki.apache.org/solr/CommonQueryParameters#Caching_of_filters
-Yonik
http://www.lucidimagination.com
On Fri, Jul 22, 2011 at 4:27 PM, Yonik Seeley yo...@lucidimagination.com
wrote:
On Fri, Jul 22, 2011 at 4:11 PM, Brian Lamb
brian.l...@journalexperts.com wrote
IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux amd64-64
I'm confused why MMapDirectory is getting used with the IBM JVM... I
had thought it would default to NIOFSDirectory on Linux w/ a non
Oracle JVM.
Are you specifically selecting MMapDirectory in solrconfig.xml?
Can you try the Oracle JVM
On Fri, Jul 22, 2011 at 9:44 AM, Yonik Seeley
yo...@lucidimagination.com wrote:
IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux amd64-64
I'm confused why MMapDirectory is getting used with the IBM JVM... I
had thought it would default to NIOFSDirectory on Linux w/ a non
Oracle JVM.
I
OK, best guess is that you're going over some per-process address space limit.
Try seeing what ulimit -a says.
-Yonik
http://www.lucidimagination.com
On Fri, Jul 22, 2011 at 12:51 PM, mdz-munich
sebastian.lu...@bsb-muenchen.de wrote:
Hi Yonik,
thanks for your reply!
Are you specifically
virtual memory (kbytes, -v) 27216080
file locks (-x) unlimited
Best regards,
Sebastian
Yonik Seeley wrote:
OK, best guess is that you're going over some per-process address space
limit.
Try seeing what ulimit -a says.
-Yonik
http
On Fri, Jul 22, 2011 at 4:11 PM, Brian Lamb
brian.l...@journalexperts.com wrote:
I've noticed some peculiar scoring issues going on in my application. For
example, I have a field that is multivalued and has several records that
have the same value. For example,
<arr name="references">
On Thu, Jul 21, 2011 at 4:47 PM, Olson, Ron rol...@lbpc.com wrote:
Is there an easy way to find out which field matched a term in an OR query
using Solr? I have a document with names in two multi-valued fields and I am
searching for Smith, using the query A_NAMES:smith OR B_NAMES:smith. I
] on behalf of Yonik Seeley
[yo...@lucidimagination.com]
Sent: Tuesday, July 19, 2011 9:40 PM
To: solr-user@lucene.apache.org
Subject: Re: defType argument weirdness
On Tue, Jul 19, 2011 at 1:25 PM, Naomi Dushay ndus...@stanford.edu wrote:
Regardless, I thought that defType=dismax&q
On Wed, Jul 20, 2011 at 10:58 AM, Sowmya V.B. vbsow...@gmail.com wrote:
Which is the best way to read Solr's JSON output, from a Java code?
You could use SolrJ - it handles parsing for you (and uses the most
efficient binary format by default).
There seems to be a JSONParser in one of the jar
On Wed, Jul 20, 2011 at 12:16 PM, Remy Loubradou
remyloubra...@gmail.com wrote:
Hi,
I was writing a Solr Client API for Node and I found an error on this page
http://wiki.apache.org/solr/UpdateJSON ,on the section Update Commands the
JSON is not valid because there are duplicate keys and two
On Tue, Jul 19, 2011 at 3:20 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: Quite probably ... you typically can't assume that a FieldCache can be
: constructed for *any* field, but it should be a safe assumption for the
: uniqueKey field, so for that initial request of the mutiphase
On Tue, Jul 19, 2011 at 6:49 PM, solr nps solr...@gmail.com wrote:
My documents have two prices retail_price and current_price. I want to
get products which have a sale of x%, the x is dynamic and can be specified
by the user. I was trying to achieve this by using fq.
If I want all sony tv's
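One way this kind of dynamic threshold is often expressed (a sketch using the {!frange} function-range query parser; the field names come from the post, with 0.25 standing in for the user-supplied x):

```
fq={!frange l=0.25}div(sub(retail_price,current_price),retail_price)
```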
On Tue, Jul 19, 2011 at 1:25 PM, Naomi Dushay ndus...@stanford.edu wrote:
Regardless, I thought that defType=dismax&q=*:* is supposed to be
equivalent to q={!defType=dismax}*:* and also equivalent to q={!dismax}*:*
Not quite - there is a very subtle distinction.
{!dismax} is short for
On Mon, Jul 18, 2011 at 10:53 AM, Nicholas Chase nch...@earthlink.net wrote:
Very glad to hear that NRT is finally here! But my question is this: will
things still come to a standstill during a commit?
New updates can now proceed in parallel with a commit, and
searches have always been
On Mon, Jul 18, 2011 at 12:48 PM, Kanduru, Ajay (NIH/NLM/LHC) [C]
akand...@mail.nih.gov wrote:
I am trying to optimize performance of solr with our collection. The
collection has 208M records with index size of about 80GB. The machine has
16GB and I am allocating about 14GB to solr.
I am
On Mon, Jul 18, 2011 at 3:44 PM, Timothy Tagge tplimi...@gmail.com wrote:
Solr version: 1.4.1
I'm having some trouble with certain queries run against my Solr
index. When a query starts with a single letter followed by a space,
followed by another search term, the query runs endlessly and
On Sun, Jul 17, 2011 at 10:38 AM, Jeff Schmidt j...@535consulting.com wrote:
I don't want to query for a particular facet value, but rather have Solr do a
grouping of facet values. I'm not sure about the appropriate nomenclature
there. But, I have a multi-valued field named process that can
On Thu, Jul 14, 2011 at 8:42 AM, Zoltan Altfatter altfatt...@gmail.com wrote:
Would be interested in the status of the development in returning the
distance in a spatial query?
This is a feature in trunk (pseudo-fields).
For example:
fl=id,score,geodist()
-Yonik
Something is wrong with your indexing.
Is wc an indexed field? If not, change it so it is, then re-index your data.
If so, I'd recommend starting with the example data and filter for
something like popularity:[6 TO 10] to convince yourself it works,
then figuring out what you did differently in
On Sat, Jul 9, 2011 at 8:04 PM, Lance Norskog goks...@gmail.com wrote:
Does the Join feature work with Range queries?
Not in any generic manner - joins are based on exact matches of
indexed tokens only.
But if you wanted something specific enough like same year, then you
could index that year
On Fri, Jul 8, 2011 at 4:11 AM, Thomas Heigl tho...@umschalt.com wrote:
How should I proceed with this problem? Should I create a JIRA issue or
should I cross-post on the dev mailing list? Any suggestions?
Yes, this definitely sounds like a bug in the 3.3 grouping (looks like
it forgets to
On Mon, Jul 4, 2011 at 11:54 AM, Per Newgro per.new...@gmx.ch wrote:
i've tried to add the params for group=true and group.field=myfield by using
the SolrQuery.
But the result is null. Do i have to configure something? In wiki part for
field collapsing i couldn't
find anything.
No specific
On Tue, Jul 5, 2011 at 5:13 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: Correct me if I am wrong: In a standard distributed search with
: QueryComponent, the first query sent to the shards asks for
: fl=myUniqueKey or fl=myUniqueKey,score. When the response is being
: generated to
On Mon, Jul 4, 2011 at 2:07 AM, arian487 akarb...@tagged.com wrote:
I guess I'll have to use something other then SolrCache to get what I want
then. Or I could use SolrCache and just change the code (I've already done
so much of this anyways...). Anyways thanks for the reply.
You can
On Sun, Jul 3, 2011 at 10:52 PM, arian487 akarb...@tagged.com wrote:
I know the queryResultCache and stuff live only so long as a commit happens
but I'm wondering if the custom caches are like this as well? I'd actually
rather have a custom cache which is not cleared at all.
That's not
OK, I tried a quick test of 1.4.1 vs 3x on optimized indexes
(unoptimized had different numbers of segments so I didn't try that).
3x (as of today) was 28% faster at a large filter query (300 terms in
one big disjunction, with each term matching ~1000 docs).
-Yonik
2011/7/1 Tomás Fernández Löbbe tomasflo...@gmail.com:
I'm not sure I understand what you want to do. To paginate with groups you
can use start and rows as with ungrouped queries. with group.ngroups
(Something I found a couple of days ago) you can show the total number of
groups. group.limit
On Sat, Jul 2, 2011 at 7:34 PM, Benson Margulies bimargul...@gmail.com wrote:
Hey, I don't suppose you could easily tell me the rev in which ngroups
arrived?
1137037 I believe. Grouping originated in Solr, was refactored to a
shared lucene/solr module, including the ability to get the total
On Thu, Jun 30, 2011 at 6:19 PM, Ryan McKinley ryan...@gmail.com wrote:
Hello-
I'm looking for a way to find all the links from a set of results. Consider:
<doc>
  id:1
  type:X
  link:a
  link:b
</doc>
<doc>
  id:2
  type:X
  link:a
  link:c
</doc>
<doc>
  id:3
  type:Y
  link:a
</doc>
Is
Hmmm, you could comment out the query and filter caches on both 1.4.1 and 3.2
and then run some of the queries to see if you can figure out which are slower?
Do any of the queries have stopwords in fields where you now index
those? If so, that could entirely account for the difference.
-Yonik
Can you get a thread dump to see what is hanging?
-Yonik
http://www.lucidimagination.com
On Wed, Jun 29, 2011 at 11:45 AM, Bob Sandiford
bob.sandif...@sirsidynix.com wrote:
Hi, all.
I'm hoping someone has some thoughts here.
We're running Solr 3.1 (with the patch for SolrQueryParser.java to
On Wed, Jun 29, 2011 at 1:43 PM, Shawn Heisey s...@elyograg.org wrote:
Just now, three of the six shards had documents deleted, and they took
29.07, 27.57, and 28.66 seconds to warm. The 1.4.1 counterpart to the 29.07
second one only took 4.78 seconds, and it did twice as many autowarm
On Wed, Jun 29, 2011 at 4:32 PM, eks dev eks...@googlemail.com wrote:
req.getSearcher().getFirstMatch(t) != -1;
Yep, this is currently the fastest option we have.
-Yonik
http://www.lucidimagination.com
On Wed, Jun 29, 2011 at 3:28 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Wed, Jun 29, 2011 at 1:43 PM, Shawn Heisey s...@elyograg.org wrote:
Just now, three of the six shards had documents deleted, and they took
29.07, 27.57, and 28.66 seconds to warm. The 1.4.1 counterpart
On Sat, Jun 25, 2011 at 5:56 AM, marthinal jm.rodriguez.ve...@gmail.com wrote:
sfield, pt and d can all be specified directly in the spatial
functions/filters too, and that will override the global params.
Unfortunately one must currently use lucene query syntax to do an OR.
It just makes it
On Fri, Jun 24, 2011 at 2:11 PM, marthinal jm.rodriguez.ve...@gmail.com wrote:
Yonik Seeley wrote:
On Tue, Sep 21, 2010 at 12:12 PM, dan sutton <danbsut...@gmail.com>
wrote:
I was looking at the LatLonType and how it might represent multiple
lon/lat
values ... it looks to me like
I just tried branch_3x and couldn't reproduce this.
Looks like maybe there is something wrong with your build, or some old
class files left over somewhere being picked up.
-Yonik
http://www.lucidimagination.com
On Wed, Jun 22, 2011 at 10:15 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
Thanks for the problem report. It turns out we didn't check for a
null pointer when there were no terms in a field for a segment.
I've just committed a fix to trunk.
-Yonik
http://www.lucidimagination.com
On Wed, Jun 22, 2011 at 10:28 PM, Jason Toy jason...@gmail.com wrote:
I am trying to
On Tue, Jun 21, 2011 at 2:15 AM, Rafał Kuć r@solr.pl wrote:
Hello!
Once again thanks for the response ;) So the solution is to generate
the data files once again and either adding the space after doubled
encapsulator
Maybe...
I can't tell if the file is encoded correctly or not since I
This works fine for me:
curl http://localhost:8983/solr/update/csv -H
'Content-type:text/plain' -d 'id,name
1,aaa bbb ccc'
-Yonik
http://www.lucidimagination.com
On Mon, Jun 20, 2011 at 3:17 PM, Rafał Kuć r@solr.pl wrote:
Hello!
I have a question about the CSV update handler. Let's say
Multi-valued CSV fields are double encoded.
We start with: aaa bbbccc'
Then decoding one level, we get: aaa bbbccc
Decoding again to get individual values results in a decode error
because the encapsulator appears unescaped in the middle of the second
value (i.e. invalid CSV).
One easier way to
On Mon, Jun 20, 2011 at 11:25 PM, Shawn Heisey elyog...@elyograg.org wrote:
On 6/20/2011 8:08 PM, entdeveloper wrote:
Technically, yes, it's valid json, but most libraries treat the json
objects
as maps, and with multiple add elements as the keys, you cannot properly
deserialize.
As an
On Fri, Jun 17, 2011 at 1:30 AM, pravesh suyalprav...@yahoo.com wrote:
If you are sending whole CSV in a single HTTP request using curl, why not
consider sending it in smaller chunks?
Smaller chunks should not matter - Solr streams from the input (i.e.
the whole thing is not buffered in
What version of Solr is this?
Can you show steps to reproduce w/ the example server and data?
-Yonik
http://www.lucidimagination.com
On Wed, Jun 15, 2011 at 7:25 AM, Marc Sturlese marc.sturl...@gmail.com wrote:
Hey there,
I've noticed a very odd behaviour with the snapinstaller and commit
On Wed, Jun 15, 2011 at 2:21 PM, pravesh suyalprav...@yahoo.com wrote:
I would need some help in minimizing the CPU load on the new system. Could
NIOFSDirectory possibly contribute to high CPU?
Yes, it's a feature! The CPU is only higher because the threads
aren't blocked on IO as much.
So the
On Sun, Jun 12, 2011 at 9:10 PM, Johannes Goll johannes.g...@gmail.com wrote:
However, Jetty 6.1.2X (shipped with Solr 3.1)
sporadically throws Socket connect exceptions when executing distributed
searches.
Are you using the exact jetty.xml that shipped with the solr example
On Thu, Jun 9, 2011 at 9:23 AM, Jason Toy jason...@gmail.com wrote:
I want to be able to run a query like idf(text, 'term') and have that data
returned with my search results. I've searched the docs,but I'm unable to
find how to do it. Is this possible and how can I do that ?
In trunk,
On Fri, Jun 10, 2011 at 8:31 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
Nice! Will SOLR-1298 with aliasing also work with an external file field since
that can be a source of a function query as well?
Haven't tried it, but it definitely should!
-Yonik
http://www.lucidimagination.com
2011/6/9 Denis Kuzmenok forward...@ukr.net:
Hi, everyone.
I have fields:
text fields: name, title, text
boolean field: isflag (true / false)
int field: popularity (0 to 9)
Now i do query:
defType=edismax
start=0
rows=20
fl=id,name
q=lg optimus
fq=
qf=name^3 title text^0.3
On Thu, Jun 9, 2011 at 3:31 PM, Helmut Hoffer von Ankershoffen
helmut...@googlemail.com wrote:
Hi,
there seems to be no way to index CSV using the DataImportHandler.
Looking over the features you want, it looks like you're starting from
a CSV file (as opposed to CSV stored in a database).
Is
On Thu, Jun 9, 2011 at 4:07 PM, Helmut Hoffer von Ankershoffen
helmut...@googlemail.com wrote:
Hi,
yes, it's about CSV files loaded via HTTP from shops to be fed into a
shopping search engine.
The CSV Loader cannot map fields (only field values) etc.
You can provide your own list of
The boost qparser should do the trick if you want a multiplicative boost.
http://lucene.apache.org/solr/api/org/apache/solr/search/BoostQParserPlugin.html
-Yonik
http://www.lucidimagination.com
On Wed, Jun 8, 2011 at 9:22 AM, Alex Grilo a...@umamao.com wrote:
Hi,
I'm trying to use bf
On Wed, Jun 8, 2011 at 1:21 PM, Jamie Johnson jej2...@gmail.com wrote:
Thanks exactly what I was looking for.
With this new field used just for sorting is there a way to have it be case
insensitive?
From the example schema:
<!-- lowercases the entire field value, keeping it as a single
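The field type being quoted is presumably along these lines (reconstructed from memory of the 3.x example schema; treat the exact filter list as an assumption):

```xml
<!-- lowercases the entire field value, keeping it as a single token -->
<fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
  </analyzer>
</fieldType>
```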
On Tue, Jun 7, 2011 at 9:35 AM, Jamie Johnson jej2...@gmail.com wrote:
I am currently experimenting with the Solr Cloud code on trunk and just had
a quick question. Let's say my setup had 3 nodes a, b and c. Node a has
1000 results which meet a particular query, b has 2000 and c has 3000.
One way is to use the boost qparser:
http://search-lucene.com/jd/solr/org/apache/solr/search/BoostQParserPlugin.html
q={!boost b=productValueField}shops in madrid
Or you can use the edismax parser which as a boost parameter that
does the same thing:
defType=edismax&q=shops in
On Tue, Jun 7, 2011 at 1:01 PM, Jamie Johnson jej2...@gmail.com wrote:
Thanks Yonik. I have a follow on now, how does Solr ensure consistent
results across pages? So for example if we had my 3 theoretical solr
instances again and a, b and c each returned 100 documents with the same
score and
On Tue, Jun 7, 2011 at 12:34 PM, Luis Cappa Banda luisca...@gmail.com wrote:
*Expression*: A B C D E F G H I
As written, this is equivalent to
*Expression*: A default_field:B default_field:C default_field:D
default_field:E default_field:F default_field:G default_field:H
default_field:I
Try
On Fri, Jun 3, 2011 at 1:02 PM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
Is it just me, or would others like things like:
* The ability to tell Solr (by passing some URL param?) to skip one or more of
its caches and get data from the index
Yeah, we've needed this for a long time, and
Dan, this doesn't really have anything to do with your filter on the
Status field except that it causes different documents to be selected.
The root cause is a schema mismatch with your index.
A string field (or so the schema is saying it's a string field) is
returning null for a value, which is
On Thu, May 19, 2011 at 6:40 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: It is fairly simple to generate facets for ranges or 'buckets' of
: distance in Solr:
: http://wiki.apache.org/solr/SpatialSearch#How_to_facet_by_distance.
: What isnt described is how to generate the links for
On Thu, May 19, 2011 at 8:52 AM, martin_groenhof
martin.groen...@yahoo.com wrote:
How do you construct a query in java for spatial search ? not the default
solr REST interface
It depends on what you are trying to do - a spatial request (as
currently implemented in Solr) is typically more than
On Thu, May 19, 2011 at 9:56 AM, Erik Fäßler erik.faess...@uni-jena.de wrote:
I have a few questions concerning the field cache method for faceting.
The wiki says for enum method: "This was the default (and only) method for
faceting multi-valued fields prior to Solr 1.4." And for fc method:
On Wed, May 18, 2011 at 10:50 AM, Gabriele Kahlout
gabri...@mysimpatico.com wrote:
Hello,
I'm wondering if Solr Test framework at the end of the day always runs an
embedded/jetty server (which is the only way to interact with solr, i.e. no
web server -- no solr) or in the tests they interact
On Wed, May 18, 2011 at 11:14 AM, Gabriele Kahlout
gabri...@mysimpatico.com wrote:
On Wed, May 18, 2011 at 5:09 PM, Yonik Seeley yo...@lucidimagination.com
wrote:
On Wed, May 18, 2011 at 10:50 AM, Gabriele Kahlout
gabri...@mysimpatico.com wrote:
Hello,
I'm wondering if Solr Test
On Wed, May 18, 2011 at 1:24 PM, Paul Dlug paul.d...@gmail.com wrote:
I updated to the latest branch_3x (r1124339) and I'm now getting the
error below when trying a delete by query or id. Adding documents with
the new format works as do the commit and optimize commands. Possible
regression due
On Wed, May 18, 2011 at 1:29 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Wed, May 18, 2011 at 1:24 PM, Paul Dlug paul.d...@gmail.com wrote:
I updated to the latest branch_3x (r1124339) and I'm now getting the
error below when trying a delete by query or id. Adding documents with
the new format
On Tue, May 17, 2011 at 6:07 PM, Burton-West, Tom tburt...@umich.edu wrote:
If I have a query with a filter query such as: q=art&fq=history and then
run a second query q=art&fq=-history, will Solr realize that it can use
the cached results of the previous filter query history (in the filter
On Tue, May 17, 2011 at 6:17 PM, Markus Jelsma
markus.jel...@openindex.io wrote:
I'm not sure. The filter cache uses your filter as a key and a negation is a
different key. You can check this easily in a controlled environment by
issueing these queries and watching the filter cache statistics.
On Tue, May 17, 2011 at 6:57 PM, Jonathan Rochkind rochk...@jhu.edu wrote:
(changed subject for this topic). Weird. I'm seeing it wrong myself, and
have for a while -- I even wrote some custom pre-processor logic at my app
level to work around it. Weird, I dunno.
Wait. Queries with -one OR
On Sun, May 15, 2011 at 1:48 PM, Michael McCandless
luc...@mikemccandless.com wrote:
Could you please revert your commit, until we've reached some
consensus on this discussion first?
Huh?
I thought everyone was in agreement that we needed more field types
for different languages?
I added my
On Mon, May 16, 2011 at 5:30 AM, Michael McCandless
luc...@mikemccandless.com wrote:
To be clear, I'm asking that Yonik revert his commit from yesterday
(rev 1103444), where he added text_nwd fieldType and dynamic fields
*_nwd to the example schema.xml.
So... your position is that until the