<filter class="solr.SynonymFilterFactory"
        synonyms="synonyms_city_facet.txt" ignoreCase="true" expand="false"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>
Please suggest what I should change.
--
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620
* occupied by a cache (filter
cache, doc cache ...)? I don't find such information within the stats page.
Regards
--
Markus
information here. I think somewhere in the process there
should be a crawling happening and I missed it out.
Just wanted to see if some one could help me pointing this out and where I
went wrong in the process. Forgive my foolishness and thanks for your
patience.
Cheers,
Abi
--
Markus
, 2011 at 7:09 PM, Markus Jelsma
markus.jel...@openindex.iowrote:
The parsed data is only sent to the Solr index if you tell a segment to
be indexed: solrindex crawldb linkdb segment
If you did this only once after injecting and then the consequent
fetch,parse,update,index sequence then you
to be done to avoid this
high memory usage ?
Thanks,
Rachita
--
Markus
Bing Li,
One should be conservative when setting Xmx. Also, just setting Xmx might not
do the trick at all because the garbage collector might also be the issue
here. Configure the JVM to output debug logs of the garbage collector and
monitor the heap usage (especially the tenured generation).
I should also add that reducing the cache and autowarm sizes (or not using
caches at all) drastically reduces memory consumption when a new searcher is
being prepared after a commit. The memory usage will spike at these events.
Again, use a monitoring tool to get more information on your
Add it to the CATALINA_OPTS, on Debian systems you could edit
/etc/default/tomcat
On Thursday 10 February 2011 12:27:59 Xavier SCHEPLER wrote:
-Dlog4j.configuration=$CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties
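On a Debian-style install that line is appended to CATALINA_OPTS in the Tomcat defaults file; a minimal sketch, assuming the path below (the exact file and webapp path vary per distro and setup):

```shell
# /etc/default/tomcat6 (Debian; path and webapp location are assumptions)
CATALINA_OPTS="$CATALINA_OPTS -Dlog4j.configuration=file:///var/lib/tomcat6/webapps/solr/WEB-INF/classes/log4j.properties"
```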
--
Markus
?
Thanks in advance,
Xavier
--
Markus
Markus Jelsma wrote:
Oh, now looking at your log4j.properties, I believe it's wrong. You
declared INFO as rootLogger but you use SOLR.
-log4j.rootLogger=INFO
+log4j.rootLogger=SOLR
try again
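For reference, log4j's rootLogger line takes a level followed by one or more appender names, so SOLR would have to be the name of a configured appender; a minimal working sketch (appender name, file path and pattern are illustrative):

```properties
log4j.rootLogger=INFO, SOLR
log4j.appender.SOLR=org.apache.log4j.RollingFileAppender
log4j.appender.SOLR.File=${catalina.base}/logs/solr.log
log4j.appender.SOLR.MaxFileSize=10MB
log4j.appender.SOLR.layout=org.apache.log4j.PatternLayout
log4j.appender.SOLR.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```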
On Thursday 10 February 2011 09:41:29 Xavier Schepler wrote:
Hi,
I added “slf4j-log4j12
--
Markus
or the result page generation.
How to proceed, what else to check?
Regards,
Bernd
--
Markus
not see that link on that page. Who's got
write access to the wiki pages?
Sent from Yahoo! Mail on Android
--
Markus
.. Is there any other
differences? Is it a good idea to use this free distribution?
Greg
--
Markus
I've seen that before on a 3.1 checkout after I compiled the clustering
component, copied the jars and started Solr. For some reason, recompiling
didn't work and doing an ant clean first didn't fix it either. Updating to a
revision I knew did work also failed.
I just removed the entire
On Debian you can edit /etc/default/tomcat6
hi,
I am using Solr 1.4 with Apache Tomcat. To enable the clustering feature
I followed the link
http://wiki.apache.org/solr/ClusteringComponent
Please help me add -Dsolr.clustering.enabled=true to $CATALINA_OPTS.
after that which
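Following the Debian advice above, a minimal sketch of the change (the file path is an assumption for your setup):

```shell
# /etc/default/tomcat6 (Debian; path is an assumption)
CATALINA_OPTS="$CATALINA_OPTS -Dsolr.clustering.enabled=true"
```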
Garg wrote:
On Wednesday 16 February 2011 02:41 PM, Markus Jelsma wrote:
On Debian you can edit /etc/default/tomcat6
hi,
i am using solr1.4 with apache tomcat. to enable the
clustering feature
i follow the link
http://wiki.apache.org/solr/ClusteringComponent
Plz
)
All I know is that it was unable to download but the reason eludes me.
Sometimes a machine rolls out many of these errors, increasing the index
size because it can't handle the already downloaded data.
Cheers,
--
Markus
I have no idea, seems you haven't compiled Carrot2 or haven't included all
jars.
On Wednesday 16 February 2011 11:29:30 Isha Garg wrote:
On Wednesday 16 February 2011 03:32 PM, Markus Jelsma wrote:
What distro are you using? On at least Debian systems you can put
expensive is setting the termVector on a field?
Takes up additional disk space and RAM. Can be a lot.
Thanks - Tod
--
Markus
In my own Solr 1.4, I am pretty sure that running an index optimize does
give me significant better performance. Perhaps because I use some
largeish (not huge, maybe as large as 200k) stored fields.
200,000 stored fields? I assume that number includes your number of documents?
Sounds crazy =)
Closing a core will shutdown almost everything related to the workings of a
core. Update and search handlers, possible warming searchers etc.
Check the implementation of the close method:
Thanks for the answers, more questions below.
On 2/16/2011 3:37 PM, Markus Jelsma wrote:
200,000 stored fields? I assume that number includes your number of
documents? Sounds crazy =)
Nope, I wasn't clear. I have less than a dozen stored field, but the
value of a stored field can
Hi,
That depends (as usual) on your scenario. Let me ask some questions:
1. what is the sum of documents for your applications?
2. what is the expected load in queries/minute
3. what is the update frequency in documents/minute and how many documents per
commit?
4. how many different
You can also easily abuse shards to query multiple cores that share parts of
the schema. This way you have isolation with the ability to query them all.
The same can, of course, also be achieved using a single index with a simple
field identifying the application and using fq on that one.
Yes,
it brings the
slave to its knees; the workaround was to extend the poll interval,
though not ideal.
Cheers,
Dan
--
Markus
a fail-over cluster
and more useful features. I haven't tried Katta.
Thanks so much!
LB
--
Markus
like: This query has a built-in inconsistency because the two dates
you have specified require documents to be before AND after these dates.
But this is far future...
Regards,
Christian Sonne Jensen
--
Markus
and puts them like in first position or documents with word
manager in second position or so
thanks
--
Markus
and supplying??..
for example:
if a query is made on q=solr in my index... I get a result of 25
documents... what is it calculating? I am very keen to know how it
calculates the score and orders the results
Regards,
satya
--
Markus
with sortMissingLast=true or
sortMissingFirst=true can result in incorrectly sorted results.
-Yonik
http://lucidimagination.com
--
Markus
CouchDB is a good piece of software for some scenarios and easy to use. It
has update handlers to which you could attach a small program that takes the
input, transforms it to Solr XML and sends it over.
CouchDB lucene is a bit different. It lacks the power of Solr but allows and
you need to
Sure
http://wiki.apache.org/solr/UpdateXmlMessages#A.22delete.22_by_ID_and_by_Query
Hi,
I'm wondering if it's possible to delete documents in my index by date
range?
I've got a field in my schema, indexed_date (date type), and I would like
to remove docs older than 90 days.
Thanks
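The wiki page above covers delete-by-query; with date math the 90-day cutoff from the question could be expressed as a sketch like this, posted to the /update handler and followed by a commit:

```xml
<delete>
  <query>indexed_date:[* TO NOW-90DAYS]</query>
</delete>
```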
Yes. But did you actually search the mailing list or Solr's wiki? I guess not.
Here it is:
http://wiki.apache.org/solr/UpdateRequestProcessor
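For reference, a processor chain is wired up in solrconfig.xml along these lines (the chain name is illustrative; the two factory classes are the standard ones):

```xml
<updateRequestProcessorChain name="mychain">
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```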
Can fields created by copyField instructions be processed by
UpdateProcessors?
Or only raw input fields can?
So far my experiment is suggesting the
Hi,
You're right, it's illegal syntax to use other functions in the ms function,
which is a pity indeed.
However, you reduce the score by 50% for each year. Therefore paging through
the results shouldn't make that much of a difference because the difference in
score with NOW+2 minutes has a
a copyField-like operation
using UpdateProcessor. It doesn't talk about relationship between
the copyField operation proper and UpdateProcessors.
Kuro
On 2/22/11 3:00 PM, Markus Jelsma markus.jel...@openindex.io wrote:
Yes. But did you actually search the mailing list or Solr's wiki? I
have to come up with a fix for
this, or get rid of the boost function altogether.
Stephen Duncan Jr
www.stephenduncanjr.com
On Tue, Feb 22, 2011 at 6:09 PM, Markus Jelsma
markus.jel...@openindex.iowrote:
Hi,
You're right, it's illegal syntax to use other functions in the ms
Hi,
I may have misread it all but SolrJ is the Java client and you don't need it
for a pretty AJAX interface.
Cheers,
Hello list,
I'm in the process of trying to implement Ajax within my Solr-backed webapp
I have been reading both the Solrj wiki as well as the tutorial provided
via the
Hi,
The params you have suggest you're planning to use SweetSpotSimilarity. There
already is a factory you can use in Jira.
https://issues.apache.org/jira/browse/SOLR-1365
Cheers,
Hi,
I'm trying to use a CustomSimilarityFactory and pass in per-field
options from the schema.xml, like so:
Hi,
Scaling might be required. How large is the index going to be in number of
documents, fields and bytes and what hardware do you have? Powerful CPU's and a
lot of RAM will help. And, how many queries per second do you expect? And how
many updates per minute?
Depending on average document
Hi,
I'd guess a non-200 HTTP response code would be more appropriate indeed but
it's just a detail.
A successful replication will change a few things on the slave:
- increment of generation value
- updated indexVersion value
- lastReplication will have a new timestamp
You can also check for a
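These values can be inspected through the replication handler itself (host and port are illustrative):

```
http://slave:8983/solr/replication?command=details
```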
No, Solr returns facets ordered alphabetically or by count.
Hello everybody,
Is it possible to order the facet results by some ranking score?
I was doing a query with the OR operator and sometimes the first facets
contain only results with a small rank that are not important.
This cause that
You don't want to use 0.8 if you're parsing PDF.
Your best bet is perhaps upgrading to latest 1.4 branch, i.e. 1.4.2-dev
(http://svn.apache.org/repos/asf/lucene/solr/branches/branch-1.4/) It
includes Tika 0.8-SNAPSHOT and is a compatible drop-in (war/jar)
replacement with lots of other bug
DismaxQParser's mm parameter might help you out:
http://wiki.apache.org/solr/DisMaxQParserPlugin#mm_.28Minimum_.27Should.27_Match.29
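For example, mm=2 requires at least two of the optional terms to match (query terms and field names are illustrative):

```
q=solr lucene nutch&defType=dismax&qf=title body&mm=2
```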
Is there any place where a detailed tutorial about all the Java files of
Apache Solr(under Src folder) is available.?
I want to study them as my purpose is to
Yes, you need to add the field text of type Text or use content instead of
text.
Hello list,
I have recently been working on some JS (ajax solr) and when using Firebug
I am alerted to an error within the JS file as below. It immediately
breaks on line 12 stating that 'doc.text' is
If filterCache hitratio is low then just disable it in solrconfig by deleting
the section or setting its values to 0.
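A sketch of the relevant solrconfig.xml entry; deleting it, or zeroing it out as below, disables the cache:

```xml
<filterCache class="solr.LRUCache" size="0" initialSize="0" autowarmCount="0"/>
```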
Based on what I've read here and what I could find on the web, it seems
that each fq clause essentially gets its own results cache. Is that
correct?
We have a corporate
fieldCache.
I've checked LUCENE-1890 but am unsure if that's the issue. Any thoughts on
this one?
https://issues.apache.org/jira/browse/LUCENE-1890
Cheers,
--
Markus
Traditionally, people forget to reindex ;)
Hi all,
The problem was that my fields were defined as type=string instead of
type=text. Once I corrected that, it seems to be fixed. The only part
that still is not working though is the search across all fields.
For example:
Are there pending commits on the master?
I was curious why would the size be dramatically different even though
the index versions are the same?
One is 1.2 Gb, and on the slave it is 512 MB
I would think they should both be the same size no?
Thanks
copy so not to lose
space?
Thanks
On Tue, Mar 1, 2011 at 3:26 PM, Mike Franonkongfra...@gmail.com wrote:
No pending commits, what it looks like is there are almost two copies
of the index on the master, not sure how that happened.
On Tue, Mar 1, 2011 at 3:08 PM, Markus
is how I was able to get the regular search working. I have not
however been able to get the search across all fields to work.
On Tue, Mar 1, 2011 at 3:01 PM, Markus Jelsma
markus.jel...@openindex.iowrote:
Traditionally, people forget to reindex ;)
Hi all,
The problem was that my
,
Dan
--
Markus
and
remove or modify replication.properties.
On Wednesday 02 March 2011 15:03:54 Mike Franon wrote:
Is it ok if I just delete the old copies manually? or maybe run a
script that does it?
On Tue, Mar 1, 2011 at 7:47 PM, Markus Jelsma
markus.jel...@openindex.io wrote:
Indeed, the slave should
<lst name="org.apache.solr.handler.component.StatsComponent">
  <double name="time">0.0</double>
</lst>
<lst name="org.apache.solr.handler.component.DebugComponent">
  <double name="time">0.0</double>
</lst>
</lst>
</lst>
</lst>
</response>
On Tue, Mar 1, 2011 at 7:57 PM, Markus Jelsma
markus.jel...@openindex.iowrote:
Hmm, please provide
this functionality?
Thanks
Beside the point, why do you need such function?
If you give us more information/background of your needs, it might help
responders.
regards,
Koji
--
Markus
If you're comfortable with XSL you can create a transformer and use Solr's
XSLTResponseWriter to do the job.
http://wiki.apache.org/solr/XsltResponseWriter
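For example, with a stylesheet placed in the core's conf/xslt/ directory (host and stylesheet name are illustrative):

```
http://localhost:8983/solr/select?q=*:*&wt=xslt&tr=example.xsl
```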
Hi all,
This list has proven itself quite useful since I got started with Solr. I'm
wondering if it is possible to dictate the XML that
Hi,
I remember reading somewhere that undeploying an application in Tomcat won't
release memory, thus repeating the cycle will indeed exhaust the permgen. You
could enable garbage collection of the permgen.
HotSpot can do this for you but it depends on using CMS which you might not
want to
Nice! It makes multi core navigation a lot easier. What license do the icons
have?
Hi List,
given that fact that my java-knowledge is sort of non-existing .. my
idea was to rework the Solr Admin Interface.
Compared to CouchDBs Futon or the MongoDB Admin-Utils .. not that fancy,
but it
Use either the string fieldType or a field with very little analysis
(KeywordTokenizer + LowercaseFilter).
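A sketch of such a lightly analyzed field type (the type name is illustrative):

```xml
<fieldType name="text_exact" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```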
How to obtain a perfect match with a dismax query?
E.g.:
I want to search hello i love you with defType=dismax in the title field
and I want to obtain results whose title is exactly
Well, an RDBMS can be very fast but Solr using fq can be very fast as well.
Just try fq=group:sports&fq=createdtime:[your time range]
Dear all,
I have started to learn Solr for two months. At least right now, my system
runs good in a Solr cluster.
I have a question when implementing one feature in
you @iorixxx.
Could you point me where I can find a good docs on how to do this ?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-TermsComponent-space-in-term-tp189
8889p2624429.html Sent from the Solr - User mailing list archive at
Nabble.com.
--
Markus Jelsma
to the example Solr instance.
I have tried a few things however they seem to be for the file on the same
server as solr, in my case I am pushing the document from a windows
machine to Solr for indexing.
Ta
Ken
--
Markus
Anyone here with some thoughts on this issue?
Hi,
Yesterday's error log contains something peculiar:
ERROR [solr.search.SolrCache] - [pool-29-thread-1] - : Error during auto-
warming of key:+*:*
(1.0/(7.71E-8*float(ms(const(1298682616680),date(sort_date)))+1.0))^20.0:ja
A request handler can have first-components and last-components and also just
plain components. List all your stuff in components and voila. Don't forget to
also add debug, facet and other default components if you need them.
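A sketch of a handler listing everything in components; myComponent stands in for your own component, the rest are stock defaults:

```xml
<requestHandler name="/mysearch" class="solr.SearchHandler">
  <arr name="components">
    <str>query</str>
    <str>facet</str>
    <str>highlight</str>
    <str>debug</str>
    <str>myComponent</str>
  </arr>
</requestHandler>
```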
Le 8 mars 2011 à 23:03, Chris Hostetter a écrit :
: in my schema I
Great work!
On Wednesday 09 March 2011 11:20:41 Tommaso Teofili wrote:
Hi all,
I just improved the Solr UIMA integration wiki page [1] so if anyone is
using it and/or has any feedback it'd be more than welcome.
Regards,
Tommaso
[1] : http://wiki.apache.org/solr/SolrUIMA
--
Markus Jelsma
- 5GB Xmx
- Solr2 for Update-Request - delta every Minute - 4GB Xmx
--
Markus
RAMdisk
...but the index resides on disk doesn't it??? lol
-Original Message-
From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
Sent: Wednesday, March 09, 2011 9:06 AM
To: solr-user@lucene.apache.org
Subject: Re: True master-master fail-over without data gaps
Hi,
Hi,
In one of the environments I'm working on (4 Solr 1.4.1 nodes with
replication, 3+ million docs, ~5.5GB index size, high commit rate (~1-2min),
high query rate (~50q/s), high number of updates (~1000docs/commit)) the nodes
continuously run out of memory.
During development we frequently
:(
TIA
--
Markus
will
be
applied to that.
<useFilterForSortedQuery>true</useFilterForSortedQuery>
--
TIA
Andy
On Thu, Mar 10, 2011 at 10:33 AM, Markus Jelsma
markus.jel...@openindex.iowrote:
Is there no generic parameter store in the Solr module you can use for
passing
the sort parameter
, but the actual value of
quantity*price (e.g. product(5,2.21) == 11.05)?
Many thanks
--
Markus
continues updates and without stress tests. Firing manual
queries with different values for the bf parameter don't show any difference
in the values listed on the stats page.
Someone cares to provide an explanation?
Thanks
On Wednesday 09 March 2011 22:21:19 Markus Jelsma wrote:
Hi,
In one
from
excessive memory consumption:
recip(ms(NOW/PRECISION,DATE_FIELD),TIME_FRACTION,1,1)
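For reference, recip is defined as:

```
recip(x, m, a, b) = a / (m*x + b)
```

so with a=b=1 the score stays bounded, and rounding the ms() argument with NOW/PRECISION means far fewer distinct date values have to be held for the computation.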
On Thursday 10 March 2011 15:14:25 Markus Jelsma wrote:
Well, it's quite hard to debug because the values listed on the stats page
in the fieldCache section don't make much sense. Reducing precision
You need to reindex.
On Monday 14 March 2011 14:04:00 Ahsan |qbal wrote:
Hi All
Is there any way to drop term vectors from already built index file.
Regards
Ahsan Iqbal
--
Markus
or should i use
core swaping?
Thanks.
--
Markus
In solrconfig there might be an autocommit section enabled.
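A sketch of that section in solrconfig.xml (the thresholds are illustrative):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>60000</maxTime> <!-- milliseconds -->
  </autoCommit>
</updateHandler>
```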
On Monday 14 March 2011 14:18:42 lame wrote:
I don't commit at all; we use the Dataimporter, but I have a feeling that
it could be done by DIH (is autocommit possible)?
2011/3/14 Markus Jelsma markus.jel...@openindex.io:
Do you
this instead?
Thanks for any advice,
Jonathan
--
Markus
master also be updated with the new index? Our scripts showed
that master still has the old index (see my first email).
Thanks
2011/3/14 Markus Jelsma markus.jel...@openindex.io:
In solrconfig there might be a autocommit section enabled.
On Monday 14 March 2011 14:18:42 lame wrote:
I don't
document would be boosted by the number in the field
boost_score.
Unfortunately, I have no idea how to implement this actually but I'm hoping
that's where you all can come in.
Thanks,
Brian Lamb
--
Markus
advice?
Cheers,
--
Markus
://issues.apache.org/jira/browse/SOLR-2015
On Monday 14 March 2011 16:47:24 Markus Jelsma wrote:
Hi,
In Solr 1.4.1 we don't have a feature to disable automatic generation of
phrase queries. The phrase queries are generated thanks to the word
delimiter filter I use. The problem is, I cannot use the QS
use omitTermFreqAndPositions. If
you use omitNorms you'll always see a norm of 1.
Phew, this stuff is hard for me to talk about clearly. If that made any
sense, do I have it right? If so, that's exactly what I want to try
out, excellent.
On 3/14/2011 10:48 AM, Markus Jelsma wrote:
You
that is the case.
If you don't have any other ideas I'll probably try reindexing second
core, than swap cores and run delta import (to import documets added
in the meantime).
Thanks
2011/3/14 Markus Jelsma markus.jel...@openindex.io:
These settings don't affect a commit
Hi Doğacan,
Are you, at some point, running out of heap space? In my experience, that's
the common cause of increased load and excessively high response times (or
timeouts).
Cheers,
Hello everyone,
First of all here is our Solr setup:
- Solr nightly build 986158
- Running solr inside
Hello,
2011/3/14 Markus Jelsma markus.jel...@openindex.io
Hi Doğacan,
Are you, at some point, running out of heap space? In my experience,
that's the common cause of increased load and excessively high response
times (or timeouts).
How much of a heap size would be enough? Our
Nope, no OOM errors.
That's a good start!
Insanity count is 0 and fieldCache has 12 entries. We do use some boosting
functions.
Btw, I am monitoring output via jconsole with 8gb of ram and it still goes
to 8gb every 20 seconds or so,
gc runs, falls down to 1gb.
Hmm, maybe the garbage
You might also want to add the following switches for your GC log.
JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCTimeStamps
-XX:+PrintGCDetails -Xloggc:/var/log/tomcat6/gc.log
-XX:+PrintGCApplicationConcurrentTime
-XX:+PrintGCApplicationStoppedTime"
Also, what JVM version are you using and
That depends on your GC settings and generation sizes. And, instead of
UseParallelGC you'd better use UseParNewGC in combination with CMS.
See 22: http://java.sun.com/docs/hotspot/gc1.4.2/faq.html
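A minimal sketch of that combination as JVM options (values beyond the two collector flags are deliberately omitted):

```shell
# CMS for the old generation plus the parallel young-generation collector
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+UseParNewGC"
```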
It's actually, as I understand it, expected JVM behavior to see the heap
rise to close to its
,
2011/3/14 Markus Jelsma markus.jel...@openindex.io
That depends on your GC settings and generation sizes. And, instead of
UseParallelGC you'd better use UseParNewGC in combination with CMS.
JConsole now shows a different profile output but load is still high and
performance is still bad
:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
I've never had a real problem with memory, so I've not done any kind of
auditing. I probably should, but time is a limited resource.
Shawn
On 3/14/2011 2:29 PM, Markus Jelsma wrote:
That depends on your GC settings and generation sizes. And, instead
!
-
loredanaebook.it
--
Markus
it with the root word in the index.
I verified this by looking at analysis.jsp.
Is there an option to expand the stemmer to include all combinations of the
word? Like include 's, ly, etc?
Other options besides protection?
Bill
--
Markus
looked through the configuration files and cannot see any other place other
than solrconfig.xml where that would be set so what am I doing incorrectly?
Thanks,
Brian Lamb
--
Markus
that is odd...
can you let us know exactly what version of Solr/Lucene you are using (if
it's not an official release, can you let us know exactly what the version
details on the admin info page say; I'm curious about the svn revision)
Of course, that's the stable 1.4.1.
can you also
Actually, I dug in the logs again and, surprise, it sometimes still occurs
with `random` queries. Here are a few snippets from the error log. Somewhere
during that time there might be OOM errors but older logs are unfortunately
rotated away.
2011-03-14 00:25:32,152 ERROR
is strictly prohibited. If you have
received the message in error, please advise the sender by reply
email and delete the message. Thank you.
--
Markus
--
Markus
Hi,
It works just as expected, but not in a phrase query. Get rid of your quotes
and you'll be fine.
Cheers,
Should 1.4.1 dismax query parser be able to handle pure negative queries
like:
q=-foo
q=-foo -bar
It kind of seems to me trying it out that it can NOT. Can anyone else
for putting in the quotes in the email, I actually don't have
tests in my quotes, just tried again to make sure.
And I always get 0 results on a pure negative Solr 1.4.1 dismax query. I
think it does not actually work?
On 3/17/2011 3:52 PM, Markus Jelsma wrote:
Hi,
It works just
--
Markus