I think you're asking if the (very temporary on trunk) faceting bug is
fixed. The answer is yes.
Erik
On May 29, 2009, at 3:10 AM, Jörg Agatz wrote:
Is the bug fixed in the new nightly builds?
Hi guys,
I haven't annoyed you for ages now ... hope everybody is fine ... I have an issue
with my replication.
I was wondering ... after a while replication doesn't work anymore ...
We have a script which enables or disables replication every 2 hours, and this
morning it didn't pull anything
and it's maybe
What would be the URL to ping to trigger replication,
like http://slave_host:port/solr/replication?command=enablepoll ?
thanks
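For reference, a minimal sketch of building ReplicationHandler command URLs on the slave (host, port, and core path are placeholders; enablepoll/disablepoll are the Solr 1.4 replication commands):

```python
from urllib.parse import urlencode

def replication_url(base, command):
    """Build a Solr 1.4 ReplicationHandler command URL.

    Commands include enablepoll, disablepoll, indexversion and
    fetchindex; `base` is the slave core URL (placeholder here).
    """
    return base + "/replication?" + urlencode({"command": command})

url = replication_url("http://slave_host:8983/solr", "enablepoll")
print(url)
```

Hitting that URL (e.g. with curl or wget from the cron script) should re-enable polling on the slave.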
--
View this message in context:
http://www.nabble.com/replication-solr-1.4-tp23777206p23777272.html
Sent from the Solr - User mailing list archive at Nabble.com.
Very interesting: FieldsWriter thinks it's written 12 bytes to the fdx
file, yet the directory says the file does not exist.
Can you re-run with this new patch? I'm suspecting that FieldsWriter
wrote to one segment, but somehow we are then looking at the wrong
segment. The attached patch prints
Hey there,
I am testing the MoreLikeThis feature (with the MoreLikeThis component and with
the MoreLikeThis handler) and I am getting lots of duplicates: many of the
similar documents returned are duplicates. To avoid that I
have tried to use the field collapsing patch but it's not taking
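As a crude client-side workaround (a sketch, not the field collapsing patch itself), duplicates can be filtered out of the MoreLikeThis response by hashing a representative field; the `content` field name here is an assumption:

```python
import hashlib

def dedupe(docs, field="content"):
    """Drop documents whose `field` value has already been seen.

    A rough client-side stand-in for collapsing duplicate
    MoreLikeThis results; `field` is a placeholder name.
    """
    seen, unique = set(), []
    for doc in docs:
        sig = hashlib.md5(doc[field].encode("utf-8")).hexdigest()
        if sig not in seen:
            seen.add(sig)
            unique.append(doc)
    return unique

docs = [{"content": "a"}, {"content": "b"}, {"content": "a"}]
print(dedupe(docs))  # the second "a" document is dropped
```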
Jorg - the rest of that exception would be mighty handy! Please share
the entire details.
Erik
On May 29, 2009, at 7:38 AM, Jörg Agatz wrote:
Also, I am now using the nightly build from today, 29.05.2009,
but I get the same error...
HTTP ERROR 500
null
java.lang.NullPointerException
	at java.io.StringReader.<init>(StringReader.java:33)
	at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:169)
	at org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:78)
	at
It's probably not the size of the query cache, but the size of the
FieldCache entries that are used for sorting and function queries
(that's the only thing that should be allocating huge arrays like
that).
What fields do you sort on or use function queries on? There may be a
way to decrease the
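As a rough back-of-the-envelope for sizing this (assumptions: one FieldCache entry per document per sorted or function-query field, about 4 bytes per int/float entry and 8 per long/double; string sort fields cost considerably more since the unique terms are held too):

```python
def fieldcache_bytes(max_doc, entry_bytes=4, num_fields=1):
    """Estimate Lucene FieldCache memory used for sorting.

    One array entry per document per sorted/function-query field;
    entry_bytes is roughly 4 for int/float, 8 for long/double.
    String sort fields need far more (unique terms are cached too).
    """
    return max_doc * entry_bytes * num_fields

# e.g. a 10M-doc index sorted on two int fields:
print(fieldcache_bytes(10_000_000, entry_bytes=4, num_fields=2))  # 80000000 bytes, ~80 MB
```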
I have been able to create my custom field. The problem is that I have loaded
into the Solr core a couple of HashMap<id_doc, value_influence_sort> maps from a DB
with values that will influence the sort. My problem is that I don't know
how to give my custom sort access to these HashMaps.
I am a bit
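Conceptually this is an external-value sort; in plain terms (a language-neutral sketch of the idea, not Solr's custom-sort API, with made-up ids and values):

```python
# Sketch: ordering documents by values held in an external map,
# a stand-in for the HashMap<id_doc, value_influence_sort> loaded
# from the DB.  Documents missing from the map get a default of 0.0.
influence = {"doc1": 2.5, "doc3": 0.1}

doc_ids = ["doc1", "doc2", "doc3"]
ranked = sorted(doc_ids, key=lambda d: influence.get(d, 0.0), reverse=True)
print(ranked)  # ['doc1', 'doc3', 'doc2']
```

The custom sort component would need a handle on that map at comparison time, which is the wiring question being asked here.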
What are you really trying to accomplish here? Because index time boosting is
a way of saying "I care about matches in this field of this document
X times more than other documents", whereas search time boosting
expresses "elevate the relevance of any document where this term matches".
From your
There are no FieldCache entries in solrconfig.xml (BTW we are running version
1.2.0)
-Original Message-
From: Yonik Seeley [mailto:ysee...@gmail.com]
Sent: Friday, May 29, 2009 9:12 AM
To: solr-user@lucene.apache.org
Subject: Re: Java OutOfmemory error during autowarming
It's probably
On Fri, May 29, 2009 at 1:44 PM, Francis Yakin fya...@liquid.com wrote:
There are no FieldCache entries in solrconfig.xml (BTW we are running
version 1.2.0)
Lucene FieldCache entries are created when you sort on a field or when
you use a field in a function query.
-Yonik
I know, but the FieldCache is not in the solrconfig.xml
-Original Message-
From: Yonik Seeley [mailto:ysee...@gmail.com]
Sent: Friday, May 29, 2009 10:47 AM
To: solr-user@lucene.apache.org
Subject: Re: Java OutOfmemory error during autowarming
On Fri, May 29, 2009 at 1:44 PM, Francis
Hi Mike, I don't see a patch file here?
Could another explanation be that the fdx file doesn't exist yet / has been
deleted from underneath Lucene?
I'm constantly CREATE-ing and UNLOAD-ing Solr cores, and more importantly,
moving the bundled cores around between machines. I find it much more
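For context, that core juggling goes through the CoreAdmin handler; a sketch of building those URLs (host, core name, and instanceDir are placeholders):

```python
from urllib.parse import urlencode

def core_admin_url(base, action, **params):
    """Build a Solr CoreAdmin URL (actions include CREATE, UNLOAD, RELOAD)."""
    return base + "/admin/cores?" + urlencode({"action": action, **params})

create = core_admin_url("http://host:8983/solr", "CREATE",
                        name="core1", instanceDir="core1")
unload = core_admin_url("http://host:8983/solr", "UNLOAD", core="core1")
print(create)
print(unload)
```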
Hi All,
I would like to provide an admin interface (in a different system) that
would update the synonyms.txt file and automatically inform a set of Solr
instances that are being replicated to update their synonyms.txt file too.
This discussion shows a possible solution:
Hi,
When I give a query like the following, why does it become a phrase query
as shown below?
The field type is the default text field in the schema.
<str name="querystring">volker-blanz</str>
<str name="parsedquery">PhraseQuery(content:"volker blanz")</str>
Also, when I have special characters in the query
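The behaviour comes from analysis: the default text field type splits volker-blanz into two adjacent tokens, and the query parser turns multiple tokens produced from a single query term into a phrase query. A crude illustration of that split (not Solr's actual WordDelimiterFilter, just the idea):

```python
import re

def crude_split(term):
    """Very rough stand-in for how the analyzer breaks "volker-blanz"
    into two tokens; the query parser then sees adjacent tokens and
    builds PhraseQuery(content:"volker blanz")."""
    return re.findall(r"[A-Za-z0-9]+", term.lower())

print(crude_split("volker-blanz"))  # ['volker', 'blanz']
```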