Ok, thanks for your response, Mark!
Cheers,
Martin
On Tue, Oct 14, 2014 at 1:59 AM, Mark Miller markrmil...@gmail.com wrote:
I think it's just cruft I left in and never ended up using anywhere. You
can ignore it.
- Mark
On Oct 13, 2014, at 8:42 PM, Martin Grotzke
martin.grot
Hi,
can anybody tell me the meaning of ZkStateReader.SYNC? All other state
related constants are clear to me, I'm only not sure about the semantics
of SYNC.
Background: I'm working on an async solr client
(https://github.com/inoio/solrs) and want to add SolrCloud support - for
this I'm reusing
Hi,
we want to use the LBHttpSolrServer (4.0/trunk) and specify a preferred
server. Our use case is that for one user request we make several solr
requests with some heavy caching (using a custom request handler with a
special cache) and want to make sure that the subsequent solr requests
are
Hi,
I just submitted an issue with patch for this:
https://issues.apache.org/jira/browse/SOLR-3318
Cheers,
Martin
On 04/04/2012 03:53 PM, Martin Grotzke wrote:
Hi,
we want to use the LBHttpSolrServer (4.0/trunk) and specify a preferred
server. Our use case is that for one user request we
Hi,
is it possible to determine the memory consumption (heap space) per core
in solr trunk (4.0-SNAPSHOT)?
I just unloaded a core and saw the difference in memory usage, but it
would be nice to have a smoother way of getting the information without
core downtime.
It would also be interesting,
--
Martin Grotzke
http://twitter.com/martin_grotzke
signature.asc
Description: OpenPGP digital signature
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:427)
from the Solr - User mailing list archive at Nabble.com.
--
Martin Grotzke
http://www.javakaffee.de/blog/
Hi,
recently we're experiencing OOMEs (GC overhead limit exceeded) in our
searches. Therefore I want to get some clarification on heap and cache
configuration.
This is the situation:
- Solr 1.4.1 running on tomcat 6, Sun JVM 1.6.0_13 64bit
- JVM Heap Params: -Xmx8G -XX:MaxPermSize=256m
Hi,
as the biggest parts of our jvm heap are used by solr caches I asked myself
if it wouldn't make sense to run solr caches backed by terracotta's
bigmemory (http://www.terracotta.org/bigmemory).
The goal is to reduce the time needed for full / stop-the-world GC cycles,
as with our 8GB heap the
On Tue, Jan 25, 2011 at 2:06 PM, Markus Jelsma
markus.jel...@openindex.io wrote:
On Tuesday 25 January 2011 11:54:55 Martin Grotzke wrote:
Hi,
recently we're experiencing OOMEs (GC overhead limit exceeded) in our
searches. Therefore I want to get some clarification on heap and cache
is an open issue
Thanx for the pointer! SOLR-866 is even better suited for us - after
reading SOLR-433 again I realized that it targets script-based
replication (what we're going to leave behind us).
Cheers,
Martin
Best
Erick
On Sun, Dec 12, 2010 at 8:30 PM, Martin Grotzke
, Martin Grotzke
martin.grot...@googlemail.com wrote:
Hi,
when thinking further about it, it's clear that
https://issues.apache.org/jira/browse/SOLR-433
would be even better - we could generate the spellchecker indices on
commit/optimize on the master and replicate them to all slaves.
Just
Hi,
the spellchecker component already provides a buildOnCommit and
buildOnOptimize option.
Since we have several spellchecker indices building on each commit is
not really what we want to do.
Building on optimize is not possible as index optimization is done on
the master and the slaves don't
my thread when solr/tomcat is
shutdown (I couldn't see any shutdown or destroy method in
SearchComponent)?
Thanx for your feedback,
cheers,
Martin
that little
interest. Anything wrong with it?
Cheers,
Martin
On Mon, Dec 13, 2010 at 2:04 AM, Martin Grotzke
martin.grot...@googlemail.com wrote:
Hi,
the spellchecker component already provides a buildOnCommit and
buildOnOptimize option.
Since we have several spellchecker indices building
scratch?
Cheers,
Martin
-Hoss
Hi,
the lack of feedback indicates that our plans/preferences are
fine. Otherwise, now is a good opportunity to give feedback :-)
Cheers,
Martin
On Wed, Dec 8, 2010 at 2:48 PM, Martin Grotzke
martin.grot...@googlemail.com wrote:
Hi,
we're just planning to move from our replicated single
Hi,
we're just planning to move from our replicated single index setup to
a replicated setup with multiple cores.
We're going to start with 2 cores, but the number of cores may
change/increase over time.
Our replication is still based on scripts/rsync, and I'm wondering if
it's worth moving to
On Tue, Nov 30, 2010 at 7:51 PM, Martin Grotzke
martin.grot...@googlemail.com wrote:
On Tue, Nov 30, 2010 at 3:09 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Tue, Nov 30, 2010 at 8:24 AM, Martin Grotzke
martin.grot...@googlemail.com wrote:
Still I'm wondering why this issue does
an
agreement on the correct solution and a patch available?
Thanx cheers,
Martin
Mike
On Mon, Nov 29, 2010 at 7:14 AM, Martin Grotzke
martin.grot...@googlemail.com wrote:
Hi,
after an upgrade from solr-1.3 to 1.4.1 we're getting an
ArrayIndexOutOfBoundsException for a query with rows=0
On Tue, Nov 30, 2010 at 3:09 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Tue, Nov 30, 2010 at 8:24 AM, Martin Grotzke
martin.grot...@googlemail.com wrote:
Still I'm wondering why this issue does not occur with the plain
example solr setup with 2 indexed docs. Any explanation?
It's
- for a quick fix - I'll change our app so that there's no
sort param specified when rows=0.
Thanx cheers,
Martin
%3A-issues-td19845539.html).
Cheers,
Martin
On Mon, 2008-10-06 at 22:45 +0200, Martin Grotzke wrote:
On Mon, 2008-10-06 at 09:00 -0400, Grant Ingersoll wrote:
On Oct 6, 2008, at 3:51 AM, Martin Grotzke wrote:
Hi Jason,
what about multi-word searches like harry potter? When I do
below.
On Wed, Oct 1, 2008 at 7:11 AM, Martin Grotzke [EMAIL PROTECTED]
wrote:
Now I'm thinking about the source-field in the spellchecker (spell):
how should fields be analyzed during indexing, and how should the
queryAnalyzerFieldType be configured.
I followed the conventions
On Mon, 2008-10-06 at 09:00 -0400, Grant Ingersoll wrote:
On Oct 6, 2008, at 3:51 AM, Martin Grotzke wrote:
Hi Jason,
what about multi-word searches like harry potter? When I do a search
in our index for harry poter, I get the suggestion harry
spotter (using spellcheck.collate=true
Hi,
I'm just starting with the spellchecker component provided by solr - it
is really cool!
Now I'm thinking about the source-field in the spellchecker (spell):
how should fields be analyzed during indexing, and how should the
queryAnalyzerFieldType be configured.
If I have brands like e.g.
On Thu, 2007-10-25 at 10:48 -0400, Yonik Seeley wrote:
On 10/25/07, Max Scheffler [EMAIL PROTECTED] wrote:
Is it possible that the prefix-processing ignores the filters?
Yes, it's a known limitation that we haven't worked out a fix for yet.
The issue is that you can't just run the prefix
On Mon, 2007-10-29 at 13:31 -0400, Yonik Seeley wrote:
On 10/29/07, Martin Grotzke [EMAIL PROTECTED] wrote:
On Thu, 2007-10-25 at 10:48 -0400, Yonik Seeley wrote:
On 10/25/07, Max Scheffler [EMAIL PROTECTED] wrote:
Is it possible that the prefix-processing ignores the filters?
Yes
Hello,
I'm just thinking about a solution for a type ahead functionality
that shall suggest terms that the user can search for, and that
displays how many docs are behind that search (like google suggest).
When I use facet.prefix and facet.field=text, where text is my catchall
field (and default
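The type-ahead idea above can be sketched outside Solr: what `facet.prefix` on a catchall field effectively gives you is, for each indexed term starting with what the user typed, the number of matching documents. A minimal Python illustration (the term counts are made-up example data, not Solr output):

```python
def suggest(prefix, term_counts, limit=5):
    """Mimic facet.prefix: return (term, doc_count) pairs for terms
    starting with the typed prefix, most frequent first."""
    matches = [(t, n) for t, n in term_counts.items() if t.startswith(prefix)]
    matches.sort(key=lambda tn: -tn[1])
    return matches[:limit]
```

In Solr itself the equivalent request would combine `facet.field` with `facet.prefix` set to the user's input; the sketch only shows the counting semantics.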
at query time, but the docs in the
wiki recommend to expand synonyms at index time...
What are your experiences? Would you also suggest to use them when
indexing?
On Thu, 2007-10-11 at 17:32 +0200, Thomas Traeger wrote:
Martin Grotzke schrieb:
Try the SnowballPorterFilterFactory with German2
Hi Daniel,
thanx for your suggestions, being able to export a large synonyms.txt
sounds very well!
Thx cheers,
Martin
On Wed, 2007-10-10 at 23:38 +0200, Daniel Naber wrote:
On Wednesday 10 October 2007 12:00, Martin Grotzke wrote:
Basically I see two options: stemming and the usage
Hello,
with our application we have the issue, that we get different
results for singular and plural searches (german language).
E.g. for hose we get 1.000 documents back, but for hosen
we get 10.000 docs. The same applies to t-shirt or t-shirts,
of e.g. hut and hüte - lots of cases :)
This is
Hello,
in my custom request handler, I want to determine which fields are
constrained by the user.
E.g. the query (q) might be ipod AND brand:apple and there might
be a filter query (fq) like color:white (or more).
What I want to know is that brand and color are constrained.
AFAICS I could use
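One generic way to pull the constrained field names out of the `q` and `fq` strings is a regular expression over `field:value` pairs. This is only a rough sketch: it ignores quoting, escaping, and range syntax that Solr's real query parser handles, and the helper name is made up for the example:

```python
import re

# Matches "field:" pairs in a Lucene-style query string. Naive on
# purpose: no handling of quoted phrases, escapes, or ranges.
FIELD_RE = re.compile(r"\b(\w+):")

def constrained_fields(q, filter_queries=()):
    fields = set(FIELD_RE.findall(q))
    for fq in filter_queries:
        fields.update(FIELD_RE.findall(fq))
    return fields
```

For the example in the mail, `q=ipod AND brand:apple` with `fq=color:white` yields the set `{"brand", "color"}`.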
On Tue, 2007-08-21 at 11:52 +0200, Ard Schrijvers wrote:
you're missing the key piece that Ard alluded to ... there is one
ordered list of all terms stored in the index ... a TermEnum lets you
iterate over this ordered list, and the
IndexReader.terms(Term) method
lets you
comparing the time needed by different methods, you're also timing
different fields.
this actually makes a lot of sense since there are probably a lot fewer
unique values for the cat field, so there are a lot fewer discrete values
to deal with when computing counts.
-Hoss
--
Martin
Hi,
I have a custom Facet implementation that extends SimpleFacets
and overrides getTermCounts( String field ).
For the price field I calculate available ranges, for this I
have to read the values for this field. Right this looks like
this:
public NamedList getTermCounts( final String field
Hi all,
I have a document with a name field like this:
<field name='name'>MP3-Player, Apple, »iPod nano«, silber, 4GB</field>
and want to find apple. Unfortunately, I only find apple,...
Can anybody help me with this?
The schema.xml contains the following field definition
field name=name
On Thu, 2007-07-05 at 11:56 -0700, Mike Klaas wrote:
On 5-Jul-07, at 11:43 AM, Martin Grotzke wrote:
Hi all,
I have a document with a name field like this:
<field name='name'>MP3-Player, Apple, »iPod nano«, silber, 4GB</field>
and want to find apple. Unfortunately, I only find
On Thu, 2007-07-05 at 12:39 -0700, Thiago Jackiw wrote:
Is there a way for a record to belong to multiple facets? If so, how
would one go about implementing it?
What I'd like to accomplish would be something like:
record A:
name=John Doe
category_facet=Cars
category_facet=Electronics
stddev could be computed fairly easily ... there's a
formula for that that works well in a single pass over a bunch of values
right?
-Hoss
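The single-pass formula Hoss alludes to is the sum/sum-of-squares identity: variance = E[x²] − (E[x])². A minimal Python sketch (a generic illustration, not Solr code):

```python
import math

def single_pass_stddev(values):
    # Accumulate count, sum, and sum of squares in one pass.
    n, total, total_sq = 0, 0.0, 0.0
    for v in values:
        n += 1
        total += v
        total_sq += v * v
    mean = total / n
    # Population variance: E[x^2] - (E[x])^2
    variance = total_sq / n - mean * mean
    return math.sqrt(max(variance, 0.0))  # guard against tiny negative rounding
```

Note this form can lose precision for large values with small spread; Welford's online algorithm is the numerically safer variant if that matters.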
On Tue, 2007-06-26 at 16:48 -0700, Mike Klaas wrote:
On 26-Jun-07, at 3:01 PM, Martin Grotzke wrote:
AFAICS I do not have the possibility to specify range queries in my
application, as I do not have a clue what's the lowest and highest
price in the search result and what are good ranges
,
Martin
We are working on a solr plugin.
-John
On 6/26/07, Mike Klaas [EMAIL PROTECTED] wrote:
On 26-Jun-07, at 3:01 PM, Martin Grotzke wrote:
AFAICS I do not have the possibility to specify range queries in my
application, as I do not have a clue what's the lowest and highest
algorithms that can be employed: equal frequency per facet count, equal
sized ranges, rounded ranges, etc.
I just had a conversation with our customer and they also want to
have it like this - adjusting with a new facet constraint...
Cheers,
Martin
- will
Hi,
my documents (products) have a price field, and I want to have
a dynamically calculated range facet for that in the response.
E.g. I want to have this in the response
price:[* TO 20] - 23
price:[20 TO 40] - 42
price:[40 TO *] - 33
if prices are between 0 and 60
but
price:[* TO 100] - 23
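The dynamic bucketing described above (equal-sized ranges derived from the observed min and max price) can be sketched generically. This is a Python illustration, not Solr code; the field name, range count, and label syntax follow the example in the mail:

```python
def price_range_facets(prices, num_ranges=3):
    """Bucket prices into equal-width ranges and count items per bucket,
    emitting labels in the price:[a TO b] syntax from the example."""
    lo, hi = min(prices), max(prices)
    width = (hi - lo) / num_ranges or 1  # avoid zero width if all prices equal
    counts = [0] * num_ranges
    for p in prices:
        idx = min(int((p - lo) / width), num_ranges - 1)
        counts[idx] += 1
    facets = []
    for i in range(num_ranges):
        a = "*" if i == 0 else round(lo + i * width)
        b = "*" if i == num_ranges - 1 else round(lo + (i + 1) * width)
        facets.append(("price:[%s TO %s]" % (a, b), counts[i]))
    return facets
```

Equal-frequency or rounded-boundary variants (mentioned elsewhere in the thread) would only change how the boundaries are chosen, not the counting.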
trying to do something like facet.field=* would be a
very bad idea even if it was supported.
http://issues.apache.org/jira/browse/SOLR-247
-Hoss
that... :)
Cheers,
Martin
What do the experts think about this?
Tom
On Wed, 2007-06-20 at 12:59 +0200, Thomas Traeger wrote:
Martin Grotzke schrieb:
On Tue, 2007-06-19 at 19:16 +0200, Thomas Traeger wrote:
[...]
I think it would be really nice, if I don't have to know which facets
fields are there at query time, instead just import attributes
On Wed, 2007-06-20 at 12:49 -0700, Chris Hostetter wrote:
: I solve this problem by having metadata stored in my index which tells
: my custom request handler what fields to facet on for each category ...
: How do you define this metadata?
this might be a good place to start, note that
On Thu, 2007-06-14 at 11:32 +0100, Daniel Alheiros wrote:
Hi
I've been using one Java client I got from a colleague but I don't know
exactly its version or where to get any update for it. Base package is
org.apache.solr.client (where there are some common packages) and the client
main
On Tue, 2007-05-22 at 13:06 -0400, Erik Hatcher wrote:
On May 22, 2007, at 11:31 AM, Martin Grotzke wrote:
You need to specify the constraints (facet.query or facet.field
params)
Too bad, so we would have either to know the schema in the application
or provide queries for index metadata
On Tue, 2007-05-22 at 15:10 -0400, Erik Hatcher wrote:
On May 22, 2007, at 1:36 PM, Martin Grotzke wrote:
For sure, perhaps the schema field element could be extended by an
attribute isfacet
There is no effective difference between a facet field and any
other indexed field. What fields
wiki page?
Thanks,
Mike Austin