This may have been introduced by changes made to solve
https://issues.apache.org/jira/browse/SOLR-5968
I created https://issues.apache.org/jira/browse/SOLR-6501 to track the new
bug.
On Tue, Sep 9, 2014 at 4:53 PM, Mike Hugo m...@piragua.com wrote:
Hello,
With Solr 4.7 we had some queries that return dynamic fields by passing in
a fl=*_exact parameter; this is not working for us after upgrading to Solr
4.10.0. This appears to only be a problem when requesting wildcarded
fields via SolrJ
With Solr 4.10.0 - I downloaded the binary and set
Hello,
We recently upgraded to Solr Cloud 4.7 (went from a single-node Solr 4.0
instance to a 3-node Solr 4.7 cluster).
Part of our application does an automated traversal of all documents that
match a specific query. It does this by iterating through results by
setting the start and rows
I should add each node has 16GB of RAM, 8GB of which is allocated to the
JVM. Each node has about 200k docs and happily uses only about 3 or 4GB of
RAM during normal operation. It's only during this deep pagination that we
have seen OOM errors.
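Deep paging with large start offsets is itself expensive: each request makes the node collect and sort start+rows candidates, so memory pressure grows with how deep you page. Solr 4.7 added cursorMark-based deep paging (SOLR-5463), which avoids that cost. A minimal sketch of the client-side loop, assuming a fetch(cursor) callable that wraps whatever HTTP or SolrJ call you use and returns (docs, next_cursor):

```python
def iterate_all(fetch, start_cursor="*"):
    """Walk an entire result set using Solr-style cursor paging.

    fetch(cursor) -> (docs, next_cursor); iteration is finished when
    the server hands back the same cursor it was given.
    """
    cursor = start_cursor
    while True:
        docs, next_cursor = fetch(cursor)
        for doc in docs:
            yield doc
        if next_cursor == cursor:  # cursor unchanged: no more results
            break
        cursor = next_cursor
```

The termination condition (cursor unchanged) is how Solr's cursorMark protocol signals the end of the result set; the sort must include a unique field (e.g. id) for cursors to work.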
On Mon, Mar 17, 2014 at 3:14 PM, Mike Hugo m
by a commit under SOLR-5875:
https://issues.apache.org/jira/browse/SOLR-5875.
If you can build from source, it would be great if you could confirm the
fix addresses the issue you're facing.
This fix will be part of a to-be-released Solr 4.7.1.
Steve
On Mar 17, 2014, at 4:14 PM, Mike Hugo m
for release
propagation to the Apache mirrors): i.e., next Friday-ish.
Steve
On Mar 17, 2014, at 4:40 PM, Mike Hugo m...@piragua.com wrote:
Thanks Steve,
That certainly looks like it could be the culprit. Any word on a release
date for 4.7.1? Days? Weeks? Months?
Mike
On Mon
' and 'rows'?
4 or 5 requests still seems a very low limit to be running into OOM
issues, though, so perhaps it is both issues combined?
Ta,
Greg
On 18 March 2014 07:49, Mike Hugo m...@piragua.com wrote:
Thanks!
On Mon, Mar 17, 2014 at 3:47 PM, Steve Rowe sar...@gmail.com wrote
Greg and I are talking about the same type of parallelism.
We do the same thing - if I know there are 10,000 results, we can chunk
that up across multiple worker threads up front without having to page
through the results. We know there are 10 chunks of 1,000, so we can have
one thread process
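The chunking described here is just offset arithmetic once an initial rows=0 request has reported the total hit count. A sketch (the names are illustrative, not from the thread); note that each chunk still pays the usual start-offset cost on the server, so this parallelizes the work without making deep offsets cheaper:

```python
def make_chunks(total, chunk_size):
    """Split `total` results into (start, rows) pairs, one per worker."""
    return [(start, min(chunk_size, total - start))
            for start in range(0, total, chunk_size)]
```

Each (start, rows) pair can then be handed to its own worker thread as the start/rows parameters of an independent query.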
After a collection has been created in SolrCloud, is there a way to modify
the Replication Factor?
Say I start with a few nodes in the cluster, and have a replication factor
of 2. Over time, the index grows and we add more nodes to the cluster, can
I increase the replication factor to 3?
API. Or
perhaps it's in 4.7, I don't know, this JIRA issue is a little confusing as
it's still open, though it looks like stuff has been committed:
https://issues.apache.org/jira/browse/SOLR-5130
--
Mark Miller
about.me/markrmiller
On March 12, 2014 at 10:40:15 AM, Mike Hugo (m
but returns 0 results
{!surround}(common lisp OR assembly language) W (programming)
On Tue, May 21, 2013 at 8:32 AM, Jack Krupansky j...@basetechnology.comwrote:
I'll make sure to include that specific example in the new Solr book.
-- Jack Krupansky
-Original Message- From: Mike Hugo
Sent: Tuesday, May 21, 2013 11:26 AM
To: solr-user@lucene.apache.org
Subject: Re: Expanding sets of words
I'll buy that book :)
Does this work with multi-word terms?
(common lisp or assembly language)
(programming or coding or development)
I tried
Is there a way to query for combinations of two sets of words? For
example, if I had
(java or groovy or scala)
(programming or coding or development)
Is there a query parser that, at query time, would expand that into
combinations like
java programming
groovy programming
scala programming
java
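If no query parser does this natively, a client can build the cross product itself before sending the query. A sketch, assuming quoted-phrase syntax is what's wanted (the output format is my assumption, not something from the thread):

```python
from itertools import product

def expand(left_terms, right_terms):
    """OR together every phrase formed by pairing a left term with a right term."""
    return " OR ".join('"%s %s"' % pair
                       for pair in product(left_terms, right_terms))
```

For example, expand(["java", "groovy"], ["programming", "coding"]) produces an OR of the four phrase queries, which can then be sent as a normal q parameter.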
The LucidWorks Search query parser also supports NEAR, BEFORE, and AFTER
operators, in conjunction with OR and - to generate span queries:
q=(java OR groovy OR scala) BEFORE:0 (programming OR coding OR development)
-- Jack Krupansky
-Original Message- From: Mike Hugo
Does anyone know if a version of ConcurrentUpdateSolrServer exists that
would use the size in memory of the queue to decide when to send documents
to the solr server?
For example, if I set up a ConcurrentUpdateSolrServer with 4 threads and a
batch size of 200 that works if my documents are small.
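ConcurrentUpdateSolrServer's queue is bounded by document count, not bytes, so a batch size that works for small documents can exhaust memory on large ones. I'm not aware of a size-aware variant in SolrJ; one client-side workaround is to group documents by estimated byte size before submitting each group. A sketch, where size_of and flush are caller-supplied (illustrative names, not SolrJ API):

```python
def batch_by_size(docs, max_bytes, size_of, flush):
    """Group docs into batches whose estimated total size stays under max_bytes."""
    batch, batch_bytes = [], 0
    for doc in docs:
        doc_bytes = size_of(doc)
        # Flush before adding a doc that would push the batch over the limit
        # (a single oversized doc still goes out alone in its own batch).
        if batch and batch_bytes + doc_bytes > max_bytes:
            flush(batch)
            batch, batch_bytes = [], 0
        batch.append(doc)
        batch_bytes += doc_bytes
    if batch:
        flush(batch)
```

Here flush would wrap the actual add-and-commit call to the Solr server, and size_of could be as crude as the length of the serialized document.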
Explicitly running an optimize on the index via the admin screens solved
this problem - the correct counts are now being returned.
On Tue, May 22, 2012 at 4:33 PM, Mike Hugo m...@piragua.com wrote:
We're testing a snapshot of Solr4 and I'm looking at some of the responses
from the Luke request handler. Everything looks good so far, with the
exception of the distinct attribute which (in Solr3) shows me the
distinct number of terms for a given field.
Given the request below, I'm consistently
, February 15, 2012 at 4:39 PM, Em wrote:
Hello Mike,
have a look at Solr's Schema Browser. Click on FIELDS, select label
and have a look at the number of distinct (term-)values.
Regards,
Em
On 15.02.2012 at 23:07, Mike Hugo wrote:
Hello,
We're building an auto suggest component based on the label field of
documents. Is there a way to see how many terms are in the dictionary, or
how much memory it's taking up? I looked on the statistics page but didn't
find anything obvious.
Thanks in advance,
Mike
ps- here's the
in tracking this down Mike!
I'm going to start looking into this now...
-Yonik
lucidimagination.com
On Thu, Jan 26, 2012 at 11:06 PM, Mike Hugo m...@piragua.com wrote:
I created issue https://issues.apache.org/jira/browse/SOLR-3062 for this
problem. I was able to track it down to something
I've been looking into this a bit further and am trying to figure out why
the FQ isn't getting applied.
Can anyone point me to a good spot in the code to start looking at how FQ
parameters are applied to query results in Solr4?
Thanks,
Mike
On Thu, Jan 26, 2012 at 10:06 PM, Mike Hugo m
Hello,
I'm trying out the Solr JOIN query functionality on trunk. I have the
latest checkout, revision #1236272 - I did the following steps to get the
example up and running:
cd solr
ant example
java -jar start.jar
cd exampledocs
java -jar post.jar *.xml
Then I tried a few of the sample
() method, returning all documents in a random access way
) - before that commit the join / fq functionality works as expected /
documented on the wiki page. After that commit it's broken.
Any assistance is greatly appreciated!
Thanks,
Mike
On Thu, Jan 26, 2012 at 11:04 AM, Mike Hugo m...@piragua.com
-
From: Mike Hugo [mailto:m...@piragua.com]
Sent: Tuesday, January 24, 2012 3:56 PM
To: solr-user@lucene.apache.org
Subject: Re: HTMLStripCharFilterFactory not working in Solr4?
Thanks for the responses everyone.
Steve, the test method you provided also works for me. However
We recently updated to the latest build of Solr4 and everything is working
really well so far! There is one case that is not working the same way it
was in Solr 3.4 - we strip out certain HTML constructs (like trademark and
registered, for example) in a field as defined below - it was working in
LegacyHTMLStripCharFilterFactory to get the previous behavior.
See https://issues.apache.org/jira/browse/LUCENE-3690 for more details.
-Yonik
http://www.lucidimagination.com
On Tue, Jan 24, 2012 at 1:34 PM, Mike Hugo m...@piragua.com wrote:
We recently updated to the latest build of Solr4 and everything is
working
://svn.apache.org/viewvc/lucene/dev/trunk/modules/analysis/common/src/java/org/apache/lucene/analysis/standard/StandardTokenizerImpl.jflex?view=markup
.
The behavior you're seeing is not consistent with the above test.
Steve
-Original Message-
From: Mike Hugo [mailto:m...@piragua.com