On Tue, Feb 23, 2021 at 10:39 AM Timothy Potter wrote:
The Lucene PMC is pleased to announce the release of Apache Solr 8.8.1.
Solr is the popular, blazing fast, open source NoSQL search platform from
the Apache Lucene project. Its major features include powerful full-text
search, hit highlighting, faceted search, dynamic clustering, database
integration, rich document handling, and geospatial search.
To add to what Shawn said, RCs are made available to anyone
interested in testing them and that helps us find bugs before release.
RC2 for 8.8.1 is available for testing now, see dev mailing list for location.
Please download it and verify it is stable for your use cases and environment.
Tim
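For anyone who wants to help test: verifying a release candidate usually starts with checking the artifact's checksum and signature. A rough sketch (the file names are placeholders for the actual RC artifacts):

```shell
# Check the SHA-512 checksum against the published .sha512 file
sha512sum -c solr-8.8.1.tgz.sha512

# Verify the GPG signature against the release manager's key
# (import the project KEYS file first)
gpg --verify solr-8.8.1.tgz.asc solr-8.8.1.tgz
```

After that, the useful part is unpacking it and exercising your own collections and queries against it.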
be a quick work-around for this problem if
possible in your env. Otherwise, we'll have to include a fix into 8.8.1
Cheers,
Tim
On Mon, Feb 8, 2021 at 8:27 AM Timothy Potter wrote:
What version of SolrJ is embedded in your uleaf.ear file? There have been
changes in how we deal with URLs stored in ZK in 8.8 --> SOLR-12182
On Fri, Feb 5, 2021 at 2:34 AM Flowerday, Matthew J <
matthew.flower...@gb.unisys.com> wrote:
> Hi There
>
> I have been checking out the latest
May 19, 2017 at 10:44 AM, Timothy Potter <thelabd...@gmail.com>
> wrote:
>
I'm executing a streaming expr and get this error:
Caused by: org.apache.solr.common.SolrException: Could not load
collection from ZK:
MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002
at
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1098)
the HAVING clause. The
>>> query
>>> should work if you use count(*). We could consider this a regression, but
>>> I
>>> think this will be a won't fix.
>>>
>>> Joel Bernstein
>>> http://joelsolr.blogspot.com/
>>>
>>> On Tue, May 16, 2017 at
This SQL used to work pre-calcite:
SELECT movie_id, COUNT(*) as num_ratings, avg(rating) as aggAvg FROM
ratings GROUP BY movie_id HAVING num_ratings > 100 ORDER BY aggAvg ASC
LIMIT 10
Now I get:
Caused by: java.io.IOException: -->
http://192.168.1.4:8983/solr/ratings_shard2_replica1/:Failed to
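Following the count(*) suggestion in Joel's reply above, a rewrite of the query that avoids referencing the alias in the HAVING clause would look like this (a sketch, not tested against the thread's collection):

```sql
SELECT movie_id, COUNT(*) AS num_ratings, AVG(rating) AS aggAvg
FROM ratings
GROUP BY movie_id
HAVING COUNT(*) > 100
ORDER BY aggAvg ASC
LIMIT 10
```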
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Tue, Jul 26, 2016 at 3:58 PM, Timothy Potter <thelabd...@gmail.com>
> wrote:
>
Does anyone have an example of just POST'ing a streaming expression to
the /stream handler from SolrJ client code? i.e. I don't want to parse
and execute the streaming expression on the client side, rather, I
want to post the expression to the server side.
Currently, my client code is a big copy
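For anyone searching the archives: the usual SolrJ pattern for server-side execution is SolrStream, which sends the expression as an expr parameter to the /stream handler. A hedged sketch (the URL, collection name, and expression are placeholders):

```java
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.SolrStream;
import org.apache.solr.common.params.ModifiableSolrParams;

ModifiableSolrParams params = new ModifiableSolrParams();
// The expression is parsed and executed on the server, not the client.
params.set("expr", "search(movies, q=\"*:*\", fl=\"id\", sort=\"id asc\")");
params.set("qt", "/stream");

SolrStream stream = new SolrStream("http://localhost:8983/solr/movies", params);
try {
  stream.open();
  Tuple tuple = stream.read();
  while (!tuple.EOF) {          // the EOF marker tuple ends the stream
    System.out.println(tuple.getString("id"));
    tuple = stream.read();
  }
} finally {
  stream.close();
}
```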
> would never see this error if the /stream handler is executing the
> expression.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Tue, Jul 26, 2016 at 10:44 AM, Timothy Potter <thelabd...@gmail.com>
> wrote:
>
>> it's from a unit test, but not sure why
a manual
> call to the /stream handler?
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Thu, Jul 21, 2016 at 12:28 PM, Timothy Potter <thelabd...@gmail.com>
> wrote:
>
I'm working with 6.1.0 release and I have a single SolrCloud instance
with 1 shard / 1 replica. Somehow I'm triggering this, which from what
I can see, means workers == 0, but how? Shouldn't workers default to 1?
I should mention that my streaming expression doesn't include any
workers, i.e. it is
Got an answer from Hossman in another channel ... this syntax was not
officially supported and is no longer valid, i.e. my code must change
;-)
On Mon, Jul 18, 2016 at 8:02 AM, Timothy Potter <thelabd...@gmail.com> wrote:
I have code that uses the DateMathParser and this used to work in 5.x
but is no longer accepted in 6.x:
time:[NOW-2DAY TO 2016-07-19Z]
org.apache.solr.common.SolrException: Invalid Date in Date Math
String:'2016-07-19Z'
at org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:241)
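For the archives: 6.x is stricter about date literals, so a bare date followed by Z is rejected. Spelling out the full ISO-8601 instant should be accepted (a sketch of the fix, not from the original thread):

```
time:[NOW-2DAY TO 2016-07-19T00:00:00Z]
```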
> wrote:
>
>> The image is the correct flow. Are you using workers?
>>
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Mon, May 23, 2016 at 7:16 PM, Timothy Potter <thelabd...@gmail.com>
>> wrote:
>>
>>> This imag
ed that may or may not be host
> to any replicas of that collection.
>
> At least I think that's what's up, but then again this is
> new to me too.
>
> Which bits of the doc anyway? Sounds like some
> clarification is in order.
>
> Best,
> Erick
>
> On Mon, Ma
I've seen docs and diagrams that seem to indicate a streaming
expression can utilize all replicas of a shard but I'm seeing only 1
replica per shard (I have 2) being queried.
All replicas are on the same host for my experimentation, could that
be the issue? What are the circumstances where all
>> Supporting SQL greater-than and less-than predicates should not be too
>> difficult. Feel free to create a jira ticket for this.
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Sat, May 21, 2016 at 10:55 AM, Timothy Potter <thelabd...@
How would I do something like: find all docs using a geofilt, e.g.
SELECT title_s
FROM movielens
WHERE location_p='{!geofilt d=90 pt=37.773972,-122.431297 sfield=location_p}'
This fails with:
{"result-set":{"docs":[
{"EXCEPTION":"java.util.concurrent.ExecutionException:
java.io.IOException:
this gives expected result:
SELECT title_s, COUNT(*) as cnt
FROM movielens
WHERE genre_ss='action' AND rating_i='[4 TO 5]'
GROUP BY title_s
ORDER BY cnt desc
LIMIT 5
but using >= 4 doesn't give same results (my ratings are 1-5):
SELECT title_s, COUNT(*) as cnt
I hope 256MB of Xss is a typo and you really meant 256k right?
On Mon, Nov 16, 2015 at 4:58 AM, Behzad Qureshi
wrote:
> Hi All,
>
> I am using Tomcat server with solr 4.10.3. I want to shift to Jetty as
> replacement of Tomcat server but I am not getting
would launching the java process with javaw help here?
On Thu, Oct 29, 2015 at 4:03 AM, Zheng Lin Edwin Yeo
wrote:
> Yes, this is the expected behaviour. Once you close the command window,
> Solr will stop running. This has happened to me several times. Just to
> check,
You should fix your log4j.properties file to not log to console ...
it's there for the initial getting-started experience, but you don't
need to send log messages to two places.
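Concretely, the shipped log4j.properties routes the root logger to both a file appender and the console; dropping the console is a one-line change (a sketch; the appender names are assumed from the stock config):

```properties
# Before (stock config): log4j.rootLogger=INFO, file, CONSOLE
# After: console appender removed, file logging only
log4j.rootLogger=INFO, file
```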
On Tue, Oct 20, 2015 at 10:42 AM, Shawn Heisey wrote:
> On 10/20/2015 9:19 AM, Eric Torti wrote:
>>
Hi Vijay,
I'm not sure what's wrong here ... have you posted to the Slider
mailing list? Also, which version of Java are you using when
interacting with Slider? I know it had some issues with Java 8 at one
point. Which version of Slider are you using, so I can try to reproduce?
Cheers,
Tim
On Thu, Aug 27,
Hi Vijay,
Verify the ResourceManager URL and try passing the --manager param to
explicitly set the ResourceManager URL during the create step.
Cheers,
Tim
On Mon, Aug 17, 2015 at 4:37 AM, Vijay Bhoomireddy
vijaya.bhoomire...@whishworks.com wrote:
Hi,
Any help on this please?
Thanks
Hi Olivier,
Can you look at the collections to see if there are leader initiated
recovery nodes in the ZooKeeper tree? Go into the Solr Admin UI -
Cloud panel - Tree view and drill into one of the collections that's
not recovering /collections/collection/leader_initiated_recovery/
You could try
What are your cache sizes? Max doc?
Also, what GC settings are you using? 6GB isn't all that much for a
memory-intensive app like Solr, esp. given the number of facet fields
you have. Lastly, are you using docvalues for your facet fields? That
should help reduce the amount of heap needed to
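For reference, turning on docValues for a facet field is a schema.xml attribute (the field name and type here are placeholders; a reindex is required after the change):

```xml
<field name="category" type="string" indexed="true" stored="true" docValues="true"/>
```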
can you try with double-quotes around the zk connect string?
bin\solr.cmd -e cloud -z localhost:2181,localhost:2182,localhost:2183
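The quoting matters because PowerShell likely splits the unquoted comma-separated list into separate arguments, so Solr only sees the first host. The quoted form keeps the ensemble string intact:

```shell
bin\solr.cmd -e cloud -z "localhost:2181,localhost:2182,localhost:2183"
```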
On Mon, Jul 6, 2015 at 2:59 AM, Adrian Liew adrian.l...@avanade.com wrote:
Hi David,
When I run the command below on a Windows machine using Powershell window:
Hi Edwin,
You'll need to use the bin\solr.cmd to start Solr as it now requires
some additional system properties to be set. Put simply, starting solr
using java -jar start.jar is not supported. Please try bin\solr.cmd
and let us know if you run into any issues. You can set any additional
system
Seems like you should be able to use the ManagedStopFilterFactory with
a custom StorageIO impl that pulls from your db:
http://lucene.apache.org/solr/5_1_0/solr-core/index.html?org/apache/solr/rest/ManagedResourceStorage.StorageIO.html
On Thu, May 28, 2015 at 7:03 AM, Alessandro Benedetti
Hi Edwin,
Are there changes you recommend to bin/solr.cmd to make it easier to
work with NSSM? If so, please file a JIRA as I'd like to help make
that process easier.
Thanks.
Tim
On Mon, May 25, 2015 at 3:34 AM, Zheng Lin Edwin Yeo
edwinye...@gmail.com wrote:
I've managed to get the Solr
Yes, same bug. Fixed in 5.2
On Tue, May 26, 2015 at 9:15 AM, Clemens Wyss DEV clemens...@mysign.ch wrote:
I also noticed that (see my post this morning)
...
SOLR_OPTS="$SOLR_OPTS -Dsolr.allow.unsafe.resourceloading=true"
...
Is not taken into consideration (anymore). Same bug?
I'm seeing that RTG requests get routed to any active replica of the
shard hosting the doc requested by /get ... I was thinking only the
leader should handle that request since there's a brief window of time
where the latest update may not be on the replica (albeit usually very
brief) and the
Can you try defining the ZK_HOST in bin\solr.in.cmd instead of passing
it on the command-line?
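For reference, defining it in bin\solr.in.cmd is a single set line (the hosts are placeholders; Windows batch syntax):

```shell
REM in bin\solr.in.cmd
set ZK_HOST=localhost:2181,localhost:2182,localhost:2183
```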
On Mon, Apr 27, 2015 at 12:10 PM, Erick Erickson
erickerick...@gmail.com wrote:
What version of Solr are you using? 4.10.3? 5.1?
And can we see the full output of your attempt to start Solr? There
14 April 2015 - The Lucene PMC is pleased to announce the release of
Apache Solr 5.1.0.
Solr 5.1.0 is available for immediate download at:
http://www.apache.org/dyn/closer.cgi/lucene/solr/5.1.0
Solr 5.1.0 includes 39 new features, 40 bug fixes, and 36 optimizations
/ other changes from over 60
* facet.contains option to limit which constraints are returned.
* Streaming Aggregation for SolrCloud.
* The admin UI now visualizes Lucene segment information.
* Parameter substitution / macro expansion across entire request
On Tue, Apr 14, 2015 at 11:42 AM, Timothy Potter thelabd...@gmail.com
I wrote a simple backup utility for a Collection that uses the
replication handler, see:
https://github.com/LucidWorks/solr-scale-tk/blob/master/src/main/java/com/lucidworks/SolrCloudTools.java#L614
feel free to borrow / steal if useful.
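The per-core replication-handler call that such a utility drives looks roughly like this (host, core name, and backup location are placeholders):

```shell
curl "http://localhost:8983/solr/collection1_shard1_replica1/replication?command=backup&location=/backups/solr"
```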
On Mon, Apr 6, 2015 at 12:42 PM, Davis, Daniel (NIH/NLM)
You'll need a python lib that uses a python ZooKeeper client to be
SolrCloud-aware so that you can do RDD like things, such as reading
from all shards in a collection in parallel. I'm not aware of any Solr
py libs that are cloud-aware yet, but it would be a good contribution
to upgrade
Anything in the server-side Solr logs? Also, if you go to the Solr admin
console at http://localhost:8983/solr, do you see the gettingstarted
collection in the cloud panel?
On Mon, Mar 30, 2015 at 1:12 PM, Purohit, Sumit sumit.puro...@pnnl.gov
wrote:
I have a basic Solr 5.0.0 cloud setup
From: Timothy Potter [thelabd...@gmail.com]
Sent: Monday, March 30, 2015 2:18 PM
To: solr-user@lucene.apache.org
Subject: Re: NoNode for /clusterstate.json in solr5.0.0 cloud
, 2015 at 2:32 PM, Purohit, Sumit sumit.puro...@pnnl.gov
wrote:
Thanks Tim,
I had to make some changes in my local spark-solr clone to build it for
Solr 5. If it's OK, I can commit these to GitHub.
Thanks,
Sumit
From: Timothy Potter [thelabd
Are you using a SolrJ client from 4.x to connect to a Solr 5 cluster?
On Wed, Mar 18, 2015 at 1:32 PM, Adnan Yaqoob itsad...@gmail.com wrote:
I'm getting following exception while trying to upload document on
SolrCloud using CloudSolrServer.
Exception in thread main
Before I open a JIRA, I wanted to put this out to solicit feedback on what
I'm seeing and what Solr should be doing. So I've indexed the following 8
docs into a 2-shard collection (Solr 4.8'ish - internal custom branch
roughly based on 4.8) ... notice that the 3 grand-children of 2-1 have
dup'd
I think the next step here is to ship Solr with the war already extracted
so that Jetty doesn't need to extract it on first startup -
https://issues.apache.org/jira/browse/SOLR-7227
On Tue, Mar 10, 2015 at 10:15 AM, Erick Erickson erickerick...@gmail.com
wrote:
If I'm understanding your problem
I think Mark found something similar -
https://issues.apache.org/jira/browse/SOLR-6838
On Sat, Feb 14, 2015 at 2:05 AM, Erick Erickson erickerick...@gmail.com
wrote:
Exactly how are you issuing the commit? I'm assuming you're
using SolrJ. the server.commit(whatever, true) waits for the
Hi Vijay,
We're working on SOLR-6816 ... would love for you to be a test site for any
improvements we make ;-)
Curious if you've experimented with changing the mergeFactor to a higher
value, such as 25 and what happens if you set soft-auto-commits to
something lower like 15 seconds? Also, make
Correct. Solr 5.0 is not a Web application; any WAR or Web app'ish things
in Solr 5 are implementation details that may change in the future. The ref
guide will include some content about how to migrate to Solr 5 from 4.
On Tue, Feb 10, 2015 at 9:48 AM, Matt Kuiper matt.kui...@issinc.com wrote:
The bin/solr script in 4 didn't do a good job at allowing you to control
the location of the redirected console log or gc log, so you'll probably
have to hack that script a bit. The location of the main Solr log can be
configured in the example/resources/log4j.properties
This has been improved in
Just soliciting some advice from the community ...
Let's say I have a 10-node SolrCloud cluster and have a single collection
with 2 shards with replication factor 10, so basically each shard has one
replica on each of my nodes.
Now imagine one of those nodes starts getting into a bad state and
Indeed - Hoss is correct ... it's a problem with the example in the
book ... my apologies for the confusion!
On Tue, Sep 30, 2014 at 3:57 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: Thanks for the response, yes the way you describe I know it works and is
: how I get it to work but
Probably need to look at it running with a profiler to see what's up.
Here's a few additional flags that might help the GC work better for
you (which is not to say there isn't a leak somewhere):
-XX:MaxTenuringThreshold=8 -XX:CMSInitiatingOccupancyFraction=40
This should lead to a nice
You need to also verify the node hosting the replica is a live node
(/live_nodes). From SolrJ, you can call:
clusterState.getLiveNodes().contains(node).
As for API, there is CLUSTERSTATE provided by the Collection API, but
it's not consulting /live_nodes (which is a bug) - I'll open a ticket.
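Putting both checks together in SolrJ looks roughly like this (a sketch; the variable names are illustrative, and the replica state string is assumed to be "active"):

```java
// A replica is only usable if it reports active AND its node is live.
ClusterState clusterState = cloudSolrServer.getZkStateReader().getClusterState();
String node = replica.getNodeName();
boolean usable = "active".equals(replica.getStr("state"))
    && clusterState.getLiveNodes().contains(node);
```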
On
https://issues.apache.org/jira/browse/SOLR-6481
On Thu, Sep 4, 2014 at 12:32 PM, Timothy Potter thelabd...@gmail.com wrote:
You can set the storageDir init-arg in the solrconfig.xml for the
RestManager for each core. However, since it is at the core config
level, you can't have a different storageDir per language. Here's an
example of how to configure the RestManager in solrconfig.xml to
customize the storageDir:
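A sketch of that configuration in solrconfig.xml (the storageDir path is a placeholder):

```xml
<restManager>
  <str name="storageDir">/var/solr/managed-resources</str>
</restManager>
```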
Hi,
Using the coreAdmin mergeindexes command to merge an index into a
leader (SolrCloud mode on 4.9.0) and the replica does not do a snap
pull from the leader as I would have expected. The merge into the
leader worked like a charm except I had to send a hard commit after
that (which makes sense).
like we should write a test and make it work.
--
Mark Miller
about.me/markrmiller
On August 19, 2014 at 1:20:54 PM, Timothy Potter (thelabd...@gmail.com) wrote:
It will always be supported under the current architecture as
SolrCloud uses master/slave style replication to bring replicas back
in-sync with leaders if a replica is too far out of date (currently,
"too far" means more than 100 missed updates). So if it fits your architecture better,
then use it!
On Mon, Aug 4,
You'll need to scp the JAR files to all nodes in the cluster. ZK is
not a great distribution mechanism for large binary files since it has
a 1MB znode size limit (by default)
On Thu, Jul 31, 2014 at 10:26 AM, P Williams
williams.tricia.l...@gmail.com wrote:
Hi,
I have an existing collection
Hi Ameya,
Tough to say without more information about what's slow. In general,
when I've seen Solr index that slow, it's usually related to some
complex text analysis, for instance, are you doing any phonetic
analysis? Best thing to do is attach a Java profiler (e.g. JConsole or
VisualVM) using
Hi Ian,
What's the CPU doing on the leader? Have you tried attaching a
profiler to the leader while running and then seeing if there are any
hotspots showing. Not sure if this is related but we recently fixed an
issue in the area of leader forwarding to replica that used too many
CPU cycles
Hi Modassar,
Have you tried hitting the cores for each replica directly (instead of
using the collection)? i.e. if you had col_shard1_replica1 on node1,
then send the optimize command to that core URL directly:
curl -i -v "http://host:port/solr/col_shard1_replica1/update" -H
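A completed form of that per-core call can pass optimize as a request parameter instead of a request body (host and core name are placeholders):

```shell
curl -i -v "http://host:port/solr/col_shard1_replica1/update?optimize=true"
```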
Hi Zane,
re 1: as an alternative to shard splitting, you can just overshard the
collection from the start and then migrate existing shards to new
hardware as needed. The migrate can happen online, see collection API
ADDREPLICA. Once the new replica is online on the new hardware, you
can unload
Hi Greg,
Sorry for the slow response. The general thinking is that you
shouldn't worry about which nodes host leaders vs. replicas because A)
that can change, and B) as you say, the additional responsibilities
for leader nodes is quite minimal (mainly per-doc version management
and then
Hi Mark,
Sorry for the trouble! I've now made the ami-1e6b9d76 AMI public;
total oversight on my part :-(. Please try again. Thanks Hoss for
trying to help out on this one.
Cheers,
Tim
On Fri, Jun 6, 2014 at 6:46 PM, Mark Gershman montan...@gmail.com wrote:
Thanks, Hoss.
I did substitute the
I have an example in Solr In Action that uses the
PatternReplaceCharFilterFactory and now it doesn't work in 4.7.0.
Specifically, the fieldType is:
<fieldType name="text_microblog" class="solr.TextField"
    positionIncrementGap="100">
  <analyzer>
    <charFilter
wrote:
On 3/1/2014 12:15 PM, Timothy Potter wrote:
The PatternReplaceCharFilterFactory (PRCF) is used to collapse
repeated letters in a term down to a max of 2, such as #yu would
be #yumm
When I run some text through this analyzer using the Analysis form,
the output is as if the resulting
Hi Dave,
Have you looked at the TermsComponent?
http://wiki.apache.org/solr/TermsComponent It is easy to wire into an
existing request handler and allows you to return the top terms for a
field. Example server even includes an example request handler that
uses it:
<searchComponent name="terms"
Yes, I've done this ... but I had to build my own utility to update
clusterstate.json (for reasons I can't recall now). So make your
changes to clusterstate.json manually and then do something like the
following with SolrJ:
public static void updateClusterstateJsonInZk(CloudSolrServer
of those things, even though a lot of them are fairly low hanging
fruit. Hardening takes the priority :(
- Mark
On Nov 19, 2013, at 12:42 PM, Timothy Potter thelabd...@gmail.com wrote:
Your thinking is always one step ahead of me! I'll file the JIRA
Thanks.
Tim
On Tue, Nov 19
I've been thinking about how SolrCloud deals with write-availability using
in-sync replica sets, in which writes will continue to be accepted so long
as there is at least one healthy node per shard.
For a little background (and to verify my understanding of the process is
correct), SolrCloud only
On Tue, Nov 19, 2013 at 10:38 AM, Mark Miller markrmil...@gmail.com
wrote:
Yeah, this is kind of like one of many little features that we have just
Good questions ... From my understanding, queries will work if Zk goes down
but writes do not work w/o Zookeeper. This works because the clusterstate
is cached on each node so Zookeeper doesn't participate directly in queries
and indexing requests. Solr has to decide not to allow writes if it
Hi Bruno,
Have you looked into Solr's facet support? If I'm reading your post
correctly, this sounds like the classic case for facets. Each time the user
selects a facet, you add a filter query (fq clause) to the original query.
http://wiki.apache.org/solr/SolrFacetingOverview
Tim
On Wed, Oct
Sounds correct - you probably want to use an invariant parameter in
solrconfig.xml, something along the lines of:
<lst name="invariants"><str name="fq">docset:0</str></lst>
Where docset is the new field you add to the schema to determine which set
a document belongs to. You might also consider adding a
if another solution exists.
Facets seem to be a good solution.
Bruno
On 23/10/2013 17:03, Timothy Potter wrote:
Trying to add some information about core.properties and auto-discovery in
Solr in Action and am at a loss for what to tell the reader is the purpose
of this feature.
Can anyone point me to any background information about core
auto-discovery? I'm not interested in the technical implementation
Exactly the insight I was looking for! Thanks Yonik ;-)
On Fri, Sep 20, 2013 at 10:37 AM, Yonik Seeley yo...@lucidworks.com wrote:
On Fri, Sep 20, 2013 at 11:56 AM, Timothy Potter thelabd...@gmail.com
wrote:
I've been thinking about this one too and was curious about using the Solr
Entity support in the DIH to do the import from one DC to another (for the
lost docs). In my mind, one configures the DIH to use the
SolrEntityProcessor with a query to capture the docs in the DC that stayed
online, most
for TermsComponent
and the second spell check as an example.
Best
Erick
On Tue, Jul 30, 2013 at 2:07 PM, Timothy Potter thelabd...@gmail.com wrote:
Going over the comments in SOLR-1316, I seemed to have lost the
forest for the trees. What is the benefit of using the spellcheck
based suggester over something like the terms component to get
suggestions as the user types?
Maybe it is faster because it builds the in-memory data structure on
1) Depends on your document routing strategy. It sounds like you could
be using the compositeId strategy and if so, there's still a hash
range assigned to each shard, so you can split the big shards into
smaller shards.
2) Since you're replicating in 2 places, when one of your servers
crash,
Quick behavior check on whether Solr continues to process queries and
index documents during a collection reload?
For example, after I upload new config documents to Zookeeper, I issue
a reload command using the collections API. Of course this propagates
a core reload across all nodes in the
Log messages?
On Wed, Jul 24, 2013 at 1:37 AM, Neil Prosser neil.pros...@gmail.com wrote:
Great. Thanks for your suggestions. I'll go through them and see what I can
come up with to try and tame my GC pauses. I'll also make sure I upgrade to
4.4 before I start. Then at least I know I've got
Apologize if this is not the correct way to request mailing list admin
support but it's pretty clear that wired...@yahoo.com is spamming this
list and should be booted out.
Tim
. Both servers were running 4.3.1. I've since upgraded to 4.4.0.
If you need any more information or want me to do any filtering let me know.
On 24 July 2013 15:50, Timothy Potter thelabd...@gmail.com wrote:
Log messages?
On Wed, Jul 24, 2013 at 1:37 AM, Neil Prosser neil.pros...@gmail.com
On Mon, Jul 22, 2013 at 12:52 PM, Timothy Potter [via Lucene]
ml-node+s472066n4079518...@n3.nabble.com wrote:
There is but I couldn't get it to work in my environment on Jetty, see:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201306.mbox/%3CCAJt9Wnib+p_woYODtrSPhF==v8Vx
Curious what the use case is for this? Zookeeper is not an HTTP
service so loading it in Jetty by itself doesn't really make sense. I
also think this creates more work for the Solr team especially since
setting up a production ensemble shouldn't take more than a few
minutes once you have the nodes
Hi Matt,
This feature is commonly known as deep paging and Lucene and Solr have
issues with it ... take a look at
http://solr.pl/en/2011/07/18/deep-paging-problem/ as a potential
starting point using filters to bucketize a result set into sets of
sub result sets.
Cheers,
Tim
On Tue, Jul 23,
I saw something similar and used an absolute path to my JAR file in
solrconfig.xml vs. a relative path and it resolved the issue for me.
Not elegant but worth trying, at least to rule that out.
Tim
On Mon, Jul 22, 2013 at 7:51 AM, Abeygunawardena, Niran
niran.abeygunaward...@proquest.co.uk
A couple of things I've learned along the way ...
I had a similar architecture where we used fairly low numbers for
auto-commits with openSearcher=false. This keeps the tlog to a
reasonable size. You'll need something on the client side to send in
the hard commit request to open a new searcher
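The auto-commit side of that setup lives in solrconfig.xml; a sketch with illustrative values:

```xml
<autoCommit>
  <!-- hard commit every 15s to keep the tlog small -->
  <maxTime>15000</maxTime>
  <!-- don't open a new searcher on these commits; the client does that -->
  <openSearcher>false</openSearcher>
</autoCommit>
```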
Why was it down? e.g. did it OOM? If so, the recommended approach is
kill the process on OOM vs. leaving it in the cluster in a zombie
state. I had similar issues when my nodes OOM'd is why I ask. That
said, you can get the /clusterstate.json which contains Zk's status of
a node using a request
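One way to pull that state is the admin ZooKeeper servlet the cloud UI uses (a sketch; the host is a placeholder and the endpoint may differ across versions):

```shell
curl "http://localhost:8983/solr/zookeeper?detail=true&path=/clusterstate.json"
```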
There is but I couldn't get it to work in my environment on Jetty, see:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201306.mbox/%3CCAJt9Wnib+p_woYODtrSPhF==v8Vx==mDBd_qH=x_knbw-bn...@mail.gmail.com%3E
Let me know if you have any better luck. I had to resort to something
hacky but
This is not a problem per se, just want to verify that we're not able
to specify which server shard splits are created as of 4.3.1? From
what I've seen, the new cores for the sub-shards are created on the
leader of the shard being split.
Of course it's easy enough to migrate the new sub-shards to
extra disks working in parallel during the split
(icing on the cake of course).
Cheers,
Tim
On Wed, Jul 17, 2013 at 10:40 AM, Yonik Seeley yo...@lucidworks.com wrote:
On Wed, Jul 17, 2013 at 12:26 PM, Timothy Potter thelabd...@gmail.com wrote:
This is not a problem per se, just want to verify
Recently upgraded to 4.3.1 but this problem has persisted for a while now ...
I'm using the following configuration when starting Jetty:
-XX:OnOutOfMemoryError=/home/solr/oom_killer.sh 83 %p
If an OOM is triggered during Solr web app initialization (such as by
me lowering -Xmx to a value that
short of not using Jetty?
That's a pain, I'd just written a nice OOM handler too!
On 26 June 2013 20:37, Timothy Potter thelabd...@gmail.com wrote:
A little more to this ...
Just on chance this was a weird Jetty issue or something, I tried with
the latest 9 and the problem still
I'm seeing this message in the logs and it seems weird to me that the
instance needs to wait to see more replicas.
2013-06-24 18:12:40,408 [coreLoadExecutor-4-thread-1] INFO
solr.cloud.ShardLeaderElectionContext - Waiting until we see more
replicas up: total=2 found=1 timeoutin=139368
Can