a lot.
But seriously, you have a second chance here.
This mostly concerns SolrCloud. That’s why I recommend standalone mode. But
key people know what to do. I know it will happen - but their lives will be
easier if you help.
Lol.
- Mark
On Sat, Nov 30, 2019 at 9:25 PM Mark Miller wrote:
>
. But you
don’t need to.
Mark
On Sat, Nov 30, 2019 at 7:05 PM Dave wrote:
> I’m young here I think, not even 40 and only been using solr since like
> 2008 or so, so like 1.4 give or take. But I know a really good therapist if
> you want to talk about it.
>
> > On Nov 30, 2019, at 6:
Now I have sacrificed to give you a new chance. A little for my community.
It was my community. But it was mostly for me. The developer I started as
would kick my ass today. Circumstances and luck have brought money to our
project. And it has corrupted our process, our community, and our code.
In
It’s going to haunt me if I don’t bring up Hossman. I don’t feel I have to
because who doesn’t know him.
He is a treasure that doesn’t spend much time on SolrCloud and has checked
out of leadership in large part for reasons I won’t argue with.
Why doesn’t he do much with SolrCloud in a real
I’m including this response to a private email because it’s not something
I’ve brought up and I also think it’s a critical note:
“Yes. That is our biggest advantage. Being Apache. Almost no one seems to
be employed to help other contributors get their work in at the right
level, and all the money
duced at
> some time. Notwithstanding, I do think that the project needs to be more
> open with community commits. The community and open-sourceness of Solr is
> what I used to love over that of Elasticsearch.
>
> Anyways, keep rocking! You have already left your footprints into the
The people I have identified that I have the most faith in to lead the
fixing of Solr are Ishan, Noble and David. I encourage you all to look at
and follow and join in their leadership.
You can do this.
Mark
--
- Mark
http://about.me/markrmiller
Now one company thinks I’m after them because they were the main source of
the jokes.
Companies is not a typo.
If you are using Solr to make or save tons of money or run your business
and you employ developers, please include yourself in this list.
You are taking and in my opinion Solr is
y I found nothing in solr cloud worth changing from standalone
> for, and just added more complications, more servers, and required becoming
> an expert/knowledgeable in ZooKeeper. I'd rather spend my time developing
> than becoming a systems administrator
>
> On Wed, Nov 27, 2019 at 3
This is your cue to come and make your jokes with your name attached. I’m
sure the Solr users will appreciate them more than I do. I can’t laugh at
this situation because I take production code seriously.
--
- Mark
http://about.me/markrmiller
And if you are a developer, enjoy that Gradle build! It was the highlight
of my year.
On Wed, Nov 27, 2019 at 10:00 AM Mark Miller wrote:
> If you have a SolrCloud installation that is somehow working for you,
> personally I would never upgrade. The software is getting progressively
to work on it in any real fashion since 2012. I’m
sorry I couldn’t help improve the situation for you.
Take it for what it’s worth. To some, not much I’m sure.
Mark Miller
--
- Mark
http://about.me/markrmiller
d are discussing it.
>
>Regards,
>Aled
>
>On Tue, Nov 12, 2019, 1:25 AM Luke Miller wrote:
Hi,
I just noticed that since Solr 8.2 the Apache Solr Reference Guide is not
available anymore as PDF.
Is there a way to perform a full-text search using the HTML manual? E.g. I'd
like to find every hit for "luceneMatchVersion".
* Using the integrated "Page title lookup." does
Hook up a profiler to the overseer and see what it's doing, file a JIRA and
note the hotspots or what methods appear to be hanging out.
On Tue, Sep 3, 2019 at 1:15 PM Andrew Kettmann
wrote:
>
> > You’re going to want to start by having more than 3gb for memory in my
> opinion but the rest of
on legacy cloud mode.
>
> I think I can figure out where the data is being stored for an existing
> (empty) collection, shut that down, swap in the new files, and reload.
>
> But I’m wondering if that’s really the best (or even sane) approach.
>
> Thanks,
>
> — Ken
&
You create MiniSolrCloudCluster with a base directory and then each Jetty
instance created gets a SolrHome in a subfolder called node{i}. So if
legacyCloud=true you can just preconfigure a core and index under the right
node{i} subfolder. legacyCloud=true should not even exist anymore though,
so
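A sketch of the resulting on-disk layout under the assumptions above (only the node{i} convention comes from MiniSolrCloudCluster itself; the core name is illustrative):

```
baseDir/
  node1/
    solr.xml
    mycore/
      core.properties     # marks mycore as a discoverable core
      data/index/         # pre-built index placed here before startup
  node2/
    ...
```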
Yeah, basically ConcurrentUpdateSolrClient is a shortcut to getting multi
threaded bulk API updates out of the single threaded, single update API.
The downsides to this are: It is not cloud aware - you have to point it at
a server, you have to add special code to see if there are any errors, you
A soft commit does not control merging. The IndexWriter controls merging
and hard commits go through the IndexWriter. A soft commit tells Solr to
try and open a new SolrIndexSearcher with the latest view of the index. It
does this with a mix of using the on disk index and talking to the
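The hard/soft commit split described above is usually configured in solrconfig.xml; a minimal sketch (the intervals are illustrative, not recommendations):

```xml
<!-- hard commit: goes through the IndexWriter, flushes the tlog -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<!-- soft commit: only tries to open a new SolrIndexSearcher -->
<autoSoftCommit>
  <maxTime>5000</maxTime>
</autoSoftCommit>
```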
It's been a while since I've been in this deeply, but it should be
something like:
sendUpdateOnlyToShardLeaders will select the leaders for each shard as the
load balanced targets for update. The updates may not go to the *right*
leader, but only the leaders will be chosen, followers (non leader
Yeah, the project should never use built-in serialization. I'd file a JIRA
issue. We should remove this when we can.
- Mark
On Sun, May 6, 2018 at 9:39 PM Will Currie wrote:
> Premise: During an upgrade I should be able to run a 7.3 pull replica
> against a 7.2 tlog leader.
mand in my shell script to
copy this folder over and it works just fine now.
So again thanks to all of you for your help.
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent
I understand that this has to be done on the command line, but I don't know
where to put this structure or what it should look like. Can you please be
more specific in this answer? I have only been working with Solr for about six
months.
~~~
William Kevin Miller
ECS
This is my first time to try using the core admin API. How do I go about
creating the directory structure?
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
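For reference, the Core Admin CREATE call expects an instance directory roughly like this (names are illustrative):

```
mycore/
  core.properties          # may be empty; marks the directory as a core
  conf/
    solrconfig.xml
    managed-schema
  data/                    # created by Solr if missing
```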
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent
dataDir=data
I even tried to use the UNLOAD action to remove a core and got the same type of
error as the -bash line above.
I have tried searching online for an answer and have found nothing so far. Any
ideas why this error is occurring?
~~~
William Kevin Miller
eter. How can I go
about doing this?
I am using Solr 6.5.1 and it is running on a Linux server using the Apache
Tomcat web server.
~~~
William Kevin Miller
[ecsLogo]
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
Thank you all for your responses. I finally got it straightened out. I had
forgotten to change my url from http to https. Dumb mistake on my part.
Consider this issue closed.
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
-Original Message
I am not using Zookeeper. Is the urlScheme also used outside of Zookeeper?
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
-Original Message-
From: esther.quan...@lucidworks.com [mailto:esther.quan...@lucidworks.com]
Sent: Wednesday, July 12
erver need to be on a secure server in order to enable SSL.
Additional Info:
Running Solr 6.5.1 on Linux OS
~~~
William Kevin Miller
[ecsLogo]
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
I used the "copyField" and created a text version of the field that I wanted to
search on and am now getting the results I was looking for. Thanks for all
your help.
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
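The copyField fix described above looks roughly like this in the schema (field names are illustrative):

```xml
<!-- keep the exact string field, search on the analyzed text copy -->
<field name="title_s" type="string"       indexed="true" stored="true"/>
<field name="title_t" type="text_general" indexed="true" stored="false"/>
<copyField source="title_s" dest="title_t"/>
```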
-Original Message
I do have my fields as strings not text, so I am going to play around with
using the "text". If I continue to have problems, I will post the additional
information you are requesting.
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
---
I forgot to mention that I am using Solr 6.5.1 and I am indexing XML files. My
Solr server is running on a Linux OS.
~~~
William Kevin Miller
[ecsLogo]
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
From: Miller, William K - Norman, OK - Contractor
[mailto:william.k.mil
wer Paddle Arm
~~~
William Kevin Miller
[ecsLogo]
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
Please consider this issue closed as we are looking at moving our xml files to
the solr server for now.
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
-Original Message-
From: Miller, William K - Norman, OK - Contractor
Sent: Monday, June 12
Thank you for your response. I will look into this link. Also, sorry I did
not specify the file type. I am working with XML files.
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
-Original Message-
From: Alexandre Rafalovitch
for a
"foreach" attribute. Is there an Entity Processor that can be used to get the
list of files from an https source or am I going to have to use solrj or create
a custom entity processor?
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405
Miller
[ecsLogo]
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
I figured out why it was not re-indexing without changing the timestamp even on
the full import. In my DIH I had a parameter in my top level entity that was
checking for the last indexed time.
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
ble to get the documents to index. You mentioned that the delta import would
need the timestamp to change to index the documents again, but does the full
import need this change as well?
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
-Original Message
configuration in my
dataConfig file for the DIH, but it fails to index the file. If I make a
change to the xml file that is being indexed and re-index it works.
I don't understand why this is happening. Any help with this will be
appreciated.
~~~
William Kevin Miller
[ecsLogo
April 24, 2017 5:55:29 PM EDT, Daniel Miller <dmil...@amfes.com> wrote:
I'm running Solr 6.4.2 to index my mail server (Dovecot). Searching is
great - but periodically I have Solr errors. Previously, when an error
would occur Solr would terminate. I now have it running as a systemd
service so it would auto-restart - but it seems like that doesn't solve it.
Some
I have 3 different SolrCloud clusters that share a single set (3) of zookeeper
servers. Each SolrCloud cluster has its own set of collections stored on
Zookeeper. Twice in the past week all 3 clusters have had about a 1 minute
period where all requests stopped coming in. Solr recovers and
On 3/4/2017 12:00 PM, Shawn Heisey wrote:
On 3/3/2017 11:28 PM, Daniel Miller wrote:
What I think I want is create a single collection, with a
shard/replica/core per user. Or maybe I'm wanting a separate
collection per user - which would again mean a single
shard/replica/core. But it seems
On 3/2/2017 5:14 PM, Shawn Heisey wrote:
On 3/2/2017 2:58 PM, Daniel Miller wrote:
I'm asking for some guidance on how I might
optimize Solr.
I use Solr for work. I use Dovecot for personal domains. I have not
used them together. I probably should -- my personal mailbox is many
gigabytes
One of the many features of the Dovecot IMAP server is Solr support.
This obviously provides full-text-searching of stored mails - and it
works great. But...the focus of the Dovecot team and mailing list is
Dovecot configuration. I'm asking for some guidance on how I might
optimize Solr.
That already happens. The ZK client itself will reconnect when it can and
trigger everything to be setup like when the cluster first starts up,
including a live node and leader election, etc.
You may have hit a bug or something else missing from this conversation,
but reconnecting after losing
That is probably partly because of hdfs cache key unmapping. I think I
improved that in some issue at some point.
We really want to wait by default for a long time though - even 10 minutes
or more. If you have tons of SolrCores, each of them has to be torn down,
each of them might commit on
Look at the Overseer host and see if there are any relevant logs for
autoAddReplicas.
- Mark
On Mon, Oct 24, 2016 at 3:01 PM Chetas Joshi wrote:
> Hello,
>
> I have the following configuration for the Solr cloud and a Solr collection
> This is Solr on HDFS and Solr
Could you file a JIRA issue so that this report does not get lost?
- Mark
On Tue, Nov 15, 2016 at 10:49 AM Solr User wrote:
> For those interested, I ended up bundling the customized ACL provider with
> the solr.war. I could not stomach looking at the stack trace in the
Thank you!!
Okay, I think I have that all squared away.
*SpanLastQuery*:
I need something like SpanFirstQuery, except that it would be
SpanLastQuery. Is there a way to get that to work?
*Proximity weighting getting ignored*:
I also need to get span term boosting working.
Here's my query:
"one
Awesome, 0 pre and 1 post works!
I replaced pre with Integer.MAX_VALUE and post with Integer.MAX_VALUE - 5
and it works!
If I replace post with Integer.MAX_VALUE - 4 (or -3, -2, -1, -0), it fails.
But, if it's -(5+), it appears to work.
Thank you guys for suffering through my inexperience with
I saw the second post--the first post was new to me.
We plan on connecting with those people later on, but right now, I'm trying
to write a stop-gap dtSearch compiler until we can at least secure the
funding we need to employ their help.
Right now, I have a very functional query parser, with
would allow you at least to do
> something like:
>
>
>
> termA but not if zyx or yyy appears X words before or Y words after
>
>
>
>
>
>
>
> *From:* Brandon Miller [mailto:computerengineer.bran...@gmail.com]
> *Sent:* Monday, June 20, 2016 2:36 PM
>
Hello, all! I'm a BloombergBNA employee and need to obtain/write a
dtSearch parser for solr (and probably a bunch of other things a little
later).
I've looked at the available parsers and thought that the surround parser
may do the trick, but it apparently doesn't like nested N or W subqueries.
I
Only INFO level, so I suspect not bad...
If that Overseer closed, another node should have picked up where it left
off. See that in another log?
Generally an Overseer close means a node or cluster restart.
This can cause a lot of DOWN state publishing. If it's a cluster restart, a
lot of those
You get this when the Overseer is either bogged down or not processing
events generally.
The Overseer is way, way faster at processing events in 5x.
If you search your logs for .Overseer you can see what it's doing. Either
nothing at the time, or bogged down processing state updates probably.
Two of them are sub requests. They have params isShard=true and
distrib=false. The top level user query will not have distrib or isShard
because they default the other way.
- Mark
On Mon, Jan 11, 2016 at 6:30 AM Syed Mudasseer
wrote:
> Hi,
> I have solr configured on
Not sure I'm onboard with the first proposed solution, but yes, I'd open a
JIRA issue to discuss.
- Mark
On Mon, Jan 11, 2016 at 4:01 AM Konstantin Hollerith
wrote:
> Hi,
>
> I'm using SLF4J MDC to log additional Information in my WebApp. Some of my
> MDC-Parameters even
dataDir and tlog dir cannot be changed with a core reload.
- Mark
On Sat, Jan 9, 2016 at 1:20 PM Erick Erickson
wrote:
> Please show us exactly what you did. and exactly
> what you saw to say that "does not seem to work".
>
> Best,
> Erick
>
> On Fri, Jan 8, 2016 at
It looks like he has waitSearcher set to false, so all the time should be in
the commit itself. That amount of time does sound odd.
I would certainly change those commit settings though. I would not use
maxDocs, that is an ugly way to control this. And one second is much too
aggressive as Erick says.
If you
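Swapping maxDocs for a time-based trigger, as suggested, would look something like this in solrconfig.xml (the 15-second value is illustrative):

```xml
<autoCommit>
  <!-- time-based instead of <maxDocs>; far less aggressive than 1s -->
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
```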
.@northbaysolutions.net> wrote:
Hi daniel
You need to update your config/schema file under the path like
'...\solr-dir\server\solr'. When that is done you can update your
index path in solrconfig.xml.
I hope that helps.
Best,
Zahid
On Thu, Nov 19, 2015 at 1:58 PM, Daniel Miller <dmil
x.
You may need to remove use of 3.x classes that were deprecated in 4.x
https://cwiki.apache.org/confluence/display/solr/Major+Changes+from+Solr+4+to+Solr+5
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
18. nov. 2015 kl. 10.10 skrev Daniel Miller <dmil...@amfes.
Hi!
I'm a very inexperienced user with Solr. I've been using Solr to
provide indexes for my Dovecot IMAP server. Using version 3.x, and
later 4.x, I have been able to do so without too much of a challenge.
However, version 5.x has certainly changed quite a bit and I'm very
uncertain how
If you see "WARNING: too many searchers on deck" or something like that in
the logs, that could cause this behavior and would indicate you are opening
searchers faster than Solr can keep up.
- Mark
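The limit behind that warning is maxWarmingSearchers in solrconfig.xml; raising it usually just masks commits arriving faster than searchers can warm:

```xml
<maxWarmingSearchers>2</maxWarmingSearchers>
```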
On Tue, Nov 17, 2015 at 2:05 PM Erick Erickson
wrote:
> That's what was
You can pass arbitrary params with Solrj. The API usage is just a little
more arcane.
- Mark
On Wed, Nov 11, 2015 at 11:33 PM Sathyakumar Seshachalam <
sathyakumar_seshacha...@trimble.com> wrote:
> I intend to use SolrJ. I only saw the below overloaded commit method in
> documentation
openSearcher is a valid param for a commit, whatever API you are using
to issue it.
- Mark
On Wed, Nov 11, 2015 at 12:32 PM Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Does waitSearcher=false work like you need?
>
> On Wed, Nov 11, 2015 at 1:34 PM, Sathyakumar Seshachalam <
>
Your Lucene and Solr versions must match.
On Thu, Oct 8, 2015 at 4:02 PM Steve wrote:
> I've loaded the Films data into a 4 node cluster. Indexing went well, but
> when I issue a query, I get this:
>
> "error": {
> "msg":
ill go out
> of memory. For memory mapped index files the remaining 24G or what is
> available off of it should be available. Looking at the lsof output the
> memory mapped files were around 10G.
>
> Thanks.
>
>
> On 10/5/15 5:41 PM, Mark Miller wrote:
> > I'd make tw
thread but not to JVM.
>
> On 10/6/15 1:07 PM, Mark Miller wrote:
> > That amount of RAM can easily be eaten up depending on your sorting,
> > faceting, data.
> >
> > Do you have gc logging enabled? That should describe what is happening
> with
> > the heap.
&g
If it's always when using https as in your examples, perhaps it's SOLR-5776.
- mark
On Mon, Oct 5, 2015 at 10:36 AM Markus Jelsma
wrote:
> Hmmm, I tried that just now but I sometimes get tons of Connection reset
> errors. The tests then end with "There are still
Best tool for this job really depends on your needs, but one option:
I have a dev tool for Solr log analysis:
https://github.com/markrmiller/SolrLogReader
If you use the -o option, it will spill out just the queries to a file with
qtimes.
- Mark
On Wed, Sep 23, 2015 at 8:16 PM Tarala, Magesh
wrote:
> Hi - no, I don't think so, it doesn't happen all the time, but too
> frequently. The machine running the tests has a high powered CPU, plenty of
> cores and RAM.
>
> Markus
>
>
>
> -Original message-
> > From:Mark Miller <markrmil...@gmail.co
I'd make two guesses:
Looks like you are using JRockit? I don't think that is common or well
tested at this point.
There are a billion or so bug fixes from 4.6.1 to 5.3.2. Given the pace of
SolrCloud, you are dealing with something fairly ancient and so it will be
harder to find help with older
On Wed, Sep 30, 2015 at 10:36 AM Steve Davids wrote:
> Our project built a custom "admin" webapp that we use for various O
> activities so I went ahead and added the ability to upload a Zip
> distribution which then uses SolrJ to forward the extracted contents to ZK,
> this
Have you used jconsole or visualvm to see what it is actually hanging on to
there? Perhaps it is lock files that are not cleaned up or something else?
You might try: find ~/.ivy2 -name "*.lck" -type f -exec rm {} \;
- Mark
On Wed, Sep 16, 2015 at 9:50 AM Susheel Kumar
apa...@elyograg.org> wrote:
> On 9/16/2015 9:32 AM, Mark Miller wrote:
> > Have you used jconsole or visualvm to see what it is actually hanging on
> to
> > there? Perhaps it is lock files that are not cleaned up or something
> else?
> >
> > You might try: find
outside company network but
> inside it stucks. let me try to see if jconsole can show something
> meaningful.
>
> Thanks,
> Susheel
>
> On Wed, Sep 16, 2015 at 12:17 PM, Shawn Heisey <apa...@elyograg.org>
> wrote:
>
> > On 9/16/2015 9:32 AM, Mark Miller wrot
Perhaps there is something preventing clean shutdown. Shutdown makes a best
effort attempt to publish DOWN for all the local cores.
Otherwise, yes, it's a little bit annoying, but full state is a combination
of the state entry and whether the live node for that replica exists or not.
- Mark
On
I think there are some better classpath isolation options in the works for
Hadoop. As it is, there is some harmonization that has to be done depending
on versions used, and it can get tricky.
- Mark
On Wed, Jun 17, 2015 at 9:52 AM Erick Erickson erickerick...@gmail.com
wrote:
For sure there are
I didn't really follow this issue - what was the motivation for the rewrite?
Is it entirely under: new code should be quite a bit easier to work on for
programmer
types or are there other reasons as well?
- Mark
On Mon, Jun 15, 2015 at 10:40 AM Erick Erickson erickerick...@gmail.com
wrote:
that I'll release at some
point soon that gives us a collections version of the core admin
pane. I'd love to add HDFS support to the UI if there were APIs worth
exposing (I haven't dug into HDFS support yet).
Make sense?
Upayavira
On Mon, Jun 15, 2015, at 07:49 AM, Mark Miller wrote:
I
SolrCloud does not really support any form of rollback.
On Mon, Jun 15, 2015 at 5:05 PM Aurélien MAZOYER
aurelien.mazo...@francelabs.com wrote:
Hi all,
Is DeletionPolicy customization still available in Solr Cloud? Is there
a way to rollback to a previous commit point in Solr Cloud thanks
File a JIRA issue please. That OOM Exception is getting wrapped in a
RuntimeException it looks. Bug.
- Mark
On Wed, Jun 3, 2015 at 2:20 AM Clemens Wyss DEV clemens...@mysign.ch
wrote:
Context: Lucene 5.1, Java 8 on debian. 24G of RAM whereof 16G available
for Solr.
I am seeing the following
We will have to find a way to deal with this long term. Browsing the code
I can see a variety of places where problem exception handling has been
introduced since this all was fixed.
- Mark
On Wed, Jun 3, 2015 at 8:19 AM Mark Miller markrmil...@gmail.com wrote:
File a JIRA issue please
, NSSM should have no
problem starting up zookeeper.
Will Miller
Development Manager, eCommerce Services | Online Technology
462 Seventh Avenue, New York, NY, 10018
Office: 212.502.9323 | Cell: 317.653.0614
wmil...@fbbrands.com | www.fbbrands.com
From
A bug fix version difference probably won't matter. It's best to use the
same version everyone else uses and the one our tests use, but it's very
likely 3.4.5 will work without a hitch.
- Mark
On Tue, May 5, 2015 at 9:09 AM shacky shack...@gmail.com wrote:
Hi.
I read on
If copies of the index are not eventually cleaned up, I'd file a JIRA to
address the issue. Those directories should be removed over time. At times
there will have to be a couple around at the same time and others may take
a while to clean up.
- Mark
On Tue, Apr 28, 2015 at 3:27 AM Ramkumar R.
, not the first level elements.
Could you create a Jira?
On Thu, Apr 16, 2015 at 2:38 PM, Will Miller wmil...@fbbrands.com wrote:
I am seeing some odd behavior with range facets across multiple
shards. When querying each node directly with distrib=false the facet
returned matches what is expected
Hmm...can you file a JIRA issue with this info?
- Mark
On Fri, Mar 27, 2015 at 6:09 PM Joseph Obernberger j...@lovehorsepower.com
wrote:
I just started up a two shard cluster on two machines using HDFS. When I
started to index documents, the log shows errors like this. They repeat
when I
Doesn't ConcurrentUpdateSolrServer take an HttpClient in one of its
constructors?
- Mark
On Sun, Mar 22, 2015 at 3:40 PM Ramkumar R. Aiyengar
andyetitmo...@gmail.com wrote:
Not a direct answer, but Anshum just created this..
https://issues.apache.org/jira/browse/SOLR-7275
On 20 Mar 2015
Interesting bug.
First there is the already closed transaction log. That by itself deserves
a look. I'm not even positive we should be replaying the log when
reconnecting from a ZK disconnect, but even if we do, this should never
happen.
Beyond that there seems to be some race. Because of the log
If you google "replication can cause index corruption" there are two JIRA issues
that are the most likely cause of corruption in a solrcloud env.
- Mark
On Mar 5, 2015, at 2:20 PM, Garth Grimm garthgr...@averyranchconsulting.com
wrote:
For updates, the document will always get routed to
I’ll be working on this at some point:
https://issues.apache.org/jira/browse/SOLR-6237
- Mark
http://about.me/markrmiller
On Feb 25, 2015, at 2:12 AM, longsan longsan...@sina.com wrote:
We used HDFS as our Solr index storage and we really have a heavy update
load. We have run into many problems
Perhaps try quotes around the url you are providing to curl. It's not
complaining about the HTTP method - Solr has historically always taken
simple GETs - for good or bad, you pretty much only POST
documents / updates.
It's saying the name param is required and not being found and since
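A sketch of why the quotes matter: unquoted, the shell splits the URL at '&' and backgrounds the command, so the name parameter never reaches Solr (the host and core name here are illustrative):

```shell
# Build the full URL first; the single quotes keep '&' out of the shell's hands.
url='http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore'
# curl "$url"    # quoted expansion: the whole query string reaches Solr
echo "$url"
```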
What is your replication factor and doc size?
Replication can affect performance a fair amount more than it should currently.
For the number of nodes, that doesn’t sound like it matches what I’ve seen
unless those are huge documents or you have some slow analyzer in the chain or
something.
Yes, after 45 seconds a replica should take over as leader. It should
likely explain in the logs of the replica that should be taking over why
this is not happening.
- Mark
On Wed Jan 28 2015 at 2:52:32 PM Joshi, Shital shital.jo...@gs.com wrote:
When leader reaches 99% physical memory on the
Sorry, there is no great workaround. You might try raising the max idle time
for your container - perhaps that makes it less frequent.
- Mark
On Tue Jan 20 2015 at 1:56:54 PM Nishanth S nishanth.2...@gmail.com wrote:
Thank you Mike. Sure enough, we are running into the same issue you
bq. Is this the correct approach ?
It works, but it might not be ideal. Recent versions of ZooKeeper have an
alternate config for this max limit though, and it is preferable to use
that.
See maxSessionTimeout in
http://zookeeper.apache.org/doc/r3.3.1/zookeeperAdmin.html
- Mark
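A zoo.cfg sketch of the preferred server-side setting (values are illustrative):

```
tickTime=2000
# ceiling on what clients may negotiate; preferable to per-client hacks
maxSessionTimeout=90000
```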
On Mon Jan 26
I'd have to do some digging. Hossman might know offhand. You might just
want to use @SupressSSL on the tests :)
- Mark
On Mon Jan 12 2015 at 8:45:11 AM Markus Jelsma markus.jel...@openindex.io
wrote:
Hi - in a small Maven project depending on Solr 4.10.3, running unit tests
that extend