Very interested in what you find out with your benchmarking, and whether it
bears out what I've experienced.
Does anyone know when 4.10 is likely to be released?
I'm benchmarking this right now so I'll share some numbers soon.
On Mon, Jul 28, 2014 at 12:45 AM, Erick Erickson wrote:
bq: Whoa! That's awesome!
And scary.
Ian: Thanks a _lot_ for trying this out and reporting back.
Also, let me say that this was a nice writeup; I wish more people would post
as thorough a problem statement!
Best,
Erick
On Sat, Jul 26, 2014 at 5:08 AM, Shalin Shekhar Mangar <shalinman...@> wrote:
Whoa! That's awesome!
On Fri, Jul 25, 2014 at 8:03 PM, ian wrote:
I've built and installed the latest snapshot of Solr 4.10 using the same
SolrCloud configuration and that gave me a tenfold increase in throughput,
so it certainly looks like SOLR-6136 was the issue that was causing my slow
insert rate/high latency with shard routing and replicas. Thanks for your help.
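
For anyone wanting to reproduce this kind of before/after comparison, a
minimal SolrJ bulk-load benchmark might look like the sketch below. The
ZooKeeper address, collection name, document count and batch size are all
placeholders, not anything from my actual setup:

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkLoadBench {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper address and collection name.
        CloudSolrServer server = new CloudSolrServer("zkhost1:2181");
        server.setDefaultCollection("collection1");

        int total = 100000;
        int batch = 1000;
        long start = System.nanoTime();
        List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
        for (int i = 0; i < total; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-" + i);
            docs.add(doc);
            if (docs.size() == batch) {
                server.add(docs);  // one update request per batch, not per document
                docs.clear();
            }
        }
        if (!docs.isEmpty()) {
            server.add(docs);
        }
        server.commit();
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d docs in %.1fs = %.0f docs/sec%n",
                total, secs, total / secs);
        server.shutdown();
    }
}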
Hi Tim
Thanks for the info about the bug. I've just looked at the CPU usage for
the leader in JConsole while my bulk load process was running, inserting
documents into my SolrCloud cluster. Is that what you meant by profiling and
looking for hotspots? I find the CPU usage goes up quite a lot when ...
Hi Ian,
What's the CPU doing on the leader? Have you tried attaching a
profiler to the leader while it's running and then seeing if any
hotspots show up? Not sure if this is related, but we recently fixed an
issue in the area of leader forwarding to replicas that used too many
CPU cycles inefficiently.
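
If attaching a full profiler isn't convenient, a crude first pass is to
rank threads by consumed CPU time with ThreadMXBean. A rough sketch is
below; as written it only inspects its own JVM, so for a running Solr node
you'd do the same over a remote JMX connection, and unlike a sampling
profiler it won't show stack-level hotspots, only which threads are busy:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class HotThreads {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isThreadCpuTimeSupported()) {
            System.err.println("Thread CPU time not supported on this JVM");
            return;
        }
        for (long id : mx.getAllThreadIds()) {
            ThreadInfo info = mx.getThreadInfo(id);
            long cpuNanos = mx.getThreadCpuTime(id);  // -1 if unavailable
            if (info != null && cpuNanos > 0) {
                System.out.printf("%15d ns  %s%n",
                        cpuNanos, info.getThreadName());
            }
        }
    }
}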
That's useful to know, thanks very much. I'll look into using
CloudSolrServer, although I'm using SolrNet at present.
That would reduce some of the overhead - but not the extra 200ms I'm getting
for forwarding to the replica when the replica is switched on.
That does seem like very high overhead.
You can use CloudSolrServer (if you're using Java) which will route
documents correctly to the leader of the appropriate shard.
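
For reference, basic usage looks something like this (SolrJ 4.x; the
ZooKeeper addresses and collection name are placeholders):

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CloudIndexer {
    public static void main(String[] args) throws Exception {
        // Connect via ZooKeeper rather than a fixed Solr node.
        CloudSolrServer server =
                new CloudSolrServer("zkhost1:2181,zkhost2:2181");
        server.setDefaultCollection("collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        doc.addField("title", "example");

        // Recent 4.x SolrJ hashes the id and sends the update straight to
        // the leader of the owning shard, skipping the forwarding hop.
        server.add(doc);
        server.commit();
        server.shutdown();
    }
}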
On Tue, Jul 15, 2014 at 3:04 PM, ian wrote:
Hi Mark
Thanks for replying to my post. Would you know whether my findings are
consistent with what other people see when using SolrCloud?
One thing I want to investigate is whether I can route my updates to the
correct shard in the first place, by having my client use the same hashing
logic as Solr.
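
For what it's worth, my understanding is that the default compositeId
router hashes the uniqueKey with MurmurHash3 (32-bit, seed 0) and picks the
shard whose hash range covers the result - roughly the sketch below. The
shard ranges shown are illustrative; the real ones come from the cluster
state in ZooKeeper:

import org.apache.solr.common.util.Hash;

public class ShardForId {
    public static void main(String[] args) {
        String id = "doc-1";
        // Same hash CompositeIdRouter uses for a plain id (no "!" route key).
        int hash = Hash.murmurhash3_x86_32(id, 0, id.length(), 0);

        // Illustrative ranges for a 2-shard collection; read the real ones
        // from clusterstate.json. Here shard1 covers 80000000-ffffffff
        // (negative ints) and shard2 covers 0-7fffffff.
        String shard = (hash < 0) ? "shard1" : "shard2";
        System.out.printf("id=%s hash=%08x -> %s%n", id, hash, shard);
    }
}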
Updates are currently done locally on the leader before being sent
concurrently to all replicas - so on a single update you can expect 2x the
indexing work just from that: with one leader and one replica, every
document is indexed twice.
As for your results, it sounds like perhaps there is more overhead than we
would like in the code that sends to replicas and forwards updates? Someone
would ...