be 21 even), and from a single failure you drop to an
even number - then there is the danger of NOT getting quorum.
So ... I can only assume that there is a mechanism in place
inside zk to guarantee this cannot happen, right?
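For what it's worth, the quorum rule can be put into numbers: the majority is computed against the *configured* ensemble size, not the number of currently-live servers, so dropping to an even number of survivors is fine as long as more than half the ensemble is still up. A minimal sketch (not ZooKeeper's actual code):

```python
def has_quorum(ensemble_size, alive):
    """ZooKeeper-style majority rule: quorum needs a strict majority
    of the configured ensemble, regardless of how many are alive."""
    majority = ensemble_size // 2 + 1
    return alive >= majority

# 5-node ensemble, 1 failure: 4 alive (an even number) still has quorum.
print(has_quorum(5, 4))  # True
# The 2-node minority side of a 3/2 partition does not.
print(has_quorum(5, 2))  # False
```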
--
Cheers
Jules.
On 05/03/2015 06:47, svante karlsson (...@gmail.com) wrote:
Thanks svante.
What if, in the cluster of 5 zookeepers, only 1 zookeeper goes down: can a
zookeeper election still occur with 4 (an even number of) zookeepers alive?
With Regards
Aman Tandon
On Tue, Mar 3, 2015 at 6:35 PM, svante karlsson s...@csi.se wrote:
Synchronous update of state and a requirement of more than half the
zookeepers alive (and in sync) makes it impossible to have a split
brain situation, ie when you partition a network and get, let's say, 3 alive
on one side and 2 on the other.
In this case the 2 node side stops serving.
You should have enough memory to fit your whole database in disk cache, and then
some more. I prefer to have at least twice that to accommodate startup of
new searchers while still serving from the old.
With less than that, performance drops a lot.
Solr home: 185G
If that is your database size then you need roughly twice that in RAM.
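The sizing advice above is simple arithmetic; as a back-of-the-envelope sketch (185G is the figure from the thread, the 2x factor is the poster's rule of thumb, not an official Solr requirement):

```python
def ram_recommendation_gb(index_size_gb, headroom_factor=2.0):
    """Rule of thumb from the thread: enough RAM to hold the whole
    index in disk cache, and ~2x that to cover new-searcher startup."""
    return index_size_gb * headroom_factor

print(ram_recommendation_gb(185))  # 370.0 GB to be comfortable
```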
ZK needs a quorum to stay functional, so 3 servers handle one failure and 5
handle 2 node failures. If you run Solr with 1 replica per shard then stick to
3 ZK. If you use 2 replicas, use 5 ZK.
-- Jack Krupansky
-----Original Message----- From: svante karlsson
Sent: Thursday, January 23, 2014 6:42 AM
To: solr-user@lucene.apache.org
Subject: how to write an efficient query with a subquery to restrict the
search space?
I have a solr db containing 1 billion records that I'm trying
...@gmail.com] On Behalf Of
svante karlsson
Sent: Friday, January 24, 2014 5:05 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr server requirements for 100+ million documents
I just indexed 100 million db docs (records) with 22 fields (4
multivalued) in 9524 sec using libcurl.
11 million took 763 seconds.
It is inefficient to post one at a
time, but I've not done any specific testing to know if 1000 is better than
500.
What we're doing now is trying to figure out how to get the query
performance up, since it's not where we need it to be, so we're not done
either...
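The batching described in the thread (roughly 1000 docs per HTTP request) can be sketched as follows. The URL, core name, and batch size are assumptions for illustration, and the poster actually used libcurl from C++ rather than Python:

```python
import json
from urllib import request

# Hypothetical endpoint; adjust host and core name to your setup.
SOLR_UPDATE_URL = "http://localhost:8983/solr/mycore/update"

def batches(docs, size=1000):
    """Split docs into chunks; 1000 was the thread's arbitrary batch size."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

def post_batch(batch):
    """POST one JSON batch of documents to Solr's update handler."""
    req = request.Request(
        SOLR_UPDATE_URL,
        data=json.dumps(batch).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status
```

Posting in batches amortizes the per-request HTTP overhead versus one document at a time; running two such loops concurrently matches the two-thread setup mentioned in the thread.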
2014/1/25 svante karlsson s...@csi.se
... types sometimes lead people with DB backgrounds to
search for *like*, which will be slow, FWIW.
Best,
Erick
On Sat, Jan 25, 2014 at 5:51 AM, svante karlsson s...@csi.se wrote:
That got away a little early...
The inserter is a small C++ program that uses pglib to speak to postgres.
I just indexed 100 million db docs (records) with 22 fields (4 multivalued)
in 9524 sec using libcurl.
11 million took 763 seconds so the speed drops somewhat with increasing
dbsize.
We write 1000 docs (just an arbitrary number) in each request from two
threads. If you will be using solrcloud you
I have a solr db containing 1 billion records that I'm trying to use in a
NoSQL fashion.
What I want to do is find the best matches using all search terms but
restrict the search space to the most unique terms
In this example I know that val2 and val4 are rare terms and val1 and val3
are more common.
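One common way to express "restrict the space to the rare terms, rank on all terms" in Solr is a filter query (fq) alongside the main query: fq restricts the candidate set but does not affect scoring. The field names below (f1..f4) are made up for illustration:

```python
from urllib.parse import urlencode

# Hypothetical field names; the idea: let the rare terms (val2, val4)
# restrict the candidate set via fq, while q ranks the survivors
# against all four terms.
params = urlencode({
    "q": "f1:val1 OR f2:val2 OR f3:val3 OR f4:val4",
    "fq": "f2:val2 OR f4:val4",
    "rows": 10,
})
query_url = "http://localhost:8983/solr/mycore/select?" + params
print(query_url)
```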
From: saka.csi...@gmail.com saka.csi...@gmail.com on behalf of svante
karlsson s...@csi.se
Sent: Tuesday, January 21, 2014 4:20 PM
To: solr-user@lucene.apache.org
Subject: Trying to config solr cloud
I've been playing around with solr 4.6.0 for some weeks and I'm trying to
get a solrcloud configuration running.
I've installed two physical machines and I'm trying to set up 4 shards on
each.
I installed a zookeeper on each host as well.
I uploaded a config to zookeeper with