My apologies, it wasn't my intent to create a huge issue here.  I attended a
MongoDB user group meeting last night and had the privilege of getting an
architectural overview of the product.  I started this discussion off-list as
a compliment on what the devs chose for a database; that was it.

In the MongoDB architecture, you traditionally scale in odd numbers of
servers.  The primary server is elected by all of the servers based on
specific criteria.  The other servers then become backups that receive their
writes from the primary and provide reads to other devices.  When the
primary is lost, the remaining servers elect a new primary, but a majority
of the servers must take part in that vote.  If you have two servers and
lose one, you no longer have a majority, so the remaining server stays in
read-only mode.  Until there is a vote with a majority, you can't have a new
primary server.
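The arithmetic behind that can be sketched quickly (a minimal illustration of the strict-majority rule, not MongoDB's actual election code):

```python
def has_majority(surviving_voters, total_voters):
    """A new primary can only be elected if a strict majority
    of ALL configured voting members can take part in the vote."""
    return surviving_voters > total_voters // 2

# Two-server set: lose one, and 1 of 2 is not a strict majority,
# so the survivor stays read-only.
print(has_majority(1, 2))  # False

# Three voting members: lose one, and 2 of 3 is still a majority.
print(has_majority(2, 3))  # True
```

Note the majority is counted against the configured membership, not against the servers that happen to still be up, which is exactly why the surviving half of a two-server set cannot promote itself.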

An arbiter can resolve that as a voting member of the majority, so placing
the arbiter on the backup server allows a vote for the backup to take over
as primary, from the sounds of it.
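To make the arbiter's role concrete (again just an illustrative sketch under the same majority rule; the member names are made up, and this is not MongoDB code):

```python
# A hypothetical two-server replica set plus an arbiter: 3 voters total.
MEMBERS = ["primary", "backup", "arbiter"]

def can_elect_new_primary(reachable_members):
    """An election succeeds only if the reachable voters form a
    strict majority of all configured voting members."""
    return len(reachable_members) > len(MEMBERS) // 2

# Primary dies: backup + arbiter = 2 of 3 voters, a majority,
# so the backup can be elected primary.
print(can_elect_new_primary(["backup", "arbiter"]))  # True

# If the backup's whole server dies (taking the co-located arbiter
# with it), the primary alone is 1 of 3 -- no majority.
print(can_elect_new_primary(["primary"]))  # False
```

The second case hints at the trade-off of co-locating the arbiter with the backup: it protects against losing the primary, but not against losing the backup machine.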

I would agree that if you are going to have multiple servers, three would be
the number to start with.  They could be virtual servers used for
replication only.

This is a good discussion about MongoDB database architecture:
http://www.10gen.com/presentations/mongosv-2011/deploying-mongodb-for-high-availability


-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Douglas Hubler
Sent: Thursday, May 17, 2012 8:01 AM
To: Discussion list for users of sipXecs software
Subject: Re: [sipx-users] MongoDB

On Thu, May 17, 2012 at 10:27 AM, Tony Graziano
<[email protected]> wrote:
> I would think the script would need to run against the other HA member
> so that it could determine that "state". I would think pinging a mail
> server or something else could give off improper results.
>
> Shouldn't there be a way to poll the state of the other HA member to
> get accurate results?

By definition, this state only occurs when the secondary cannot poll the
other HA member.  If the mongo service alone goes down on the primary, and
the arbiter on the primary is OK, the secondary will take over.  This only
happens when the primary is completely AWOL *or* the secondary lost its link
to the primary.

The thing to remember here is that mongo servers/arbiters already
continuously check each other's state.  There's no need to reinvent that;
this is purely about the false positive you get when a network partition
looks like a dead machine.  In the end, if you do not involve a third party
on the network, the best you can come up with is an acceptable heuristic.
_______________________________________________
sipx-users mailing list
[email protected]
List Archive: http://list.sipfoundry.org/archive/sipx-users/
