On Wed, Mar 12, 2014 at 6:01 AM, Егор Голощапов <[email protected]> wrote:

> Hi all.
>
> What I'm trying to do: keep writing time series data at all times, even
> if one of the two nodes fails.
> Please help me figure out how to handle this situation.
> Very simple case:
>
>    - two servers (147.54.160.217; 147.54.160.218) in one cluster with a
>    shared bucket (and one replica)
>    - Couchbase Enterprise 2.5 (Windows 7)
>    - Java Couchbase client:
>      <dependency>
>        <groupId>com.couchbase.client</groupId>
>        <artifactId>couchbase-client</artifactId>
>        <version>1.4.0-dp</version>
>      </dependency>
>
> These are the latest client and server versions.
>
> I've made a sample project to show what I'm doing (attached as a zip
> file). It is a Maven project, so it should be simple to start in any IDE.
> In that project there are 2 classes:
>
>    - *CouchbaseUtil.java* (a singleton that provides a convenient API for
>    interacting with the nodes via the Couchbase API)
>    - *CouchbaseMain.java* (an infinite loop that produces data and asks
>    *CouchbaseUtil.java* to store it)
>
> I'm generating data and asking CouchbaseUtil to save it in an infinite
> loop. Then I disable the Couchbase service on one of the servers and
> expect the client to continue writing to the working one, but the client
> keeps trying to save data to the dead host and never switches to the
> working one. Do I need to handle this manually? If so, how? I've read
> that the Couchbase client is smart enough to handle it for me.
>

It looks like you are expecting Couchbase Server to behave in a way that
it's not designed to.

I don't have pointers to documentation handy, but I'll try to explain in a
few sentences. Couchbase Server (and the client) does not automatically
switch to the remaining nodes. It is the equivalent of sharded MySQL with
slaves as backups: if a node dies or goes down temporarily, its portion of
the data becomes unavailable until some entity (an admin, for example)
fails that portion of the data over to a replica. As a convenience, there
is an auto-failover feature that can fail a node over automatically in
some common and safe cases.
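For reference, auto-failover is just a cluster setting; enabling it over
the REST API looks roughly like this (the admin credentials, host, and
30-second timeout below are placeholders for your own cluster):

```shell
# Enable auto-failover on the cluster (sketch; substitute your own
# administrator credentials and one of your node addresses).
curl -u Administrator:password \
  -d 'enabled=true' \
  -d 'timeout=30' \
  http://147.54.160.217:8091/settings/autoFailover
```

Note that auto-failover requires at least three nodes to act safely (a
two-node cluster cannot tell which side of a network split failed), so in
your two-node setup you would have to fail over manually.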

Some call this CP (as in CAP), but that label is quite wrong. It's just
plain old master-slave replication, sharded smartly for scaling, with nice
cluster management on top.

The only case where the client is expected to switch to a live server, one
way or another, is when it queries for the cluster configuration (google
for "vbucket map"). We normally keep all nodes aware of the cluster
configuration, so any node can be asked for the vbucket map.
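To make the vbucket idea concrete, here is a self-contained sketch of
CRC32-based key-to-vbucket hashing with a toy two-node map. The exact bit
manipulation and the node assignment below are illustrative assumptions,
not the precise libvbucket algorithm; real maps come from the cluster
configuration.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class VbucketSketch {
    static final int NUM_VBUCKETS = 1024; // Couchbase default

    // Hash a key to a vbucket id: CRC32 of the key bytes, folded down
    // to the vbucket range. Illustrative only.
    static int vbucketOf(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes(StandardCharsets.UTF_8));
        return (int) ((crc.getValue() >>> 16) & (NUM_VBUCKETS - 1));
    }

    public static void main(String[] args) {
        // Toy vbucket map: which node is active and which holds the
        // replica for a given vbucket.
        String[] nodes = {"147.54.160.217:11210", "147.54.160.218:11210"};
        String key = "timeseries::sample";
        int vb = vbucketOf(key);
        String active  = nodes[vb % nodes.length];
        String replica = nodes[(vb + 1) % nodes.length];
        System.out.println(key + " -> vbucket " + vb
                + ", active " + active + ", replica " + replica);
    }
}
```

Every key deterministically maps to one vbucket, and the cluster map says
which node owns it; that's why a write aimed at a vbucket owned by a dead
node has nowhere to go until failover promotes the replica copy.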

-- 
You received this message because you are subscribed to the Google Groups 
"Couchbase" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
