There shouldn't be, since you'll actually be connecting to moxi on the server side.
http://www.couchbase.com/docs/moxi-manual-1.8/moxi-serverside.html

--chad
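To make that concrete: with a server-side or local moxi, the application speaks plain memcached to a single address and lets moxi work out which cluster node owns each key. A minimal sketch, assuming moxi is listening on its usual port 11211 and using the python-memcached client (any stock memcached client would do):

```python
# Minimal sketch: the app talks only to moxi (assumed to be listening on
# the default memcached port, 11211).  moxi forwards each request to
# whichever cluster node owns the key, so the client needs no knowledge
# of cluster topology or hashing.
import memcache  # python-memcached

mc = memcache.Client(["127.0.0.1:11211"])

mc.set("session:42", "some-value", time=300)  # write routed by moxi
value = mc.get("session:42")                  # read routed by moxi
print(value)
```

Because a key in a memcache bucket has exactly one copy on exactly one node, a SET followed immediately by a GET for the same key is served by the same node moxi routes that key to, so there is no replication delay in play.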
On Thu, Dec 26, 2013 at 12:24 AM, Gregory Taylor <[email protected]> wrote:
> That sounds like a pretty nice setup. The thing I'll be up against is the fact that I could end up with quite a few clusters. I guess that would mean running a moxi instance for each cluster on each app server?
>
> Alternatively, I could set up round-robin DNS for each cluster, with all of the nodes in it. Users would point at the DNS entry and connect directly with their memcache clients in binary+SASL mode.
>
> I guess my hesitation with the direct method is consistency concerns. We really can't have people running into consistency issues with memcache buckets. Do you know if there's any danger of replication delay with the round-robin setup + memcache bucket types?
>
> On Thu, Dec 26, 2013 at 12:18 AM, Chad Kouse <[email protected]> wrote:
>> We always ran moxi locally on our application servers - this simplified our application logic (i.e., just connect to localhost) and seemed to work just fine.
>>
>> We had a reverse proxy (haproxy) that moxi on the application servers was pointed at to get its configuration and learn about cluster changes.
>>
>> Moxi then connects directly to the appropriate node(s) involved for actual cache operations.
>>
>> --chad
>>
>> On Thu, Dec 26, 2013 at 12:14 AM, Gregory Taylor <[email protected]> wrote:
>>> Thanks for the response, Chad!
>>>
>>> As far as the moxi proxy goes, would you recommend a front-facing stand-alone moxi proxy sitting in front of the cluster, or would a round-robin DNS rotation over all of the nodes with their built-in moxi instances suffice?
>>>
>>> I'm not clear on what would be the best/safest for a memcache-only cluster. Any advice would be appreciated!
>>>
>>> On Wed, Dec 25, 2013 at 10:44 AM, Chad Kouse <[email protected]> wrote:
>>>> Memcache-style buckets don't participate in replication, so all keys and values exist on a per-node basis. In other words, if you have 4 nodes and one goes down, you have lost 25% of your cache.
>>>>
>>>> To know which node contains the keys/values you want on a per-request basis, you need to use a consistent key-hashing algorithm. The preferred method here is to use moxi as your proxy to the cluster (which I believe just uses a libketama-style hashing algorithm) - this maps a key to a particular node.
>>>>
>>>> --chad
>>>>
>>>> On Wed, Dec 25, 2013 at 2:23 AM, Gregory Taylor <[email protected]> wrote:
>>>>> I'm thinking about setting up a hosted memcache service using Couchbase, and am evaluating pain points and looking to get some answers. While I've read through a good chunk of the documentation, a lot of it seems focused on the regular Couchbase buckets. I had a few questions about the memcache bucket types in particular:
>>>>>
>>>>> - I saw mention that memcache buckets (and their keys) always live on a single node, unlike Couchbase buckets. Is this true?
>>>>> - For the Couchbase buckets, I see "strong consistency" is offered, though I'm not sure if this applies to memcache buckets (and to what extent). If my users are connecting via the SASL+binary memcache protocol, is there ever a case where an immediate SET/GET of the same key will result in out-of-date values being returned by the GET?
>>>>> - Do I need to point each user at a specific node in the cluster? Perhaps the one that their bucket resides on?
>>>>> I ask this because of the statement I saw about memcache buckets living on single nodes, though I am not sure this was correct or reflects the current state of things.
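For reference, the libketama-style consistent hashing Chad mentions is what makes "which node has this key?" deterministic. The sketch below is only an illustrative approximation under assumed node names, not moxi's actual implementation (real ketama uses weighted MD5 points per server): each node is placed on a hash ring many times, and a key is stored on the first node at or after the key's own hash.

```python
import bisect
import hashlib

# Illustrative ketama-style consistent hashing with hypothetical node names;
# not moxi's actual implementation.
class HashRing:
    def __init__(self, nodes, points_per_node=100):
        self.ring = []  # (hash, node) points on the ring
        for node in nodes:
            for i in range(points_per_node):
                self.ring.append((self._hash(f"{node}-{i}"), node))
        self.ring.sort()
        self._hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # First ring point at or after the key's hash, wrapping around.
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cb-node1:11211", "cb-node2:11211", "cb-node3:11211"])
print(ring.node_for("session:42"))  # the same key always maps to the same node
```

Since each key in a memcache bucket has a single copy on the node it hashes to, an immediate SET/GET pair hits that one node and cannot return a stale replica; the trade-offs are the ones Chad described, namely that losing a node loses that node's share of the cache and that adding or removing nodes remaps part of the keyspace.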
