OK, you add two servers to the memcached pool. That means when you put a key:data pair, it is stored on one of those servers; you do not know (and do not need to know) which one.
If one server goes down, then YES, all the data it held is lost, and you do not need to care about that, as long as you use memcached properly, because it is only a cache!

If you use memcached correctly, you do not need to worry about a server going down; the only consequence is that clients will see a slight delay while their no-longer-cached data is fetched again. REMEMBER: memcached is only a CACHE. Keep that clear in your mind.

So this is the correct scenario:
Client operations --------------> memcached ----------------> YOUR REAL DATA

If a client queries memcached and the key is not there, the client hits the DATABASE (MySQL, the filesystem, or wherever your data is stored). But if the data is already in the CACHE, the client gets it much faster (see the sketch after the diagram below).

Another way to see it:
client operations -------------> faster access (CACHE) --------------------> slower access (DATABASE)
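To make that flow concrete, here is a minimal cache-aside sketch using xmemcached (the Java client discussed later in this thread). The host names and the loadFromDatabase() helper are only placeholders, not part of any real setup:

// Java (xmemcached) - minimal cache-aside sketch
import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.utils.AddrUtil;

public class CacheAsideExample {

    public static void main(String[] args) throws Exception {
        // One logical cache made of two servers; keys are spread across both.
        MemcachedClient cache = new XMemcachedClientBuilder(
                AddrUtil.getAddresses("server1:11211 server2:11211")).build();

        String key = "user:42:profile";

        // 1) Ask the cache first.
        String value = cache.get(key);
        if (value == null) {
            // 2) Cache miss (or the server holding this key is down):
            //    fall back to the real data store...
            value = loadFromDatabase(key); // placeholder for MySQL/filesystem/etc.
            // 3) ...and put it back in the cache with a TTL (1 hour here).
            cache.set(key, 3600, value);
        }
        System.out.println(value);

        cache.shutdown();
    }

    // Placeholder standing in for the slow "real data" lookup.
    private static String loadFromDatabase(String key) {
        return "value-for-" + key;
    }
}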

Using memcached as YOUR MAIN STORAGE FOR DATA is a bad idea if your data is not disposable.

Many people want memcached to be more than a cache for fast access, and that is where a tool called Couchbase (http://www.couchbase.com) comes in. It keeps data in RAM like memcached and has many useful features, one of which is replication. You can do various things to reach the same goal with plain memcached, but Couchbase has it ready out of the box, and they plan to integrate it with memcached in the near future.

If you stay with xmemcached, you will have to do the hard work of building that replication yourself. The good news is that you can do it; it is just a matter of time (a rough sketch of the idea follows below).
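Purely as a sketch of what that do-it-yourself approach could look like (not a tested or complete solution; the class name and addresses are invented): keep two independent client pools, write every key to both, read from the primary and fall back to the secondary, and delete from both.

// Java (xmemcached) - rough do-it-yourself mirroring sketch
import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.utils.AddrUtil;

public class MirroredCache {
    private final MemcachedClient primary;
    private final MemcachedClient secondary;

    public MirroredCache(String primaryAddrs, String secondaryAddrs) throws Exception {
        this.primary = new XMemcachedClientBuilder(AddrUtil.getAddresses(primaryAddrs)).build();
        this.secondary = new XMemcachedClientBuilder(AddrUtil.getAddresses(secondaryAddrs)).build();
    }

    // Write the same key to both pools so either one can serve it.
    public void set(String key, int ttlSeconds, Object value) throws Exception {
        primary.set(key, ttlSeconds, value);
        secondary.set(key, ttlSeconds, value);
    }

    // Read from the primary pool; on a miss or failure, try the secondary.
    public <T> T get(String key) throws Exception {
        try {
            T value = primary.get(key);
            if (value != null) {
                return value;
            }
        } catch (Exception primaryDown) {
            // primary pool unreachable: fall through and try the passive copy
        }
        return secondary.get(key);
    }

    // Delete from both pools so they do not drift apart.
    public void delete(String key) throws Exception {
        primary.delete(key);
        secondary.delete(key);
    }
}

Usage would be something like new MirroredCache("s1:11211 s2:11211", "s3:11211 s4:11211"), with the second pool acting as the passive copy. Note that nothing here handles a pool coming back up with stale data, which is part of the hard work mentioned above.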

On Friday, October 12, 2012 1:31:30 PM UTC-5, Kiran Kumar wrote:
>
> Great, thanks ArtPort, that was very useful information from end to end.
>
> If you don't mind, I need some clarification on this.
>
> So you mean to say that if an application is configured this way with two 
> servers, as shown below:
>
> *1)MemcachedClient c=new 
> MemcachedClient(AddrUtil.getAddresses("server1:11211 server2:11211"));*
>
> each of the above servers will have its own set of data, and in case that 
> server is down, all the data contained in that server is lost entirely? 
> Is that what you mean?
>
>
> And in case the above is true:
> 2) I don't want to have a single point of failure, so I want to go for 
> replication. I am using xmemcached version 1.3; is it possible to do 
> replication with it?
>
> On Friday, 12 October 2012 22:10:52 UTC+5:30, ArtPort wrote:
>>
>> Svr1, Svr2 and Svr3 together form one single STORE for your data. That is 
>> the main goal of memcached: the data is distributed across all servers, and 
>> you do not know which server holds a particular key:data pair, but you know 
>> that when you need it, you can get it fast. If you need to grow or scale, 
>> you can add as many servers as you like. The CACHE capacity is the SUM of 
>> the RAM on each individual server.
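>> To make the "you don't know where your key:data is" point concrete, here is 
>> a deliberately simplified sketch of how a client could pick the server for 
>> a key (real clients such as xmemcached use consistent hashing, not this 
>> naive modulo, and the server names are invented):
>>
>> // Java - naive illustration of key -> server mapping
>> import java.util.Arrays;
>> import java.util.List;
>>
>> public class NaiveKeyDistribution {
>>     public static void main(String[] args) {
>>         // Invented server names standing in for Svr1..Svr3.
>>         List<String> servers = Arrays.asList("Svr1", "Svr2", "Svr3");
>>
>>         for (String key : Arrays.asList("user:1", "user:2", "session:abc")) {
>>             // Naive scheme: hash the key and take it modulo the server count.
>>             int index = Math.abs(key.hashCode() % servers.size());
>>             System.out.println(key + " is stored on " + servers.get(index));
>>         }
>>         // If one server goes down, only the keys that mapped to it are lost;
>>         // keys on the other servers are still served from cache.
>>     }
>> }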
>>
>> Remember, this is a CACHE, not a DATABASE, so you need to keep a few things 
>> in mind. (**)
>>
>> (**) Consider this: in this scenario, if you turn off Svr1, you lose access 
>> to all the keys stored on that particular server (they are all lost).
>> That is not a problem, because your original application has already used 
>> memcached successfully, and memcached will behave the same way with your 
>> new servers.
>>
>> In many scenarios, CACHE means disposable. If that is not your case, you 
>> need to do some research.
>> (**) CACHED information is stored for a limited timeframe; if it is needed 
>> again after it expires, you simply recalculate it. (This is very useful.)
>>
>> Replication means having multiple instances of memcached with the same 
>> data, BUT ONLY ONE IS ACTIVE and the others are PASSIVE.
>> In a CACHE solution, replication is useful only in some particular cases; 
>> in the majority of cases the standard way (one memcached cluster) is 
>> enough.
>>
>> On Friday, October 12, 2012 11:16:57 AM UTC-5, Kiran Kumar wrote:
>>>
>>> Thank you very much for your efforts and the great explanation.
>>>
>>> *I have two questions with respect to your answer. Please let me know.*
>>>
>>> 1. Will the different memcached servers Svr1, Svr2 and Svr3 act as 
>>> active/active clustering or active/passive clustering, and what is the 
>>> default behavior in the xmemcached framework?
>>> 2. What exactly does replication mean in your answer? (Will these three 
>>> servers not have the same copy of the data automatically?) I am asking 
>>> because if I shut down servers, say Svr1 and Svr2, will I still be able to 
>>> get the data from Svr3? Is that different from what replication means?
>>>
>>>
>>> On Friday, 12 October 2012 21:22:06 UTC+5:30, ArtPort wrote:
>>>>
>>>> Kiran Kumar, you can do what you intend to do, but not in the way you 
>>>> intended to do it.
>>>> memcached already has a built-in way to support scalability, with 
>>>> performance increasing roughly linearly.
>>>>
>>>> If you already have "one memcached for the entire application" and it 
>>>> works well, then to scale your application all you need to do is add 
>>>> some servers to the existing memcached cluster.
>>>>
>>>> As you stated, this configuration is working OK:
>>>>
>>>> Streaming Application   -----Write----->  Memcached  
>>>> ------Read/Delete----> ClientReads
>>>>                                             |
>>>>                                             |  
>>>>                                             | 
>>>>                                           Svr1   
>>>>                                              
>>>>                                              
>>>> Now you have grown, and you need to scale your memcached.
>>>>
>>>> The easiest way is:
>>>>
>>>> Streaming Application   -----Write----->  Memcached  
>>>> ------Read/Delete----> Client
>>>>                                         /   |  \
>>>>                                        /    |   \
>>>>                                       /     |    \
>>>>                                 Svr1   Svr2  Svr3 
>>>>                                       
>>>> You can do this with these instructions:
>>>> //PHP
>>>> $MEMCACHE_SERVERS = array(
>>>>     "10.1.1.1", //Svr1
>>>>     "10.1.1.2", //Svr2
>>>>     "10.1.1.3", //Svr3
>>>> );
>>>> $memcache = new Memcache();
>>>> foreach ($MEMCACHE_SERVERS as $server) {
>>>>     $memcache->addServer($server);
>>>> }
>>>> Code taken from: 
>>>> http://stackoverflow.com/questions/4717559/multiple-memcached-servers-question
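>>>> Since this thread is about xmemcached, a rough Java equivalent of the 
>>>> same three-server pool might look like the sketch below (the IP 
>>>> addresses are the same placeholders as in the PHP snippet):
>>>>
>>>> // Java (xmemcached) - rough equivalent of the PHP snippet above
>>>> import net.rubyeye.xmemcached.MemcachedClient;
>>>> import net.rubyeye.xmemcached.XMemcachedClientBuilder;
>>>> import net.rubyeye.xmemcached.utils.AddrUtil;
>>>>
>>>> public class ThreeServerPool {
>>>>     public static void main(String[] args) throws Exception {
>>>>         // One client, three servers: keys are spread across Svr1-Svr3.
>>>>         MemcachedClient memcached = new XMemcachedClientBuilder(
>>>>                 AddrUtil.getAddresses("10.1.1.1:11211 10.1.1.2:11211 10.1.1.3:11211")).build();
>>>>
>>>>         memcached.set("greeting", 300, "hello");  // lands on one of the three servers
>>>>         String value = memcached.get("greeting"); // fetched from whichever server holds it
>>>>         System.out.println(value);
>>>>
>>>>         memcached.shutdown();
>>>>     }
>>>> }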
>>>>
>>>> This way, performance improves because multiple servers share the work 
>>>> (writes/reads/deletes), and the total capacity grows as well (if each 
>>>> server has 8 GB of RAM, the memcached cluster has 24 GB of RAM).
>>>> Again, you do not need to do what you initially intended to do; it is 
>>>> all already done by memcached in the current version.
>>>> If you need replication, that is another problem, but you did not 
>>>> mention it.
>>>>
>>>> Do not run memcached on the same machines as your web servers; use 
>>>> separate servers dedicated exclusively to memcached, as the diagram 
>>>> shows. (Maybe you are thinking differently.)
>>>>
>>>> On Friday, October 12, 2012 9:17:25 AM UTC-5, Kiran Kumar wrote:
>>>>>
>>>>> Initially there was one memcached for the entire application. Now we 
>>>>> expect some 50,000 users on our application, so as part of remodeling 
>>>>> we are going with two memcached servers working independently to 
>>>>> support scalability, and the existing application is designed in such 
>>>>> a way that one part of the application will write to both instances.
>>>>>
>>>>> On Thursday, 11 October 2012 20:28:06 UTC+5:30, Kiran Kumar wrote:
>>>>>>
>>>>>> I am working on a heavy-traffic web site where GBs of data are 
>>>>>> written per minute into our memcached, so we have decided to use two 
>>>>>> separate instances of memcached for the application.
>>>>>>
>>>>>> Right now the setup is that there is NO clustering between Memcache1 
>>>>>> and Memcache2, because Memcache1 caches 50% of the data and Memcache2 
>>>>>> caches the other 50%.
>>>>>>
>>>>>>  Memcache1   Memcache2
>>>>>>       \           /
>>>>>>        \         /
>>>>>>         \       /
>>>>>>          \     /
>>>>>>           \   /
>>>>>>            \ /
>>>>>>      CustomerData
>>>>>>
>>>>>> So right now, as per the setup, there are two memcached instances 
>>>>>> for a single application.
>>>>>>
>>>>>> Now my question is: once the application receives a value, it 
>>>>>> writes/sets it to both memcached instances. Assume a key is read from 
>>>>>> one instance, Memcache1; I then need to delete the same key on the 
>>>>>> other memcached instance at the same time, so that the two stay in 
>>>>>> sync with each other.
>>>>>>
>>>>>> From the code point of view, once a value is read from memcached, I 
>>>>>> am deleting that key.
>>>>>>
>>>>>
