Thank you very much for your efforts and the great explanation.
I have two questions with respect to your answer; please let me know:
1. Will the different memcached servers Svr1, Svr2 and Svr3 act as an
active/active cluster or an active/passive cluster, and what is the
default behavior with the xmemcached framework?
2. What exactly does "replication" mean in your answer? (Will these
three servers not automatically hold the same copy of the data?) I am
asking because if I shut down, say, Svr1 and Svr2 and I can still get
the data from Svr3, is that different from what replication means?
On Friday, 12 October 2012 21:22:06 UTC+5:30, ArtPort wrote:
>
> Kiran Kumar, you can do what you intended to do, but not in the way you
> intended to do it.
> memcached already has a useful way to support scalability, with performance
> increasing linearly as you add servers.
>
> If you already have "one Memcached for the entire application" and it works
> well, then when you need to scale your application, all you need to do is add
> some servers to the existing memcached cluster.
>
> As you stated, this configuration is working OK.
>
> Streaming Application -----Write-----> Memcached ------Read/Delete----> Client
>                                            |
>                                            |
>                                            |
>                                          Svr1
>
>
> Now you have grown, and you need to scale your Memcached.
>
> The easiest way is:
>
> Streaming Application -----Write-----> Memcached ------Read/Delete----> Client
>                                        /   |   \
>                                       /    |    \
>                                      /     |     \
>                                   Svr1   Svr2   Svr3
>
> You can do this with these instructions:
> // PHP
> $MEMCACHE_SERVERS = array(
>     "10.1.1.1", // Svr1
>     "10.1.1.2", // Svr2
>     "10.1.1.3", // Svr3
> );
> $memcache = new Memcache();
> foreach ($MEMCACHE_SERVERS as $server) {
>     $memcache->addServer($server);
> }
> Code taken from: http://stackoverflow.com/questions/4717559/multiple-memcached-servers-question
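>
> As a minimal usage sketch (the key name and value below are just placeholders,
> not from the original code): once the pool is built, the client hashes each key
> to one of the three servers for you, so reads, writes and deletes need no extra
> routing logic.
>
> // PHP, continuing from the snippet above
> $memcache->set("user:1001", "some value", 0, 300); // stored on whichever server the key hashes to
> $value = $memcache->get("user:1001");              // read back from that same server
> $memcache->delete("user:1001");                    // removed from that same server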
>
> This way, performance increases because multiple servers share the work
> (writes/reads/deletes), and the total capacity grows as well: if each server
> has 8 GB of RAM, the Memcached cluster has 24 GB of RAM.
> Again, you do not need to build what you initially intended to build; it is
> all already handled by memcache in the current version.
> If you need replication, that is another problem, but you did not mention
> that.
>
> Do not use the same machine as both web server and memcache server; use
> separate servers dedicated exclusively to memcached, as the diagram shows
> (maybe you are thinking of it differently).
>
> On Friday, October 12, 2012 9:17:25 AM UTC-5, Kiran Kumar wrote:
>>
>> Initially there is one Memcache for the entire application. Now we expect
>> some 50,000 users on our application, so as part of remodeling we are going
>> with 2 Memcache servers, which will work independently to support
>> scalability. The existing application is designed in such a way that one
>> part of the application will be writing to both Memcache instances.
>>
>> On Thursday, 11 October 2012 20:28:06 UTC+5:30, Kiran Kumar wrote:
>>>
>>> I am working on a heavy-traffic web site where gigabytes of data are
>>> written into our Memcache per minute, so we have decided to use two
>>> separate instances of Memcache for the application.
>>>
>>> Right now the setup is that there is NO clustering between Memcache1 and
>>> Memcache2, because Memcache1 caches 50% of the data and Memcache2 caches
>>> the other 50% of the data.
>>>
>>>      Memcache1       Memcache2
>>>           \             /
>>>            \           /
>>>             \         /
>>>              \       /
>>>               \     /
>>>                \   /
>>>             CustomerData
>>>
>>> So right now, as per the setup, there are two Memcache instances for a
>>> single application.
>>>
>>> Now my question is: once the application receives a value, it writes/sets
>>> it to both Memcache instances. If a key is then read from one instance,
>>> say Memcache1, I need to delete the same key on the other Memcache
>>> instance at the same time, so that the two stay in sync with each other.
>>>
>>> From the code's point of view, once a value is read from Memcache, I
>>> delete that key, roughly as in the sketch below.
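>>>
>>> A minimal PHP sketch of what I described (host addresses, key name and
>>> value are just placeholders):
>>>
>>> // PHP; both instances are addressed directly, no pooling
>>> $mc1 = new Memcache();
>>> $mc1->addServer("10.0.0.1"); // Memcache1
>>> $mc2 = new Memcache();
>>> $mc2->addServer("10.0.0.2"); // Memcache2
>>>
>>> // Write path: set the key on both instances so both hold the same copy
>>> $mc1->set("customer:42", "some payload", 0, 300);
>>> $mc2->set("customer:42", "some payload", 0, 300);
>>>
>>> // Read path: read from one instance, then delete the key from both to keep them in sync
>>> $value = $mc1->get("customer:42");
>>> $mc1->delete("customer:42");
>>> $mc2->delete("customer:42");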
>>>
>>