Reducing virtual memory usage of memcached

2011-02-06 Thread lmwtv
Hi,

I would like to use memcached on my VPS, which uses OpenVZ.  The only
problem is that OpenVZ (at least the installation that my hosts have)
doesn't differentiate between virtual memory and resident memory, so
all virtual memory is included in my allocation.

When I start up memcached, it uses just under 1MB of resident memory,
but around 50MB of virtual memory.  This is unaffected by the value of
the max memory setting (i.e. the -m flag).  Obviously on most systems
using up virtual memory wouldn't be a problem, but on OpenVZ this
means that 50MB of memory is counted as 'used'.
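
The resident/virtual split is easy to check with ps; a minimal sketch
(it inspects the current shell via $$ for illustration - substitute
memcached's PID, e.g. from pgrep -x memcached):

```shell
# Print resident (RSS) and virtual (VSZ) sizes in KB for one process.
# "$$" (the current shell) is just a stand-in for memcached's PID.
ps -o rss=,vsz= -p "$$"
```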

Can anyone tell me what the cause of this virtual memory use is, and
whether it is possible to reduce or eliminate it?

In case it's relevant, I'm using v1.4.5 of memcached and 2.0.10-stable
of libevent.

Thanks,

Dave.


Re: Memcache in multiple servers

2011-02-06 Thread Jason Sirota
On Thu, Feb 3, 2011 at 9:10 AM, Dustin dsalli...@gmail.com wrote:


 On Feb 3, 2:32 am, Margalit Silver margalit.sil...@gmail.com wrote:
  Our system has 4 live servers on a load balancer in an Amazon Cloud.

  We see in our code that the memcache entry is deleted when a write is done,
  but it seems this only happens on that one server; we want that one piece
  of data to be cleared on all servers.

   If you are using memcached effectively, a given piece of data will
 only exist on one server.  That's how you achieve massive scale in a
 caching layer.

  You can get a quick overview here:  http://memcached.org/about
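
To illustrate the point about each key living on exactly one server:
clients hash the key and pick one server from the list. A toy sketch
using cksum as the hash (the key name and server count are made-up
examples; real clients use better hash functions, and often consistent
hashing so that adding a server remaps fewer keys):

```shell
# Map a key to one of N servers with plain modulo hashing.
key="user:42"; servers=4
hash=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
echo "key '$key' -> server $((hash % servers))"
```

Because every client computes the same mapping, a given key is always
fetched from, and invalidated on, the same server - no cross-server
deletes needed.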


Margalit,

Can you give us some more information about how your architecture is set up?
You say you have 4 live servers in the Amazon cloud.

What client are you using to access memcached?
Can you share your memcache configuration section?
Can you share a snippet of code that accesses memcached?

As Dustin says, data is not supposed to exist on more than one server, so
something else may be going on.

Jason


Re: Reported error in binary protocol test on Solaris since 1.4.4

2011-02-06 Thread Dagobert
Hi,

On 19 Jan., 23:38, Dagobert honkma...@googlemail.com wrote:
 I have a strange phenomenon: The test t/binary.t fails from 1.4.4 for
 me on Solaris 9 Sparc with Sun Studio 12. I nailed it down to this
 commit causing the failure:
  https://github.com/memcached/memcached/commit/5100e7af8802b8170adb8a7...
 These two lines with APPEND_STAT seem to push something over the edge.
 If I comment these two lines out the test binary.t succeeds. Adding
 and deleting some of the APPEND_STAT lines breaks the test at
 different places. With the lines from the commit that is

I did some more research and the results are split:

81 10 00 09 00 00 00 00 00 00 00 0a d4 22 ae e8 00 00 00 00 00 00 00 00
81 10 00 00 00 00 00 00 00 00

#   Failed test 'Expected read length'
#   at t/binary.t line 540.
#  got: '10'
# expected: '24'
Use of uninitialized value $ident_hi in multiplication (*) at t/binary.t line 546.
Use of uninitialized value $ident_lo in addition (+) at t/binary.t line 546.
Use of uninitialized value $remaining in numeric eq (==) at t/binary.t line 548.
00 00 d4 22 ae e8 00 00 00 00 00 00 00 00

#   Failed test 'Expected read length'
#   at t/binary.t line 540.
#  got: '14'
# expected: '24'


Best regards

  -- Dago


Re: Reducing virtual memory usage of memcached

2011-02-06 Thread lmwtv
I've found (part of) the answer to my own question - the virtual
memory comes from the thread stacks.  Setting -t 1 reduces the virtual
memory to around 20MB.

I've read from other posts that it is no longer possible to have a
non-threaded memcached version.  It appears on my system that the stack
size being used is 10MB.  This is almost certainly way too large.  Is
there any way to conveniently (i.e. without major edits to the source
code) set the thread stack size for the threads that memcached uses,
e.g. through a macro setting?
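
One conventional workaround, assuming memcached doesn't set an explicit
stack size itself: on Linux, pthreads default each new thread's stack
reservation to the soft RLIMIT_STACK, so lowering that limit in the
shell that launches memcached shrinks the per-thread virtual
allocation. A sketch (the 256KB figure is an assumption; too small a
stack will crash the threads):

```shell
# Lower the soft stack limit, then start memcached from the same shell;
# each worker thread should now reserve ~256KB instead of ~10MB.
ulimit -S -s 256
memcached -d -m 64 -t 4
```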


Re: Reducing virtual memory usage of memcached

2011-02-06 Thread lmwtv
Hi Roberto,

Thanks for responding.

On Feb 6, 10:50 pm, Roberto Spadim robe...@spadim.com.br wrote:
 maybe a memory leak?
The memory usage is just after a memcached restart, so it's not a
memory leak - I'm fairly sure it's the thread stack allocation (see
other post).

 try changing memcached command line parameters...
 check if VM size changes with different parameters

I did - no change, unless -t is different (and the change is
approximately 10MB for each increment/decrement of the number of
threads).

Thanks,

Dave.


Re: Reducing virtual memory usage of memcached

2011-02-06 Thread Roberto Spadim
Hmm,
are you using a virtual Linux OS, or the host Linux OS?

2011/2/6 lmwtv pay.letmewatch...@gmail.com:
 Hi Roberto,

 Thanks for responding.

 On Feb 6, 10:50 pm, Roberto Spadim robe...@spadim.com.br wrote:
 maybe a memory leak?
 The memory usage is just after a memcached restart, so it's not a
 memory leak - I'm fairly sure it's the thread stack allocation (see
 other post).

 try changing memcached command line parameters...
 check if VM size changes with different parameters

 I did - no change, unless -t is different (and the change is
 approximately 10MB for each increment/decrement of the number of
 threads).

 Thanks,

 Dave.



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial


Re: Reducing virtual memory usage of memcached

2011-02-06 Thread lmwtv
I'm using OpenVZ, which does operating-system-level virtualization.
I'm not using it on my desktop - it's the software that runs the
virtual private server I'm hosting my websites on.

Dave Allen.


how to get the memcache statistics?

2011-02-06 Thread Selvaraj Varadharajan
Hi
 Currently my application's response time is too slow, and I can see a lot
of lock failures and memcache timeouts in the log. We are using the
spymemcached client. We are assuming it is time to increase the number of
memcache servers, but before coming to that conclusion we need to look at
the memcache statistics.

In a single sentence: what is the best metric for telling whether my
memcache farm needs more servers?

-Selvaraj


Re: how to get the memcache statistics?

2011-02-06 Thread dormando
 Hi
  Currently my application's response time is too slow, and I can see a lot
 of lock failures and memcache timeouts in the log. We are using the
 spymemcached client. We are assuming it is time to increase the number of
 memcache servers, but before coming to that conclusion we need to look at
 the memcache statistics.

 In a single sentence: what is the best metric for telling whether my
 memcache farm needs more servers?

http://code.google.com/p/memcached/wiki/Timeouts -- do this to verify
what your timeouts are.
http://code.google.com/p/memcached/wiki/NewServerMaint -- for general
server health
http://code.google.com/p/memcached/wiki/NewPerformance -- how fast it
should be
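
For raw numbers, the server's stats command is the starting point; a
sketch (the host, port, and -q flag are assumptions - some nc variants
want -w 1 instead):

```shell
# Dump counters from a running instance, e.g.:
#   echo stats | nc -q 1 127.0.0.1 11211
# Given get_hits/get_misses from that output, the hit ratio is:
awk 'BEGIN { hits = 9000; misses = 1000;   # made-up example counters
             printf "hit ratio: %.2f\n", hits / (hits + misses) }'
```

A low hit ratio or steadily climbing evictions points at needing more
cache memory; timeouts with a healthy hit ratio point elsewhere
(network, client configuration).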

Also make sure that your server is running the latest software available
(1.4.5). 1.2.x releases are not supported anymore.

-Dormando


Re: Reducing virtual memory usage of memcached

2011-02-06 Thread dormando
 I've found (part of) the answer to my own question - the virtual
 memory comes from the thread stacks.  Setting -t 1 reduces the virtual
 memory to around 20MB.

 I've read from other posts that it is no longer possible to have a non-
 threaded memcached version.  It appears on my system that the stack
 size being used is 10MB.  This is almost certainly way too large.  Is
 there any way to conveniently (i.e. without major edits to the source
 code) set the thread stack size for the threads that memcached uses,
 e.g. through a macro setting?

There's some amount of overhead from pre-allocating the hash table and
this and that... that'll show up as virtual memory until data's written
into it. Also note that memcached will lazily allocate one slab page per
slab class, so even if you set -m 12 you'll end up using 50+ megs of RAM
if you put one item in each slab class.
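
A rough back-of-the-envelope for that worst case, assuming 1.4.x
defaults (1MB pages, chunk growth factor 1.25, and a ~96-byte smallest
chunk - the starting size and per-item overhead vary by build):

```shell
# Count slab classes under a 1.25 growth factor; if one 1MB page is
# lazily allocated per class, that's roughly one MB per class.
awk 'BEGIN { size = 96; classes = 0;
             while (size <= 1048576) { classes++; size = int(size * 1.25) }
             printf "%d classes -> up to ~%d MB with one page each\n",
                    classes, classes }'
```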

You could also use -I to lower the max item size and reduce some overhead.

I don't think memcached explicitly sets the thread stack size, and I
forget how to tweak that offhand; I think Google will tell you :P


Re: how to get the memcache statistics?

2011-02-06 Thread Selvaraj Varadharajan
Thanks Dormando, let me go through the docs you provided.

-Selvaraj

On Mon, Feb 7, 2011 at 3:13 AM, dormando dorma...@rydia.net wrote:

  Hi
   Currently my application's response time is too slow, and I can see a
  lot of lock failures and memcache timeouts in the log. We are using the
  spymemcached client. We are assuming it is time to increase the number
  of memcache servers, but before coming to that conclusion we need to
  look at the memcache statistics.
 
  In a single sentence: what is the best metric for telling whether my
  memcache farm needs more servers?

 http://code.google.com/p/memcached/wiki/Timeouts -- do this to verify
 what your timeouts are.
 http://code.google.com/p/memcached/wiki/NewServerMaint -- for general
 server health
 http://code.google.com/p/memcached/wiki/NewPerformance -- how fast it
 should be

 Also make sure that your server is running the latest software available
 (1.4.5). 1.2.x releases are not supported anymore.

 -Dormando


Re: how to get the memcache statistics?

2011-02-06 Thread Roberto Spadim
Number of clients (TCP connections), CPU and memory, and network
bandwidth (for networked memcached).
You just need one server (two if you have a replica/mirror).
If you want to scale, the bottleneck will be CPU, memory, or the number
of clients (TCP has a limited number of ports, 65536 per IP; use UDP if
you don't want the number of clients limited).


2011/2/6 Selvaraj Varadharajan selvara...@gmail.com:
 Hi
  Currently my application's response time is too slow, and I can see a lot
 of lock failures and memcache timeouts in the log. We are using the
 spymemcached client. We are assuming it is time to increase the number of
 memcache servers, but before coming to that conclusion we need to look at
 the memcache statistics.

 In a single sentence: what is the best metric for telling whether my
 memcache farm needs more servers?

 -Selvaraj




-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial


Re: Reducing virtual memory usage of memcached

2011-02-06 Thread Roberto Spadim
Does memcached use fork or threads?

2011/2/6 dormando dorma...@rydia.net:
 I've found (part of) the answer to my own question - the virtual
 memory comes from the thread stacks.  Setting -t 1 reduces the virtual
 memory to around 20MB.

 I've read from other posts that it is no longer possible to have a non-
 threaded memcached version.  It appears on my system that the stack
 size being used is 10MB.  This is almost certainly way too large.  Is
 there any way to conveniently (i.e. without major edits to the source
 code) set the thread stack size for the threads that memcached uses,
 e.g. through a macro setting?

 There's some amount of overhead from pre-allocating the hash table and
 this and that... that'll show up as virtual memory until data's written
 into it. Also note that memcached will lazily allocate one slab page per
 slab class, so even if you set -m 12 you'll end up using 50+ megs of RAM
 if you put one item in each slab class.

 You could also use -I to lower the max item size and reduce some overhead.

 I don't think memcached explicitly sets the thread stack size, and I
 forget how to tweak that offhand; I think Google will tell you :P




-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial


Re: Memcache in multiple servers

2011-02-06 Thread Dustin

On Feb 6, 11:14 pm, Margalit Silver margalitatw...@gmail.com wrote:

 We set the key to expire after 6 hours, so currently we know that the maximum
 time a key could hold inconsistent data is 6 hours, but we would like it to
 be updated as soon as there is a DB write.  We know that this problem of
 inconsistent data would be solved if we used memcache as it is supposed to
 be used, by adding all the servers using the addServer function. However, we
 are hesitant to do so because of the time lag that would be caused by a
 client having to get data from another server; the reason we are running 4
 identical servers is to get quick responses to many client machines.

 Based on all of this we are leaning towards a solution of notifying all
 servers of an update.  In order not to impact response time and not to have
 these servers bogged down in notifications, the best solution might be one
 with a master server that notifies the other servers to invalidate cache on
 a DB write.

  It sounds like you're letting fear of a potential bottleneck you'd
have by using memcached the normal way lead you down the path of
something that won't scale well.

  When you have 40 (10x) servers, how much of the time spent on any
one of the frontends will be running work from your distributed cache
invalidation tool?  What will be the effect of having your cache size
not grow by 10x when your traffic and servers do?  What's the cost of
developing this distributed cache invalidation tool (assuming you'd be
building on something like spread, that's still just a starting
point)?  How does that cost compare to just doing the simple thing and
seeing how well it works for you?