On Wed, Jul 20, 2011 at 7:54 PM, kashani <kashani-l...@badapple.net> wrote:
> On 7/20/2011 4:08 PM, Grant wrote:
>>>>
>>>> I ran into an out of memory problem.  The first mention of it in the
>>>> kernel log is "mysqld invoked oom-killer".  I haven't run into this
>>>> before.  I do have a swap partition but I don't activate it based on
>>>> something I read previously that I later found out was wrong so I
>>>> suppose I should activate it.  Is fstab the way to do that?  I have a
>>>> commented line in there for swap.
>>>
>>> Yes, just uncomment it and it should be enabled automatically at
>>> boot. (You can use "swapon" to enable it without rebooting.)
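>>> For example (a rough sketch; the device name comes from the fdisk
>>> output below):
>>>
>>>   # /etc/fstab
>>>   /dev/sda2   none   swap   sw   0 0
>>>
>>>   # or enable it right away, without a reboot:
>>>   swapon /dev/sda2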
>>
>> Got it.
>>
>>>> Can anyone tell how much swap this is:
>>>>
>>>> /dev/sda2           80325     1140614      530145   82  Linux swap / Solaris
>>>>
>>>> If it's something like 512MB, that may not have prevented me from
>>>> running out of memory since I have 4GB RAM.  Is there any way to find
>>>> out if there was a memory leak or other problem that should be
>>>> investigated?
>>>
>>> That's about 512MB (fdisk's Blocks column is in 1K units, so 530145
>>> blocks is roughly 518MB). You can also create a swap file to
>>> supplement the swap partition if you don't want to or aren't able to
>>> repartition.
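>>> Creating one is roughly (size and path are just examples):
>>>
>>>   dd if=/dev/zero of=/swapfile bs=1M count=1024   # 1GB swap file
>>>   chmod 600 /swapfile
>>>   mkswap /swapfile
>>>   swapon /swapfile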
>>
>> Just so I'm sure I have the concept right: is adding a 1GB swap
>> partition functionally identical to adding 1GB of RAM with regard to
>> the potential for out-of-memory conditions?
>>
>>> I'd check the MySQL logs to see if they show anything. Maybe check
>>> the settings with regard to memory upper limits (Google it, there's
>>> a lot of info about MySQL RAM management).
>>
>> Nothing in the log, and from what I read online, an error should be
>> logged if I reach MySQL's memory limit.
>>
>>> If you're running any other servers that use MySQL, like Apache or
>>> something, check their access logs to see if you had an abnormal
>>> number of connections. Brute-force hacking or some kind of
>>> flooding/DoS attack might cause it to use more memory than it
>>> ordinarily would.
>>
>> It runs Apache, and I found some info there.
>>
>>> A basic "what's using up my memory?" technique is to log the output
>>> of "top" using its -b (batch mode) flag. Something like
>>> "top -b > toplog.txt". Then you can go back to the time when the OOM
>>> occurred and see what was using a lot of RAM at that time.
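>>> To keep the log manageable over a long stretch, sample less often
>>> and append (the 60-second interval is just a suggestion):
>>>
>>>   top -b -d 60 >> toplog.txt &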
>>
>> The kernel actually logged some top-like output, and it looks like I
>> had a large number of apache2 processes running, likely 256, which is
>> the default MaxClients. The reported total_vm for each process was
>> about 67000, which means 256 x 67MB = 17GB???
>>
>> I looked over my apache2 log, and I was hit hard by a single IP right
>> as the server went down. However, that IP appears to be a residential
>> customer in the US, and they engaged in normal browsing behavior both
>> before and after the disruption. I think that IP may have done the
>> refresh-100-times thing out of frustration as the server started to
>> go down.
>>
>> Does it sound like apache2 was using up all the memory?  If so, should
>> I look further for a catalyst or did this likely happen slowly?  What
>> can I do to prevent it from happening again?  Should I switch apache2
>> from prefork to threads?
>
>        Switching from prefork to threads and vice versa can be very
> difficult depending on which modules and libraries your site uses. It's
> not on the list of things you should try first. Or second. Maybe 37th.
>        I wouldn't expect adding swap to do much in this case. Your site
> gets hit hard, MySQL is a bit slow, Apache processes start stacking up,
> the system starts swapping, disk is really slow compared to RAM, and
> everything grinds to a complete halt, possibly locking the machine up.
>
>        The easiest thing to try is to turn off keepalives so child
> processes aren't hanging around keeping connections open. Also lower
> the number of Apache children to 8 * number of processors, or a
> minimum of 32. Test a bit. Turning off keepalive can cause problems
> for Flash-based uploaders on your site and code that expects the
> connection to stay up. For most sites this shouldn't matter.
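> Roughly, in httpd.conf (values are illustrative, sized by the
> 8 * CPUs rule of thumb with the minimum of 32):
>
>   KeepAlive Off
>   <IfModule mpm_prefork_module>
>       StartServers           4
>       MinSpareServers        4
>       MaxSpareServers        8
>       MaxClients            32
>   </IfModule>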
>
>        Next I'd look at tuning your MySQL config. If you've never
> touched my.cnf, by default it's set to use 64MB IIRC. You may need to
> raise this to get better performance. key_buffer and
> innodb_buffer_pool_size are the only two I'd modify without knowing
> more.
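> For example, in my.cnf (numbers are only a starting point; which one
> matters depends on whether your tables are MyISAM or InnoDB):
>
>   [mysqld]
>   key_buffer              = 128M
>   innodb_buffer_pool_size = 256M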

Also, run a caching proxy if at all possible. That made the single
biggest difference for my server.
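For example, Apache's own mod_cache/mod_disk_cache can do this
(assuming the modules are available; path and expiry are illustrative):

  <IfModule mod_disk_cache.c>
      CacheEnable disk /
      CacheRoot /var/cache/apache2
      CacheDefaultExpire 300
  </IfModule>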

Other useful things:
* Set MaxRequestsPerChild to something like 450. As part of their
caching, things like mod_php will grow the process size a bit as the
apache process gets long in the tooth. Setting MaxRequestsPerChild
lower causes the process to expire and be replaced sooner. On my
server, I see apache processes consume about 60MB towards the end,
then cycle back to about 22MB after the recycle.
* On my server, I have MinSpareServers at 10 and MaxSpareServers at
12. That handles spikes pretty well and frees the memory quickly.
* If you're using PHP, set memory_limit in php.ini as low as your
applications can survive. (All three settings are pulled together in
the sketch after this list.)
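Pulled together, that looks something like this (the Apache numbers
are mine from above; the php.ini value is just an example):

  # httpd.conf
  MaxRequestsPerChild  450
  MinSpareServers       10
  MaxSpareServers       12

  ; php.ini
  memory_limit = 64M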

I'm assuming you're running on a VPS or similar. At 512MB of RAM with
a web server and database server, you need to keep things very tight.


-- 
:wq
