Re: [CentOS] Apache Server Tuning for Performance

2009-01-20 Thread Jed Reynolds
linux-crazy wrote:
 I want to know the difference between the worker MPM and the prefork
 MPM, how to find out which one my Apache server is using, and which one
 is recommended for a highly loaded server. If someone could also provide
 a link that best explains the comparison between the two, that would be
 very useful.

 Can anyone guide me on the tuning to be done for maximum utilization of
 the resources and better performance of the servers?


Most list members would likely advise sticking with the prefork 
configuration.  Without knowing what kind of applications you are 
running on your webserver, I wouldn't suggest changing it.

Merely increasing the number of workers might make performance worse.

Use ps or top to figure out how much memory each Apache worker is using.
Then decide how much ram you want to dedicate to Apache on your server
without going into swap. (Over-allocating and then paging out memory
will only make performance much worse.) For example, if I have 2G of
ram, I want 1.5G of it for Apache workers, and my average Apache worker
size (resident memory) is 65MB, then I have room for about 23 workers:
(1024 * 1.5) / 65. (There are more accurate ways to calculate this
usage, like taking shared memory into account.)
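
As a rough sketch of that arithmetic (it assumes the stock CentOS httpd
process name and the 1.5G budget from the example above):

# average resident size of the running Apache workers, in MB
ps -C httpd -o rss= | awk '{ sum += $1; n++ } END { printf "%.0f MB avg\n", sum / n / 1024 }'

# workers that fit in a 1.5G budget at 65MB per worker
echo $(( 1536 / 65 ))

The result is roughly what you'd set MaxClients to in the prefork
config, less some headroom for everything else running on the box.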

Upgrading the ram in your web server is a pretty fast interim solution.

Consider your application performance, too. The longer a request in your 
application takes, the more workers are in use on your web server, 
taking up more memory. If you have long-running queries in your 
database, take care of those first.
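
If that database is MySQL (an assumption on my part), the slow query
log is the usual way to find them; something like this in my.cnf, with
the threshold picked to taste:

# /etc/my.cnf (MySQL 5.0-era option names)
log-slow-queries = /var/log/mysql-slow.log
long_query_time = 2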

Good luck

Jed


Re: [CentOS] HA Storage Cookbook?

2008-11-10 Thread Jed Reynolds

Les Mikesell wrote:
But, I think the OP's real problem is that everything is tied to one 
single large drive (i.e. the software mirroring is mostly irrelevant as

...

I think that Les makes a good point, and I'd like to push it even more 
generally: when you provide network file storage, via SAN or NFS, as a 
single service instance, you need procedures and/or layers of caching to 
deal with outages.


I've been using a DRBD cluster joined by a bonded GigE switch and it 
replicates quite quickly. My issues have been related to Heartbeat and 
monitoring. We've learned it's very important to practice and tune the 
fail-over process, and to base failure detection on file system 
performance rather than merely pinging. It's also necessary to monitor 
application performance to see if your storage nodes are suffering load 
issues. I've seen a two-core NFS server perform reliably at a load of 
6-7, but it starts to get unhappy at any higher load.
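
For the detection side, here's a minimal sketch of what I mean by
checking file system performance instead of pinging; the mount point,
probe file, and threshold are all made up for illustration, and you'd
wire it into Heartbeat or your monitoring however fits:

#!/bin/sh
# time a small synchronous write to the shared mount
MOUNT=/mnt/shared
START=$(date +%s)
if ! dd if=/dev/zero of="$MOUNT/.probe" bs=4k count=1 oflag=sync 2>/dev/null; then
    echo "CRITICAL: write probe failed on $MOUNT"
    exit 2
fi
ELAPSED=$(( $(date +%s) - START ))
if [ "$ELAPSED" -gt 5 ]; then
    echo "WARNING: write probe took ${ELAPSED}s"
    exit 1
fi
echo "OK: write probe took ${ELAPSED}s"
exit 0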


Ironically, we've had absolutely no hard drive errors yet. The hardware 
failures that come to mind are motherboards and RAM: I've had more 
motherboard and RAM failures than drive failures with the systems we've 
had. Raid cards too: we've had to swap out two 3ware RAID controllers.


Network failures will get you down if you're looking for uptime as well: 
we recently had a NIC in one of our storage nodes get into a state where 
it was spouting 60Mbit of bad packets, creating quite a layer-2 
networking issue for two cabinets of web servers and two LDAP servers. 
When the LDAP servers couldn't respond, access to the storage nodes got 
even worse. It was a black day.


The next thing in our setup has to do with reliance on NFS. NFS may not 
be the best choice to put behind web servers, but it was quickest. We're 
adjusting our application to cache the data found on NFS on local 
file systems so that we can handle an NFS outage.


My take is: if you're a competent Linux admin, DRBD on appropriate 
servers will cost you less and be more maintainable than an appliance. 
The challenge, of course, is working out how to reduce response time 
when any hardware goes sour.



Good luck

Jed


[CentOS] updated Apache mod_expires?

2008-11-07 Thread Jed Reynolds

I noticed that the apache rpm  httpd-2.2.3-11.el5_1.centos.3.src.rpm
has a bug in mod_expires. 
https://issues.apache.org/bugzilla/show_bug.cgi?id=39774


Please forgive a dumb question: how risky would using a Fedora httpd RPM 
be on a CentOS5 install?
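
One way to gauge the risk before committing, just as a sketch (the
package file name is hypothetical; substitute whatever Fedora RPM you're
looking at):

# see what the package expects from the base system
rpm -qp --requires httpd-2.2.x-1.fcX.x86_64.rpm

# dry-run the install/upgrade without changing anything
rpm -Uvh --test httpd-2.2.x-1.fcX.x86_64.rpm

If it wants a newer glibc or APR than CentOS 5 ships, that's usually
the sign it won't drop in cleanly, and rebuilding the Fedora SRPM on
CentOS is the safer route.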



Jed


Re: [CentOS] Re: DRBD 8.2 crashes CentOS 5.2 on rsync from remote host

2008-08-17 Thread Jed Reynolds

Scott Silva wrote:

on 8-14-2008 12:55 AM Chris Miller spake the following:

nate wrote:

Chris Miller wrote:

I've got a pair of HA servers I'm trying to get into production.
Here are some specs :



[..]

[EMAIL PROTECTED] ~]# BUG: unable to handle kernel paging request at
virtual address c


This typically means bad RAM


While I won't rule this out, my local hardware vendor does a 48 hour 
burn-in
When the servers are shipped to you, do you open them and make sure 
all modules are seated completely, and haven't been dislodged by the 
shipping?


+1 on hardware issues... I won't name names, but recently I ordered two 
identical systems and had to send one of them back FOUR times: two bad 
RAID controllers, bad RAM, and a bad motherboard. This all started about 
4 weeks into production. I don't know if the vendor was actually doing 
burn-in, but I've seen plenty of damage from shipping. Do your own 
memory testing and line up another (nearly identical) server to verify 
the problem.
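
For the memory testing itself: memtest86+ from the install media is the
thorough option, and if you can't take the box down, memtester (assuming
the package is available) at least exercises whatever it can lock from a
running system:

# lock and test 1024MB of RAM for 3 passes
memtester 1024M 3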


Good luck,

Jed




Re: [CentOS] xen and nvidia

2008-08-17 Thread Jed Reynolds

sbeam wrote:
has anyone had any luck getting nvidia to work with the latest xen kernel 
under x86_64? I found an unsupported method involving IGNORE_XEN_PRESENCE 
[1], but it doesn't work for me. Everything google turns up seems to be a 
year old. prob nothing has changed but I just wonder.
  
I was only able to get this working briefly under Fedora 7 for a narrow 
window of kernel releases and nvidia.ko combinations. It 
was...frustrating. Good luck.


Jed


Re: [CentOS] centos 5.1 install , 3ware raid card...

2008-02-26 Thread Jed Reynolds

Tom Bishop wrote:
Installing a new system using a 3ware card, raid 5 across 4 disks; 
partitioning and formatting went smoothly and I loaded the apps that I 
need, but for some reason it appears grub was not installed, or not 
completely. I want to boot from the array; when installing grub, the 
loader asks whether to install the MBR or use the first partition. 
Should I use the partition instead of the MBR? When I boot up in rescue 
mode and go to /boot/grub all I see is splash... no other files. Any 
suggestions would be welcome... thanks.


I will guess you've splurged on four 750GB drives...?

Check on your partitioning, possibly using a tool like gparted. Very 
large volumes are not supported by MSDOS-style partition tables; you 
probably want to look into a different partitioning and formatting 
approach, i.e. a GPT label.


http://lists.centos.org/pipermail/centos/2007-February/074986.html
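
If it does turn out to be a >2TB array, a sketch of relabeling it GPT
with parted (the device name is illustrative, and this destroys anything
already on the array):

parted /dev/sdb mklabel gpt
parted -- /dev/sdb mkpart primary ext3 0 -1
mkfs.ext3 /dev/sdb1

Keep in mind that legacy grub can't boot from a GPT-labeled disk; a
common workaround is to carve out a small separate boot volume (e.g. a
second unit on the 3ware controller, if it supports that) and use GPT
only for the big data volume.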

Jed


Re: [CentOS] HD Failures

2008-02-26 Thread Jed Reynolds

Jimmy Bradley wrote:

  I'm just curious if anyone else has noticed this. I've bought
hard drives from both Walmart and Best Buy. If I can wait, I order them
from newegg.com. I'm beginning to think that the staff at both Walmart
and Best Buy, or somewhere along the supply line, must dribble the drives
like basketballs. The reason I say that is all the drives I have bought
from those two places fail within a few months' time. Has anyone else
noticed that? Just curious.


You might want to consider them as possibly recycled drives. If you 
don't have a copy of SpinRite, you can force the drive to check all its 
sectors with fsck:


fsck -f -y -c -c

or, if you are formatting,

mkfs.ext3 -c -c

will also do this check.

Passing -c twice runs a read-write test of every block, which should 
force the drive to update its SMART statistics and remap bad sectors.
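
You can also ask the drive directly about its health with smartmontools;
a sketch, with an illustrative device name:

# look for reallocated or pending sectors in the SMART attributes
smartctl -a /dev/sda | grep -i -E 'reallocated|pending|uncorrect'

# kick off the drive's own extended self-test and check the results later
smartctl -t long /dev/sda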


Jed


Re: [CentOS] Re: DRBD NFS load issues

2008-01-07 Thread Jed Reynolds

Ugo Bellavance wrote:

Jed Reynolds wrote:

Jed Reynolds wrote:

Ugo Bellavance wrote:



Can you send us the output of vmstat -n 5 5
when you're doing a backup?



This is with rsync at bwlimit=2500



This is doing the same transfer with SSH. The load still climbs...and 
then load drops. I think NFS is the issue.


I wonder if my NFS connection settings in the client fstabs are unwise? I 
figured that with a beefy machine and fast networking, I could take 
advantage of large packet sizes. Bad packet sizes?



Are you backing up nfs to nfs?  From where to where are you doing 
backups?


The source data is on an ext3 partition, on an LVM volume, backed by a 
15krpm raid 10 volume. Both rsyncs were conducted from the source host 
(the db server) to the backup server (which hosts nfs). In the nfs 
backup, I was rsyncing from the db filesystem to an NFS mount, and in 
the ssh backup, I was rsyncing from the db filesystem to [EMAIL PROTECTED]:/backups.
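
Roughly what the two jobs looked like, for reference (paths and host
name are made up for illustration; the bwlimit matches the tests above):

# over the NFS mount
rsync -a --bwlimit=2500 /var/lib/mysql-backup/ /mnt/nfs-backups/db/

# over ssh straight to the backup host
rsync -a --bwlimit=2500 -e ssh /var/lib/mysql-backup/ backuphost:/backups/db/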


Jed


Re: [CentOS] Log Monitoring Recommendation

2008-01-07 Thread Jed Reynolds

Joseph L. Casale wrote:


Given my experience in Linux is limited currently, what do you guys 
use to monitor logs such as ‘messages’ on your centos servers? I had a 
hardware failure that happened in between me manually looking (of 
course…). I would hope it might have some features to email critical 
issues etc…




Depends on if you're monitoring just one server or a bunch.

I'd google for these things:

LogWatch
epylog
Big Sister
Oak

Then there are various tools that read syslog and can produce reports 
for you. Google around for things like syslog-ng, nagios, zenoss, and 
whatnot, if you're looking at a larger scope.
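
LogWatch is usually present on a stock CentOS install and mails a daily
summary; a minimal sketch of pointing it somewhere useful (the address
is obviously made up):

# /etc/logwatch/conf/logwatch.conf
MailTo = admin@example.com
Detail = Med
Range = yesterday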


Jed


[CentOS] DRBD NFS load issues

2008-01-06 Thread Jed Reynolds

My NFS setup is a heartbeat setup on two servers running Active/Passive
DRBD. The NFS servers themselves are 1x 2 core Opterons with 8G ram and
5TB space with 16 drives and a 3ware controller. They're connected to a
HP procurve switch with bonded ethernet. The sync-rates between the two
DRBD nodes seem to safely reach 200Mbps or better. The processors on the
active NFS servers run with a load of 0.2, so it seems mighty healthy.
Until I do a serious backup.

I have a few load balanced web nodes and two database nodes as NFS
clients. When I start backing up my database to a mounted NFS partition,
a plain rsync drives the NFS box through the roof and forces a failover.
I can do my backup using --bwlimit=1500, but then I'm not anywhere close
to a fast backup, just 1.5MBps. My backups are probably 40G. (The
database has fast disks, and during database-to-database copies I see
rates of up to 60MBps - close to 500Mbps.) So I obviously do not have a
networking issue.

The processor loads up like this:
bwlimit 1500   load 2.3
bwlimit 2500   load 3.5
bwlimit 4500   load 5.5+

The DRBD secondary seems to run at about 1/2 the load of the primary.

What I'm wondering is--why is this thing *so* load sensitive? Is it
DRBD? Is it NFS? I'm guessing that since I only have two cores in the
NFS boxes, a prolonged transfer lets NFS dominate one core and DRBD
dominate the other, and so I'm saturating my processor.
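
One way to check that guess during a transfer, as a sketch (mpstat comes
with the sysstat package):

# per-core breakdown every 5 seconds; watch %sys and %iowait on each CPU
mpstat -P ALL 5

# or, in top, press '1' to split the Cpu summary line out per core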

Thoughts?

Jed




Re: [CentOS] Re: DRBD NFS load issues

2008-01-06 Thread Jed Reynolds

Ugo Bellavance wrote:

Jed Reynolds wrote:

My NFS setup is a heartbeat setup on two servers running Active/Passive
DRBD. The NFS servers themselves are 1x 2 core Opterons with 8G ram and
5TB space with 16 drives and a 3ware controller. They're connected to a
HP procurve switch with bonded ethernet. The sync-rates between the two
DRBD nodes seem to safely reach 200Mbps or better. The processors on the
active NFS servers run with a load of 0.2, so it seems mighty healthy.
Until I do a serious backup.

I have a few load balanced web nodes and two database nodes as NFS
clients. When I start backing up my database to a mounted NFS partition,
a plain rsync drives the NFS box through the roof and forces a failover.
I can do my backup using --bwlimit=1500, but then I'm not anywhere close
to a fast backup, just 1.5MBps. My backups are probably 40G. (The
database has fast disks, and during database-to-database copies I see
rates of up to 60MBps - close to 500Mbps.) So I obviously do not have a
networking issue.

The processor loads up like this:
bwlimit 1500   load 2.3
bwlimit 2500   load 3.5
bwlimit 4500   load 5.5+

The DRBD secondary seems to run at about 1/2 the load of the primary.

What I'm wondering is--why is this thing *so* load sensitive? Is it
DRBD? Is it NFS? I'm guessing that since I only have two cores in the
NFS boxes, a prolonged transfer lets NFS dominate one core and DRBD
dominate the other, and so I'm saturating my processor.


Is your CPU usage 100% all the time?



Not 100% user or 100% system--not even close.
Wow. Looks like a lot of I/O wait time to me, actually.

Looking at the stats below, I'd think that with so much iowait, it's 
either disk or network latency. I wonder if packets going through the 
drbd device are ... the wrong size? Or the drbd device is waiting for a 
response from the secondary? Seems strange.


The only other thing running on that system is memcached, which uses 11% 
cpu. About 200 connections open to memcached from other hosts. There 
were 8 nfsd instances.
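
For what it's worth, 8 is just the stock nfsd thread count; if the nfsd
threads themselves are the bottleneck, bumping the count is a cheap
experiment (standard CentOS knob, value picked arbitrarily):

# /etc/sysconfig/nfs
RPCNFSDCOUNT=16

# then: service nfs restart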




Can you send us the output of vmstat -n 5 5
when you're doing a backup?



This is with rsync at bwlimit=2500

top - 22:37:23 up 3 days, 10:07,  4 users,  load average: 4.67, 2.37, 1.30
Tasks: 124 total,   1 running, 123 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.3% us,  1.3% sy,  0.0% ni,  9.3% id, 87.7% wa,  0.3% hi,  1.0% si
Cpu1  :  0.0% us,  3.3% sy,  0.0% ni,  8.0% id, 83.7% wa,  1.7% hi,  3.3% si
Mem:   8169712k total,  8148616k used,    21096k free,   296636k buffers
Swap:  4194296k total,      160k used,  4194136k free,  6295284k cached

$ vmstat -n 5 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0 10160  24136 304208 6277104009538   2263  0  2 89  9
 0 10160  28224 304228 6277288003664 2015   707  0  3  0 97
 0  0160  28648 304316 628032800   62928 3332  1781  0  4 65 31
 0  8160  26784 304384 628338800   629   106 4302  3085  1  5 70 25
 0  0160  21520 304412 628730400   763   104 3487  1944  0  4 78 18


$ vmstat -n 5 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0160  26528 301516 6287820009538   2263  0  2 89  9
 0  0160  21288 301600 629276800   99986 4856  3273  0  2 87 11
 2  8160  19408 298304 628396000   294 15293 33983 15309  0 22 53 25
 0 10160  28360 298176 62812320034   266 2377   858  0  2  0 97
 0 10160  33680 298196 6281552003248 1937   564  0  1  4 96








Re: [CentOS] Re: DRBD NFS load issues

2008-01-06 Thread Jed Reynolds

Jed Reynolds wrote:

Ugo Bellavance wrote:



Can you send us the output of vmstat -n 5 5
when you're doing a backup?



This is with rsync at bwlimit=2500



This is doing the same transfer with SSH. The load still climbs...and 
then load drops. I think NFS is the issue.


I wonder if my NFS connection settings in the client fstabs are unwise? I 
figured that with a beefy machine and fast networking, I could take 
advantage of large packet sizes. Bad packet sizes?


rw,hard,intr,rsize=16384,wsize=16384
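
In context, the full client fstab entry looks something like this
(server and paths are made up for illustration; only the options above
are the real ones):

backupnfs:/export/backups  /mnt/backups  nfs  rw,hard,intr,rsize=16384,wsize=16384  0 0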


top - 23:04:35 up 3 days, 10:34,  4 users,  load average: 4.08, 3.06, 2.81
Tasks: 132 total,   1 running, 131 sleeping,   0 stopped,   0 zombie
Cpu0  :  5.7% us,  1.7% sy,  0.0% ni, 72.0% id, 19.3% wa,  0.7% hi,  0.7% si
Cpu1  :  1.3% us,  3.0% sy,  0.0% ni, 38.4% id, 51.0% wa,  0.7% hi,  5.6% si
Mem:   8169712k total,  8149288k used,    20424k free,   162628k buffers
Swap:  4194296k total,      160k used,  4194136k free,  6374960k cached

then

top - 23:08:49 up 3 days, 10:39,  4 users,  load average: 0.89, 1.86, 2.38
Tasks: 129 total,   1 running, 128 sleeping,   0 stopped,   0 zombie
Cpu0  :  5.2% us,  2.8% sy,  0.0% ni, 63.7% id, 23.4% wa,  1.2% hi,  3.8% si
Cpu1  :  1.2% us,  3.2% sy,  0.0% ni, 65.9% id, 27.3% wa,  1.0% hi,  1.4% si
Mem:   8169712k total,  8149512k used,    20200k free,   141388k buffers
Swap:  4194296k total,      160k used,  4194136k free,  6388856k cached


$ vmstat -n 5 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0160  18712 155060 6383956009645   4270  0  2 89  9
 0  0160  20128 154328 638298800   421  2578 7622  2433  3  4 64 29
 0  0160  18192 153920 638407600   126  2498 7116  2238  3  6 72 19
 0  1160  22872 153684 638064000   110  2451 7065  2063  3  4 64 28
 0  0160  23880 153416 63797520034  2520 7091  2506  3  4 68 25


