Apache and PHP-FPM
Hi,

I have a web server running Apache 2.4 in worker mode, with multiple PHP versions served by FPM in dynamic mode. Sometimes PHP-FPM stops responding and I get a 503 error on PHP requests, while Apache still answers HTTP requests for other files (CSS, JS, JPEG, ...). Then, after a while, Apache stops responding too (nothing is logged in the error log nor in the access log). I have to restart the Apache and PHP-FPM services to unlock the server. Of course, this happens while there is malicious activity on the server; the last time it happened I found a SQL injection attempt that caused a lot of PHP errors (maybe an infinite loop).

How can I avoid this kind of situation? Can I optimize my current settings? Should I use mpm_event with Apache and/or the ondemand process manager with PHP-FPM?

mpm_worker config:

StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxRequestWorkers 150
MaxConnectionsPerChild 0

php-fpm pool config:

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

Thanks,
Martin
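In case it helps later readers: a sketch of the mpm_event + ondemand combination the question asks about. All numbers here are illustrative assumptions, not values tuned for this server.

```apache
# mpm_event sketch (event handles keep-alive connections without tying
# up a worker thread, which helps when PHP-FPM is the bottleneck)
<IfModule mpm_event_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
</IfModule>
```

```ini
; PHP-FPM pool sketch with ondemand (values are assumptions)
pm = ondemand
pm.max_children = 10
pm.process_idle_timeout = 10s
pm.max_requests = 500
; Abort scripts that run too long instead of letting them occupy a
; worker forever (relevant to the suspected infinite loop):
request_terminate_timeout = 60s
```

With pm.max_children = 5, five stuck requests (for instance, the flood of PHP errors from the SQL injection attempt) are enough to exhaust the pool and cause 503s, so request_terminate_timeout is arguably the more important change; raising pm.max_children only helps as far as memory allows.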
Re: Slow XFS write
Hi all,

After some research on my RAID controller, a PERC H310 Mini in a Dell server, I found a lot of reported performance issues. It has no WriteBack cache and no BBU. I have not tested writing directly to the partition yet because I have some backups to do first, but I think reinstalling the server on software RAID is the best solution here.

Thanks to all.

On 23/08/2018 at 09:07, Stefan K wrote:
> Hello,
>
> do you really need the hardware controller? I suggest using software/MD RAID, or btrfs with raid1, or zfs with raidz1.

> Looks suspiciously similar to an LSI MegaRAID. Is the controller firmware current? Is it possible to upgrade it? Since you seem to have a BBU, have you considered enabling WriteBack mode?
>
> Reco

> Can you rebuild the partition? If so, unmount it, then perform the dd directly to the device (of=/dev/sda2 or whatever). What's the hardware RAID vendor?

> Honestly, the performance of the ext4 partition is also horrible, just not as bad as the XFS partition.

> What's the hardware RAID model? Does it have a battery-backed cache? (I will guess not, because the test is only 1G and that should fit into most current caches.) If the test directly to the partition shows good performance, then it might be an alignment issue; try recreating the fs with `mkfs.xfs -d su=64k,sw=2 ...`. If the performance to the raw partition is the same (and still much worse than the root partition), then the partition alignment itself might be off, which means recreating the partitions. (fdisk and mkfs.xfs normally handle this automatically, but that depends on the RAID controller and driver passing the information up, and per your xfs_info output, that didn't happen.)
>
> Mike Stone
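For reference, a small sketch of where Mike's `su=64k,sw=2` values come from, using the figures quoted elsewhere in this thread (64 KB strip size, 3-disk RAID5); /dev/sda4 is the data partition named in the thread.

```shell
# Derive mkfs.xfs stripe parameters from the controller geometry
# (values taken from the RAID volume output in this thread).
STRIP_KB=64                      # "Strip Size : 64 KB"
NUM_DRIVES=3                     # "Number Of Drives: 3"
DATA_DISKS=$((NUM_DRIVES - 1))   # RAID5 keeps one disk's worth of parity
# Destructive: run only after backups, on the unmounted device.
echo "mkfs.xfs -f -d su=${STRIP_KB}k,sw=${DATA_DISKS} /dev/sda4"
```

With those values the echoed command is `mkfs.xfs -f -d su=64k,sw=2 /dev/sda4`, matching Mike's suggestion.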
Re: Slow XFS write
To complete the description, here is some info about the XFS partition:

meta-data=/dev/sda4              isize=256    agcount=11, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2920268544, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

And info about the RAID5 volume:

Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-5, Secondary-0, RAID Level Qualifier-3
Size                : 10.915 TB
Sector Size         : 512
Parity Size         : 5.457 TB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 3
Span Depth          : 1
Default Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: No
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: No

On 22/08/2018 at 11:41, Martin LEUSCH wrote:
> Hi,
>
> I have an NFS server with a hardware RAID5 across 3 HDDs of 6 TB each. I have a system partition with ext4 and a data partition with XFS. I get only 10 MB/s write speed on the XFS data partition and 80 MB/s on the system partition.
>
> XFS mount options: /dev/sda4 on /var/srv/nfs type xfs (rw,relatime,attr2,inode64,noquota)
>
> Is there a way to get better performance? Is XFS a good choice in this situation?
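Note that `sunit=0 swidth=0` in the data section means XFS never learned the stripe geometry from the controller. As a sketch (assuming the geometry shown in the controller output), the equivalent mount-time hints can be computed; XFS expresses the sunit/swidth mount options in 512-byte sectors:

```shell
# Convert the controller's 64 KB strip into XFS mount-option units.
STRIP_BYTES=$((64 * 1024))    # controller strip size: 64 KB
SUNIT=$((STRIP_BYTES / 512))  # 128 sectors per strip
SWIDTH=$((SUNIT * 2))         # 2 data disks in a 3-disk RAID5
echo "sunit=${SUNIT},swidth=${SWIDTH}"
```

The resulting `sunit=128,swidth=256` could be appended to the existing mount options for /var/srv/nfs (e.g. in /etc/fstab); this only hints allocation alignment to XFS and cannot fix partitions that are themselves misaligned.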
Re: Slow XFS write
On 22/08/2018 at 15:17, Michael Stone wrote:
> On Wed, Aug 22, 2018 at 02:25:51PM +0200, Martin LEUSCH wrote:
>> I tested write speed with a dd command like "dd if=/dev/zero of=/var/srv/nfs/testfile bs=1G count=1 oflag=direct". The 10 MB/s on the data partition also corresponds to the behavior I see in real use: when I copy a big file over NFS, 12 GB are copied quickly, then it hangs for 15 or 20 minutes. When I copy a big file with scp, it copies quickly at the beginning, then decreases to 10 MB/s until the end.
>
> What happens with: dd if=/dev/zero of=/var/srv/nfs/testfile bs=64k count=16k conv=fsync
>
> And this is direct on the server, not via NFS, right?

I got the same result as in the previous test. The dd tests are executed directly on the server.
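For anyone wanting to dry-run Michael's conv=fsync test first, here is a scaled-down version (16 MiB instead of 1 GiB) against a scratch path; the path is an assumption, so point TESTFILE at the XFS mount to measure the real device.

```shell
# Scaled-down streaming-write test with an fsync at the end; dd's last
# stderr line reports the throughput.
TESTFILE=/tmp/xfs-write-test              # assumption: scratch path
dd if=/dev/zero of="$TESTFILE" bs=64k count=256 conv=fsync 2>&1 | tail -n1
SIZE=$(stat -c %s "$TESTFILE")            # 256 * 64 KiB = 16777216 bytes
echo "$SIZE"
rm -f "$TESTFILE"
```

The small bs with a final fsync exercises the same streaming pattern as the suggested command, just without filling the disk while experimenting.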
Re: Slow XFS write
On 22/08/2018 at 13:15, Michael Stone wrote:
> On Wed, Aug 22, 2018 at 11:41:08AM +0200, Martin LEUSCH wrote:
>> I have an NFS server with a hardware RAID5 across 3 HDDs of 6 TB each. I have a system partition with ext4 and a data partition with XFS. I get only 10 MB/s write speed on the XFS data partition and 80 MB/s on the system partition.
>
> How are you testing this?

I tested write speed with a dd command like "dd if=/dev/zero of=/var/srv/nfs/testfile bs=1G count=1 oflag=direct". The 10 MB/s on the data partition also corresponds to the behavior I see in real use: when I copy a big file over NFS, 12 GB are copied quickly, then it hangs for 15 or 20 minutes. When I copy a big file with scp, it copies quickly at the beginning, then decreases to 10 MB/s until the end.
Slow XFS write
Hi,

I have an NFS server with a hardware RAID5 across 3 HDDs of 6 TB each. I have a system partition with ext4 and a data partition with XFS. I get only 10 MB/s write speed on the XFS data partition and 80 MB/s on the system partition.

XFS mount options: /dev/sda4 on /var/srv/nfs type xfs (rw,relatime,attr2,inode64,noquota)

Is there a way to get better performance? Is XFS a good choice in this situation?
Unpacking process is very slow with apt on VM under Debian 8
Hi,

I have a virtualization environment with a Proxmox cluster and another cluster with GlusterFS to store the disk images. With VMs under Debian 8, the unpacking step is very, very slow: around an hour to unpack linux-image-amd64, for example. I do not have this problem with Debian 7. I already tried mounting the FS with "nodelalloc" (ext4) and using "eatmydata", with no noticeable difference. I use raw disk images with no cache for my VMs to avoid instability with the GlusterFS storage. The transfer rate between Proxmox and GlusterFS can reach 15 MB/s.

How can I get better performance with apt-get or aptitude on Debian 8?
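One likely culprit: dpkg in Debian 8 fsync()s every unpacked file, which is pathological on a no-cache, Gluster-backed disk. Since eatmydata made no difference, dpkg's own built-in option may still be worth trying; it skips the per-file syncs, at the cost of possibly corrupted unpacked files (not a broken dpkg database) if the VM crashes mid-install:

```
# /etc/dpkg/dpkg.cfg.d/force-unsafe-io
force-unsafe-io
```

The same effect is available one-off as `dpkg --force-unsafe-io`, or through apt with `-o Dpkg::Options::=--force-unsafe-io`. If that still doesn't help, the bottleneck is probably the storage path itself, e.g. the "no cache" setting on the raw images.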