Re: [PERFORM] SSD + RAID
Matthew Wakeling wrote:
> On Fri, 13 Nov 2009, Greg Smith wrote:
>> In order for a drive to work reliably for database use such as for
>> PostgreSQL, it cannot have a volatile write cache.  You either need a
>> write cache with a battery backup (and a UPS doesn't count), or to turn
>> the cache off.  The SSD performance figures you've been looking at are
>> with the drive's write cache turned on, which means they're completely
>> fictitious and exaggerated upwards for your purposes.  In the real
>> world, that will result in database corruption after a crash one day.
>
> Seagate are claiming to be on the ball with this one.
> http://www.theregister.co.uk/2009/12/08/seagate_pulsar_ssd/

I have updated our documentation to mention that even SSD drives often
have volatile write-back caches.  Patch attached and applied.

--
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB                           http://enterprisedb.com

  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

Index: doc/src/sgml/wal.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/wal.sgml,v
retrieving revision 1.61
diff -c -c -r1.61 wal.sgml
*** doc/src/sgml/wal.sgml	3 Feb 2010 17:25:06 -	1.61
--- doc/src/sgml/wal.sgml	20 Feb 2010 18:26:40 -
***************
*** 59,65 ****
      same concerns about data loss exist for write-back drive caches as
      exist for disk controller caches.  Consumer-grade IDE and SATA drives are
      particularly likely to have write-back caches that will not survive a
!     power failure.  To check write caching on <productname>Linux</> use
      <command>hdparm -I</>; it is enabled if there is a <literal>*</> next
      to <literal>Write cache</>; <command>hdparm -W</> to turn off
      write caching.  On <productname>FreeBSD</> use
--- 59,66 ----
      same concerns about data loss exist for write-back drive caches as
      exist for disk controller caches.  Consumer-grade IDE and SATA drives are
      particularly likely to have write-back caches that will not survive a
!     power failure.  Many solid-state drives also have volatile write-back
!     caches.  To check write caching on <productname>Linux</> use
      <command>hdparm -I</>; it is enabled if there is a <literal>*</> next
      to <literal>Write cache</>; <command>hdparm -W</> to turn off
      write caching.  On <productname>FreeBSD</> use

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
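The hdparm check described in the patch can be scripted.  A minimal sketch, assuming the usual `hdparm -I` output format where an enabled feature is marked with a `*` (the sample text below is illustrative, not from a real drive — on a live system you would run `sudo hdparm -I /dev/sda` instead):

```shell
# Canned sample of the "Commands/features" section of hdparm -I output,
# so the parsing logic is self-contained.
sample='Commands/features:
	Enabled	Supported:
	   *	Write cache
	   	Read look-ahead'

# A line whose first non-blank character is "*" before "Write cache"
# means the volatile write cache is enabled.
if printf '%s\n' "$sample" | grep -q '^[[:space:]]*\*[[:space:]]*Write cache'; then
    echo "write cache: enabled"
else
    echo "write cache: disabled"
fi

# To turn the (volatile, database-unsafe) write cache off:
#   sudo hdparm -W0 /dev/sda
```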
Re: [PERFORM] SSD + RAID
Bruce Momjian wrote:
> Matthew Wakeling wrote:
>> On Fri, 13 Nov 2009, Greg Smith wrote:
>>> In order for a drive to work reliably for database use such as for
>>> PostgreSQL, it cannot have a volatile write cache.  You either need a
>>> write cache with a battery backup (and a UPS doesn't count), or to turn
>>> the cache off.  The SSD performance figures you've been looking at are
>>> with the drive's write cache turned on, which means they're completely
>>> fictitious and exaggerated upwards for your purposes.  In the real
>>> world, that will result in database corruption after a crash one day.
>> Seagate are claiming to be on the ball with this one.
>> http://www.theregister.co.uk/2009/12/08/seagate_pulsar_ssd/
>
> I have updated our documentation to mention that even SSD drives often
> have volatile write-back caches.  Patch attached and applied.

Hmmm.  That got me thinking: consider ZFS and HDD with volatile cache.
Do the characteristics of ZFS avoid this issue entirely?

--
Dan Langille

BSDCan - The Technical BSD Conference : http://www.bsdcan.org/
PGCon  - The PostgreSQL Conference :    http://www.pgcon.org/
Re: [PERFORM] SSD + RAID
Dan Langille wrote:
> Bruce Momjian wrote:
>> I have updated our documentation to mention that even SSD drives often
>> have volatile write-back caches.  Patch attached and applied.
>
> Hmmm.  That got me thinking: consider ZFS and HDD with volatile cache.
> Do the characteristics of ZFS avoid this issue entirely?

No, I don't think so.  ZFS only avoids partial page writes.  ZFS still
assumes that something sent to the drive is permanent, or it would have
no way to operate.

--
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB                           http://enterprisedb.com

  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +
[PERFORM] AutoVacuum_NapTime
I have a system with around 330 databases running PostgreSQL 8.4.2.

What would the expected behavior be with autovacuum_naptime set to the
default of 1min and autovacuum_max_workers set to 3?  What I'm observing
is that the system is continuously vacuuming databases.  Would these
settings mean the autovacuum workers would try to vacuum all 330
databases once per minute?

George Sexton
Re: [PERFORM] AutoVacuum_NapTime
George Sexton <geor...@mhsoftware.com> writes:
> I have a system with around 330 databases running PostgreSQL 8.4.2.
> What would the expected behavior be with AutoVacuum_NapTime set to the
> default of 1m and autovacuum_workers set to 3?

autovacuum_naptime is the cycle time for any one database, so you'd get
an autovac worker launched every 60/330 seconds ...

			regards, tom lane
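Tom's 60/330 figure is quick arithmetic.  A sketch with the numbers from the thread (nothing here queries a real cluster; the values are just the settings George described):

```shell
naptime_seconds=60   # autovacuum_naptime = 1min
databases=330        # databases in the cluster

# The launcher tries to visit every database once per naptime, so the
# interval between worker launches is naptime / number of databases.
awk -v n="$naptime_seconds" -v d="$databases" \
    'BEGIN { printf "one worker launched every %.2f seconds\n", n / d }'
```

A worker firing roughly every 0.18 seconds is consistent with the "continuously vacuuming" behavior George observed.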
Re: [PERFORM] AutoVacuum_NapTime
> -----Original Message-----
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Saturday, February 20, 2010 6:15 PM
> To: George Sexton
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] AutoVacuum_NapTime
>
> autovacuum_naptime is the cycle time for any one database, so you'd get
> an autovac worker launched every 60/330 seconds ...
>
> 			regards, tom lane

Thanks.  That's non-optimal for my usage.  I'll change it.

Another question then.  Say I set it to 720 minutes, which if I
understand things would see each database done twice per day.  If I'm
cold starting the system, would it vacuum all 330 databases and then
wait 720 minutes and then do them all again, or would it distribute the
databases more or less evenly over the time period?

George Sexton
MH Software, Inc.  http://www.mhsoftware.com/
Voice: 303 438 9585
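A change like the one George describes would go in postgresql.conf.  A hedged sketch with illustrative values only (whether the launcher spreads a cold start evenly across the naptime is exactly the open question in this thread):

```
# postgresql.conf -- illustrative values, not a recommendation
autovacuum_naptime = '720min'    # cycle time per database
autovacuum_max_workers = 3       # maximum concurrent autovacuum workers
```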
[PERFORM] can we optimize STACK_DEPTH_SLOP
hi,

STACK_DEPTH_SLOP stands for "Required daylight between max_stack_depth
and the kernel limit, in bytes."  Why do we need so much memory?  MySQL
needs no more than 100K.  What is this memory allocated for?  Can we do
something to decrease this value?  Thanks.
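The relationship being asked about can be sketched in shell.  A minimal illustration, assuming the 512K slop figure from this thread (check the actual constant in the PostgreSQL source on your version); `ulimit -s` reports the kernel stack limit in kB:

```shell
slop_kb=512                 # STACK_DEPTH_SLOP headroom, per the question
rlimit_kb=$(ulimit -s)      # kernel stack rlimit in kB; may be "unlimited"

if [ "$rlimit_kb" = "unlimited" ]; then
    echo "stack rlimit is unlimited; the slop is not the binding limit"
else
    # max_stack_depth must stay at least the slop below the kernel limit,
    # so non-stack-checked code paths still have room to run.
    echo "highest safe max_stack_depth: $((rlimit_kb - slop_kb)) kB"
fi
```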
Re: [PERFORM] can we optimize STACK_DEPTH_SLOP
Thanks for your help!  But why do we set STACK_DEPTH_SLOP to 512K rather
than 128K?  What is that figure based on?