Re: [PERFORM] Testing Sandforce SSD

2010-08-11 Thread Bruce Momjian
Greg Smith wrote: * How to test for power failure? I've had good results using one of the early programs used to investigate this class of problems: http://brad.livejournal.com/2116715.html?page=2 FYI, this tool is mentioned in the Postgres documentation:

Re: [PERFORM] Testing Sandforce SSD

2010-08-05 Thread Brad Nicholson
On 10-08-04 03:49 PM, Scott Carey wrote: On Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote: On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga yebhavi...@gmail.com wrote: After a week of testing I think I can answer the question above: does it work like it's supposed to under PostgreSQL? YES The

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Hannu Krosing
On Tue, 2010-08-03 at 10:40 +0200, Yeb Havinga wrote: Please note that the 10% was on a slower CPU. On a more recent CPU the difference was 47%, based on tests that ran for an hour. I am not surprised at all that reading and writing almost twice as much data from/to disk takes 47% longer. If less

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Scott Carey
On Jul 26, 2010, at 12:45 PM, Greg Smith wrote: Yeb Havinga wrote: I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory read/write test. (scale 300) No real winners or losers, though ext2 isn't really faster and the manual need for fix (y) during boot makes it

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Scott Carey
On Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote: On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga yebhavi...@gmail.com wrote: After a week of testing I think I can answer the question above: does it work like it's supposed to under PostgreSQL? YES The drive I have tested is the $435,- 50GB

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Scott Carey
On Aug 3, 2010, at 9:27 AM, Merlin Moncure wrote: 2) I've heard that some SSD have utilities that you can use to query the write cycles in order to estimate lifespan. Does this one, and is it possible to publish the output (an approximation of the amount of work along with this would be

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Chris Browne
g...@2ndquadrant.com (Greg Smith) writes: Yeb Havinga wrote: * What filesystem to use on the SSD? To minimize writes and maximize chance for seeing errors I'd choose ext2 here. I don't consider there to be any reason to deploy any part of a PostgreSQL database on ext2. The potential for

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Chris Browne
j...@commandprompt.com (Joshua D. Drake) writes: On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote: Greg Smith wrote: Note that not all of the Sandforce drives include a capacitor; I hope you got one that does! I wasn't aware any of the SF drives with a capacitor on them were even

Re: [PERFORM] Testing Sandforce SSD

2010-08-03 Thread Yeb Havinga
Scott Marlowe wrote: On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith g...@2ndquadrant.com wrote: Josh Berkus wrote: That doesn't make much sense unless there's some special advantage to a 4K blocksize with the hardware itself. Given that pgbench is always doing tiny updates to

Re: [PERFORM] Testing Sandforce SSD

2010-08-03 Thread Yeb Havinga
Hannu Krosing wrote: Did it fit in shared_buffers, or system cache? Database was ~5GB, server has 16GB, shared buffers was set to 1920MB. I first noticed this several years ago, when doing a COPY to a large table with indexes took noticeably longer (2-3 times longer) when the indexes were

Re: [PERFORM] Testing Sandforce SSD

2010-08-03 Thread Greg Smith
Yeb Havinga wrote: Small IO size: 4 KB, Maximum Small IOPS=86883 @ Small=8 and Large=0; Small IO size: 8 KB, Maximum Small IOPS=48798 @ Small=11 and Large=0. Conclusion: you can write 4KB blocks almost twice as fast as 8KB ones. This is a useful observation about the effectiveness of the write
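
A quick sanity check on those two figures (plain arithmetic on the numbers quoted above, nothing newly measured): the 4 KB setting delivers roughly 1.8x the IOPS, but the 8 KB setting actually moves slightly more raw bandwidth, so any pgbench gain from a 4 KB block size presumably comes from the smaller write unit matching pgbench's tiny per-transaction updates rather than from extra throughput.

    # Plain arithmetic on the IOPS figures quoted above (4 KB vs 8 KB writes).
    iops_4k, iops_8k = 86883, 48798              # maximum small IOPS per IO size
    bw_4k = iops_4k * 4 * 1024 / 1e6             # MB/s at 4 KB
    bw_8k = iops_8k * 8 * 1024 / 1e6             # MB/s at 8 KB
    print(f"4 KB: {iops_4k} IOPS, about {bw_4k:.0f} MB/s")
    print(f"8 KB: {iops_8k} IOPS, about {bw_8k:.0f} MB/s")
    print(f"IOPS ratio 4K/8K: {iops_4k / iops_8k:.2f}")
    # 4 KB: 86883 IOPS, about 356 MB/s
    # 8 KB: 48798 IOPS, about 400 MB/s
    # IOPS ratio 4K/8K: 1.78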

Re: [PERFORM] Testing Sandforce SSD

2010-08-03 Thread Yeb Havinga
Yeb Havinga wrote: Hannu Krosing wrote: Did it fit in shared_buffers, or system cache? Database was ~5GB, server has 16GB, shared buffers was set to 1920MB. I first noticed this several years ago, when doing a COPY to a large table with indexes took noticeably longer (2-3 times longer)

Re: [PERFORM] Testing Sandforce SSD

2010-08-03 Thread Merlin Moncure
On Tue, Aug 3, 2010 at 11:37 AM, Yeb Havinga yebhavi...@gmail.com wrote: Yeb Havinga wrote: Hannu Krosing wrote: Did it fit in shared_buffers, or system cache? Database was ~5GB, server has 16GB, shared buffers was set to 1920MB. I first noticed this several years ago, when doing a COPY

Re: [PERFORM] Testing Sandforce SSD

2010-08-02 Thread Merlin Moncure
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga yebhavi...@gmail.com wrote: After a week of testing I think I can answer the question above: does it work like it's supposed to under PostgreSQL? YES The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro,

Re: [PERFORM] Testing Sandforce SSD

2010-08-02 Thread Yeb Havinga
Merlin Moncure wrote: On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga yebhavi...@gmail.com wrote: Postgres settings: 8.4.4 --with-blocksize=4 I saw about 10% increase in performance compared to 8KB blocksizes. That's very interesting -- we need more testing in that department...

Re: [PERFORM] Testing Sandforce SSD

2010-08-02 Thread Scott Marlowe
On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith g...@2ndquadrant.com wrote: Josh Berkus wrote: That doesn't make much sense unless there's some special advantage to a 4K blocksize with the hardware itself. Given that pgbench is always doing tiny updates to blocks, I wouldn't be surprised if

Re: [PERFORM] Testing Sandforce SSD

2010-07-30 Thread Karl Denninger
6700 tps?! Wow... OK, I'm impressed. I may wait a bit for prices to come down somewhat, but that sounds like two of those are going in one of my production machines (RAID 1, of course). Yeb Havinga wrote: Greg Smith wrote: Greg Smith wrote: Note that not all of the Sandforce drives include a

Re: [PERFORM] Testing Sandforce SSD

2010-07-29 Thread Michael Stone
On Wed, Jul 28, 2010 at 03:45:23PM +0200, Yeb Havinga wrote: Due to the LBA remapping of the SSD, I'm not sure if putting files that are sequentially written in a different partition (together with e.g. tables) would make a difference: in the end the SSD will have a set of new blocks in its

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Michael Stone
On Mon, Jul 26, 2010 at 01:47:14PM -0600, Scott Marlowe wrote: Note that SSDs aren't usually real fast at large sequential writes though, so it might be worth putting pg_xlog on a spinning pair in a mirror and seeing how much, if any, the SSD drive speeds up when not having to do pg_xlog. xlog

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Michael Stone
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guarantee your data is written sequentially every time? If you dedicate a partition to xlog, you already

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Yeb Havinga
Michael Stone wrote: On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guarantee your data is written sequentially every time? If you dedicate a

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Yeb Havinga
Yeb Havinga wrote: Michael Stone wrote: On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guarantee your data is written sequentially every time? If

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Greg Spiegelberg
On Wed, Jul 28, 2010 at 9:18 AM, Yeb Havinga yebhavi...@gmail.com wrote: Yeb Havinga wrote: Due to the LBA remapping of the SSD, I'm not sure if putting files that are sequentially written in a different partition (together with e.g. tables) would make a difference: in the end the SSD will

Re: [PERFORM] Testing Sandforce SSD

2010-07-27 Thread Hannu Krosing
On Mon, 2010-07-26 at 14:34 -0400, Greg Smith wrote: Matthew Wakeling wrote: Yeb also made the point - there are far too many points on that graph to really tell what the average latency is. It'd be instructive to have a few figures, like only x% of requests took longer than y. Average
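
The percentile figures asked for here ("only x% of requests took longer than y") are easy to pull out of the per-transaction log pgbench writes when run with -l. A minimal sketch, assuming the 8.4-era log format in which the third whitespace-separated field is the transaction latency in microseconds (check your pgbench version's documentation if the numbers look off); the script name is just a placeholder:

    # percentiles.py: rough percentile summary of a pgbench -l transaction log.
    # Assumes the third whitespace-separated field on each line is the
    # per-transaction latency in microseconds (the format used by the
    # pgbench shipped with PostgreSQL 8.4).
    import sys

    latencies = []
    with open(sys.argv[1]) as log:
        for line in log:
            fields = line.split()
            if len(fields) >= 3:
                latencies.append(int(fields[2]) / 1000.0)   # microseconds -> ms

    latencies.sort()
    for pct in (50, 90, 95, 99, 99.9):
        idx = min(len(latencies) - 1, int(len(latencies) * pct / 100))
        print(f"{pct}% of transactions finished within {latencies[idx]:.1f} ms")

Point it at the pgbench_log.<pid> file pgbench leaves in its working directory.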

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Yeb Havinga
Yeb Havinga wrote: Greg Smith wrote: Put it on ext3, toggle on noatime, and move on to testing. The overhead of the metadata writes is the least of the problems when doing write-heavy stuff on Linux. I did a pgbench run and a power-failure test during pgbench with a 3-year-old computer On

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Matthew Wakeling
On Sun, 25 Jul 2010, Yeb Havinga wrote: Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at http://tinypic.com/r/x5e846/3 Does your latency graph really have milliseconds as the y axis? If so, this device is really slow - some requests have a latency of more than a second! Matthew

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Yeb Havinga
Matthew Wakeling wrote: On Sun, 25 Jul 2010, Yeb Havinga wrote: Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at http://tinypic.com/r/x5e846/3 Does your latency graph really have milliseconds as the y axis? Yes If so, this device is really slow - some requests have a latency of

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Matthew Wakeling wrote: Does your latency graph really have milliseconds as the y axis? If so, this device is really slow - some requests have a latency of more than a second! Have you tried that yourself? If you generate one of those with standard hard drives and a BBWC under Linux, I

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Yeb Havinga wrote: Please remember that particular graphs are from a read/write pgbench run on a bigger than RAM database that ran for some time (so with checkpoints), on a *single* $435 50GB drive without BBU raid controller. To get similar *average* performance results you'd need to put

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Matthew Wakeling
On Mon, 26 Jul 2010, Greg Smith wrote: Matthew Wakeling wrote: Does your latency graph really have milliseconds as the y axis? If so, this device is really slow - some requests have a latency of more than a second! Have you tried that yourself? If you generate one of those with standard

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Yeb Havinga
Matthew Wakeling wrote: Apologies, I was interpreting the graph as the latency of the device, not all the layers in-between as well. There isn't any indication in the email with the graph as to what the test conditions or software are. That info was in the email preceding the graph mail, but I

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Spiegelberg
On Mon, Jul 26, 2010 at 10:26 AM, Yeb Havinga yebhavi...@gmail.com wrote: Matthew Wakeling wrote: Apologies, I was interpreting the graph as the latency of the device, not all the layers in-between as well. There isn't any indication in the email with the graph as to what the test conditions

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Matthew Wakeling wrote: Yeb also made the point - there are far too many points on that graph to really tell what the average latency is. It'd be instructive to have a few figures, like only x% of requests took longer than y. Average latency is the inverse of TPS. So if the result is, say,
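
To make that relationship concrete (a simple Little's-law estimate with an illustrative client count, not a measurement from this thread): with c concurrent clients the average latency is roughly c divided by TPS, which is exactly why a single average hides the multi-second outliers visible in the graph.

    # Average latency from TPS and client count (Little's law): latency = clients / TPS.
    # The 6700 TPS figure appears in this thread; 16 clients is just an illustrative value.
    clients, tps = 16, 6700
    avg_latency_ms = clients / tps * 1000
    print(f"{clients} clients at {tps} TPS gives about {avg_latency_ms:.1f} ms average latency")
    # 16 clients at 6700 TPS gives about 2.4 ms average latency
    # A handful of >1 s checkpoint stalls barely move this average,
    # which is why percentile figures are more telling than the mean.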

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Greg Spiegelberg wrote: Speaking of the layers in-between, has this test been done with the ext3 journal on a different device? Maybe the purpose is wrong for the SSD. Use the SSD for the ext3 journal and the spindled drives for filesystem? The main disk bottleneck on PostgreSQL

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Kevin Grittner
Greg Smith g...@2ndquadrant.com wrote: Yeb's data is showing that a single SSD is competitive with a small array on average, but with better worst-case behavior than I'm used to seeing. So, how long before someone benchmarks a small array of SSDs? :-) -Kevin

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Yeb Havinga
Greg Smith wrote: Yeb Havinga wrote: Please remember that particular graphs are from a read/write pgbench run on a bigger than RAM database that ran for some time (so with checkpoints), on a *single* $435 50GB drive without BBU raid controller. To get similar *average* performance results

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Yeb Havinga
Yeb Havinga wrote: To get similar *average* performance results you'd need to put about 4 drives and a BBU into a server. The Please forget this question, I now see it in the mail I'm replying to. Sorry for the spam! -- Yeb

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Yeb Havinga wrote: I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory read/write test. (scale 300) No real winners or losers, though ext2 isn't really faster and the manual need for fix (y) during boot makes it impractical in its standard configuration. That's what

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Scott Marlowe
On Mon, Jul 26, 2010 at 12:40 PM, Greg Smith g...@2ndquadrant.com wrote: Greg Spiegelberg wrote: Speaking of the layers in-between, has this test been done with the ext3 journal on a different device? Maybe the purpose is wrong for the SSD. Use the SSD for the ext3 journal and the spindled

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Spiegelberg
On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith g...@2ndquadrant.com wrote: Yeb Havinga wrote: I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory read/write test. (scale 300) No real winners or losers, though ext2 isn't really faster and the manual need for fix (y) during

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Andres Freund
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith g...@2ndquadrant.com wrote: Yeb Havinga wrote: I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory read/write test. (scale 300) No real winners or losers, though

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guarantee your data is written sequentially every time? It's possible to set the PostgreSQL wal_sync_method parameter in the database to
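
To get a feel for how much filesystem overhead is actually left for WAL-style writes on a dedicated partition, a small probe like the following can help. It is only a sketch: it appends 8 kB blocks to a file opened with O_DSYNC, which approximates the I/O pattern of wal_sync_method=open_datasync, and the mount point in it is a placeholder. Real WAL segments are preallocated and recycled, so an append-only probe slightly overstates the metadata cost.

    # wal_probe.py: synchronous sequential 8 kB writes with O_DSYNC, roughly the
    # I/O pattern of wal_sync_method=open_datasync.  PATH is a placeholder;
    # point it at a file on the partition under test.
    import os, time

    PATH = "/mnt/ssd/wal_probe.bin"
    BLOCK = b"\0" * 8192
    N = 2000

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    start = time.time()
    for _ in range(N):
        os.write(fd, BLOCK)
    elapsed = time.time() - start
    os.close(fd)
    os.remove(PATH)
    print(f"{N / elapsed:.0f} durable 8 kB writes/s, {N * 8 / 1024 / elapsed:.1f} MB/s")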

Re: [PERFORM] Testing Sandforce SSD

2010-07-25 Thread Yeb Havinga
Yeb Havinga wrote: 8GB DDR2 something.. (lots of details removed) Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at http://tinypic.com/r/x5e846/3 Thanks to http://www.westnet.com/~gsmith/content/postgresql/pgbench.htm for the gnuplot and psql scripts!

[PERFORM] Testing Sandforce SSD

2010-07-24 Thread Yeb Havinga
Hello list, Probably like many others I've wondered why no SSD manufacturer puts a small BBU on an SSD drive. Triggered by Greg Smith's mail http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php here, and also anandtech's review at http://www.anandtech.com/show/2899/1 (see

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread David Boreham
Do you guys have any more ideas to properly 'feel this disk at its teeth'? While an 'end-to-end' test using PG is fine, I think it would be easier to determine if the drive is behaving correctly by using a simple test program that emulates the storage semantics the WAL expects. Have it
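
A bare-bones version of the program described here might look like the sketch below (the file path and record layout are made up for illustration; diskchecker.pl, linked elsewhere in the thread, does the same job in a more battle-tested way). The writer appends fixed-size records, fsyncs each one, and only then reports the sequence number as durable; that report has to be captured on a second machine (pipe stdout over ssh or netcat) so the power cut cannot take it down with the test box. After the crash, the verifier checks that every acknowledged record really made it to flash.

    # wal_emulation.py: emulate WAL durability semantics and verify them after
    # a power cut.  Hypothetical file path and record layout.
    import os, struct, sys, zlib

    PATH = "/mnt/ssd/wal_emulation.dat"
    RECORD = 8192                     # bytes per emulated WAL block

    def run_writer():
        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
        seq = 0
        while True:                   # run until the plug is pulled
            payload = os.urandom(RECORD - 8)
            crc = zlib.crc32(payload) & 0xffffffff
            os.write(fd, struct.pack("!II", seq, crc) + payload)
            os.fsync(fd)              # the block only counts as durable after this returns
            print(seq, flush=True)    # ship the acknowledged sequence number off-box
            seq += 1

    def run_verifier(last_acked):
        with open(PATH, "rb") as f:
            for seq in range(last_acked + 1):
                rec = f.read(RECORD)
                if len(rec) < RECORD:
                    sys.exit(f"record {seq} was acknowledged but is missing")
                hdr, crc = struct.unpack("!II", rec[:8])
                if hdr != seq or zlib.crc32(rec[8:]) & 0xffffffff != crc:
                    sys.exit(f"record {seq} was acknowledged but is corrupt")
        print(f"all {last_acked + 1} acknowledged records are intact")

    if __name__ == "__main__":
        if sys.argv[1] == "write":
            run_writer()
        else:                         # "verify <highest sequence seen on the remote side>"
            run_verifier(int(sys.argv[2]))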

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread david
On Sat, 24 Jul 2010, David Boreham wrote: Do you guys have any more ideas to properly 'feel this disk at its teeth'? While an 'end-to-end' test using PG is fine, I think it would be easier to determine if the drive is behaving correctly by using a simple test program that emulates the

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Ben Chobot
On Jul 24, 2010, at 12:20 AM, Yeb Havinga wrote: The problem in this scenario is that even when the SSD would show no data loss while the rotating disk did a few times, a dozen tests without failure isn't actually proof that the drive can write its complete buffer to disk after power

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Greg Smith
Yeb Havinga wrote: Probably like many others I've wondered why no SSD manufacturer puts a small BBU on an SSD drive. Triggered by Greg Smith's mail http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php here, and also anandtech's review at http://www.anandtech.com/show/2899/1

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Merlin Moncure
On Sat, Jul 24, 2010 at 3:20 AM, Yeb Havinga yebhavi...@gmail.com wrote: Hello list, Probably like many others I've wondered why no SSD manufacturer puts a small BBU on an SSD drive. Triggered by Greg Smith's mail http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php here,

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Yeb Havinga
Greg Smith wrote: Note that not all of the Sandforce drives include a capacitor; I hope you got one that does! I wasn't aware any of the SF drives with a capacitor on them were even shipping yet, all of the ones I'd seen were the chipset that doesn't include one still. Haven't checked in a

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Yeb Havinga
Yeb Havinga wrote: diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes; 39/s) Total errors: 0 :-) OTOH, I now notice the 39 writes/s .. If that means ~39 tps... bummer.

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Greg Smith
Greg Smith wrote: Note that not all of the Sandforce drives include a capacitor; I hope you got one that does! I wasn't aware any of the SF drives with a capacitor on them were even shipping yet, all of the ones I'd seen were the chipset that doesn't include one still. Haven't checked in a

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Joshua D. Drake
On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote: Greg Smith wrote: Note that not all of the Sandforce drives include a capacitor; I hope you got one that does! I wasn't aware any of the SF drives with a capacitor on them were even shipping yet, all of the ones I'd seen were the

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Greg Smith
Joshua D. Drake wrote: That is quite the toy. I can get 4 SATA-II with RAID Controller, with battery backed cache, for the same price or less :P True, but if you look at tests like http://www.anandtech.com/show/2899/12 it suggests there's probably at least a 6:1 performance speedup for

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Yeb Havinga
Yeb Havinga wrote: Yeb Havinga wrote: diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes; 39/s) Total errors: 0 :-) OTOH, I now notice the 39 writes/s .. If that means ~39 tps... bummer. When playing with it a bit more, I couldn't get the test_file to be created in the

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Greg Smith
Yeb Havinga wrote: Writes/s start low but quickly converge to a number in the range of 1200 to 1800. The writes diskchecker does are 16kB writes. Making this 4kB writes does not increase writes/s. 32kB seems a little less, 64kB is about two thirds of the initial writes/s and 128kB is half. Let's
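
A sketch of how that block-size sweep could be reproduced outside diskchecker (placeholder path; aligned random offsets inside a preallocated 500 MB file, fsync after every write, mirroring the drop reported from 16 kB up to 128 kB):

    # writesize_probe.py: synchronous random-write rate as a function of write size.
    # PATH is a placeholder; point it at a file on the drive under test.
    import os, random, time

    PATH = "/mnt/ssd/writesize_probe.dat"
    FILE_SIZE = 500 * 1024 * 1024          # 500 MB test file, as in the thread
    WRITES_PER_SIZE = 500

    fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, FILE_SIZE)            # (sparse) preallocation of the test file

    for size in (4096, 16384, 32768, 65536, 131072):
        block = os.urandom(size)
        start = time.time()
        for _ in range(WRITES_PER_SIZE):
            offset = random.randrange(0, FILE_SIZE // size) * size   # aligned offset
            os.pwrite(fd, block, offset)
            os.fsync(fd)
        rate = WRITES_PER_SIZE / (time.time() - start)
        print(f"{size // 1024:3d} kB: {rate:.0f} fsync'd writes/s")

    os.close(fd)
    os.remove(PATH)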