Greg Smith wrote:
* How to test for power failure?
I've had good results using one of the early programs used to
investigate this class of problems:
http://brad.livejournal.com/2116715.html?page=2
FYI, this tool is mentioned in the Postgres documentation.
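For anyone who wants to repeat that test: diskchecker.pl runs in two parts, a
listener on a second machine that survives the power cut, and a writer on the
machine under test. Roughly like this (host name and size are illustrative;
see Brad's post above for the exact invocation):

    # on a second machine that stays powered:
    ./diskchecker.pl -l
    # on the machine under test:
    ./diskchecker.pl -s otherhost create test_file 500
    # ... cut the power mid-run, reboot, then check whether every
    # acknowledged write actually survived:
    ./diskchecker.pl -s otherhost verify test_file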
On 10-08-04 03:49 PM, Scott Carey wrote:
On Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote:
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga yebhavi...@gmail.com wrote:
After a week testing I think I can answer the question above: does it work
like it's supposed to under PostgreSQL?
YES
The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro.
On Tue, 2010-08-03 at 10:40 +0200, Yeb Havinga wrote:
Please note that the 10% was on a slower CPU. On a more recent CPU the
difference was 47%, based on tests that ran for an hour.
I am not surprised at all that reading and writing almost twice as much
data from/to disk takes 47% longer. If less
On Jul 26, 2010, at 12:45 PM, Greg Smith wrote:
Yeb Havinga wrote:
I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
read/write test. (scale 300) No real winners or losers, though ext2
isn't really faster and the manual need for fix (y) during boot makes
it impractical in its standard configuration.
On Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote:
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga yebhavi...@gmail.com wrote:
After a week testing I think I can answer the question above: does it work
like it's supposed to under PostgreSQL?
YES
The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro.
On Aug 3, 2010, at 9:27 AM, Merlin Moncure wrote:
2) I've heard that some SSD have utilities that you can use to query
the write cycles in order to estimate lifespan. Does this one, and is
it possible to publish the output (an approximation of the amount of
work along with this would be
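Not answered in this snippet, but on Linux the usual way to read such counters
is smartctl from the smartmontools package; which attribute reports wear is
vendor-specific, so the row to look at varies per drive (device name below is
illustrative):

    smartctl -A /dev/sda    # look for wear-leveling / erase-count attributes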
g...@2ndquadrant.com (Greg Smith) writes:
Yeb Havinga wrote:
* What filesystem to use on the SSD? To minimize writes and maximize the
chance of seeing errors I'd choose ext2 here.
I don't consider there to be any reason to deploy any part of a
PostgreSQL database on ext2. The potential for
j...@commandprompt.com (Joshua D. Drake) writes:
On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote:
Greg Smith wrote:
Note that not all of the Sandforce drives include a capacitor; I hope
you got one that does! I wasn't aware any of the SF drives with a
capacitor on them were even shipping yet.
Scott Marlowe wrote:
On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith g...@2ndquadrant.com wrote:
Josh Berkus wrote:
That doesn't make much sense unless there's some special advantage to a
4K blocksize with the hardware itself.
Given that pgbench is always doing tiny updates to
Hannu Krosing wrote:
Did it fit in shared_buffers, or system cache ?
Database was ~5GB, server has 16GB, shared buffers was set to 1920MB.
I first noticed this several years ago, when doing a COPY to a large
table with indexes took noticeably longer (2-3 times longer) when the
indexes were
Yeb Havinga wrote:
Small IO size: 4 KB
Maximum Small IOPS=86883 @ Small=8 and Large=0
Small IO size: 8 KB
Maximum Small IOPS=48798 @ Small=11 and Large=0
Conclusion: you can write 4KB blocks almost twice as fast as 8KB ones.
This is a useful observation about the effectiveness of the write
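For scale (my arithmetic, not figures from the thread): 86883 × 4 KB is
roughly 340 MB/s, while 48798 × 8 KB is roughly 381 MB/s, so halving the
block size nearly doubles IOPS while the total bandwidth written stays in the
same range.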
Yeb Havinga wrote:
Hannu Krosing wrote:
Did it fit in shared_buffers, or system cache ?
Database was ~5GB, server has 16GB, shared buffers was set to 1920MB.
I first noticed this several years ago, when doing a COPY to a large
table with indexes took noticeably longer (2-3 times longer)
On Tue, Aug 3, 2010 at 11:37 AM, Yeb Havinga yebhavi...@gmail.com wrote:
Yeb Havinga wrote:
Hannu Krosing wrote:
Did it fit in shared_buffers, or system cache ?
Database was ~5GB, server has 16GB, shared buffers was set to 1920MB.
I first noticed this several years ago, when doing a COPY
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga yebhavi...@gmail.com wrote:
After a week testing I think I can answer the question above: does it work
like it's supposed to under PostgreSQL?
YES
The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro,
Merlin Moncure wrote:
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga yebhavi...@gmail.com wrote:
Postgres settings:
8.4.4
--with-blocksize=4
I saw about 10% increase in performance compared to 8KB blocksizes.
That's very interesting -- we need more testing in that department...
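For anyone wanting to repeat the 4KB experiment: the block size is a standard
compile-time option (the value is in kilobytes, default 8), and a cluster
built this way needs its own initdb since the block size is baked into the
data files:

    ./configure --with-blocksize=4
    make && make install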
On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith g...@2ndquadrant.com wrote:
Josh Berkus wrote:
That doesn't make much sense unless there's some special advantage to a
4K blocksize with the hardware itself.
Given that pgbench is always doing tiny updates to blocks, I wouldn't be
surprised if
6700 tps?! Wow...
Ok, I'm impressed. May wait a bit for prices to come down somewhat, but that
sounds like two of those are going in one of my production machines
(Raid 1, of course)
Yeb Havinga wrote:
Greg Smith wrote:
Greg Smith wrote:
Note that not all of the Sandforce drives include a
On Wed, Jul 28, 2010 at 03:45:23PM +0200, Yeb Havinga wrote:
Due to the LBA remapping of the SSD, I'm not sure if putting files
that are sequentially written in a different partition (together with
e.g. tables) would make a difference: in the end the SSD will have a
set of new blocks in its
On Mon, Jul 26, 2010 at 01:47:14PM -0600, Scott Marlowe wrote:
Note that SSDs aren't usually real fast at large sequential writes
though, so it might be worth putting pg_xlog on a spinning pair in a
mirror and seeing how much, if any, the SSD drive speeds up when not
having to do pg_xlog.
xlog
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
I know I'm talking development now but is there a case for a pg_xlog block
device to remove the file system overhead and guarantee your data is
written sequentially every time?
If you dedicate a partition to xlog, you already
Michael Stone wrote:
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
I know I'm talking development now but is there a case for a pg_xlog
block
device to remove the file system overhead and guarantee your data is
written sequentially every time?
If you dedicate a
Yeb Havinga wrote:
Michael Stone wrote:
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
I know I'm talking development now but is there a case for a pg_xlog
block
device to remove the file system overhead and guarantee your data is
written sequentially every time?
If
On Wed, Jul 28, 2010 at 9:18 AM, Yeb Havinga yebhavi...@gmail.com wrote:
Yeb Havinga wrote:
Due to the LBA remapping of the SSD, I'm not sure if putting files that
are sequentially written in a different partition (together with e.g.
tables) would make a difference: in the end the SSD will
On Mon, 2010-07-26 at 14:34 -0400, Greg Smith wrote:
Matthew Wakeling wrote:
Yeb also made the point - there are far too many points on that graph
to really tell what the average latency is. It'd be instructive to
have a few figures, like only x% of requests took longer than y.
Average latency is the inverse of TPS.
Yeb Havinga wrote:
Greg Smith wrote:
Put it on ext3, toggle on noatime, and move on to testing. The
overhead of the metadata writes is the least of the problems when
doing write-heavy stuff on Linux.
I ran a pgbench run and power failure test during pgbench with a 3
year old computer
On Sun, 25 Jul 2010, Yeb Havinga wrote:
Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at
http://tinypic.com/r/x5e846/3
Does your latency graph really have milliseconds as the y axis? If so,
this device is really slow - some requests have a latency of more than a
second!
Matthew
Matthew Wakeling wrote:
On Sun, 25 Jul 2010, Yeb Havinga wrote:
Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at
http://tinypic.com/r/x5e846/3
Does your latency graph really have milliseconds as the y axis?
Yes
If so, this device is really slow - some requests have a latency of
Matthew Wakeling wrote:
Does your latency graph really have milliseconds as the y axis? If so,
this device is really slow - some requests have a latency of more than
a second!
Have you tried that yourself? If you generate one of those with
standard hard drives and a BBWC under Linux, I
Yeb Havinga wrote:
Please remember that particular graphs are from a read/write pgbench
run on a bigger than RAM database that ran for some time (so with
checkpoints), on a *single* $435 50GB drive without BBU raid controller.
To get similar *average* performance results you'd need to put
On Mon, 26 Jul 2010, Greg Smith wrote:
Matthew Wakeling wrote:
Does your latency graph really have milliseconds as the y axis? If so, this
device is really slow - some requests have a latency of more than a second!
Have you tried that yourself? If you generate one of those with standard
Matthew Wakeling wrote:
Apologies, I was interpreting the graph as the latency of the device,
not all the layers in-between as well. There isn't any indication in
the email with the graph as to what the test conditions or software are.
That info was in the email preceding the graph mail, but I
On Mon, Jul 26, 2010 at 10:26 AM, Yeb Havinga yebhavi...@gmail.com wrote:
Matthew Wakeling wrote:
Apologies, I was interpreting the graph as the latency of the device, not
all the layers in-between as well. There isn't any indication in the email
with the graph as to what the test conditions
Matthew Wakeling wrote:
Yeb also made the point - there are far too many points on that graph
to really tell what the average latency is. It'd be instructive to
have a few figures, like only x% of requests took longer than y.
Average latency is the inverse of TPS. So if the result is, say,
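To make the percentile point concrete, here is a small sketch (mine, not from
the thread) that turns a pgbench -l per-transaction log into "x% finished
within y" figures. It assumes the 8.4-era log format where the third column is
the transaction latency in microseconds; adjust the column index if your
pgbench version differs:

    #!/usr/bin/env python
    # Summarize pgbench -l latency logs into percentiles.
    # Assumed line format: client_id transaction_no latency_us file_no epoch usec
    import sys

    latencies = sorted(int(line.split()[2]) for line in open(sys.argv[1]))
    for pct in (50, 90, 95, 99):
        idx = min(len(latencies) - 1, len(latencies) * pct // 100)
        print("%d%% of transactions finished within %.1f ms"
              % (pct, latencies[idx] / 1000.0))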
Greg Spiegelberg wrote:
Speaking of the layers in-between, has this test been done with the
ext3 journal on a different device? Maybe the purpose is wrong for
the SSD. Use the SSD for the ext3 journal and the spindled drives for
filesystem?
The main disk bottleneck on PostgreSQL
Greg Smith g...@2ndquadrant.com wrote:
Yeb's data is showing that a single SSD is competitive with a
small array on average, but with better worst-case behavior than
I'm used to seeing.
So, how long before someone benchmarks a small array of SSDs? :-)
-Kevin
Greg Smith wrote:
Yeb Havinga wrote:
Please remember that particular graphs are from a read/write pgbench
run on a bigger than RAM database that ran for some time (so with
checkpoints), on a *single* $435 50GB drive without BBU raid controller.
To get similar *average* performance results
Yeb Havinga wrote:
To get similar *average* performance results you'd need to put about
4 drives and a BBU into a server. The
Please forget this question, I now see it in the mail I'm replying to.
Sorry for the spam!
-- Yeb
Yeb Havinga wrote:
I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
read/write test. (scale 300) No real winners or losers, though ext2
isn't really faster and the manual need for fix (y) during boot makes
it impractical in its standard configuration.
That's what
On Mon, Jul 26, 2010 at 12:40 PM, Greg Smith g...@2ndquadrant.com wrote:
Greg Spiegelberg wrote:
Speaking of the layers in-between, has this test been done with the ext3
journal on a different device? Maybe the purpose is wrong for the SSD. Use
the SSD for the ext3 journal and the spindled
On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith g...@2ndquadrant.com wrote:
Yeb Havinga wrote:
I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
read/write test. (scale 300) No real winners or losers, though ext2 isn't
really faster and the manual need for fix (y) during
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith g...@2ndquadrant.com wrote:
Yeb Havinga wrote:
I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
read/write test. (scale 300) No real winners or losers, though
Greg Spiegelberg wrote:
I know I'm talking development now but is there a case for a pg_xlog
block device to remove the file system overhead and guarantee your
data is written sequentially every time?
It's possible to set the PostgreSQL wal_sync_method parameter in the
database to
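Presumably one of the open_* values; as an illustrative postgresql.conf line
(my example, not from the original mail; which settings are available depends
on the platform):

    wal_sync_method = open_datasync   # or open_sync; WAL is then written
                                      # with O_DSYNC/O_SYNC instead of a
                                      # separate write + fsync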
Yeb Havinga wrote:
8GB DDR2 something..
(lots of details removed)
Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at
http://tinypic.com/r/x5e846/3
Thanks to http://www.westnet.com/~gsmith/content/postgresql/pgbench.htm for
the gnuplot and psql scripts!
Hello list,
Probably like many others I've wondered why no SSD manufacturer puts a
small BBU on a SSD drive. Triggered by Greg Smith's mail
http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php
here, and also anandtech's review at
http://www.anandtech.com/show/2899/1 (see
Do you guys have any more ideas to properly 'feel this disk at its
teeth'?
While an 'end-to-end' test using PG is fine, I think it would be easier
to determine if the drive is behaving correctly by using a simple test
program that emulates the storage semantics the WAL expects. Have it
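Along the lines David suggests, a minimal sketch of such a program (mine; file
name and sizes are illustrative): it appends WAL-page-sized blocks and forces
each one to stable storage before issuing the next, as PostgreSQL does for
commits, then reports synchronous writes/s. Run it against the SSD and cut
power mid-run to compare what survived with what was acknowledged:

    #!/usr/bin/env python
    # Emulate the WAL's storage semantics: sequential appends, each one
    # flushed through the drive cache before the next is issued.
    import os, time

    BLOCK = b"x" * 8192          # one WAL-page-sized write
    SECONDS = 10

    fd = os.open("walsim.dat", os.O_WRONLY | os.O_CREAT, 0o600)
    start = time.time()
    writes = 0
    while time.time() - start < SECONDS:
        os.write(fd, BLOCK)
        os.fsync(fd)             # force it to stable storage
        writes += 1
    os.close(fd)
    print("%.0f synchronous writes/s" % (writes / SECONDS))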
On Sat, 24 Jul 2010, David Boreham wrote:
Do you guys have any more ideas to properly 'feel this disk at its teeth'?
While an 'end-to-end' test using PG is fine, I think it would be easier to
determine if the drive is behaving correctly by using a simple test program
that emulates the
On Jul 24, 2010, at 12:20 AM, Yeb Havinga wrote:
The problem in this scenario is that even when the SSD would show no data
loss and the rotating disk would for a few times, a dozen tests without
failure isn't actually proof that the drive can write its complete buffer to
disk after power
Yeb Havinga wrote:
Probably like many others I've wondered why no SSD manufacturer puts
a small BBU on a SSD drive. Triggered by Greg Smith's mail
http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php
here, and also anandtech's review at
http://www.anandtech.com/show/2899/1
On Sat, Jul 24, 2010 at 3:20 AM, Yeb Havinga yebhavi...@gmail.com wrote:
Hello list,
Probably like many others I've wondered why no SSD manufacturer puts a
small BBU on a SSD drive. Triggered by Greg Smith's mail
http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php here,
Greg Smith wrote:
Note that not all of the Sandforce drives include a capacitor; I hope
you got one that does! I wasn't aware any of the SF drives with a
capacitor on them were even shipping yet, all of the ones I'd seen
were the chipset that doesn't include one still. Haven't checked in a
Yeb Havinga wrote:
diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes; 39/s)
Total errors: 0
:-)
OTOH, I now notice the 39 writes/s. If that means ~39 tps... bummer.
Greg Smith wrote:
Note that not all of the Sandforce drives include a capacitor; I hope
you got one that does! I wasn't aware any of the SF drives with a
capacitor on them were even shipping yet, all of the ones I'd seen
were the chipset that doesn't include one still. Haven't checked in a
On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote:
Greg Smith wrote:
Note that not all of the Sandforce drives include a capacitor; I hope
you got one that does! I wasn't aware any of the SF drives with a
capacitor on them were even shipping yet, all of the ones I'd seen
were the
Joshua D. Drake wrote:
That is quite the toy. I can get 4 SATA-II with RAID Controller, with
battery backed cache, for the same price or less :P
True, but if you look at tests like
http://www.anandtech.com/show/2899/12 it suggests there's probably at
least a 6:1 performance speedup for
Yeb Havinga wrote:
Yeb Havinga wrote:
diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes;
39/s)
Total errors: 0
:-)
OTOH, I now notice the 39 writes/s. If that means ~39 tps... bummer.
When playing with it a bit more, I couldn't get the test_file to be
created in the
Yeb Havinga wrote:
Writes/s start low but quickly converge to a number in the range of
1200 to 1800. The writes diskchecker does are 16kB writes. Changing this
to 4kB writes does not increase writes/s. 32kB seems a little less, 64kB
is about two thirds of the initial writes/s and 128kB is half.
Let's