PFC,
Thanks for doing those graphs. They've been used by Simon, Heikki, and
now me, to show our main issue with PostgreSQL performance: consistency.
That is, our median response time beats MySQL and even Oracle, but our
bottom 10% does not, and is in fact intolerably bad.
If you want us to
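The consistency point above is worth making concrete: two response-time distributions can share a median while having wildly different tails, which is exactly the "median beats MySQL but the bottom 10% is intolerable" situation. A small sketch with made-up latency numbers (milliseconds):

```python
# Two hypothetical latency samples: same ballpark median, very different tails.
# The numbers are illustrative only, not from the benchmark in this thread.
import statistics

steady = [12, 13, 14, 15, 16] * 20   # tight spread
spiky = [10, 11, 12, 13, 900] * 20   # similar median, awful worst case

for name, xs in [("steady", steady), ("spiky", spiky)]:
    xs_sorted = sorted(xs)
    median = statistics.median(xs_sorted)
    p90 = xs_sorted[int(0.9 * len(xs_sorted))]  # crude 90th percentile
    print(name, median, p90)
```

The "spiky" sample wins on median but its 90th percentile is catastrophic, which is what a graph of raw plot points (rather than averages) exposes.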
On Mon, 28 May 2007 05:53:16 +0200, Chris [EMAIL PROTECTED] wrote:
I am re-running it with other tuning, notably cost-based vacuum
delay and less frequent checkpoints, and it is a *lot* smoother.
These take a full night to run, so I'll post more results when I
have useful stuff to show.
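The two knobs mentioned above map to postgresql.conf settings. A sketch in 8.2/8.3-era parameter names; the values are illustrative starting points, not recommendations from this thread:

```
# Spread checkpoints out so they hit less often and less violently
checkpoint_segments = 32      # default is 3; larger means fewer checkpoints
checkpoint_timeout = 15min    # default is 5min

# Cost-based vacuum delay: throttle vacuum so it naps instead of
# saturating the I/O the benchmark needs
vacuum_cost_delay = 20ms      # default 0 (throttling disabled)
vacuum_cost_limit = 200       # accumulated cost before each nap
```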
This has proven to be a very interesting trip to
Greg Smith [EMAIL PROTECTED] writes:
On Tue, 22 May 2007, Gregory Stark wrote:
However as mentioned a while back, in practice it doesn't work quite right
and you should expect to get 1/2 the expected performance. So even with 10
clients you should expect to see 5*120 tps on a 7200 rpm drive and 5*250
tps on a 15k RPM drive.
On 21.05.2007 at 23:51, Greg Smith wrote:
The standard pgbench transaction includes a select, an insert, and
three updates.
I see. Didn't know that, but it makes sense.
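For reference, this is roughly what the default (TPC-B-like) pgbench transaction looked like in the 8.x sources; the :variables are pgbench's substitution parameters, and exact details may differ between versions:

```sql
BEGIN;
UPDATE accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM accounts WHERE aid = :aid;
UPDATE tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO history (tid, bid, aid, delta, mtime)
    VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
```

Hence "a select, an insert, and three updates": every transaction writes, so every transaction pays the WAL-flush cost on commit.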
Unless you went out of your way to turn it off, your drive is
caching writes; every Seagate SATA drive I've ever seen
Jim C. Nasby wrote:
On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:
I also went into benchmarking mode last night for my own
amusement when I read on the linux-kernel ML that
NCQ support for nForce5 chips was released.
I tried current PostgreSQL 8.3devel CVS.
pgbench over
Greg Smith wrote:
On Mon, 21 May 2007, Guido Neitzer wrote:
Yes, that's right, but if a lot of the transactions are selects, there
is no entry in the x_log for them and most of the stuff can come from
the cache - read from memory, which is blazing fast compared to any
disk ... And this was a
- Deferred Transactions: since adding a comment to a blog post
doesn't need the same guarantees as submitting a paid order, it makes
sense that the application could tell Postgres which transactions we
care about if power is lost. This will massively boost performance for
websites I
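What's being asked for here is essentially what 8.3 was growing at the time as "asynchronous commit": a per-transaction choice to skip the WAL flush, trading a few seconds of potentially lost commits after a crash (never corruption) for throughput. A sketch using the real synchronous_commit GUC; the comments table is hypothetical:

```sql
BEGIN;
-- Don't wait for the WAL flush; this transaction may be lost in a crash,
-- but the database stays consistent either way.
SET LOCAL synchronous_commit TO off;
INSERT INTO comments (post_id, body) VALUES (42, 'nice post');  -- hypothetical table
COMMIT;
```

SET LOCAL scopes the setting to this one transaction, so a paid-order transaction in the same session still gets the full durability guarantee.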
Alvaro Herrera [EMAIL PROTECTED] writes:
Scott Marlowe wrote:
I thought you were limited to 250 or so COMMITs to disk per second, and
since more than one client can be committed at once, you could do greater
than 250 tps, as long as you had more than one client providing input. Or
was I wrong?
My impression
What's interesting here is that on a couple of metrics the green curve is
actually *better* until it takes that nosedive at 500 MB. Obviously it's not
better on average hits/s, the most obvious metric. But on deviation and
worst-case hits/s it's actually doing better.
Note that while the average hits/s between 100 and 500 is over 600 tps
for Postgres, there is a consistent smattering of plot points spread all
the way down to 200 tps, well below the 400-500 tps that MySQL is getting.
Yes, these are due to checkpointing, mostly.
Also, note that a
On Tue, 22 May 2007, Gregory Stark wrote:
However as mentioned a while back, in practice it doesn't work quite right
and you should expect to get 1/2 the expected performance. So even with 10
clients you should expect to see 5*120 tps on a 7200 rpm drive and 5*250
tps on a 15k RPM drive.
I
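The arithmetic behind that estimate can be written out. The halving factor is the thread's empirical claim ("doesn't work quite right"), not something derived:

```python
# One WAL flush per platter revolution caps commits per second at rpm/60.
# With N clients, group commit can fold several commits into one
# revolution; the thread's rule of thumb is you get about half the ideal.

def max_commits_per_sec(rpm: int) -> float:
    """One fsync per revolution: rpm / 60 revolutions per second."""
    return rpm / 60.0

def expected_tps(clients: int, rpm: int) -> float:
    """Ideal group-commit rate, halved per the thread's observation."""
    return clients * max_commits_per_sec(rpm) / 2.0

print(expected_tps(10, 7200))   # 5 * 120 tps, matching the quote above
print(expected_tps(10, 15000))  # 5 * 250 tps
```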
On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:
I also went into benchmarking mode last night for my own
amusement when I read on the linux-kernel ML that
NCQ support for nForce5 chips was released.
I tried current PostgreSQL 8.3devel CVS.
pgbench over local TCP connection
Well, that matches up well with my experience; better even yet, file a
performance bug with the commercial support and you'll get an explanation
of why your schema (or your hardware, well, anything but the database
software used) is the guilty factor.
Yeah, I filed a bug last week since
On Mon, 21 May 2007 23:05:22 +0200, Jim C. Nasby [EMAIL PROTECTED]
wrote:
On Sun, May 20, 2007 at 04:58:45PM +0200, PFC wrote:
I felt the world needed a new benchmark ;)
So: a forum-style benchmark with simulation of many users posting and
viewing forums and topics on a PHP website.
On 21.05.2007 at 15:01, Jim C. Nasby wrote:
I'd be willing to bet money that the drive is lying about commits/fsync.
Each transaction committed essentially requires one revolution of the
drive with pg_xlog on it, so a 15kRPM drive limits you to 250 TPS.
Yes, that's right, but if a lot of the
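A crude way to test the "drive is lying" claim quoted above: time a burst of write+fsync calls on a scratch file. A sustained rate far above rpm/60 (say, thousands per second on a 7200 rpm disk) suggests the drive or OS is acknowledging writes out of a volatile cache. A minimal sketch, not a substitute for a proper disk-reliability test:

```python
# Time repeated write+fsync cycles; compare the observed rate against
# the physical ceiling of one flush per platter revolution (rpm / 60).
import os
import tempfile
import time

def fsync_rate(n: int = 200) -> float:
    """Return observed write+fsync operations per second on a temp file."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.time()
        for _ in range(n):
            os.write(fd, b"x" * 512)   # small WAL-record-sized write
            os.fsync(fd)               # ask for durability every time
        elapsed = time.time() - start
        return n / elapsed if elapsed > 0 else float("inf")
    finally:
        os.close(fd)
        os.unlink(path)

print(f"{fsync_rate():.0f} fsyncs/sec")
```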
I assume red is PostgreSQL. As you add connections, MySQL always dies.
On 5/20/07, PFC [EMAIL PROTECTED] wrote:
I felt the world needed a new benchmark ;)
So: a forum-style benchmark with simulation of many users posting and
viewing forums and topics on a PHP website.
Jim C. Nasby wrote:
On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:
I also went into benchmarking mode last night for my own
amusement when I read on the linux-kernel ML that
NCQ support for nForce5 chips was released.
I tried current PostgreSQL 8.3devel CVS.
pgbench over
Scott Marlowe wrote:
Jim C. Nasby wrote:
On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:
I also went into benchmarking mode last night for my own
amusement when I read on the linux-kernel ML that
NCQ support for nForce5 chips was released.
I tried current PostgreSQL
On Mon, 21 May 2007, Guido Neitzer wrote:
Yes, that's right, but if a lot of the transactions are selects, there is no
entry in the x_log for them and most of the stuff can come from the cache -
read from memory, which is blazing fast compared to any disk ... And this was
a pg_bench test - I
I felt the world needed a new benchmark ;)
So: a forum-style benchmark with simulation of many users posting and
viewing forums and topics on a PHP website.
http://home.peufeu.com/ftsbench/forum1.png
One of those curves is a very popular open-source database which claims
I assume red is PostgreSQL and green is MySQL. That reflects my own
benchmarks with those two.
But I don't fully understand what the graph displays. Does it reflect
the ability of the underlying database to support a certain amount of
users per second given a certain database size? Or is the
I assume red is PostgreSQL and green is MySQL. That reflects my own
benchmarks with those two.
Well, since you answered first, and right, you win XD
The little curve that dives into the ground is MySQL with InnoDB.
The Energizer bunny that keeps going is Postgres.
On 20-5-2007 19:09 PFC wrote:
Since I use lighttpd, I don't really care about the number of actual
slow clients (i.e. real concurrent HTTP connections). Everything is
funneled through those 8 PHP processes, so postgres never sees huge
concurrency.
Well, that would only be in favour of
PFC [EMAIL PROTECTED] writes:
The little curve that dives into the ground is MySQL with InnoDB.
The Energizer bunny that keeps going is Postgres.
Just for comparison's sake it would be interesting to see a curve for
mysql/myisam. MySQL's claim to speed is mostly based on
PFC wrote:
I felt the world needed a new benchmark ;)
So: a forum-style benchmark with simulation of many users posting and
viewing forums and topics on a PHP website.
http://home.peufeu.com/ftsbench/forum1.png
One of those curves is a very popular open-source database which
On Sun, 20 May 2007 19:26:38 +0200, Tom Lane [EMAIL PROTECTED] wrote:
PFC [EMAIL PROTECTED] writes:
The little curve that dives into the ground is MySQL with InnoDB.
The Energizer bunny that keeps going is Postgres.
Just for comparison's sake it would be interesting to see a
I'm writing a full report, but I'm having a lot of problems with MySQL.
I'd like to give it a fair chance, but it shows real obstinacy in NOT
working.
Well, that matches up well with my experience; better even yet, file a
performance bug with the commercial support and you'll