On Jan 25, 2010, at 6:55 AM, fka...@googlemail.com wrote:
Scott Carey:
(2) The tests:
Note: The standard speed was about 800MB/40s, so 20MB/s.
a)
What I changed: fsync=off
Result: 35s, so 5s faster.
b) like a) but:
checkpoint_segments=128 (was 3)
autovacuum=off
Result:
Scott Carey wrote:
#bgwriter_delay=200ms
#bgwriter_lru_percent=1.0
#bgwriter_lru_maxpages=5
#bgwriter_all_percent=0.333
#bgwriter_all_maxpages=5
#checkpoint_segments=3
#checkpoint_timeout=5min
#checkpoint_warning=30s
Check out this for info on these parameters
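Taken together, a sketch of how those commented defaults might be raised for a bulk-load test. Only checkpoint_segments=128 is a value actually tried in this thread; the other values are illustrative assumptions, not recommendations from the list:

```
# postgresql.conf sketch (8.x era) for bulk-load testing -- assumptions marked
checkpoint_segments = 128   # default 3; value tried in this thread
checkpoint_timeout = 15min  # assumption; default 5min, spreads checkpoints out
checkpoint_warning = 30s    # log a warning if checkpoints come too often
bgwriter_delay = 200ms      # background writer sleep between rounds (default)
```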
Scott Carey:
(2) The tests:
Note: The standard speed was about 800MB/40s, so 20MB/s.
a)
What I changed: fsync=off
Result: 35s, so 5s faster.
b) like a) but:
checkpoint_segments=128 (was 3)
autovacuum=off
Result: 35s (no change...?!)
yes, more
Scott Carey:
Well, something is causing the system to alternate between
CPU and disk bound here. (see below).
It would be useful to see what affect the index has.
Ok, I simply deleted the index and repeated the test: I did
not notice any difference. This is probably so because in
fact I am
On Jan 21, 2010, at 12:35 AM, Greg Smith wrote:
Scott Carey wrote:
On Jan 20, 2010, at 5:32 AM, fka...@googlemail.com wrote:
In the attachment you'll find 2 screenshots, perfmon34.png
and perfmon35.png (I hope 2x14 kB is o.k. for the mailing
list).
I don't think they made it to
On Jan 22, 2010, at 12:42 PM, fka...@googlemail.com wrote:
'Writing twice': That is the most interesting point, I
believe. Why is the data disk doing 40 MB/s *not* including
WAL, while in fact having only 20 MB/s write throughput? Seems
like: 20 MB for data, 20 MB for X, 20 MB for WAL.
There
On Jan 20, 2010, at 5:32 AM, fka...@googlemail.com wrote:
Bulk inserts into an indexed table are always significantly
slower than inserting unindexed and indexing later.
Agreed. However, shouldn't this be included in the disk-time
counters? If so, it should be near 100%.
Well,
Scott Carey wrote:
On Jan 20, 2010, at 5:32 AM, fka...@googlemail.com wrote:
In the attachment you'll find 2 screenshots, perfmon34.png
and perfmon35.png (I hope 2x14 kB is o.k. for the mailing
list).
I don't think they made it to the list? I didn't see it, presumably
Scott got a
On Thu, 21 Jan 2010, Greg Smith wrote:
In the attachment you'll find 2 screenshots, perfmon34.png
and perfmon35.png (I hope 2x14 kB is o.k. for the mailing
list).
I don't think they made it to the list?
No, it seems that no emails with image attachments ever make it through
the list server.
* Matthew Wakeling:
The data needs to be written first to the WAL, in order to provide
crash-safety. So you're actually writing 1600MB, not 800.
In addition to that, PostgreSQL 8.4.2 seems to pre-fill TOAST
files (or all relations?) with zeros when they are first
written, which adds another 400
fka...@googlemail.com:
I'll try to execute these tests on a SSD
and/or Raid system.
FYI:
On a SATA RAID-0 (mid-range hardware) with two recent 1.5 TB
disks with a write performance of 100 MB/s (worst case, up to
200 MB/s max), I get a performance of 18.2 MB/s. Before, with
the other disk, 43 MB/s (worst to
Scott Carey:
You are CPU bound.
30% of 4 cores is greater than 25%. 25% is one core fully
used.
I have measured the cores separately. Some of them reached
30%. I am not CPU bound here.
The postgres compression of data in TOAST is
probably the problem. I'm assuming it's doing Gzip, and
Pierre Frédéric Caillaud:
I don't remember if you used TOAST compression or not.
I use 'bytea' and SET STORAGE EXTERNAL for this column.
AFAIK this switches off the compression.
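As a sketch of what that setting looks like (the table and column names here are hypothetical, not from the thread):

```sql
-- Hypothetical table holding the 8 MB bytea values.
CREATE TABLE images (id serial PRIMARY KEY, data bytea);
-- EXTERNAL: store the value out of line in the TOAST table,
-- but skip TOAST compression (the default EXTENDED compresses).
ALTER TABLE images ALTER COLUMN data SET STORAGE EXTERNAL;
```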
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On 01/19/10 11:16, fka...@googlemail.com wrote:
fka...@googlemail.com:
I'll try to execute these tests on a SSD
and/or Raid system.
FYI:
On a SATA RAID-0 (mid-range hardware) with two recent 1.5 TB
disks with a write performance of 100 MB/s (worst case, up to
200 MB/s max), I get a performance of
On 19/01/10 10:50, fka...@googlemail.com wrote:
However, the deeper question is (sounds ridiculous): Why am
I I/O bound *this much* here? To recall: the write
performance in pg is about 20-25% of the worst-case serial
write performance of the disk (and only about 8-10% of the
best disk perf)
Ivan Voras:
[I just skimmed this thread - did you increase the number of WAL logs to
something very large, like 128?]
Yes, I tried even more.
I will be writing data quite constantly in the real scenario
later. So I wonder if increasing WAL logs will have a
positive effect or not: AFAIK when
fka...@googlemail.com wrote:
Scott Carey:
You are CPU bound.
30% of 4 cores is greater than 25%. 25% is one core fully
used.
I have measured the cores separately. Some of them reached
30%. I am not CPU bound here.
If you have numbers like that when running one
On 01/19/10 14:36, fka...@googlemail.com wrote:
Ivan Voras:
[I just skimmed this thread - did you increase the number of WAL logs to
something very large, like 128?]
Yes, I tried even more.
I will be writing data quite constantly in the real scenario
later. So I wonder if increasing WAL
On Jan 19, 2010, at 2:50 AM, fka...@googlemail.com wrote:
Scott Carey:
You are CPU bound.
30% of 4 cores is greater than 25%. 25% is one core fully
used.
I have measured the cores separately. Some of them reached
30%. I am not CPU bound here.
Measuring the cores isn't enough.
Hello Pierre,
thank you for these useful test commands.
Here is what I did:
Pierre Frédéric Caillaud:
Try this :
CREATE TABLE test AS SELECT * FROM yourtable;
This will test write speed, and TOAST compression speed.
Then try this:
(1)
Setting:
* pg_xlog sym'linked
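For reference, Pierre's write-speed test can be timed from psql roughly like this (a sketch; 'yourtable' is the placeholder from his message):

```sql
-- psql sketch: time the heap + TOAST rewrite suggested above.
\timing on
CREATE TABLE test AS SELECT * FROM yourtable;
DROP TABLE test;   -- clean up between runs
```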
Hannu Krosing:
On Thu, 2010-01-14 at 21:14 +0100, fka...@googlemail.com wrote:
Thanks a lot for your reply.
Hannu Krosing:
4 Core CPU, 3 GHz, WinXP, 1 TB SATA disk.
try inserting the same data using 4 parallel connections or even 8
parallel ones.
Interesting idea -- I
Dave Crooke:
If you don't need that level of consistency for your 8MB blobs, write them
to plain files named with some kind of id, and put the id in the database
instead of the blob.
The problem here is that I later need to give access to the
database via TCP to an external client. This
Hannu Krosing:
did you also test this with fsync=off ?
Yes. No significant difference.
I suspect that what you are seeing is the effect of randomly writing to
the index files. While sequential write performance can be up to
80MB/sec on modern drives, random writes are an order of
Matthew Wakeling:
The data needs to be written first to the WAL, in order to provide
crash-safety. So you're actually writing 1600MB, not 800.
I come back again to saving WAL to another disk. Now, after
all, I wonder: Doesn't the server wait anyway until WAL was
written to disk?
So, if true,
fka...@googlemail.com:
I'll try to execute these tests on a SSD
and/or Raid system.
FYI:
On a recent but mid-range performing SSD (128 GB size,
serially writing 80-140 MB/s, 100 MB/s average) the situation
was worse for some reason. No difference with fsync=on/off.
Felix
On Jan 18, 2010, at 3:20 AM, fka...@googlemail.com wrote:
Hello Pierre,
thank you for these useful test commands.
Here is what I did:
Pierre Frédéric Caillaud:
Try this :
CREATE TABLE test AS SELECT * FROM yourtable;
This will test write speed, and TOAST
I've changed the setting a bit:
(1) Replaced the 7,200 rpm disk with a 10,000 rpm one, still SATA though.
(2) Inserted only 10 rows (instead of 100), but 80 MB each,
so the same total amount of 800 MB.
(3) Changed the WAL path to the system disk (by the
great 'junction' trick
Florian Weimer:
Do you have any further idea why 16MB/s seems to be the limit here?
BYTEA deserialization is very slow, and this could be a factor here.
Have you checked that you are in fact I/O bound?
Could you elaborate that a bit? It sounds interesting but I
do not get what you mean
http://www.hagander.net/talks/Advanced%20PostgreSQL%20on%20Windows.pdf
Great doc! I'm keeping that ;)
On Thu, 14 Jan 2010, fka...@googlemail.com wrote:
The data needs to be written first to the WAL, in order to provide
crash-safety. So you're actually writing 1600MB, not 800.
I understand. So the actual throughput is 32MB/s which is
closer to 43 MB/s, of course.
Can I verify that by
On Thu, 14 Jan 2010, fka...@googlemail.com wrote:
Nevertheless: If your explanation covers all that can be
said about it, then replacing the hard disk with a faster one
should increase the performance here (I'll try to check that
out).
Probably. However, it is worth you running the test again,
Matthew Wakeling:
On Thu, 14 Jan 2010, fka...@googlemail.com wrote:
Nevertheless: If your explanation covers all that can be
said about it, then replacing the hard disk with a faster one
should increase the performance here (I'll try to check that
out).
Probably. However, it is worth you
Pierre Frédéric Caillaud:
At install time, or use a symlink (they exist on Windows too!...)
http://stackoverflow.com/questions/1901405/postgresql-wal-on-windows
Very interesting! Did not help much though (see other
posting).
Thank you
Felix
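For completeness, the 'junction' trick mentioned later in the thread can be sketched as follows (Windows, using the Sysinternals junction.exe tool since WinXP's mklink is unavailable; the paths and service name are hypothetical, and the server must be stopped first):

```
net stop pgsql-8.2
move C:\pgdata\pg_xlog D:\pg_xlog
junction C:\pgdata\pg_xlog D:\pg_xlog
net start pgsql-8.2
```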
On Thu, Jan 14, 2010 at 9:29 AM, fka...@googlemail.com wrote:
Hello together,
I need to increase the write performance when inserting
bytea of 8 MB. I am using 8.2.4 on Windows with libpq.
The test setting is simple:
I write a byte array (bytea) of 8 MB, 100 times
I'd second this: a database is doing all kinds of clever things to
ensure ACID consistency on every byte that gets written to it.
If you don't need that level of consistency for your 8MB blobs, write them
to plain files named with some kind of id, and put the id in the database
instead of the
fka...@googlemail.com wrote:
Hello together,
I need to increase the write performance when inserting
bytea of 8 MB. I am using 8.2.4 on Windows with libpq.
This takes about 50s, so 800MB/50s = 16MB/s.
However, the hard disk (SATA) could write 43 MB/s in the worst
case! Why is write
On Thu, 14 Jan 2010, fka...@googlemail.com wrote:
This takes about 50s, so 800MB/50s = 16MB/s.
However, the hard disk (SATA) could write 43 MB/s in the worst
case! Why is write performance limited to 16 MB/s?
Several reasons:
The data needs to be written first to the WAL, in order to provide
* fka...@googlemail.com [100114 09:29]:
This takes about 50s, so 800MB/50s = 16MB/s.
However, the hard disk (SATA) could write 43 MB/s in the worst
case! Why is write performance limited to 16 MB/s?
I altered the binary column to STORAGE EXTERNAL.
Some experiments
Do you have any further idea why 16MB/s seems to be the limit here?
BYTEA deserialization is very slow, and this could be a factor here.
Have you checked that you are in fact I/O bound?
You can speed things up by sending the data in binary, by passing
appropriate parameters to PQexecParams().
However, the hard disk (SATA) could write 43 MB/s in the worst
case! Why is write performance limited to 16 MB/s?
Some more hints on what I do:
I use PQexecParams() and the INSERT ... $1 notation to NOT
create a real escaped string from the data additionally, but
use a pointer to the 8MB data
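A minimal sketch of that call pattern (assuming an already-open connection and a hypothetical table blobs(data bytea); error handling omitted; requires libpq and a running server):

```c
/* Sketch only: insert a bytea value with PQexecParams, passing the
 * buffer by pointer in binary format, so no escaping is needed. */
#include <libpq-fe.h>

void insert_blob(PGconn *conn, const char *buf, size_t len)
{
    const char *params[1]  = { buf };
    int         lengths[1] = { (int) len };
    int         formats[1] = { 1 };   /* 1 = binary parameter */

    PGresult *res = PQexecParams(conn,
        "INSERT INTO blobs (data) VALUES ($1)",
        1,        /* one parameter */
        NULL,     /* let the server infer the type (bytea) */
        params, lengths, formats,
        0);       /* text result format */
    PQclear(res);
}
```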
Thank You for your reply.
Ivan Voras:
Are you doing it locally or over a network? If you are accessing the
server over a network then it could be the location of the bottleneck.
All is done locally (for now).
Felix
Thanks a lot for the detailed reply.
Matthew Wakeling:
On Thu, 14 Jan 2010, fka...@googlemail.com wrote:
This takes about 50s, so 800MB/50s = 16MB/s.
However, the hard disk (SATA) could write 43 MB/s in the worst
case! Why is write performance limited to 16 MB/s?
Several reasons:
Thanks a lot for your reply.
Hannu Krosing:
4 Core CPU, 3 GHz, WinXP, 1 TB SATA disk.
try inserting the same data using 4 parallel connections or even 8
parallel ones.
Interesting idea -- I forgot to mention though that 2-3
cores will be occupied soon with other tasks.
Felix
Aidan Van Dyk:
So, your SATA disk can do 43MB/s of sequential writes, but your example
is doing:
1) Sequential writes to WAL
2) Random writes to your index
3) Sequential writes to table heap
4) Sequential writes to the table's TOAST heap
5) Any other OS-based FS overhead
Ok, I see. Thanks a
Florian Weimer:
Do you have any further idea why 16MB/s seems to be the limit here?
BYTEA deserialization is very slow, and this could be a factor here.
Have you checked that you are in fact I/O bound?
Could you elaborate that a bit? It sounds interesting but I
do not get what you mean by:
Pierre Frédéric Caillaud:
4 Core CPU, 3 GHz, WinXP, 1 TB SATA disk.
Big CPU and slow disk...
You should add another disk just for the WAL -- disks are pretty cheap
these days.
Writing the WAL on a second disk is the first thing to do on a
configuration like yours,
-----Original Message-----
From: fka...@googlemail.com
Nevertheless: If your explanation covers all that can be said
about it, then replacing the hard disk with a faster one should
increase the performance here (I'll try to check that out).
Moving the pg_xlog directory to the OS drive
On Thu, 14 Jan 2010 22:28:07 +0100, fka...@googlemail.com wrote:
Pierre Frédéric Caillaud:
4 Core CPU, 3 GHz, WinXP, 1 TB SATA disk.
Big CPU and slow disk...
You should add another disk just for the WAL -- disks are pretty cheap
these days.
2010/1/15 Pierre Frédéric Caillaud li...@peufeu.com:
On Thu, 14 Jan 2010 22:28:07 +0100, fka...@googlemail.com wrote:
Pierre Frédéric Caillaud:
4 Core CPU, 3 GHz, WinXP, 1 TB SATA disk.
Big CPU and slow disk...
You should add another disk just for the