On 1/21/16 11:54 PM, Jinhua Luo wrote:
Hi,
There is another problem.
When autovacuum recycles old pages, ReadBuffer_common()
does mdread() instead of mdextend().
The read is synchronous, while the write can be mostly asynchronous,
so the frequent reads are much more costly than the writes.
Any help? Please.
Regards,
Hi,
I thought that with async commit enabled, the backend process would rarely
do file I/O. But in fact, it still does a lot of file I/O.
After inspecting the vfs probes using systemtap, and reading the
PostgreSQL source code, I found that a tight loop of inserts or updates
causes heavy file I/O
Hi,
I found the insert performance is not related to the table schema.
In fact, I can reproduce the issue using a simple table:
create table test(k bigserial primary key, a int, b int, c text, d text);
test.sql:
insert into test(a, b, c, d) values(3438, 1231,
Hi All,
I found it's not related to file I/O.
I use systemtap to diagnose the postgres backend process.
The systemtap script is a modified version of sample-bt-off-cpu:
https://gist.github.com/kingluo/15b656998035cef193bc
Test process:
1) create a simple table:
---
create table
On Sat, Jan 9, 2016 at 9:57 PM, Jinhua Luo wrote:
>
> To make a clean test env, I clone a new table, removing the indexes (keeping
> the primary key) and triggers, and use pgbench to test insert statement
> purely.
Can you share the pgbench command line, and the sql file you
On 1/9/16 11:57 PM, Jinhua Luo wrote:
But I do not understand why the process does so much IO with async
commit. And it does not even happen at shared buffer flushing or
lock waiting. Where's the code path doing this IO?
I assume you're asking about all the IO to the heap table. That
Hi All,
The database is postgresql 9.3, running on debian7, with 8 cpu cores and
8096MB physical memory.
There is a big table with 70 or more columns. It is inserted into constantly at 700
rows/sec. It's not feasible to use COPY, because the data is not predefined
or provisioned, and it's generated on
Hey Jeff (and others)!
First of all: Thanks for your detailed explanations and guide lines.
On 17.01.2013, at 18:12, Jeff Janes jeff.ja...@gmail.com wrote:
So the theory is that the presence of idx_4 is causing the trigger to
pick a poor plan (i.e. one using idx_4) while its absence removes
On Tue, Jan 15, 2013 at 3:44 PM, Horst Dehmer horst.deh...@gmail.com wrote:
idx_4 together with a simple select in the table's on-insert trigger is
slowing things down considerably.
So the theory is that the presence of idx_4 is causing the trigger to
pick a poor plan (i.e. one using idx_4)
After more testing I have gained some insights:
The foreign key constraints are NOT responsible for the low COPY FROM
performance in my case. I forgot about the indexes which are created along with
the FK constraints.
Besides the primary key
CONSTRAINT obj_item_loc_pkey PRIMARY KEY
Yes, the ids are something I don't like either.
They carry additional semantics, which I cannot make go away.
What are the chances that char(20) is more time-efficient than numeric(20)?
Disk space is no problem here.
On 12.01.2013, at 02:17, Claudio Freire klaussfre...@gmail.com wrote:
On Fri, Jan 11,
On Fri, Jan 11, 2013 at 5:17 PM, Claudio Freire klaussfre...@gmail.com wrote:
On Fri, Jan 11, 2013 at 8:55 PM, Horst Dehmer horst.deh...@gmail.com wrote:
Except - and that's the wall I'm hitting - for one table which yielded just
75 records/second.
The main 'problem' seems to be the FK
On Sat, Jan 12, 2013 at 5:16 PM, Horst Dehmer horst.deh...@gmail.com wrote:
Yes, the ids are something I don't like either.
They carry additional semantics, which I cannot make go away.
What are the chances that char(20) is more time-efficient than numeric(20)?
Disk space is no problem here.
What are
Jeff Janes jeff.ja...@gmail.com writes:
Will PG allow you to add a FK constraint where there is no usable
index on the referenced side?
It will not, because the referenced side must have a unique constraint,
ie an index.
The standard performance gotcha here is not having an index on the
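Both points in sketch form, with hypothetical table names (not the poster's schema):

```sql
-- The referenced side must carry a unique constraint (or primary key);
-- PostgreSQL rejects the FK declaration otherwise.
CREATE TABLE parent (
    id numeric(20) PRIMARY KEY          -- unique index required here
);
CREATE TABLE child (
    parent_id numeric(20) REFERENCES parent (id)
);
-- No index is created automatically on child.parent_id; without one,
-- every DELETE/UPDATE on parent must scan child to verify the FK:
CREATE INDEX child_parent_id_idx ON child (parent_id);
```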
The types referenced by the foreign keys are the same Numeric(20).
Since the complete schema (of about 300 tables) is generated, I will just try
char(20) instead of numeric(20) in the next days to see if it makes any
difference. Which I somehow doubt.
But first I'm following the lead of the
On Sat, Jan 12, 2013 at 7:41 PM, Horst Dehmer horst.deh...@gmail.com wrote:
Since the complete schema (of about 300 tables) is generated, I will just try
char(20) instead of numeric(20) in the next days to see if it makes any
difference. Which I somehow doubt.
I think that might just make it
Hi!
I see a massive performance drop when writing a large transaction. I'm writing
data for 33 tables with COPY FROM directly from streams in Scala/Java. Across all
tables there are 2.2M records, unevenly distributed from 1 record up to
315k records in some tables.
For comparison I ran a
On Fri, Jan 11, 2013 at 8:55 PM, Horst Dehmer horst.deh...@gmail.com wrote:
Except - and that's the wall I'm hitting - for one table which yielded just
75 records/second.
The main 'problem' seems to be the FK constraints. Dropping just them
restored insert performance for this table to 6k
On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:
CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)
)
the index definition is
CREATE INDEX PK_AT2
ON ABC
USING btree
(event, tableindex)
TABLESPACE sample;
Indexing the same column twice is useless.
Hi,
I am trying to tune my libpq program for insert performance.
When I tried inserting 1M rows into a table with a Primary Key, it took almost
62 seconds.
After adding a composite index of 2 columns, the performance degrades to 125
seconds.
I am using COPY to insert all data in 1 transaction.
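A common workaround for this pattern (a sketch with hypothetical names, not the poster's actual schema): create the secondary index after the bulk load, so the btree is built once by sort instead of being maintained row by row during the COPY:

```sql
CREATE TABLE t (id bigint PRIMARY KEY, a int, b int);
COPY t FROM '/tmp/data.csv' WITH (FORMAT csv);  -- hypothetical input file
CREATE INDEX t_a_b_idx ON t (a, b);             -- built in one pass after loading
```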
On Mon, Nov 1, 2010 at 14:49, Divakar Singh dpsma...@yahoo.com wrote:
I am trying to tune my libpq program for insert performance.
When I tried inserting 1M rows into a table with a Primary Key, it took
almost 62 seconds.
After adding a composite index of 2 columns, the performance degrades to
On Mon, Nov 1, 2010 at 14:56, Divakar Singh dpsma...@yahoo.com wrote:
Thanks for your tips. I will try those.
I am on Solaris Sparc 5.10
Sorry, I assumed you were running Linux. But still it could be the
same problem as I had.
Be careful changing your wal_sync_method, as it has the potential
On Monday 01 November 2010 15:08:10 Divakar Singh wrote:
here are my parameters:
Which pg version is that?
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
I am using 9.0.1
Best Regards,
Divakar
From: Andres Freund and...@anarazel.de
To: Divakar Singh dpsma...@yahoo.com
Cc: pgsql-performance@postgresql.org
Sent: Mon, November 1, 2010 7:44:31 PM
Subject: Re: [PERFORM] Insert performance with composite index
On Monday 01 November 2010 15:16:49 Divakar Singh wrote:
I am using 9.0.1
Either that's not true or you cargo-culted loads of your config from a
significantly older pg version.
Things like:
#bgwriter_delay = 200# 10-1 milliseconds between rounds
bgwriter_lru_percent = 0#
On Monday 01 November 2010 15:28:19 Divakar Singh wrote:
Do you mean these parameters have been removed starting 9.X?
As I see on
http://www.network-theory.co.uk/docs/postgresql/vol3/BackgroundWriter.html
,these parameters were added starting from 8.0 right?
No, I mean setting to 0 is a bit
to PostgreSQL.
BTW, this is PG 8.2.1 and 8.3.7 running on SLES 10.3, although I don't think it
matters.
Thanks for the help, Greg and Tom!
--- On Sat, 6/27/09, Greg Smith gsm...@gregsmith.com wrote:
I have a partitioned table with a multi-column unique index. The table is
partitioned on a timestamp with time zone column. (I realize this has nothing
to do with the unique index.) The original unique index was in the order
(timestamptz, varchar, text, text) and most queries against it
On Fri, 26 Jun 2009, bob_lun...@yahoo.com wrote:
The original unique index was in the order (timestamptz, varchar, text,
text) and most queries against it were slow. I changed the index order
to (varchar, text, timestamptz, text) and queries now fly, but loading
data (via copy from stdin) in
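The trade-off described above, sketched with hypothetical names:

```sql
-- Query-friendly order: the equality-filtered columns lead.
CREATE UNIQUE INDEX idx_query ON part_tbl (label, grp, ts, note);
-- Load-friendly order: a monotonically increasing leading column keeps
-- new entries on the right-hand edge of the btree, so the working set
-- of index pages stays small during COPY.
CREATE UNIQUE INDEX idx_load ON part_tbl (ts, label, grp, note);
```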
Hi Richard,
No, as said above transactions are made of 10 inserts...
Hmm - I read that as just meaning inserted 10 rows. You might find
that smaller batches provide peak performance.
Ahh ok ;-) sorry for my bad english... (yeah, i have been testing
several transaction size
Hi and thanks for your quick answer :-)
Here is my problem. With some heavy inserting into a simple DB (one
table, no indexes) I can't get better perf than 8000 inserts/sec. I'm
testing it using a simple C program which uses libpq and which uses:
- Insert prepared statement (to avoid too
* Richard Huxton dev@archonet.com [070306 12:22]:
2. You can do a COPY from libpq - is it really not possible?
Not really but i have been testing it and inserts are flying (about
10 inserts/sec) !!
What's the problem with the COPY? Could you COPY into one table then insert
from that
Csaba Nagy wrote:
I only know to answer your no. 2:
2) What about the issue with excessive locking for foreign keys when
inside a transaction? Has that issue disappeared in 8.2? And if not,
would it affect similarly in the case of multiple-row inserts?
The exclusive lock is gone
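For context, from 8.1 on the referential-integrity check takes a shared row lock on the referenced row instead of an exclusive one, so concurrent inserts pointing at the same parent row no longer serialize. Roughly what the RI trigger does under the hood (hypothetical names):

```sql
-- The FK check behind an INSERT into the child table:
SELECT 1 FROM parent WHERE id = 42 FOR SHARE;  -- shared lock (8.1+); FOR UPDATE before that
```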
Dear all,
After many tests and doc reading, i finally try to get help from
you...
Here is my problem. With some heavy inserting into a simple DB (one
table, no indexes) I can't get better perf than 8000 inserts/sec. I'm
testing it using a simple C program which uses libpq and which uses:
- Insert
I am importing roughly 15 million rows in one batch transaction. I am
currently doing this through batch inserts of around 500 at a time,
although I am looking at ways to do this via multiple (one-per-table)
copy commands for performance reasons.
I am currently running: PostgreSQL 8.0.4,
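For the batching itself, two sketches with a hypothetical table (note that multi-row VALUES only arrived in 8.2, after the poster's 8.0.4; COPY works on both):

```sql
-- One statement per batch instead of one statement per row:
INSERT INTO samples (id, val) VALUES (1, 10), (2, 20), (3, 30);
-- Or, per table, stream the whole batch and skip per-statement overhead:
COPY samples (id, val) FROM STDIN;
```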
Jeremy Haile [EMAIL PROTECTED] writes:
1) Why does the performance degrade as the table sizes grow? Shouldn't
the insert performance remain fairly constant if there are no indexes or
constraints?
Yeah, insert really should be a constant-time operation if there are no
add-on operations like
On Fri, 2005-11-04 at 13:21 -0500, Bruce Momjian wrote:
David Fetter wrote:
On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:
I'm inclined to treat this as an outright bug, not just a minor
performance issue, because it implies that a sufficiently long psql
script would
You mean something like the attached?
not quite: attached is a file to generate test.
to do it:
psql yadda
\i timeit.sql
\t
\o dump.sql
select make_dump(5, false);
\q
cat dump.sql | psql -q yadda
and see what pops out. I had to do it that way because redirecting psql
to dump file caused
You mean something like the attached?
oh, btw I ran timeit.c and performance is flat and fairly fast. I'm
pretty sure psql is the culprit here.
Merlin
Merlin Moncure [EMAIL PROTECTED] writes:
You mean something like the attached?
not quite: attached is a file to generate test.
cat dump.sql | psql -q yadda
Ah. Does your psql have readline support? if so, does adding -n to
that command change anything?
regards,
not quite: attached is a file to generate test.
cat dump.sql | psql -q yadda
Ah. Does your psql have readline support? if so, does adding -n to
that command change anything?
It doesn't, and it doesn't. :/ Ok, here's where it gets interesting. I
removed all the newlines from the
Merlin Moncure [EMAIL PROTECTED] writes:
It doesn't, and it doesn't. :/ Ok, here's where it gets interesting. I
removed all the newlines from the test output (dump.sql) and got flat
times ;).
That's bizarre ... I'd have thought a very long line would be more
likely to trigger internal
That's bizarre ... I'd have thought a very long line would be more
likely to trigger internal performance problems than the original.
What happens if you read the file with psql -f dump.sql instead
of cat/stdin?
non-flat. Also ran via \i and got non flat times.
BTW, I get flat times for
Merlin Moncure [EMAIL PROTECTED] writes:
yeah. I'm guessing problem is in the mingw flex/bison (which I really,
really hope is not the case) or some other win32 specific block of code.
I'm snooping around there...
Maybe I'm confused here, but I thought we had established that the local
and
ok, here is gprof output from newlines/no newlines
[newlines]
  %   cumulative     self               self     total
 time    seconds   seconds     calls   s/call   s/call  name
19.03       0.67      0.67         1     0.67     3.20  MainLoop
17.61       1.29      0.62    500031     0.00
Merlin Moncure [EMAIL PROTECTED] writes:
ok, mingw gprof is claiming MainLoop is a culprit here,
The only thing I can see that would be different for Windows is the
SetConsoleCtrlHandler kernel call ... could that be expensive? Why
do we have either sigsetjmp or setup_cancel_handler inside the
Nailed it.
problem is in mainloop.c - setup_cancel_handler. Apparently you can
have multiple handlers and windows keeps track of them all, even if they
do the same thing. Keeping track of so many system handles would
naturally slow the whole process down. Commenting out that line, times are
flat as
Tom Lane wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
Nailed it.
problem is in mainloop.c - setup_cancel_handler. Apparently you can
have multiple handlers and windows keeps track of them all, even if they
do the same thing. Keeping track of so many system handles would
naturally
David Fetter [EMAIL PROTECTED] writes:
On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:
I'm inclined to treat this as an outright bug, not just a minor
performance issue, because it implies that a sufficiently long psql
script would probably crash a Windows machine.
Ouch. In light
I'm inclined to treat this as an outright bug, not just a minor
certainly...
performance issue, because it implies that a sufficiently long psql
script would probably crash a Windows machine.
actually, it's worse than that; it's more of a DoS on the
whole system, as windows will
AFAICS it is appropriate to move the sigsetjmp and
setup_cancel_handler calls in front of the per-line loop inside
MainLoop --- can anyone see a reason not to?
hm. mainloop is re-entrant, right? That means each \i would reset the
handler...what is the downside to keeping a global flag?
Magnus Hagander [EMAIL PROTECTED] wrote
I'd like to use the win32-provided recv(), send() functions
instead of redirecting them to pgwin32_recv()/pgwin32_send(),
just like libpq does. If we do this, we will lose some
functionality, but I'd like to see the performance
difference first. -- do
On Wed, 2 Nov 2005, Merlin Moncure wrote:
If you put client/server on the same machine, then we don't know how
the CPU is split. Can you take a look at the approximate numbers by
observing the task manager data while running?
ok, I generated a test case which was 250k inserts to simple two
Both win32 send/recv have pgwin32_poll_signals() in them. This is
glorified WaitForSingleObjectEx on global pgwin32_signal_event. This is
probably part of the problem. Can we work some of the same magic you put
into check interrupts macro?
Whoop! following a cvs update I see this is
On Thu, 3 Nov 2005, Magnus Hagander wrote:
Sorry, I don't follow you here - what do you mean to do? Remove the
event completely so we can't wait on it?
I'd like to use the win32-provided recv(), send() functions instead of
redirecting them to pgwin32_recv()/pgwin32_send(), just like libpq
Both win32 send/recv have pgwin32_poll_signals() in them.
This is glorified WaitForSingleObjectEx on global
pgwin32_signal_event. This is probably part of the problem.
Can we work some of the same magic you put into check interrupts
macro?
Uh, we already do that, don't we?
Merlin Moncure [EMAIL PROTECTED] wrote
Running from remote, Time progression is:
First 50k: 20 sec
Second: 29 sec
[...]
final: 66 sec
This may be due to the maintenance cost of a big transaction, I am not sure
... Tom?
so, clear upward progression of time/rec. Initial time is
Qingqing Zhou [EMAIL PROTECTED] wrote
Not to 100%, so this means the server is always starving. It is waiting on
something -- of course not a lock. That's why I think there is some problem
with network communication. Another suspect would be the write - I know the
NTFS system will issue an internal
Merlin Moncure [EMAIL PROTECTED] writes:
ok, I generated a test case which was 250k inserts to simple two column
table all in single transaction. Every 50k inserts, time is recorded
via timeofday().
You mean something like the attached?
Running from remote, Time progression is:
First
On Thu, 3 Nov 2005, Tom Lane wrote:
On Unix I get a dead flat line (within measurement noise), both local
loopback and across my LAN.
after 5 30.20 sec
after 10 31.67 sec
after 15 30.98 sec
after 20 29.64 sec
after 25 29.83 sec
Confirmed in Linux. And on a winxp
I've done the tests with rc1. This is still as slow on windows ... about
6-10 times slower than linux (via IP socket), depending on using prepared
queries, etc...
By the way, we've tried to insert into the windows database from a linux
psql client, via the network. In this
Magnus Hagander [EMAIL PROTECTED] writes:
Marc's observation that switching to a linux client drops the
time down drastically is really interesting!
Could this be a case of the network being slow,
I'm wondering about nonstandard junk lurking in the TCP stack of the
Windows client machine.
On Wed, 2 Nov 2005, Merlin Moncure wrote:
By the way, we've tried to insert into the windows database from a
linux psql client, via the network. In this configuration, inserting
is only about 2 times slower than inserting locally (the linux client
had a slower CPU, 1700MHz against
One thing I did notice: in a 250k insert transaction the insert time
grows with #recs inserted. Time to insert the first 50k recs is about 27
sec and the last 50k recs is 77 sec. I also confirmed that the size of the
table is not playing a role here.
Marc, can you do select timeofday() every 50k
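The instrumentation being requested, sketched: timeofday() returns the wall-clock time as text and keeps advancing inside a transaction (unlike now(), which is frozen at transaction start), so it can time batches within one big transaction:

```sql
SELECT timeofday();   -- before the batch
-- ... 50k INSERTs ...
SELECT timeofday();   -- after the batch; the difference is real elapsed time
```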
On Tuesday 06 September 2005 19:11, Merlin Moncure wrote:
This makes me wonder if we are looking in the wrong place. Maybe the
problem is coming from psql? More results to follow.
problem is not coming from psql.
One thing I did notice that in a 250k insert transaction the insert time
On Tuesday 06 September 2005 19:11, Merlin Moncure wrote:
Here's the timeofday ... i'll do the gprof as soon as I can.
Every 5 rows...
Wed Sep 07 13:58:13.860378 2005 CEST
Wed Sep 07 13:58:20.926983 2005 CEST
Wed Sep 07 13:58:27.928385 2005 CEST
Wed Sep 07 13:58:35.472813 2005 CEST
Hi,
I usually use PostgreSQL coupled with Linux, but I have to use Windows for a
particular project.
So I wanted to do some tests to know if the performance will be acceptable (I
don't need PostgreSQL to be as fast with windows as with linux, but it has to
be usable...).
I started with
In my experience win32 is on par with linux generally, with a few gotchas on
either side. Are your times with fsync=no? It's much harder to give an
apples-to-apples comparison with fsync=on for various reasons.
It is with fsync=off on windows, fsync=on on linux
Are you running
well, inside a transaction this
nobody ?
On 18 Jul 2005, at 21:29, Yves Vindevogel wrote:
Hi,
Suppose I have a table with 4 fields (f1, f2, f3, f4)
I define 2 unique indexes u1 (f1, f2, f3) and u2 (f1, f2, f4)
I have 3 records
A, B, C, D (this will be inserted)
A, B, C, E (this will pass u2, but not u1, thus not inserted)
A, B, F, D (this will pass u1, but
Yves Vindevogel wrote:
So, I must use a function that will check against u1 and u2, and then
insert if it is ok.
I know that such a function is way slower than my insert query.
So - you have a table, called something like upload with 20,000 rows
and you'd like to know whether it is safe to
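One shape the check-then-insert can take, following the upload-table framing (hypothetical column names); it stays a single set-based statement rather than a row-by-row function:

```sql
INSERT INTO target (f1, f2, f3, f4)
SELECT u.f1, u.f2, u.f3, u.f4
FROM upload u
WHERE NOT EXISTS (SELECT 1 FROM target t            -- would violate u1
                  WHERE t.f1 = u.f1 AND t.f2 = u.f2 AND t.f3 = u.f3)
  AND NOT EXISTS (SELECT 1 FROM target t            -- would violate u2
                  WHERE t.f1 = u.f1 AND t.f2 = u.f2 AND t.f4 = u.f4);
```

Rows that would collide with either unique index are simply filtered out before the insert.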