Hi,
There is another problem.
When the autovacuum recycles the old pages, the ReadBuffer_common()
would do mdread() instead of mdextend().
The read is synchronous, while the write can be mostly asynchronous,
so the frequent reads are much worse than the write (mdextend) path.
Any help? Please.
Regards,
Jinhua
Hi,
I thought with async commit enabled, the backend process would rarely
do file io. But in fact, it still involves a lot of file io.
After inspecting the vfs probes using systemtap, and reading the
source codes of postgresql, I found the tight loop of insert or update
will cause heavy file io u
Hi All,
I found it's not related to file I/O.
I use systemtap to diagnose the postgres backend process.
The systemtap script is a modified version of sample-bt-off-cpu:
https://gist.github.com/kingluo/15b656998035cef193bc
Test process:
1) create a simple table:
---
create table test
Hi,
I found that the insert performance is not related to the table schema.
In fact, I could reproduce the issue using a simple table:
create table test(k bigserial primary key, a int, b int, c text, d text);
test.sql:
insert into test(a, b, c, d) values(3438, 1231,
'o'
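A minimal reproduction along these lines (a sketch: the text values are placeholders since the original statement is truncated above, and the pgbench options are assumptions) could be driven with `pgbench -n -f test.sql -c 8 -T 60 testdb`:

```sql
-- schema under test
create table test(k bigserial primary key, a int, b int, c text, d text);

-- test.sql, the statement pgbench repeats; 'ccc'/'ddd' are placeholder values
insert into test(a, b, c, d) values(3438, 1231, 'ccc', 'ddd');
```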
On Sat, Jan 9, 2016 at 9:57 PM, Jinhua Luo wrote:
>
> To make a clean test env, I clone a new table, removing the indexes (keeping
> the primary key) and triggers, and use pgbench to test insert statement
> purely.
Can you share the pgbench command line, and the sql file you feed to
it (and whate
On 1/9/16 11:57 PM, Jinhua Luo wrote:
But I do not understand why the process does so much IO with async
commit. It does not even happen during shared buffer flushing or
lock waiting. Where's the code path doing this IO?
I assume you're asking about all the IO to the heap table. That
Hi All,
The database is postgresql 9.3, running on debian7, with 8 cpu cores and
8096MB physical memory.
There is a big table with 70+ columns. Rows arrive constantly at 700
rows/sec. It's not feasible to use COPY, because the data is not predefined
or provisioned, and it's generated on dem
Hey Jeff (and others)!
First of all: Thanks for your detailed explanations and guide lines.
On 17.01.2013, at 18:12, Jeff Janes wrote:
> So the theory is that the presence of idx_4 is causing the trigger to
> pick a poor plan (i.e. one using idx_4) while its absence removes that
> temptation?
On Tue, Jan 15, 2013 at 3:44 PM, Horst Dehmer wrote:
> idx_4 together with a simple select in the table's on-insert trigger is
> slowing things down considerably.
So the theory is that the presence of idx_4 is causing the trigger to
pick a poor plan (i.e. one using idx_4) while its absence remov
After more testing I have gained some insights:
The foreign key constraints are NOT responsible for the low COPY FROM
performance in my case. I forgot about the indexes which are created along with
the FK constraints.
Besides the primary key
CONSTRAINT obj_item_loc_pkey PRIMARY KEY (obj_item_i
On Sat, Jan 12, 2013 at 7:41 PM, Horst Dehmer wrote:
> Since the complete schema (of about 300 tables) is generated, I will just try
> char(20) instead of numeric(20) in the next days to see if it makes any
> difference. Which I somehow doubt.
I think that might just make it worse.
Well, maybe
The types referenced by the foreign keys are the same Numeric(20).
Since the complete schema (of about 300 tables) is generated, I will just try
char(20) instead of numeric(20) in the next days to see if it makes any
difference. Which I somehow doubt.
But first I'm following the lead of the ta
Jeff Janes writes:
> Will PG allow you to add a FK constraint where there is no usable
> index on the referenced side?
It will not, because the referenced side must have a unique constraint,
ie an index.
The standard performance gotcha here is not having an index on the
referencing side. But th
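That standard fix, sketched with a table and column borrowed from elsewhere in this thread (the exact names are assumptions):

```sql
-- the referenced side is already covered by its unique/primary key;
-- the commonly missing piece is an index on the *referencing* column,
-- which each FK check (and each cascaded update/delete) probes
CREATE INDEX obj_item_loc_fk_idx ON obj_item_loc (obj_item_id);
```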
On Sat, Jan 12, 2013 at 5:16 PM, Horst Dehmer wrote:
> Yes, the ids are something I don't like either.
> They carry additional semantics, which I cannot make go away.
> What are the chances char(20) is more time-efficient than numeric(20)?
> Disk space is no problem here.
What are the other tables like
On Fri, Jan 11, 2013 at 5:17 PM, Claudio Freire wrote:
> On Fri, Jan 11, 2013 at 8:55 PM, Horst Dehmer wrote:
>> Except - and that's the wall I'm hitting - for one table which yielded just
>> 75 records/second.
>> The main 'problem' seems to be the FK constraints. Dropping just them
>> restored in
Yes, the ids are something I don't like either.
They carry additional semantics, which I cannot make go away.
What are the chances char(20) is more time-efficient than numeric(20)?
Disk space is no problem here.
On 12.01.2013, at 02:17, Claudio Freire wrote:
> On Fri, Jan 11, 2013 at 8:55 PM, Horst D
On Fri, Jan 11, 2013 at 8:55 PM, Horst Dehmer wrote:
> Except - and that's the wall I'm hitting - for one table which yielded just
> 75 records/second.
> The main 'problem' seems to be the FK constraints. Dropping just them
> restored insert performance for this table to 6k records/s. The table in
On Friday, January 11, 2013, Horst Dehmer wrote:
>
> Except - and that's the wall I'm hitting - for one table which yielded
> just 75 records/second.
> The main 'problem' seems to be the FK constraints. Dropping just them
> restored insert performance for this table to 6k records/s.
>
It sure soun
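A common workaround when FK checks dominate a bulk load (a sketch with assumed table and constraint names; re-adding the constraint validates all rows in one pass rather than once per inserted row):

```sql
-- drop the constraint, bulk-load, then re-add it
ALTER TABLE child DROP CONSTRAINT child_parent_fk;
-- ... bulk COPY / inserts here ...
ALTER TABLE child ADD CONSTRAINT child_parent_fk
  FOREIGN KEY (parent_id) REFERENCES parent(id);
```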
Hi!
I see a massive performance drop when writing a large transaction. I'm writing
data for 33 tables with COPY FROM directly from streams in Scala/Java. Over all
tables there are 2.2M records which are unevenly distributed from 1 record to
315k records in some tables.
For comparison I ran a t
2010/11/2 hubert depesz lubaczewski :
> On Tue, Nov 02, 2010 at 12:04:42PM +0100, Cédric Villemain wrote:
>> 2010/11/2 hubert depesz lubaczewski :
>> > On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:
>> >> > CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)
>> >> > )
>> >> > t
Maybe a query that is filtering on these 2 columns?
Best Regards,
Divakar
From: Cédric Villemain
To: dep...@depesz.com
Cc: Divakar Singh ; pgsql-performance@postgresql.org
Sent: Tue, November 2, 2010 4:34:42 PM
Subject: Re: [PERFORM] Insert
On Tue, Nov 02, 2010 at 12:04:42PM +0100, Cédric Villemain wrote:
> 2010/11/2 hubert depesz lubaczewski :
> > On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:
> >> > CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)
> >> > )
> >> > the index definition is
> >> > CREATE INDEX "P
2010/11/2 hubert depesz lubaczewski :
> On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:
>> > CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)
>> > )
>> > the index definition is
>> > CREATE INDEX "PK_AT2"
>> > ON ABC
>> > USING btree
>> > (event, tableindex)
>> > TABLES
On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:
> > CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)
> > )
> > the index definition is
> > CREATE INDEX "PK_AT2"
> > ON ABC
> > USING btree
> > (event, tableindex)
> > TABLESPACE sample;
>
> Indexing twice the same column
On Monday 01 November 2010 15:28:19 Divakar Singh wrote:
> Do you mean these parameters have been removed starting 9.X?
> As I see on
> http://www.network-theory.co.uk/docs/postgresql/vol3/BackgroundWriter.html
> ,these parameters were added starting from 8.0 right?
No, I mean setting to 0 is a b
Singh
Cc: pgsql-performance@postgresql.org
Sent: Mon, November 1, 2010 7:50:59 PM
Subject: Re: [PERFORM] Insert performance with composite index
On Monday 01 November 2010 15:16:49 Divakar Singh wrote:
> I am using 9.0.1
Either that's not true or you cargo-culted loads of your config fro
On Monday 01 November 2010 15:16:49 Divakar Singh wrote:
> I am using 9.0.1
Either that's not true or you cargo-culted loads of your config from a
significantly older pg version.
Things like:
#bgwriter_delay = 200           # 10-10000 milliseconds between rounds
bgwriter_lru_percent = 0        #
I am using 9.0.1
Best Regards,
Divakar
From: Andres Freund
To: Divakar Singh
Cc: pgsql-performance@postgresql.org
Sent: Mon, November 1, 2010 7:44:31 PM
Subject: Re: [PERFORM] Insert performance with composite index
On Monday 01 November 2010 15:08:10
On Monday 01 November 2010 15:08:10 Divakar Singh wrote:
> here are my parameters:
Which pg version is that?
--
Hi,
On Monday 01 November 2010 13:49:14 Divakar Singh wrote:
> When I tried inserting 1M rows into a table with a Primary Key, it took
> almost 62 seconds.
> After adding a composite index of 2 columns, the performance degrades to
> 125 seconds.
> I am using COPY to insert all data in 1 transactio
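The usual way to avoid paying per-row index maintenance during such a load (a sketch; the table, file, and index names are assumptions):

```sql
-- drop the secondary index, load everything, rebuild the index once;
-- a single CREATE INDEX pass is much cheaper than 1M incremental updates
DROP INDEX idx_composite;
COPY big_table FROM '/tmp/data.csv' WITH CSV;
CREATE INDEX idx_composite ON big_table (col1, col2);
```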
2010/11/1 Divakar Singh :
> Hi,
> I am trying to tune my libpq program for insert performance.
> When I tried inserting 1M rows into a table with a Primary Key, it took
> almost 62 seconds.
> After adding a composite index of 2 columns, the performance degrades to 125
> seconds.
> I am using COPY t
On Mon, Nov 1, 2010 at 14:56, Divakar Singh wrote:
> Thanks for your tips. I will try those.
> I am on Solaris Sparc 5.10
Sorry, I assumed you were running Linux. But still it could be the
same problem as I had.
Be careful changing your wal_sync_method, as it has the potential to
corrupt your da
On Mon, Nov 1, 2010 at 14:49, Divakar Singh wrote:
> I am trying to tune my libpq program for insert performance.
> When I tried inserting 1M rows into a table with a Primary Key, it took
> almost 62 seconds.
> After adding a composite index of 2 columns, the performance degrades to 125
> seconds.
Hi Marti,
Thanks for your tips. I will try those.
I am on Solaris Sparc 5.10
Best Regards,
Divakar
From: Marti Raudsepp
To: Divakar Singh
Cc: pgsql-performance@postgresql.org
Sent: Mon, November 1, 2010 6:23:17 PM
Subject: Re: [PERFORM] Insert performance
Hi,
I am trying to tune my libpq program for insert performance.
When I tried inserting 1M rows into a table with a Primary Key, it took almost
62 seconds.
After adding a composite index of 2 columns, the performance degrades to 125
seconds.
I am using COPY to insert all data in 1 transaction.
t
ame reason, but forgot to apply the lesson to PostgreSQL.
BTW, this is PG 8.2.1 and 8.3.7 running on SLES 10.3, although I don't think it
matters.
Thanks for the help, Greg and Tom!
--- On Sat, 6/27/09, Greg Smith wrote:
> From: Greg Smith
> Subject: Re: [PERFORM] Insert performa
On Fri, 26 Jun 2009, bob_lun...@yahoo.com wrote:
The original unique index was in the order (timestamptz, varchar, text,
text) and most queries against it were slow. I changed the index order
to (varchar, text, timestamptz, text) and queries now fly, but loading
data (via copy from stdin) in
bob_lun...@yahoo.com writes:
> Why would changing the column order on a unique index cause data loading or
> index servicing to slow down? Page splits in the b-tree, maybe?
Yeah, perhaps. Tell us about the data distributions in the columns?
Is there any ordering to the keys that're being insert
I have a partitioned table with a multi-column unique index. The table is
partitioned on a timestamp with time zone column. (I realize this has nothing
to do with the unique index.) The original unique index was in the order
(timestamptz, varchar, text, text) and most queries against it were
Hi Richard,
> >
> > Here is my problem. With some heavy inserts into a simple DB (one
> > table, no indexes) I can't get better performance than 8000 inserts/sec. I'm
> > testing it using a simple C program which uses libpq and which uses:
> > - Insert prepared statement (to avoid too many request parsing
Hi and thanks for your quick answer :-)
> >
> >>> Here is my problem. With some heavy inserts into a simple DB (one
> >>> table, no indexes) I can't get better performance than 8000 inserts/sec. I'm
> >>> testing it using a simple C program which uses libpq and which uses:
> >>> - Insert prepared statemen
Hi Richard,
>
> >>> No, as said above transactions are made of 10 inserts...
> >> Hmm - I read that as just meaning "inserted 10 rows". You might find
> >> that smaller batches provide peak performance.
>
> > Ahh ok ;-) sorry for my bad english... (yeah, i have been testing
> > several tra
Hi Andreas,
Thanks for the info about COPY !!
On Mar 6, 1:23 pm, [EMAIL PROTECTED] (Andreas Kostyrka) wrote:
> * Richard Huxton <[EMAIL PROTECTED]> [070306 12:22]:
> >> 2. You can do a COPY from libpq - is it really not possible?
>
> > >Not really but i have been testing it and inserts are flyin
Csaba Nagy wrote:
I only know to answer your no. 2:
2) What about the issue with excessive locking for foreign keys when
inside a transaction? Has that issue disappeared in 8.2? And if not,
would it affect similarly in the case of multiple-row inserts?
The exclusive lock is gone alr
I only know to answer your no. 2:
> 2) What about the issue with excessive locking for foreign keys when
> inside a transaction? Has that issue disappeared in 8.2? And if not,
> would it affect similarly in the case of multiple-row inserts?
The exclusive lock is gone already starting with 8.0 II
1. If you're running 8.2 you can have multiple sets of values in an
INSERT
http://www.postgresql.org/docs/8.2/static/sql-insert.html
Yeah, I'm running the 8.2.3 version! I didn't know about multi-row
insert sets! Thanks for the tip ;-)
No kidding --- thanks for the tip from me as well
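The 8.2+ multi-row form referred to above, sketched against a made-up table:

```sql
-- one statement, one round trip, several rows (PostgreSQL 8.2+)
INSERT INTO mytable (a, b) VALUES
  (1, 'x'),
  (2, 'y'),
  (3, 'z');
```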
* Richard Huxton [070306 13:47]:
> Andreas Kostyrka wrote:
> >* Richard Huxton [070306 12:22]:
> 2. You can do a COPY from libpq - is it really not possible?
>
> >>>Not really but i have been testing it and inserts are flying (about
> >>>10 inserts/sec) !!
> >>What's the problem with
Andreas Kostyrka wrote:
* Richard Huxton [070306 12:22]:
2. You can do a COPY from libpq - is it really not possible?
Not really but i have been testing it and inserts are flying (about
10 inserts/sec) !!
What's the problem with the COPY? Could you COPY into one table then insert
from t
* Richard Huxton [070306 12:22]:
> >>2. You can do a COPY from libpq - is it really not possible?
> >>
> >Not really but i have been testing it and inserts are flying (about
> >10 inserts/sec) !!
>
> What's the problem with the COPY? Could you COPY into one table then insert
> from that to y
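Richard's staging-table suggestion, sketched with assumed names: COPY into an unindexed, unconstrained table, then move the rows in one set-based statement.

```sql
-- load fast into a table with no indexes or constraints ...
CREATE TABLE staging (LIKE target);
COPY staging FROM '/tmp/rows.dat';
-- ... then pay the index/constraint cost once, in bulk
INSERT INTO target SELECT * FROM staging;
DROP TABLE staging;
```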
joël Winteregg wrote:
No, as said above transactions are made of 10 inserts...
Hmm - I read that as just meaning "inserted 10 rows". You might find
that smaller batches provide peak performance.
Ahh ok ;-) sorry for my bad english... (yeah, I have been testing
several transaction siz
joël Winteregg wrote:
Hi Richard,
Here is my problem. With some heavy inserts into a simple DB (one
table, no indexes) I can't get better performance than 8000 inserts/sec. I'm
testing it using a simple C program which uses libpq and which uses:
- Insert prepared statement (to avoid too many request par
hatman wrote:
Dear all,
After many tests and doc reading, i finally try to get help from
you...
Here is my problem. With some heavy inserts into a simple DB (one
table, no indexes) I can't get better performance than 8000 inserts/sec. I'm
testing it using a simple C program which uses libpq and which u
Dear all,
After many tests and doc reading, I finally try to get help from
you...
Here is my problem. With some heavy inserts into a simple DB (one
table, no indexes) I can't get better performance than 8000 inserts/sec. I'm
testing it using a simple C program which uses libpq and which uses:
- Insert pre
"Jeremy Haile" <[EMAIL PROTECTED]> writes:
> 1) Why does the performance degrade as the table sizes grow? Shouldn't
> the insert performance remain fairly constant if there are no indexes or
> constraints?
Yeah, insert really should be a constant-time operation if there's no
add-on operations l
I am importing roughly 15 million rows in one batch transaction. I am
currently doing this through batch inserts of around 500 at a time,
although I am looking at ways to do this via multiple (one-per-table)
copy commands for performance reasons.
I am currently running: PostgreSQL 8.0.4, Redhat
On Fri, 2005-11-04 at 13:21 -0500, Bruce Momjian wrote:
> David Fetter wrote:
> > On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:
> > > I'm inclined to treat this as an outright bug, not just a minor
> > > performance issue, because it implies that a sufficiently long psql
> > > script wo
""Magnus Hagander"" <[EMAIL PROTECTED]> wrote
>>
>> I'd like to use the win32 provided recv(), send() functions
>> instead of redirect them to pgwin32_recv()/pgwin32_send(),
>> just like libpq does. If we do this, we will lose some
>> functionalities, but I'd like to see the performance
>> differe
""Merlin Moncure"" <[EMAIL PROTECTED]> wrote
> ok, here is gprof output from newlines/no newlines
> [newlines]
>   %   cumulative   self              self    total
>  time   seconds   seconds    calls  s/call   s/call  name
> 19.03      0.67      0.67        1    0.67     3.20  MainLoop
> 17.61
> >> AFAICS it is appropriate to move the sigsetjmp and
> >> setup_cancel_handler calls in front of the per-line loop inside
> >> MainLoop --- can anyone see a reason not to?
>
> > hm. mainloop is re-entrant, right? That means each \i
> would reset the
> > handler...what is downside to keepin
> > I'm inclined to treat this as an outright bug, not just a minor
> certainly...
>
> > performance issue, because it implies that a sufficiently long psql
> > script would probably crash a Windows machine.
>
> actually, it's worse than that: it's more of a DoS on the
> whole system, as window
David Fetter wrote:
> On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:
> > "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > > Nailed it.
> >
> > > problem is in mainloop.c -> setup_cancel_handler. Apparently you
> > > can have multiple handlers and windows keeps track of them all,
> > >
David Fetter <[EMAIL PROTECTED]> writes:
> On Fri, Nov 04, 2005 at 01:01:20PM -0500, Tom Lane wrote:
>> I'm inclined to treat this as an outright bug, not just a minor
>> performance issue, because it implies that a sufficiently long psql
>> script would probably crash a Windows machine.
> Ouch.
Tom Lane wrote:
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > Nailed it.
>
> > problem is in mainloop.c -> setup_cancel_handler. Apparently you can
> > have multiple handlers and windows keeps track of them all, even if they
> > do the same thing. Keeping track of so many system handles wou
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
>> AFAICS it is appropriate to move the sigsetjmp and
>> setup_cancel_handler
>> calls in front of the per-line loop inside MainLoop --- can anyone see
>> a reason not to?
> hm. mainloop is re-entrant, right? That means each \i would reset the
> handle
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > Nailed it.
>
> > problem is in mainloop.c -> setup_cancel_handler. Apparently you
can
> > have multiple handlers and windows keeps track of them all, even if
they
> > do the same thing. Keeping track of so many system handles would
> > naturally
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> Nailed it.
> problem is in mainloop.c -> setup_cancel_handler. Apparently you can
> have multiple handlers and windows keeps track of them all, even if they
> do the same thing. Keeping track of so many system handles would
> naturally slow the whol
Nailed it.
The problem is in mainloop.c -> setup_cancel_handler. Apparently you can
have multiple handlers and Windows keeps track of them all, even if they
do the same thing. Keeping track of so many system handles would
naturally slow the whole process down. Commenting that line out, times are
flat as
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> ok, mingw gprof is claiming MainLoop is a culprit here,
The only thing I can see that would be different for Windows is the
SetConsoleCtrlHandler kernel call ... could that be expensive? Why
do we have either sigsetjmp or setup_cancel_handler inside
ok, here is gprof output from newlines/no newlines
[newlines]
  %   cumulative   self              self    total
 time   seconds   seconds    calls  s/call   s/call  name
19.03      0.67      0.67        1    0.67     3.20  MainLoop
17.61      1.29      0.62   500031    0.00
>
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > yeah. I'm guessing problem is in the mingw flex/bison (which I
really,
> > really hope is not the case) or some other win32 specific block of
code.
> > I'm snooping around there...
>
> Maybe I'm confused here, but I thought we had established
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> yeah. I'm guessing problem is in the mingw flex/bison (which I really,
> really hope is not the case) or some other win32 specific block of code.
> I'm snooping around there...
Maybe I'm confused here, but I thought we had established that the local
> That's bizarre ... I'd have thought a very long line would be more
> likely to trigger internal performance problems than the original.
>
> What happens if you read the file with "psql -f dump.sql" instead
> of cat/stdin?
non-flat. Also ran via \i and got non flat times.
> BTW, I get flat tim
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> It doesn't, and it doesn't. :/ Ok, here's where it gets interesting. I
> removed all the newlines from the test output (dump.sql) and got flat
> times ;).
That's bizarre ... I'd have thought a very long line would be more
likely to trigger internal
> > not quite: attached is a file to generate test.
>
> > cat dump.sql | psql -q yadda
>
> Ah. Does your psql have readline support? if so, does adding -n to
> that command change anything?
>
It doesn't, and it doesn't. :/ Ok, here's where it gets interesting. I
removed all the newlines fr
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
>> You mean something like the attached?
> not quite: attached is a file to generate test.
> cat dump.sql | psql -q yadda
Ah. Does your psql have readline support? if so, does adding -n to
that command change anything?
reg
> > You mean something like the attached?
oh, btw I ran timeit.c and performance is flat and fairly fast. I'm
pretty sure psql is the culprit here.
Merlin
> You mean something like the attached?
not quite: attached is a file to generate test.
to do it:
psql yadda
\i timeit.sql
\t
\o dump.sql
select make_dump(5, false);
\q
cat dump.sql | psql -q yadda
and see what pops out. I had to do it that way because redirecting psql
to dump file caused
On Thu, 3 Nov 2005, Tom Lane wrote:
>
> On Unix I get a dead flat line (within measurement noise), both local
> loopback and across my LAN.
>
> after 5 30.20 sec
> after 10 31.67 sec
> after 15 30.98 sec
> after 20 29.64 sec
> after 25 29.83 sec
>
Confirmed in Linux. And on
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> ok, I generated a test case which was 250k inserts to simple two column
> table all in single transaction. Every 50k inserts, time is recorded
> via timeofday().
You mean something like the attached?
> Running from remote, Time progression is:
> F
> > Sorry, I don't follow you here - what do you mean to do? Remove the
> > event completely so we can't wait on it?
> >
>
> I'd like to use the win32 provided recv(), send() functions instead of
> redirect them to pgwin32_recv()/pgwin32_send(), just like libpq does.
If
> we do this, we will lose
On Thu, 3 Nov 2005, Magnus Hagander wrote:
> > Both win32 send/recv have pgwin32_poll_signals() in them.
> > This is glorified WaitForSingleObjectEx on global
> > pgwin32_signal_event. This is probably part of the problem.
> > Can we work some of the same magic you put into check
> > interrupts
"Qingqing Zhou" <[EMAIL PROTECTED]> wrote
>
> Not to 100%, so this means the server is always starving. It is waiting on
> something -- of course not a lock. That's why I think there is some problem
> with network communication. Another suspect will be the writes - I know the
> NTFS system will issue an
""Merlin Moncure"" <[EMAIL PROTECTED]> wrote
>
> Running from remote, Time progression is:
> First 50k: 20 sec
> Second: 29 sec
> [...]
> final:: 66 sec
>
This may be due to the maintenance cost of a big transaction, I am not sure
... Tom?
> so, clear upward progression of time/rec. Init
> > > I'd like to use the win32 provided recv(), send()
> functions instead
> > > of redirect them to pgwin32_recv()/pgwin32_send(), just
> like libpq
> > > does. If we do this, we will lose some functionalities,
> but I'd like
> > > to see the performance difference first. -- do you think
>
On Thu, 3 Nov 2005, Magnus Hagander wrote:
> > > Sorry, I don't follow you here - what do you mean to do? Remove the
> > > event completely so we can't wait on it?
> > >
> >
> > I'd like to use the win32 provided recv(), send() functions
> > instead of redirect them to pgwin32_recv()/pgwin32_sen
> > Sorry, I don't follow you here - what do you mean to do? Remove the
> > event completely so we can't wait on it?
> >
>
> I'd like to use the win32 provided recv(), send() functions
> instead of redirect them to pgwin32_recv()/pgwin32_send(),
> just like libpq does. If we do this, we will lo
> > > Both win32 send/recv have pgwin32_poll_signals() in them.
> > > This is glorified WaitForSingleObjectEx on global
> > > pgwin32_signal_event. This is probably part of the problem.
> > > Can we work some of the same magic you put into check interrupts
> > > macro?
> > >
> >
> > Uh, we alrea
On Thu, 3 Nov 2005, Magnus Hagander wrote:
>
> Sorry, I don't follow you here - what do you mean to do? Remove the
> event completely so we can't wait on it?
>
I'd like to use the win32 provided recv(), send() functions instead of
redirect them to pgwin32_recv()/pgwin32_send(), just like libpq
> Both win32 send/recv have pgwin32_poll_signals() in them.
> This is glorified WaitForSingleObjectEx on global
> pgwin32_signal_event. This is probably part of the problem.
> Can we work some of the same magic you put into check
> interrupts macro?
>
> ISTM everything also in win32 functio
> Both win32 send/recv have pgwin32_poll_signals() in them. This is
> glorified WaitForSingleObjectEx on global pgwin32_signal_event. This
is
> probably part of the problem. Can we work some of the same magic you
put
> into check interrupts macro?
Whoop! following a cvs update I see this is alr
> On Wed, 2 Nov 2005, Merlin Moncure wrote:
> If you put client/server on the same machine, then we don't know how
the
> CPU is splitted. Can you take a look at the approximate number by
> observing the task manager data while running?
ok, I generated a test case which was 250k inserts to simple t
On Wed, 2 Nov 2005, Merlin Moncure wrote:
> >
> > By the way, we've tried to insert into the windows database from a
> > linux psql client, via the network. In this configuration, inserting
> > is only about 2 times slower than inserting locally (the linux client
> > had a slower CPU 1700Mhz aga
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
>> Marc's observation that switching to a linux client drops
>> time drastically is really interesting!
> Could this be a case of the network being slow,
I'm wondering about nonstandard junk lurking in the TCP stack of the
Windows client machin
On Wednesday 02 November 2005 14:54, Magnus Hagander wrote:
> > > I've done the tests with rc1. This is still as slow on windows ... about
> > > 6-10 times slower than linux (via IP socket), depending on using prepared
> > > queries, etc...
> > >
> > > By the way,
> > I've done the tests with rc1. This is still as slow on windows ... about
> > 6-10 times slower than linux (via IP socket), depending on using prepared
> > queries, etc...
> >
> > By the way, we've tried to insert into the windows database from a linux
> > psql client, via th
> I've done the tests with rc1. This is still as slow on windows ... about
> 6-10 times slower than linux (via IP socket), depending on using prepared
> queries, etc...
>
> By the way, we've tried to insert into the windows database from a linux
> psql client, via the network. In this confi
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> ok, I've been crunching profile graphs, and so far have only
> been able to draw the following conclusions.
> For bulk, 'in-transaction' insert:
> 1. win32 is slower than linux. win32 time for each insert grows with #
> inserts in xact, l
> On Tuesday 06 September 2005 19:11, Merlin Moncure wrote:
> Here's the timeofday ... i'll do the gprof as soon as I can.
> Every 5 rows...
>
> Wed Sep 07 13:58:13.860378 2005 CEST
> Wed Sep 07 13:58:20.926983 2005 CEST
> Wed Sep 07 13:58:27.928385 2005 CEST
> Wed Sep 07 13:58:35.472813 2005
> > One thing I did notice is that in a 250k insert transaction the insert
> > time grows with # recs inserted. Time to insert the first 50k recs is
> > about 27 sec and the last 50k recs is 77 sec. I also confirmed that the
> > size of the table is not playing a role here.
> >
> > Marc, can you do select timeofday(