On Tue, 2005-04-05 at 16:05 -0400, Christopher Petrilli wrote:
On Apr 5, 2005 3:48 PM, Simon Riggs [EMAIL PROTECTED] wrote:
The indicated fix of course is to increase shared_buffers.
Splitting your tables at 4M, not 10M would work even better.
Unfortunately, given we are talking about
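A minimal sketch of the shared_buffers suggestion, assuming an 8.0-era server where the setting is counted in 8KB pages; the value shown is illustrative only, not a figure from this thread:

    # postgresql.conf (requires a server restart)
    shared_buffers = 32768        # 32768 x 8KB pages = 256MB of shared cache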
On Mon, 2005-04-04 at 22:36 -0400, Tom Lane wrote:
Christopher Petrilli [EMAIL PROTECTED] writes:
On Apr 4, 2005 12:23 PM, Tom Lane [EMAIL PROTECTED] wrote:
do a test run with *no* indexes on the table, just to see if it behaves
any differently? Basically I was wondering if index overhead
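One way to run that experiment, sketched here with a hypothetical events table and hypothetical index names: drop the indexes, time the load, then rebuild.

    ALTER TABLE events DROP CONSTRAINT events_pkey;        -- the string primary key
    DROP INDEX events_logged_at_idx;                       -- a secondary index
    COPY events FROM '/mnt/ramdisk/batch.dat';             -- timed load, no index maintenance
    ALTER TABLE events ADD PRIMARY KEY (event_id);
    CREATE INDEX events_logged_at_idx ON events (logged_at);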
On Apr 5, 2005 3:48 PM, Simon Riggs [EMAIL PROTECTED] wrote:
Now some amount of slowdown is to be expected as the indexes get larger,
since it ought to take roughly O(log N) time to insert a new entry in an
index of size N. The weird thing about your curves is the very sudden
jump in the
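Back of the envelope, that O(log N) growth is gentle: a descent through a 1M-entry B-tree costs about log2(10^6) = 20 comparisons, versus log2(10^7) = 23 for 10M entries, i.e. under 20% more work for ten times the rows. A sudden cliff in the insert-rate curve therefore points at something other than tree depth.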
On Apr 5, 2005 3:48 PM, Simon Riggs [EMAIL PROTECTED] wrote:
B-trees aren't unique to PostgreSQL; the explanation developed here
would work equally well for any database system that used tree-based
indexes. Do we still think that MySQL can do this when PostgreSQL
cannot? How?
Do we have
On Apr 1, 2005 3:59 PM, Christopher Petrilli [EMAIL PROTECTED] wrote:
On Apr 1, 2005 3:53 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
What seems to happen is it slams into a wall of some sort, the
system goes into disk write frenzy (wait=90% CPU), and eventually
recovers and starts
Christopher Petrilli [EMAIL PROTECTED] writes:
The table has:
* 21 columns (nothing too strange)
* No OIDS
* 5 indexes, including the primary key on a string
Could we see the *exact* SQL definitions of the table and indexes?
Also some sample data would be interesting. I'm
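The exact definitions never made it into these excerpts, so purely as a hypothetical sketch of the shape being described (a string primary key plus four secondary indexes; the real table had 21 columns):

    CREATE TABLE events (
        event_id   varchar(40) PRIMARY KEY,    -- the PK on a string
        logged_at  timestamp NOT NULL,
        src_ip     inet,
        dst_ip     inet,
        sensor_id  integer,
        msg        text                        -- remaining 15 columns elided
    ) WITHOUT OIDS;
    CREATE INDEX events_logged_at_idx ON events (logged_at);
    CREATE INDEX events_src_ip_idx ON events (src_ip);
    CREATE INDEX events_dst_ip_idx ON events (dst_ip);
    CREATE INDEX events_sensor_id_idx ON events (sensor_id);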
Christopher Petrilli [EMAIL PROTECTED] writes:
On Apr 4, 2005 11:52 AM, Tom Lane [EMAIL PROTECTED] wrote:
Could we see the *exact* SQL definitions of the table and indexes?
Of course, this is a bit cleansed, since it's an internal project, but
only the column names are changed:
Thanks. No
On Mon, 2005-04-04 at 09:48 -0400, Christopher Petrilli wrote:
The point, in the rough middle, is where the program begins inserting
into a new table (inherited). The X axis is the total number of rows
inserted.
and you also mention the same data plotted with elapsed time.
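The rotation scheme being described, sketched with hypothetical names: each slice of rows lives in its own child table, the loader re-points its COPY at the new child when the slice fills, and queries against the parent still see every slice.

    CREATE TABLE events_003 () INHERITS (events);      -- next slice
    COPY events_003 FROM '/mnt/ramdisk/batch.dat';     -- loader targets the new child
    SELECT count(*) FROM events;                       -- scans parent plus all children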
On Apr 4, 2005 4:11 PM, Simon Riggs [EMAIL PROTECTED] wrote:
I'm very interested in the graphs of elapsed time for COPY 500 rows
against rows inserted. The simplistic inference from those graphs is
that if you only inserted 5 million rows into each table, rather than 10
million rows
On Apr 4, 2005 4:53 PM, PFC [EMAIL PROTECTED] wrote:
This is done using COPY syntax, not INSERT syntax. So I suppose yes
I do. The file that is being used for COPY is kept on a ramdisk.
COPY or psql \copy ?
If you wanna be sure you commit after each COPY, launch a psql in a shell
and check if the inserted rows are visible (watching SELECT
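The distinction PFC is drawing, in sketch form (same hypothetical file path as above): COPY is executed by the backend and reads a server-side file, while psql's \copy reads the file on the client and ships the rows over the connection; outside an explicit BEGIN, each one commits on its own.

    -- server side: the backend itself opens the file
    COPY events FROM '/mnt/ramdisk/batch.dat';

    -- client side: psql opens the file and streams it over the connection
    \copy events from '/mnt/ramdisk/batch.dat'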
On Apr 4, 2005 4:58 PM, Simon Riggs [EMAIL PROTECTED] wrote:
Can you do:
select relname from pg_class where relfilenode = 26488271
and confirm that the name is the table you've been loading...
It is.
Couldn't see all your indexes... are they still there?
Nope, I'm running a second run
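The same catalog lookup also works in the other direction, when you are watching file activity and want to know which relations the files belong to (names hypothetical):

    SELECT relname, relkind, relfilenode
      FROM pg_class
     WHERE relname LIKE 'events%';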
If I'm getting the point of this thread correctly, having a huge amount
of data in one table degrades INSERT/COPY performance even with just a
PKEY index. If that's about the size of it, read on. If not, ignore
me because I missed something.
On Apr 4, 2005 10:44 PM, Simon Riggs [EMAIL PROTECTED]
On Apr 4, 2005 6:44 PM, Simon Riggs [EMAIL PROTECTED] wrote:
Before I start to tunnel-vision on a particular coincidence...
How much memory have you got on the system?
Now, 2Gb, but most of it is free in this situation. Earlier, I posted
some of the settings related to work mem.
How much
Christopher Petrilli [EMAIL PROTECTED] writes:
On Apr 4, 2005 10:36 PM, Tom Lane [EMAIL PROTECTED] wrote:
The indicated fix of course is to increase shared_buffers.
Any idea where it should be set?
Not really. An upper bound would be the total size of the finished
indexes for one 10M-row
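That upper bound can be read straight out of the catalog, since pg_class records each relation's size in 8KB pages (index names hypothetical):

    SELECT relname, relpages, relpages * 8 / 1024 AS approx_mb
      FROM pg_class
     WHERE relname IN ('events_pkey', 'events_logged_at_idx');
    -- relpages is maintained by VACUUM/ANALYZE, so treat it as an estimate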
Tom Lane [EMAIL PROTECTED] writes:
What I think might be happening is that the working set of pages
touched during index inserts is gradually growing, and at some point it
exceeds shared_buffers, and at that point performance goes in the toilet
because we are suddenly doing lots of reads to
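The arithmetic behind that hypothesis: with key values arriving in essentially random order, every leaf page of every index stays hot. At a few hundred entries per 8KB page, one 10M-row index has tens of thousands of leaf pages, and five of them together can run past a gigabyte; once that combined working set outgrows shared_buffers, each 500-row COPY starts paying real reads for index descents that used to be satisfied from cache.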
Greg Stark [EMAIL PROTECTED] writes:
All this is happening within a single transaction too, right? So there hasn't
been an fsync the entire time. It's entirely up to the kernel when to decide
to start writing data.
No ... there's a commit every 500 records. However, I think Chris said
he was
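The pattern being described, as a sketch (hypothetical table and file names): each 500-row batch is its own transaction, so there is an fsync at every COMMIT rather than a single one at the end of the run.

    BEGIN;
    COPY events FROM '/mnt/ramdisk/batch.dat';   -- one 500-row batch
    COMMIT;                                      -- forces a WAL fsync here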
On Tue, Apr 05, 2005 at 12:16:27AM -0400, Christopher Petrilli wrote:
My fear is that it's some bizarre situation interacting with both
issues, and one that might not be solvable. Does anyone else have
much experience with this sort of sustained COPY?
You might ask the guy who just posted to
On Apr 1, 2005 1:06 PM, Marc G. Fournier [EMAIL PROTECTED] wrote:
Just curious, but does anyone have an idea of what we are capable of? I
realize that size of record would affect things, as well as hardware, but
if anyone has some ideas on max, with 'record size', that would be
appreciated
Christopher Petrilli [EMAIL PROTECTED] writes:
I can start at about 4,000 rows/second, but at about 1M rows, it
plummets, and by 4M it's taking 6-15 seconds to insert 1000 rows.
That's only about 15 rows/second, which is quite pathetic. The
problem seems to be related to my indexes, since I
What seems to happen is it slams into a wall of some sort, the
system goes into disk write frenzy (wait=90% CPU), and eventually
recovers and starts running for a while at a more normal speed. What
I need though, is to not have that wall happen. It is easier for me
to accept a constant
Mike Nolan [EMAIL PROTECTED] writes:
Well, I just did an insert of 27,500 records with 9 fields, averaging
around 118 bytes per record, each insert statement coming from a separate
SQL statement fed to psql, and it took a bit over 4 minutes, or about
106 inserts per second.
Is that with a separate transaction for each
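One plausible reading of that figure: with one transaction per statement and fsync on, every commit waits for the WAL record to reach the platter, and a 7200rpm disk completes only 120 revolutions per second. 106 commits per second sits just under that ceiling, which would make rotational latency, not row width, the limiting factor.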
Oh - this is with a separate transaction per command.
fsync is on.
Alex Turner
netEconomist
On Apr 1, 2005 4:17 PM, Alex Turner [EMAIL PROTECTED] wrote:
1250/sec with record size average is 26 bytes
800/sec with record size average is 48 bytes.
250/sec with record size average is 618 bytes.