Nobody?
On 18 Jul 2005, at 21:29, Yves Vindevogel wrote:
Hi,
Suppose I have a table with 4 fields (f1, f2, f3, f4)
I define 2 unique indexes u1 (f1, f2, f3) and u2 (f1, f2, f4)
I have 3 records
A, B, C, D (this will be inserted)
A, B, C, E (this will pass u2, but not u1, thus not inserted)
A, B, F, D (this will pass u1, but not u2, thus not inserted)
Yves Vindevogel wrote:
Hi,
Suppose I have a table with 4 fields (f1, f2, f3, f4)
I define 2 unique indexes u1 (f1, f2, f3) and u2 (f1, f2, f4)
I have 3 records
A, B, C, D (this will be inserted)
A, B, C, E (this will pass u2, but not u1, thus not inserted)
A, B, F, D (this will pass u1, but not u2, thus not inserted)
BTW: thank you for the idea
Begin forwarded message:
From: Yves Vindevogel <[EMAIL PROTECTED]>
Date: Tue 19 Jul 2005 12:20:34 CEST
To: Richard Huxton
Subject: Re: [PERFORM] Insert performance (OT?)
On 19 Jul 2005, at 11:39, Richard Huxton wrote:
Yves Vindevogel wrote:
Hi,
Suppose I have a table with 4 fields (f1, f2, f3, f4)
Yves Vindevogel wrote:
>>> So, I must use a function that will check against u1 and u2, and then
>>> insert if it is ok.
>>> I know that such a function is way slower than my insert query.
So - you have a table, called something like "upload" with 20,000 rows
and you'd like to know whether it is safe
I will use 2 queries. They run within a function fnUpload(), so I'm going to keep it simple.
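As a sketch, the two checks can also be folded into a single statement (the staging table name "upload" comes from Richard's reply; the table and index definitions are assumed from the example above):

    CREATE TABLE target (f1 text, f2 text, f3 text, f4 text);
    CREATE UNIQUE INDEX u1 ON target (f1, f2, f3);
    CREATE UNIQUE INDEX u2 ON target (f1, f2, f4);

    -- Insert only the staged rows that violate neither unique index:
    INSERT INTO target (f1, f2, f3, f4)
    SELECT s.f1, s.f2, s.f3, s.f4
    FROM upload s
    WHERE NOT EXISTS (SELECT 1 FROM target t
                      WHERE t.f1 = s.f1 AND t.f2 = s.f2 AND t.f3 = s.f3)
      AND NOT EXISTS (SELECT 1 FROM target t
                      WHERE t.f1 = s.f1 AND t.f2 = s.f2 AND t.f4 = s.f4);
    -- Note: duplicates within the upload batch itself would still need handling.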
On 19 Jul 2005, at 12:51, Richard Huxton wrote:
Yves Vindevogel wrote:
>>> So, I must use a function that will check against u1 and u2, and then
>>> insert if it is ok.
>>> I know that such a function is way slower than my insert query.
On 7/18/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> > The table has 15 columns, 5 indexes (character, inet and timestamp).
> > No foreign keys. The only other thing running on the machine was the
> > application actually DOING the benchmarking, written in Python
> > (psycopg), but it was, according t
Christopher Petrilli <[EMAIL PROTECTED]> writes:
> Here's a graph of the differences and density of behavior:
> http://blog.amber.org/diagrams/pgsql_copy_803_cvs.png
> I can provide the raw data.
How about the complete test case? There's something awfully odd going
on there, and I'd like to fin
What happens if, say at iteration 6000 (a bit after the mess starts), you
pause it for a few minutes and resume. Will it restart with a plateau like
at the beginning of the test? Or not?
What if, during this pause, you disconnect and reconnect, or restart the
postmaster, or vacuum, or a
On 7/19/05, PFC <[EMAIL PROTECTED]> wrote:
>
> What happens if, say at iteration 6000 (a bit after the mess starts),
> you
> pause it for a few minutes and resume. Will it restart with a plateau like
> at the beginning of the test? Or not?
Not sure... my benchmark is designed to represent what the database
will do under "typical" circumstances, and unfortunately these are
typical for the application.
Christopher Petrilli <[EMAIL PROTECTED]> writes:
> On 7/19/05, PFC <[EMAIL PROTECTED]> wrote:
>> What happens if, say at iteration 6000 (a bit after the mess starts), you
>> pause it for a few minutes and resume. Will it restart with a plateau like
>> at the beginning of the test? Or not?
> Not sure... my benchmark is designed to represent what the database
> will do under "typical" circumstances, and unfortunately these are
> typical for the application.
total. If the insertion pattern is sufficiently random that the entire
index ranges are "hot" then you might not have enough RAM.
Try doing the test dropping some of your indexes and see if it moves the
number of iterations after which it becomes slow.
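A sketch of that experiment (the index name and column are invented for illustration):

    -- Before a benchmark run, drop one of the indexes:
    DROP INDEX events_ts_idx;
    -- ... run the insert benchmark and note where the slowdown starts ...
    -- Restore it before the next run:
    CREATE INDEX events_ts_idx ON events (ts);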
I think PFC's question was not directed towards modeling your
application, but about helping us understand what is going wrong
(so we can fix it).
Exactly, I was wondering if this delay would allow things to get flushed,
for instance, which would give information about the problem (if giv
On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > Not sure... my benchmark is designed to represent what the database
> > will do under "typical" circumstances, and unfortunately these are
> > typical for the application. However, I can see abo
On 7/19/05, PFC <[EMAIL PROTECTED]> wrote:
>
>
> > I think PFC's question was not directed towards modeling your
> > application, but about helping us understand what is going wrong
> > (so we can fix it).
>
> Exactly, I was wondering if this delay would allow things to get
> flushed,
Christopher Petrilli <[EMAIL PROTECTED]> writes:
> On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
>> I'm suddenly wondering if the performance dropoff corresponds to the
>> point where the indexes have grown large enough to not fit in shared
>> buffers anymore. If I understand correctly, the 5000
On Mon, 18 Jul 2005 13:52:53 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
>
> Start a fresh psql session and "SHOW vacuum_cost_delay" to verify what
> the active setting is.
>
Alright. Restarted the 803 database. Cron based vacuum analyze is running
every 5 minutes. vacuum_cost_delay is 0. The problem showed up after
about 1/2 hour of running.
Robert Creager <[EMAIL PROTECTED]> writes:
> Alright. Restarted the 803 database. Cron based vacuum analyze is
> running every 5 minutes. vacuum_cost_delay is 0. The problem showed
> up after about 1/2 hour of running. I've got vacuum jobs stacked from
> the last 35 minutes, with 2 vacuums run
As I'm doing this, I'm noticing something *VERY* disturbing to me:
postmaster backend: 20.3% CPU
psql frontend: 61.2% CPU
WTF? The only thing going through the front end is the COPY command,
and it's sent to the backend to read from a file?
Chris
--
| Christopher Petrilli
| [EMAIL PROTECTED]
Christopher Petrilli <[EMAIL PROTECTED]> writes:
> As I'm doing this, I'm noticing something *VERY* disturbing to me:
> postmaster backend: 20.3% CPU
> psql frontend: 61.2% CPU
> WTF? The only thing going through the front end is the COPY command,
> and it's sent to the backend to read from a file?
On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > As I'm doing this, I'm noticing something *VERY* disturbing to me:
> > postmaster backend: 20.3% CPU
> > psql frontend: 61.2% CPU
>
> > WTF? The only thing going through the front end is the COPY command,
> > and it's sent to the backend to read from a file?
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the load on the server is only about 21%.
I upped the sort_mem to 8192 (kB), and shared_buffers and effective_cache_size to 65536 (512MB), but neither
Oliver Crosby wrote:
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the
load on the server is only about 21%.
What queries?
What is your structure?
Have you tried explain analyze?
How many rows
Oliver Crosby wrote:
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the
load on the server is only about 21%.
I upped the sort_mem to 8192 (kB), and shared_buffers and
effective_cache_size to 65536 (512MB), but neither
On Tue, 19 Jul 2005 12:54:22 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Hmm, I hadn't thought about the possible impact of multiple concurrent
> vacuums. Is the problem caused by that, or has performance already gone
> into the tank by the time the cron-driven vacuums are taking long enough
> to
Oliver Crosby wrote:
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the load
on the server is only about 21%.
What scripts? What do they do?
Oh, and 7.4.8 is the latest release - worth upgrading.
Christopher Petrilli <[EMAIL PROTECTED]> writes:
>> Are you sure the backend is reading directly from the file, and not
>> through psql? (\copy, or COPY FROM STDIN, would go through psql.)
> The exact command is:
> COPY test (columnlist...) FROM '/tmp/loadfile';
I tried to replicate this by putt
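For reference, the distinction being probed here (the column list is a placeholder):

    -- Server-side COPY: the backend opens and reads the file itself;
    -- the path is interpreted on the server machine:
    COPY test (col1, col2) FROM '/tmp/loadfile';

    -- Client-side \copy: psql reads the file and streams it to the backend
    -- as COPY ... FROM STDIN, so psql itself burns CPU on the transfer:
    \copy test (col1, col2) from '/tmp/loadfile'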
I was hoping to start with tuning postgres to match the hardware, but
in any case..
The queries are all simple insert or select statements on single tables.
Eg. select x from table where y=?; or insert into table (a, b, c)
values (?, ?, ?);
In the case of selects where it's a large table, there's
The thread below has the test case that we were able to use
to reproduce the issue.
http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php
The last messages on this subject are from April of
2005. Have there been any successful ways to significantly reduce the
impact this has on multi-processing?
> Identify what the problem is first of all. Some things to consider:
> - Are there particular queries giving you trouble?
> - Is your load mostly reads or mostly writes?
> - Do you have one user or 100?
> - Are you block-loading data efficiently where necessary?
> - Have you indexed both side
FWIW, I'm seeing this with a client at the moment: 40-60k context switches
per second on a dual 3.2GHz box.
There are plenty of other issues we're dealing with, but this is obviously
disconcerting...
On 19 Jul 2005, at 19:23, Sailer, Denis (YBUSA-CDR) wrote:
The thread below has the test case that we were able to use to reproduce the issue.
> What programming language are these scripts written in?
Perl, using the DBD::Pg interface instead of command-lining it through psql.
"Sailer, Denis (YBUSA-CDR)" <[EMAIL PROTECTED]> writes:
> http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php
> The last messages on this subject are from April of 2005. Have there
> been any successful ways to significantly reduce the impact this has on
> multi-processing?
CVS
On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> >> Are you sure the backend is reading directly from the file, and not
> >> through psql? (\copy, or COPY FROM STDIN, would go through psql.)
>
> > The exact command is:
> > COPY test (columnlist...) FROM '/tmp/loadfile';
What programming language are these scripts written in?
Christopher Petrilli <[EMAIL PROTECTED]> writes:
> On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
>> How *exactly* are you invoking psql?
> It is a subprocess of a Python process, driven using a pexpect
> interchange. I send the COPY command, then wait for the '=#' to come
> back.
Some weird int
Hi Oliver,
We had low resource utilization and poor throughput on inserts of
thousands of rows within a single database transaction. There were a
lot of configuration parameters we changed, but the one which helped the
most was wal_buffers -- we wound up setting it to 1000. This may be
higher than
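For concreteness, the change described would look like this (the value 1000 is from the message; wal_buffers can only be set in postgresql.conf and takes effect after a server restart):

    # postgresql.conf
    wal_buffers = 1000

    -- verify from psql after restarting:
    SHOW wal_buffers;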
Oliver Crosby <[EMAIL PROTECTED]> writes:
> The queries are all simple insert or select statements on single tables.
> Eg. select x from table where y=?; or insert into table (a, b, c)
> values (?, ?, ?);
> In the case of selects where it's a large table, there's an index on
> the column being searched.
On Tue, Jul 19, 2005 at 03:01:00PM -0400, Tom Lane wrote:
> You could possibly get some improvement if you can re-use prepared plans
> for the queries; but this will require some fooling with the client code
> (I'm not sure if DBD::Pg even has support for it at all).
Newer versions have, when compi
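At the SQL level, reusing a prepared plan looks like this (the queries are the shapes Oliver posted earlier; the table and statement names are made up):

    PREPARE sel_x (integer) AS
        SELECT x FROM mytable WHERE y = $1;
    EXECUTE sel_x(42);

    PREPARE ins_abc (integer, integer, integer) AS
        INSERT INTO mytable (a, b, c) VALUES ($1, $2, $3);
    EXECUTE ins_abc(1, 2, 3);

The plan is built once at PREPARE time and reused by every EXECUTE, saving the per-query parse and plan overhead.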
It is a subprocess of a Python process, driven using a pexpect
interchange. I send the COPY command, then wait for the '=#' to come
back.
did you try sending the COPY as a normal query through psycopg?
---(end of broadcast)---
TIP 9: In
I can't say whether MySQL is faster for very small queries (like
SELECT'ing one row based on an indexed field).
That's why I was asking you about the language...
I assume you're using a persistent connection.
For simple queries like this, PG 8.x seemed to be a lot faster than
Oliver Crosby <[EMAIL PROTECTED]> writes:
>> You could possibly get some improvement if you can re-use prepared plans
>> for the queries; but this will require some fooling with the client code
>> (I'm not sure if DBD::Pg even has support for it at all).
> Aye. We have prepared statements.
Ah, but are they really prepared, or is DBD::Pg faking it by inserting
parameter values into the query text and then sending the assembled
string as a fresh query?
> If you're running only a single query at a time (no multiple clients),
> then this is pretty much the definition of a MySQL-friendly workload;
> I'd have to say we are doing really well if we are only 50% slower.
> Postgres doesn't have any performance advantages until you get into
> complex queries.
On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> >> How *exactly* are you invoking psql?
>
> > It is a subprocess of a Python process, driven using a pexpect
> > interchange. I send the COPY command, then wait for the '=#' to come back.
On Tue, Jul 19, 2005 at 03:16:31PM -0400, Tom Lane wrote:
> Ah, but are they really prepared, or is DBD::Pg faking it by inserting
> parameter values into the query text and then sending the assembled
> string as a fresh query?
They are really prepared.
/* Steinar */
--
Homepage: http://www.se
On Tue, 19 Jul 2005 12:54:22 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Robert Creager <[EMAIL PROTECTED]> writes:
>
> Hmm, I hadn't thought about the possible impact of multiple concurrent
> vacuums. Is the problem caused by that, or has performance already gone
> into the tank by the time the cron-driven vacuums are taking long enough to
On Jul 19, 2005, at 3:36 PM, Steinar H. Gunderson wrote:
On Tue, Jul 19, 2005 at 03:16:31PM -0400, Tom Lane wrote:
Ah, but are they really prepared, or is DBD::Pg faking it by inserting
parameter values into the query text and then sending the assembled
string as a fresh query?
They are really prepared.
> We had low resource utilization and poor throughput on inserts of
> thousands of rows within a single database transaction. There were a
> lot of configuration parameters we changed, but the one which helped the
> most was wal_buffers -- we wound up setting it to 1000. This may be
> higher than
On 7/19/05, Oliver Crosby <[EMAIL PROTECTED]> wrote:
> > We had low resource utilization and poor throughput on inserts of
> > thousands of rows within a single database transaction. There were a
> > lot of configuration parameters we changed, but the one which helped the
> > most was wal_buffers -- we wound up setting it to 1000.
PS: Where can I find benchmarks comparing PHP vs Perl vs Python in
terms of speed of executing prepared statements?
I'm afraid you'll have to do these yourself!
And, I don't think the Python drivers support real prepared statements
(the speed of psycopg is really good though).
I
> If it is possible try:
> 1) wrapping many inserts into one transaction
> (BEGIN;INSERT;INSERT;...INSERT;COMMIT;). As PostgreSQL will need to
> handle fewer transactions per second (each of your inserts is a transaction), it
> may work faster.
Aye, that's what I have it doing right now. The transacti
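Spelled out, the suggested batching looks like this (table name assumed; insert shape from earlier in the thread):

    BEGIN;
    INSERT INTO mytable (a, b, c) VALUES (1, 2, 3);
    INSERT INTO mytable (a, b, c) VALUES (4, 5, 6);
    -- ... thousands more inserts ...
    COMMIT;

Each commit forces a WAL flush to disk, so paying that cost once per batch instead of once per row is where the speedup comes from.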
On 07/19/2005-02:41PM, Oliver Crosby wrote:
>
> No queries in particular appear to be a problem.
That could mean they are ALL a problem. Let's see some EXPLAIN ANALYZE
results just to rule it out.
> At the moment it's just one user,
With 1 user PostgreSQL will probably never beat MySQL
but wit
On Tue, 2005-07-19 at 16:28 -0400, Oliver Crosby wrote:
> > If it is possible try:
> > 1) wrapping many inserts into one transaction
> > (BEGIN;INSERT;INSERT;...INSERT;COMMIT;). As PostgreSQL will need to
> > handle fewer transactions per second (each of your inserts is a transaction), it
> > may work faster.
> since triggers work with COPY, you could probably write a trigger that
> looks for this condition and does the ID processing you need; you could
> thereby enjoy the enormous speed gain resulting from COPY and maintain
> your data continuity.
So... (bear with me here.. trying to make sense of th
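A rough sketch of the trigger idea (all names invented; the actual ID processing would be whatever the application needs -- a sequence stands in here):

    CREATE FUNCTION assign_event_id() RETURNS trigger AS '
    BEGIN
        -- fill in the ID only when the COPY data did not supply one
        IF NEW.id IS NULL THEN
            NEW.id := nextval(''event_id_seq'');
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER event_id_before_insert
        BEFORE INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE assign_event_id();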
I tuned a query last week to obtain acceptable performance.
Here is my recorded explain analyze results:
-
LOG: duration: 826.505 ms statement: explain analyze
SELECT
c.id AS contact_id,
sr.id AS sales_rep_id,
LTRIM(RTRIM(sr.firstname || ' ' || sr.lastname)) AS sales_rep_n
On Tue, 2005-07-19 at 17:04 -0400, Oliver Crosby wrote:
> > since triggers work with COPY, you could probably write a trigger that
> > looks for this condition and does the ID processing you need; you could
> > thereby enjoy the enormous speed gain resulting from COPY and maintain
> > your data continuity.
I'll try that.
Let you know as soon as I can take a look.
Thank you-
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] on behalf of Kevin Grittner
Sent: Monday, 18 July 2005 17:48
To: pgsql-performance@postgresql.org; [EMAIL PROTECTED]
Subject: Re: [PERFORM] j
You could have a program pre-parse your log and put it in a format
understandable by COPY, then load it in a temporary table and write a part
of your application simply as a plpgsql function, reading from this table
and doing queries (or a plperl function)...
So... (bear with me here..
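A rough sketch of that load path (all names assumed; use \copy from psql if the backend cannot read the file directly):

    -- Load the pre-parsed log with COPY into a scratch table:
    CREATE TEMP TABLE staging (line_no integer, event text, is_last boolean);
    COPY staging FROM '/tmp/preparsed.log';

    -- Then assemble and insert inside the database:
    CREATE FUNCTION process_staged_events() RETURNS integer AS '
    DECLARE
        r RECORD;
        n integer := 0;
    BEGIN
        FOR r IN SELECT * FROM staging ORDER BY line_no LOOP
            -- accumulate related events here and INSERT into the real
            -- table when r.is_last is true
            n := n + 1;
        END LOOP;
        RETURN n;
    END;
    ' LANGUAGE plpgsql;

    SELECT process_staged_events();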
Sorry for the lack of specifics...
We have a file generated as a list of events, one per line. Suppose
lines 1,2,3,5,7,11,etc were related, then the last one would specify
that it's the last event. Gradually this gets assembled by a perl
script and when the last event is encountered, it gets inserted
When grilled further on (Tue, 19 Jul 2005 12:09:51 -0600),
Robert Creager <[EMAIL PROTECTED]> confessed:
> On Tue, 19 Jul 2005 12:54:22 -0400
> Tom Lane <[EMAIL PROTECTED]> wrote:
>
> > Hmm, I hadn't thought about the possible impact of multiple concurrent
> > vacuums. Is the problem caused by that, or has performance already gone