Re: [PERFORM] Sustained inserts per sec ... ?

2005-04-01 Thread Mike Nolan
  Well, I just did an insert of 27,500 records with 9 fields, averaging 
  around 118 bytes per record, each insert statement coming from a separate 
  SQL statement fed to psql, and it took a bit over 4 minutes, or about 
  106 inserts per second.
 
 Is that with a separate transaction for each insert command?  I can get
 significantly higher rates on my devel machine if the inserts are
 bundled into transactions of reasonable length.

That's with autocommit on.  If I do it as a single transaction block,
it takes about 6.5 seconds, which works out to roughly 4,200 inserts per second.
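
For illustration, here is roughly what I mean by a single transaction
block (the table and values below are made up, not from the original
poster's schema):

    BEGIN;
    INSERT INTO scores (player, event, points) VALUES ('A. Smith', 12, 830);
    INSERT INTO scores (player, event, points) VALUES ('B. Jones', 12, 715);
    -- ... the rest of the 27,500 rows ...
    COMMIT;

With autocommit on, every INSERT pays the cost of its own commit; inside
one BEGIN/COMMIT block that cost is paid only once.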
--
Mike Nolan



Re: [PERFORM] Trigger Function

2004-06-01 Thread Mike Nolan
 My problem is I defined the before and after
 fields in the audit table as TEXT, and when I try to move NEW or OLD into
 these fields I get the error 'NEW used in query that is not in a rule'.

You're trying to insert record data into a text field; that doesn't work.
OLD and NEW can be used either as record identifiers (as in RETURN OLD)
or as column qualifiers (as in OLD.colname), but you can't mix the two.
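
In other words, inside a plpgsql trigger function you can write things
like the following (a contrived fragment; the table and column names are
not from your schema):

    -- Valid: column qualifiers can be used as values.
    INSERT INTO audit_table (before_val, after_val)
        VALUES (OLD.colname, NEW.colname);

    -- Valid: the whole record can be returned.
    RETURN NEW;

    -- Not valid: NEW or OLD by itself cannot be inserted into a TEXT column.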

I don't think postgres (pl/pgsql) has row-to-variable and variable-to-row
functions like serialize and unserialize, which is probably what you'd need.
It would probably be necessary to write something like that in C, since
at this point pl/perl cannot be used for trigger functions.

I've not tried using pl/php yet, but the announcement for it says it can be
used for trigger functions.

My first thought is that even if there were a serialize/unserialize
capability, you might be able to write something that creates the log
entry, but not anything that lets you query the log for specific column
or row entries.

It would probably require a MAJOR extension of SQL to add it to pg,
as there would need to be qualifiers that can be mapped to specific
tables and columns.  Even if we had that, storing values coming from 
multiple tables into a single audit table would present huge challenges.

I've found only two ways to implement audit logs:

1.  Have separate log tables that match the structure of
the tables they are logging.

2.  Write a trigger function that converts columns to something you can
store in a common log table.  (I've not found a way to do this without
inserting one row for each column being logged, though; see the sketch below.)
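
Here's a rough sketch of approach 2 as a plpgsql trigger.  The members
table, its columns, and the audit_log table are made-up names, and this
only logs UPDATEs to two columns:

    CREATE TABLE audit_log (
        logged_at   timestamp DEFAULT now(),
        table_name  text,
        column_name text,
        old_value   text,
        new_value   text
    );

    CREATE OR REPLACE FUNCTION log_member_changes() RETURNS trigger AS '
    BEGIN
        IF NEW.name IS DISTINCT FROM OLD.name THEN
            INSERT INTO audit_log (table_name, column_name, old_value, new_value)
                VALUES (''members'', ''name'', OLD.name, NEW.name);
        END IF;
        IF NEW.email IS DISTINCT FROM OLD.email THEN
            INSERT INTO audit_log (table_name, column_name, old_value, new_value)
                VALUES (''members'', ''email'', OLD.email, NEW.email);
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER members_audit AFTER UPDATE ON members
        FOR EACH ROW EXECUTE PROCEDURE log_member_changes();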
--
Mike Nolan



Re: [PERFORM] Long running queries degrade performance

2004-04-16 Thread Mike Nolan
 We have a web app with a postgres backend.  Most queries have subsecond 
 response times through the web even with high usage.  Every once in a while 
 someone will run either an ad-hoc query or some other long-running db 
 process.  

Are you sure it is postgres where the delay is occurring?  I ask this
because I also have a web-based front end to postgres, and while most of
the time the queries respond in about a second, every now and then I see
one that takes much longer, sometimes 10-15 seconds.

I've seen this behavior on both my development system and on the
production server.  

The same query a while later might respond quickly again.

I'm not sure where to look for the delay, either, and it is intermittent
enough that I'm not even sure what monitoring techniques to use.
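
If it would help, one thing to try is watching pg_stat_activity for
statements that have been running more than a few seconds.  (The query
below uses the column names from current PostgreSQL releases; older
versions expose similar information under different names, e.g.
current_query instead of query, so it may need adjusting.)

    SELECT pid, now() - query_start AS running_for, state, query
      FROM pg_stat_activity
     WHERE state = 'active'
       AND now() - query_start > interval '5 seconds'
     ORDER BY running_for DESC;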
--
Mike Nolan



Re: [PERFORM] Long running queries degrade performance

2004-04-16 Thread Mike Nolan
 Fairly sure, when it is happening, postgres usually is taking up the top slots 
 for cpu usage as reported by top.  Perhaps there is a better way to monitor 
 this?

Given the intermittent nature of the problem and its relative brevity 
(5-10 seconds), I don't know whether top offers the granularity needed to
locate the bottleneck.
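
One alternative to top with finer granularity, assuming the server is
running a release that has the setting, is to have the backend itself log
any statement that runs longer than some threshold, e.g. in postgresql.conf:

    # Log the text and duration of any statement taking over 2 seconds.
    log_min_duration_statement = 2000    # in milliseconds

That at least leaves a record of which queries were slow and when they ran.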

 likely you have a situation where something else is happening which blocks 
 the current thread.  

It happens on my development system, and I'm the only one on it.  I know
I've seen it on the production server, but it seems a bit more common on
the development server, though that may just reflect which system I spend
the most time on.  (Also, the production server is 1300 miles away
with a DSL connection, so I may just be seeing network delays some of
the time there.)

 Both of these were triggered by users double clicking links in our 
 web app and were fixed by a better implementation.  Perhaps something like 
 that is causing what you are seeing.

My web app traps double-clicks in javascript and ignores all but the first one.
That's because some of the users have mice that give double-clicks even
when they only want one click.
--
Mike Nolan



Re: [PERFORM] What's faster?

2003-12-26 Thread Mike Nolan
 Because Postgres requires VACUUM ANALYZE more frequently on updated tables,
 should I break this single field out into its own table, and if so what kind
 of a speed-up can I expect to achieve? I would be appreciative of any
 guidance offered.

Unless that field is part of the key, I wouldn't think that a vacuum 
analyze would be needed, as the key distribution isn't changing. 

I don't know if that is still true if that field is indexed.  Tom?

Even then, as I understand things, VACUUM ANALYZE doesn't rebuild indexes,
so I could see a need to drop and rebuild indexes on a regular basis,
even if you move that field into a separate table.
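
For what it's worth, something along these lines (the table and index
names are made up) is the sort of periodic maintenance I have in mind:

    -- Refresh planner statistics and reclaim space from dead tuples.
    VACUUM ANALYZE scores;

    -- Rebuild an index that has become bloated by heavy updates.
    REINDEX INDEX scores_points_idx;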
--
Mike Nolan
