Re: [PERFORM] Trigger Function

2004-06-01 Thread Mike Nolan
 My problem is I defined the before and after
 fields in the audit table as TEXT and when I try to move NEW or OLD into
 these fields I get the error NEW used in query that is not in a rule.  

You're trying to insert record data into a text field; that doesn't work.
OLD and NEW can be used either as record identifiers (as in RETURN OLD)
or as column qualifiers (as in OLD.colname), but you can't intermingle them.
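For example, a minimal sketch of the legal usages (the table and column
names here are hypothetical):

```sql
-- Hypothetical audit trigger: OLD.colname / NEW.colname are legal,
-- but inserting OLD or NEW as a whole into a text column is not.
CREATE FUNCTION audit_price() RETURNS trigger AS '
BEGIN
    INSERT INTO audit (before, after)
        VALUES (OLD.price::text, NEW.price::text);  -- works: column qualifiers
    -- INSERT INTO audit (before) VALUES (OLD);     -- fails: whole record
    RETURN NEW;                                      -- works: record identifier
END;
' LANGUAGE plpgsql;
```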

I don't think Postgres (pl/pgsql) has row-to-variable and variable-to-row 
functions like serialize and unserialize; that's probably what you'd need. 
It would probably be necessary to write something like that in C, since 
at this point pl/perl cannot be used for trigger functions.  

I've not tried using pl/php yet, the announcement for it says it can be 
used for trigger functions.  

My first thought is that even if there were a serialize/unserialize 
capability, you might be able to use it to write something that creates 
the log entry, but not anything that allows you to query the log for 
specific column or row entries.

It would probably require a MAJOR extension of SQL to add it to pg,
as there would need to be qualifiers that can be mapped to specific
tables and columns.  Even if we had that, storing values coming from 
multiple tables into a single audit table would present huge challenges.

I've found only two ways to implement audit logs:

1.  Have separate log tables that match the structure of
the tables they are logging.

2.  Write a trigger function that converts columns to something you can
store in a common log table.  (I've not found a way to do this without
inserting one row for each column being logged, though.)
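As a sketch of that second approach (all table, column, and trigger names
here are hypothetical, and the per-column inserts must be written out by
hand for each audited table):

```sql
-- A common log table plus a per-table trigger that inserts
-- one row for each column being logged.
CREATE TABLE audit_log (
    tab_name  name,
    col_name  name,
    before    text,
    after     text,
    logged_at timestamp DEFAULT now()
);

CREATE FUNCTION log_mytable() RETURNS trigger AS '
BEGIN
    INSERT INTO audit_log (tab_name, col_name, before, after)
        VALUES (''mytable'', ''price'', OLD.price::text, NEW.price::text);
    INSERT INTO audit_log (tab_name, col_name, before, after)
        VALUES (''mytable'', ''qty'', OLD.qty::text, NEW.qty::text);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER mytable_audit BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE log_mytable();
```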
--
Mike Nolan

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[PERFORM] PostgreSQL on VMWare vs Windows vs CoLinux

2004-06-01 Thread Vitaly Belman
Hello pgsql-performance,

  I was using the native Windows PostgreSQL 7.5Dev and was advised by
  several people to use an emulated PostgreSQL instead, as the native
  port is just a beta.

  Well, I gave it a whirl and tried both the commercial VMWare and
  the freeware, open-source CoLinux. Both work under Windows and both
  emulate Linux. Here is a quick review of my experience with them;
  may someone in need learn from it.

  This might not be the best place for such a post, but since the
  subject was brought up here, I'll post it here as well. If someone
  thinks it should be posted somewhere else, let me know.

  Installation & Configuration
  ----------------------------
  
  VMWare:

  On the bright side, the installation went quite smoothly: VMWare
  configured all the network stuff by itself and I had no trouble
  using the net right away. On the grim side, the installation itself
  took ages compared to the plug & play feel of CoLinux.

  Installing PostgreSQL on VMWare was quite straightforward, just as
  the PostgreSQL documentation describes.

  CoLinux:

  As I said, with CoLinux the installation itself goes very quickly.
  To get Linux running you need to download less than 20 MB, which
  includes the distribution (Debian in my case) and the CoLinux
  setup. Configuring CoLinux took a bit longer than VMWare, yet not
  as long as I thought it would. In fact, it can be very easy if you
  just follow the documentation on the CoLinux Wiki; there are some
  very easy-to-follow tutorials there.

  Installing PostgreSQL on CoLinux proved a little more difficult
  (again, Debian), but I posted a quick tutorial that should smooth
  the process: http://www.colinux.org/wiki/index.php/PostgreSQL.

  Performance
  ---

  This was a totally subjective test (especially since one of the
  participants is in a beta stage), yet, that's what I tested and that's
  what I needed to know.

  To make the test as fair as possible, I restored an exact dump of
  the same database into each setup. I ran the queries (around 10) in
  the same order on all of them and repeated the test several times.
  I also ran EXPLAIN on the queries to make sure all the databases
  planned them the same way. It wasn't a full test, though: I didn't
  test mass select load, nor inserts, nor work under heavy load, nor
  did I try different types of joins. All I did was run some heavy
  (in execution time) queries. So you should take these tests just
  for what they are.

  That's what I got:

  The native Windows port performed poorly, lagging
  30%-50% behind the VMWare/CoLinux solutions in execution times.
  Rather sad, but not unexpected, I guess.

  CoLinux and VMWare gave around the same results, though CoLinux
  had a slight edge (I'd say 5%-10%). With such a small and
  inconsistent improvement, I wouldn't count it for much.

  Conclusion
  --

  With all that said, VMWare is badly suited for running a database.
  While CoLinux can be run as a service (I didn't try it yet, though),
  VMWare always sits there; it is slow to come up, slow to shut down,
  and generally feels like a system hog.

  I'll go on with CoLinux for now and hope it will work as well as it
  looks.

  http://www.vmware.com/
  http://www.colinux.org/

  Thanks to Bryan and Matthew for their advice regarding the emulations.

Regards,
 Vitaly Belman
 
 ICQ: 1912453
 AIM: VitalyB1984
 MSN: [EMAIL PROTECTED]
 Yahoo!: VitalyBe




[PERFORM] Disk performance, was Re: tablespaces and DB administration

2004-06-01 Thread Marty Scholes
There was a lively debate about what is faster, single spindles or RAID.
This is important, because I keep running into people who do not 
understand the performance dynamics of an RDBMS like Oracle or Pg.

Pg and Oracle make a zillion tiny reads and writes and fsync() 
regularly.  If your drive will copy a 4GB file at a sustained rate of 72 
MB/s, that tells you nothing about how it will do with an RDBMS.

I will throw in my experience on RAID vs spindles.
With the RAID write cache disabled, a well balanced set of spindles will 
kill a RAID system any day.

Enable the cache, and the RAID starts inching ahead.  My experience is 
that no one can continuously keep I/O properly balanced across several 
spindles on a production system.  Things change and the I/O mix changes.
Then, the RAID is outperforming the spindles.  If you want to spend 
the rest of your career constantly balancing I/O across spindles, then 
do so.

For the rest of us, with a write cache, a hardware RAID wins hands down 
over the long haul.

It might make sense to provide some sort of benchmarking tool for 
various systems so that we can predict I/O performance.

Run the following code both on a hardware RAID and on a single spindle.
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <time.h>

void makefile(int fs)
{
    int     i;
    char    buf[8192];
    int     ld;
    int     blocks = 4096;
    int     pos;
    time_t  stim;
    time_t  etim;
    float   avg;

    unlink("dump.out");
    ld = open("dump.out", O_WRONLY | O_CREAT, 0644);

    printf("Writing %d blocks sequentially\n", blocks);
    time(&stim);
    for (i = 0; i < blocks; i++) {
        write(ld, buf, sizeof(buf));
        if (fs) {
            fsync(ld);
        }
    }
    time(&etim);
    avg = (blocks + 0.0) / (etim - stim - 0.0);
    printf("Took %d seconds, avg %f iops\n\n", (int) (etim - stim), avg);

    /* purge the write cache */
    fsync(ld);

    printf("Writing %d blocks (somewhat randomly)\n", blocks);
    time(&stim);
    for (i = 0; i < blocks; i++) {
        pos = (rand() % blocks) * sizeof(buf);
        lseek(ld, pos, SEEK_SET);
        write(ld, buf, sizeof(buf));
        if (fs) {
            fsync(ld);
        }
    }
    time(&etim);
    avg = (blocks + 0.0) / (etim - stim - 0.0);
    printf("Took %d seconds, avg %f iops\n\n", (int) (etim - stim), avg);

    close(ld);
    unlink("dump.out");
}

int main()
{
    printf("No fsync()\n");
    makefile(0);
    printf("With fsync()\n");
    makefile(1);
    return 0;
}
The first operation shows how well the OS write cache is doing.  The 
second shows how poorly everything runs with fsync(), which is what Pg 
and Oracle do.

My RAID produced the following, but was also running production when I 
ran it:

No fsync()
Writing 4096 blocks sequentially
Took 1 seconds, avg 4096.00 iops
Writing 4096 blocks (somewhat randomly)
Took 4 seconds, avg 1024.00 iops
With fsync()
Writing 4096 blocks sequentially
Took 40 seconds, avg 102.42 iops
Writing 4096 blocks (somewhat randomly)
Took 66 seconds, avg 62.060608 iops
When I ran this on a decent fibre channel drive, I got:
No fsync()
Writing 4096 blocks sequentially
Took 1 seconds, avg 4096.00 iops
Writing 4096 blocks (somewhat randomly)
Took 7 seconds, avg 585.142883 iops
With fsync()
Writing 4096 blocks sequentially
Took 106 seconds, avg 38.641510 iops
Writing 4096 blocks (somewhat randomly)
Took 115 seconds, avg 35.617390 iops
You can see that the RAID array really helps out with small writes.



Re: [PERFORM] PostgreSQL and Kernel 2.6.x

2004-06-01 Thread Rod Taylor
On Tue, 2004-06-01 at 23:16, V i s h a l Kashyap @ [Sai Hertz And
Control Systems] wrote:
 Dear all,
 
 Has anyone compiled PostgreSQL with kernel 2.6.x?
 If YES:
 1. Were there any performance gains?

OSDL reports approx 20% improvement. I've seen similar with some data
access patterns.

 2. What problems would keep us from compiling on kernel 2.6?

Nothing that I know of, assuming you have vendor support for it.


