[PERFORM] postgresql is slow with larger table even when it is in RAM

2008-03-25 Thread sathiya psql
Dear Friends,
 I have a table with 32 lakh (3.2 million) records in it. The table is nearly
700 MB, and my machine has 1 GB + 256 MB of RAM. I created a tablespace in
RAM and then created this table in it.

So now everything is in RAM. If I do a count(*) on this table, it returns
327600 in 3 seconds. Why does it take 3 seconds? I am sure that
no disk I/O is happening. (Using vmstat I confirmed that there is no disk
I/O, and swap is not being used.)

Any idea on this?

I searched a lot in newsgroups but can't find anything relevant (because
everywhere they are talking about disk access speed, and here I don't want to
worry about disk access).

If required I will give more information on this.


Re: [PERFORM] postgresql is slow with larger table even when it is in RAM

2008-03-25 Thread sathiya psql
On Tue, Mar 25, 2008 at 2:09 PM, jose javier parra sanchez
[EMAIL PROTECTED] wrote:

> It's been said zillions of times on the mailing list. Using a select
> count(*) in postgres is slow, and probably will be slow for a long
> time. So that function is not a good way to measure performance.

Yes, but if the data were on the HDD we could say that...

but now the data is in RAM


Re: [PERFORM] postgresql is slow with larger table even when it is in RAM

2008-03-25 Thread hubert depesz lubaczewski
On Tue, Mar 25, 2008 at 02:05:20PM +0530, sathiya psql wrote:
> Any idea on this?

yes. don't use count(*).

if you want a whole-table row count, use triggers to store the count.

it will be slow, regardless of whether it's in ram or on hdd.

depesz

-- 
quicksil1er: postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV! :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)
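
For reference, a minimal sketch of the trigger-maintained count depesz
describes might look like this (the counter table, function, and trigger
names here are illustrative, not from the thread; call_log_in_ram is the
table the OP posts about later):

CREATE TABLE row_counts (
    table_name text PRIMARY KEY,
    n          bigint NOT NULL
);

-- seed once with a (slow) full count
INSERT INTO row_counts SELECT 'call_log_in_ram', count(*) FROM call_log_in_ram;

CREATE OR REPLACE FUNCTION count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = TG_RELNAME;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE row_counts SET n = n - 1 WHERE table_name = TG_RELNAME;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER call_log_count AFTER INSERT OR DELETE ON call_log_in_ram
    FOR EACH ROW EXECUTE PROCEDURE count_trig();

-- the whole-table count is then a single-row lookup instead of a full scan:
SELECT n FROM row_counts WHERE table_name = 'call_log_in_ram';

The trade-off: every insert and delete now pays for an extra UPDATE, and
concurrent writers serialize on the counter row.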



Re: [PERFORM] postgresql is slow with larger table even it is in RAM

2008-03-25 Thread Craig Ringer
hubert depesz lubaczewski wrote:
> On Tue, Mar 25, 2008 at 02:05:20PM +0530, sathiya psql wrote:
>> Any idea on this?
>
> yes. don't use count(*).
>
> if you want a whole-table row count, use triggers to store the count.
>
> it will be slow, regardless of whether it's in ram or on hdd.

In other words, if you're having performance problems, please provide
EXPLAIN ANALYZE output from a more useful query that does real work,
rather than something like count(*).

COUNT(*) can be slow due to some MVCC limitations; it's been discussed
frequently here, so you should search the archives for information.

--
Craig Ringer



[PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread sathiya psql
OK, finally I am changing my question.

To get a quick response from PostgreSQL: what is the maximum number of
records I can have in a table in PostgreSQL 8.1?


Re: [PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread Harald Armin Massa
Sathiya,

The maximum number of records in one PostgreSQL table is unlimited:

http://www.postgresql.org/about/

[for some values of unlimited]

Some further help:

googling for:
postgresql limits site:postgresql.org

leads you to this answer quite quickly, while googling for

maximum number of rows in a postgresql table

leads you to a lot of misleading pages.

Harald


On Tue, Mar 25, 2008 at 12:42 PM, sathiya psql [EMAIL PROTECTED] wrote:
> OK, finally I am changing my question.
>
> To get a quick response from PostgreSQL: what is the maximum number of
> records I can have in a table in PostgreSQL 8.1?

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!



Re: [PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread sathiya psql

> The maximum number of records in one PostgreSQL table is unlimited:

I am asking about good performance, not just the limit...

If I have half a crore (5 million) records, how will the performance be?

> http://www.postgresql.org/about/
>
> [for some values of unlimited]
>
> Some further help:
>
> googling for:
> postgresql limits site:postgresql.org

But I need some experimental results...

I have 1 GB RAM with a Pentium Celeron.
With 50 lakh (5 million) records, Postgres performance is not good:

it takes 30 seconds for simple queries.


Re: [PERFORM] postgresql is slow with larger table even when it is in RAM

2008-03-25 Thread Alvaro Herrera
sathiya psql wrote:

> So now everything is in RAM. If I do a count(*) on this table, it returns
> 327600 in 3 seconds. Why does it take 3 seconds? I am sure that
> no disk I/O is happening.

It has to scan every page and examine visibility for every record.  Even
if there's no I/O involved, there's a lot of work to do.  I am not sure
whether with your hardware it is expected to take 3 seconds, though.
Do you see high CPU usage during that period?

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread Alvaro Herrera
sathiya psql wrote:

> I have 1 GB RAM with a Pentium Celeron.
> With 50 lakh (5 million) records, Postgres performance is not good:
>
> it takes 30 seconds for simple queries.

Show us the EXPLAIN ANALYZE.  There is no problem with a large number
of records, as long as you're not expecting to process all of them all
the time.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread sathiya psql



> Show us the EXPLAIN ANALYZE.  There is no problem with a large number
> of records, as long as you're not expecting to process all of them all
> the time.

Yes, many times I need to process all the records;

often I need to use count(*) ...

so what should I do? (I already know about those trigger options, but I
will be doing counts on different parameters.)


Re: [PERFORM] increasing shared buffer slow downs query performance.

2008-03-25 Thread Pavan Deolasee
On Mon, Mar 24, 2008 at 3:37 PM, Andreas Kretschmer
[EMAIL PROTECTED] wrote:
> petchimuthu lingam [EMAIL PROTECTED] wrote:
>
>> Hi friends,
>>
>> I am using PostgreSQL 8.1 with shared_buffers = 50000. When I execute a
>> query, it takes 18 seconds to do a sequential scan; when I reduced the
>> setting to 5000, it takes 10 seconds. Why?
>
> Wild guess: the second time the data are in the filesystem cache.

Another wild possibility is that the first query sets the hint bits for the
rows involved, and hence the second time it runs fast. You may want to run
the query a few times under both settings and then compare.

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread Craig Ringer

sathiya psql wrote:

> Yes, many times I need to process all the records;
>
> often I need to use count(*) ...
>
> so what should I do? (I already know about those trigger options, but I
> will be doing counts on different parameters.)

*** PLEASE *** post the output of an EXPLAIN ANALYZE on one or more of 
your queries, and POST THE QUERY TEXT TOO. For example, if your query was:

SELECT COUNT(*) FROM sometable WHERE somefield > 42;

then you would run:

ANALYZE sometable;

then you would run:

EXPLAIN ANALYZE SELECT COUNT(*) FROM sometable WHERE somefield > 42;

and paste the resulting text into an email message to this list. Without 
your query text and the EXPLAIN ANALYZE output from it, it is much harder 
for anybody to help you. You should also post the output of a psql \d 
command on your main table definitions.



As for what you can do to improve performance, some options (hardly an 
exhaustive list) include:



- Maintaining a summary table using a trigger. The summary table might 
track counts for various commonly-searched-for criteria. Whether this is 
practical or not depends on your queries, which you have still not 
posted to the list.


- Tuning your use of indexes (adding, removing, or adjusting indexes to 
better service your queries). Use EXPLAIN ANALYZE to help with this, and 
READ THE MANUAL, which has excellent information on tuning index use and 
profiling queries.


- Tune the query planner parameters to make better planning decisions. 
In particular, if your data and indexes all fit in RAM you should reduce 
the cost of index scans relative to sequential scans (see the sketch 
after this list). There is plenty of information about that on this 
mailing list. Also, READ THE MANUAL, which has excellent information on 
tuning the planner.


- Investigating table partitioning and tablespaces (this requires 
considerable understanding of postgresql to use successfully). You 
probably want to avoid this unless you really need it, and I doubt it 
will help much for in-memory databases anyway.


- Buy a faster computer
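
As a rough illustration of the planner-tuning point above, here is a hedged
sketch for a mostly-in-RAM database on 8.1 (the values are illustrative
starting points, not recommendations from the thread; in 8.1
effective_cache_size is measured in 8 kB pages):

-- per-session for experimentation; move to postgresql.conf once settled
SET random_page_cost = 2.0;        -- default 4.0 assumes real disk seeks
SET effective_cache_size = 65536;  -- 65536 x 8 kB = 512 MB of assumed OS cache

Lowering random_page_cost relative to sequential access (fixed at cost 1.0)
makes index scans look cheaper to the planner, which matches reality when
everything is cached.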

--
Craig Ringer



Re: [PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread sathiya psql
EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual time=6069.373..6069.374 rows=1 loops=1)
   ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.012..4322.345 rows=3279119 loops=1)
 Total runtime: 6069.553 ms
(3 rows)

zivah=# EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual time=6259.436..6259.437 rows=1 loops=1)
   ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.013..4448.549 rows=3279119 loops=1)
 Total runtime: 6259.543 ms


Re: [PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread Craig Ringer

sathiya psql wrote:

> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;

And your usual query is:

SELECT count(*) from call_log_in_ram;

?

If so, you should definitely build a summary table maintained by a 
trigger to track the row count. That's VERY well explained in the 
mailing list archives. This was suggested to you very early on in the 
discussion.


If you have problems with other queries, how about showing EXPLAIN 
ANALYZE for the other queries you're having problems with?


--
Craig Ringer



Re: [PERFORM] postgresql is slow with larger table even when it is in RAM

2008-03-25 Thread Bill Moran
In response to sathiya psql [EMAIL PROTECTED]:

> Dear Friends,
>  I have a table with 32 lakh (3.2 million) records in it. The table is
> nearly 700 MB, and my machine has 1 GB + 256 MB of RAM. I created a
> tablespace in RAM and then created this table in it.
>
> So now everything is in RAM. If I do a count(*) on this table, it returns
> 327600 in 3 seconds. Why does it take 3 seconds? I am sure that
> no disk I/O is happening. (Using vmstat I confirmed that there is no disk
> I/O, and swap is not being used.)
>
> Any idea on this?

Yes.  It takes your hardware about 3 seconds to read through 700 MB of RAM.

Keep in mind that you're not just reading RAM.  You're pushing system
requests through the VFS layer of your operating system, which is treating
the RAM like a disk (with cylinder groups and inodes and blocks, etc.), so
you have all that processing overhead as well.  What filesystem did you
format the RAM disk with?

Why are you doing this?  If you have enough RAM to store the table, why
not just allocate it to shared buffers?
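
A hedged sketch of that suggestion (the value is illustrative, not from the
thread; in 8.1 shared_buffers is expressed in 8 kB pages, and on the OP's
1.25 GB machine this would be aggressive, so scale it to what the box can
actually spare):

# postgresql.conf, then restart the server
shared_buffers = 90000    # 90000 x 8 kB ~ 700 MB, enough to cache the table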

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



Re: [PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread Ivan Voras
sathiya psql wrote:
> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
>                                                         QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------------
>  Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual
> time=6069.373..6069.374 rows=1 loops=1)
>    ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119
> width=0) (actual time=0.012..4322.345 rows=3279119 loops=1)
>  Total runtime: 6069.553 ms
> (3 rows)

You will never get good performance automatically with COUNT(*) in
PostgreSQL. You can either create your own infrastructure (triggers,
statistics tables, etc) or use an approximate result like this:

CREATE OR REPLACE FUNCTION fcount(varchar) RETURNS bigint AS $$
SELECT reltuples::bigint FROM pg_class WHERE relname=$1;
$$ LANGUAGE 'sql';


Use the above function as:

SELECT fcount('table_name');
 fcount
--------
   7412
(1 row)
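
One caveat worth adding here: reltuples is only the planner's estimate,
refreshed by VACUUM, ANALYZE, and CREATE INDEX rather than kept current on
every insert or delete, so fcount() can lag the true count. Use it only
where an approximate number is acceptable.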




Re: [PERFORM] postgresql is slow with larger table even when it is in RAM

2008-03-25 Thread sathiya psql

> Yes.  It takes your hardware about 3 seconds to read through 700 MB of RAM.

> Keep in mind that you're not just reading RAM.  You're pushing system
> requests through the VFS layer of your operating system, which is treating
> the RAM like a disk (with cylinder groups and inodes and blocks, etc.), so
> you have all that processing overhead as well.  What filesystem did you
> format the RAM disk with?

tmpfs

> Why are you doing this?  If you have enough RAM to store the table, why
> not just allocate it to shared buffers?

Just allocating it will make the first read go from HDD to RAM; I wanted to
eliminate that.

Are you saying it will surely take 3 seconds if I have 50 lakh (5 million)
records?


Re: [PERFORM] PostgreSQL NetApp and NFS

2008-03-25 Thread Jurgen Haan
Well, we're not running PostgreSQL on a NetApp over NFS, but a DB2 database.
Nevertheless, it runs quite well. NFS is not a bad choice for your
database; the big memory buffer in front of the RAID-6 blocks makes it
all very quick, as if you're working directly on a 1+ TB ramdisk.

One important thing to keep in mind is to make sure the NFS protocol
used is at least V3, and to check your locking options.
This made our DB2 crash: when configured wrongly, the file-locking
mechanism on an NFS mount behaves differently from that of local
storage. These parameters can be forced from the client side (fstab).
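
An illustrative fstab entry along those lines (server, export path, and
mount point are made up; the option names are standard Linux NFS-client
options, but check your platform's mount documentation):

netapp:/vol/pgdata  /var/lib/pgsql  nfs  rw,hard,nointr,tcp,nfsvers=3,lock  0 0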

Still, with our 100+ GB OLTP database, I'm quite fond of our NetApp.

-R-

Chris Hoover wrote:
> I just found out that my company is planning on migrating my databases
> from our current iSCSI storage solution to NetApps connected via NFS. I
> knew about the NetApp migration, but always assumed (and shame on me)
> that I would have direct attachments to the servers.
>
> Chris
> --
> Come see how to SAVE money on fuel, decrease harmful emissions, and even
> make MONEY. Visit http://colafuelguy.mybpi.com and join the revolution!

-- 
Easyflex diensten b.v.
Acaciastraat 16
4921 MA  MADE
T: 0162 - 690410
F: 0162 - 690419
E: [EMAIL PROTECTED]
W: http://www.easyflex.nl



Re: [PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread Bill Moran
In response to sathiya psql [EMAIL PROTECTED]:

> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
>                                                         QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------------
>  Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual time=6069.373..6069.374 rows=1 loops=1)
>    ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.012..4322.345 rows=3279119 loops=1)
>  Total runtime: 6069.553 ms
> (3 rows)
>
> zivah=# EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
>                                                         QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------------
>  Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual time=6259.436..6259.437 rows=1 loops=1)
>    ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.013..4448.549 rows=3279119 loops=1)
>  Total runtime: 6259.543 ms

6 seconds doesn't sound like an unreasonable amount of time to count 3
million rows.  I don't see any performance issue here.

What were your expectations?

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



Re: [PERFORM] postgresql is slow with larger table even when it is in RAM

2008-03-25 Thread Bill Moran
In response to sathiya psql [EMAIL PROTECTED]:

>> Yes.  It takes your hardware about 3 seconds to read through 700 MB of RAM.
>>
>> Keep in mind that you're not just reading RAM.  You're pushing system
>> requests through the VFS layer of your operating system, which is treating
>> the RAM like a disk (with cylinder groups and inodes and blocks, etc.), so
>> you have all that processing overhead as well.  What filesystem did you
>> format the RAM disk with?
>
> tmpfs

I'm not an expert, but according to Wikipedia:

"tmpfs (previously known as shmfs) distinguishes itself from the Linux
ramdisk device by allocating memory dynamically and by allowing less-used
pages to be moved onto swap space."

Both dynamically allocating and swapping are potential problems, but I
don't know how to tell you to determine if they're issues or not.

>> Why are you doing this?  If you have enough RAM to store the table, why
>> not just allocate it to shared buffers?
>
> Just allocating it will make the first read go from HDD to RAM; I wanted
> to eliminate that.

PostgreSQL is still going to copy the data from your RAM disk into shared
buffers before working with it, so you still have that overhead.  All
you're escaping is the time involved in physical disk activity, which is
what shared_buffers are designed to avoid.

> Are you saying it will surely take 3 seconds if I have 50 lakh (5 million)
> records?

No.  That is dependent on your hardware and other factors.

You are trying to use the system in a non-standard configuration.  If it
doesn't work that way, don't be surprised.

Also, what are your expectations?  Honestly, I don't see any problems with
the results you're getting; they're about what I would expect.  Are you
trying to compare PostgreSQL to MySQL/MyISAM?  More directly, what is
your purpose in starting this email conversation?  What are you hoping
to accomplish?

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



Re: [PERFORM] postgresql is slow with larger table even when it is in RAM

2008-03-25 Thread Luke Lonergan
Hello Sathiya,

1st:  you should not use a ramdisk for this; it will slow things down as
compared to simply having the table on disk.  Scanning it the first time
when on disk will load it into the OS I/O cache, after which you will get
memory speed.

2nd: you should expect the "SELECT COUNT(*)" to run at a maximum of about
350 – 600 MB/s (depending on PG version and CPU speed).  It is CPU
limited to that rate of counting rows no matter how fast your I/O is.

So, for your 700 MB table, you should expect a COUNT(*) to run in about 1-2
seconds best case.  This will approximate the speed at which other queries
can run against the table.
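
(That estimate is just the arithmetic: 700 MB / 600 MB/s ≈ 1.2 s at the
fast end, and 700 MB / 350 MB/s = 2.0 s at the slow end.)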

- Luke


On 3/25/08 1:35 AM, sathiya psql [EMAIL PROTECTED] wrote:

> Dear Friends,
>  I have a table with 32 lakh (3.2 million) records in it. The table is
> nearly 700 MB, and my machine has 1 GB + 256 MB of RAM. I created a
> tablespace in RAM and then created this table in it.
>
> So now everything is in RAM. If I do a count(*) on this table, it returns
> 327600 in 3 seconds. Why does it take 3 seconds? I am sure that no
> disk I/O is happening. (Using vmstat I confirmed that there is no disk
> I/O, and swap is not being used.)
>
> Any idea on this?
>
> I searched a lot in newsgroups but can't find anything relevant (because
> everywhere they are talking about disk access speed, and here I don't
> want to worry about disk access).
>
> If required I will give more information on this.


Re: [PERFORM] postgresql is slow with larger table even when it is in RAM

2008-03-25 Thread sathiya psql


> 1st:  you should not use a ramdisk for this; it will slow things down as
> compared to simply having the table on disk.  Scanning it the first time
> when on disk will load it into the OS I/O cache, after which you will get
> memory speed.

Absolutely.

After getting some replies, I dropped the table from the ramdisk
and started keeping it on the disk itself.

> 2nd: you should expect the "SELECT COUNT(*)" to run at a maximum of about
> 350 – 600 MB/s (depending on PG version and CPU speed).  It is CPU
> limited to that rate of counting rows no matter how fast your I/O is.

I am using 8.1
on a Pentium dual-core.

> So, for your 700 MB table, you should expect a COUNT(*) to run in about
> 1-2 seconds best case.  This will approximate the speed at which other
> queries can run against the table.

OK, count(*) aside, other queries are taking much time...

OK, I will do more experiments and I will be back.

Very great thanks for all of your replies, guys.


> - Luke
>
> On 3/25/08 1:35 AM, sathiya psql [EMAIL PROTECTED] wrote:
>
>> Dear Friends,
>>  I have a table with 32 lakh (3.2 million) records in it. The table is
>> nearly 700 MB, and my machine has 1 GB + 256 MB of RAM. I created a
>> tablespace in RAM and then created this table in it.
>>
>> So now everything is in RAM. If I do a count(*) on this table, it
>> returns 327600 in 3 seconds. Why does it take 3 seconds? I am sure
>> that no disk I/O is happening. (Using vmstat I confirmed that there is
>> no disk I/O, and swap is not being used.)
>>
>> Any idea on this?
>>
>> I searched a lot in newsgroups but can't find anything relevant
>> (because everywhere they are talking about disk access speed, and here
>> I don't want to worry about disk access).
>>
>> If required I will give more information on this.


[PERFORM] Preparing statements on connection startup

2008-03-25 Thread PFC


Hello,
I am using Postgres with PHP and persistent connections.
For simple queries, parsing and preparing time is often longer than the
actual query execution time...

I would like to execute a bunch of PREPARE statements to prepare my most
often used small queries on connection startup, then reuse these prepared
statements during the whole life of the persistent connection.

(Simple queries in PG are actually faster than in MySQL if prepared, lol.)

How do I achieve this?

The best way would of course be a PERSISTENT PREPARE, which would record
the information (name, SQL, params, not the plan) about the prepared
statement in a system catalog shared by all connections. When issuing
EXECUTE, if the prepared statement did not exist in the current
connection, PG would look there, and if it found the name of the statement
and the corresponding SQL, it would issue a PREPARE; the current connection
would then have this statement in its store and be able to execute it
faster for all the times this connection is reused.

Is such a feature planned someday?

I tried writing a function which is called by my PHP script just after
establishing the connection. It is a simple function which looks in
pg_prepared_statements; if that is empty, it issues the PREPARE statements
I need. It works, no problem, but it is less elegant and needs one extra
query per page.

I also tried issuing a dummy EXECUTE of a prepared "SELECT 1" just after
establishing the connection: if it fails, we prepare the plans (by
issuing queries from PHP); if it succeeds, it means we are reusing a
connection with all the plans already prepared. This also works well.
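
For concreteness, that probe could be as small as this (the statement name
is made up); the PREPARE batch run on fresh connections simply has to
include the probe statement itself:

-- part of the PREPARE batch for fresh connections:
PREPARE ping AS SELECT 1;

-- at the top of each page:
EXECUTE ping;   -- an error here means a fresh connection, so run the batch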


What do you think?
Regards,
Pierre
