Re: [PERFORM] Context switch storm

2006-11-15 Thread Simon Riggs
On Tue, 2006-11-14 at 09:17 -0500, Merlin Moncure wrote:
> On 11/14/06, Cosimo Streppone <[EMAIL PROTECTED]> wrote:
> > I must say I lowered "shared_buffers" to 8192, as it was before.
> > I tried raising it to 16384, but I can't seem to find a relationship
> > between shared_buffers and performance level for this server.
> 
> My findings are pretty much the same here.  I don't see any link
> between shared buffers and performance.  I'm still looking for hard
> evidence to rebut this point.   Lower shared buffers leaves more
> memory for what really matters, which is sorting.

In 8.0 there is a performance issue whereby the bgwriter causes
problems with large shared_buffers settings. That in itself
could lead to some fairly poor measurements of the value of
shared_buffers.

In 7.4 and prior releases, setting shared_buffers higher was
counterproductive in many ways, so it isn't recommended there.

In general, setting shared_buffers higher works for some workloads and
doesn't for others. So any measurements anybody makes depend upon the
workload and the size of the database. The more uniformly/randomly you
access a large database, the more benefit you'll see from large
shared_buffers. 8.1 benefits from having a higher shared_buffers in some
cases because it reduces contention on the buffer lwlocks; 8.2 solves
this issue.

Even in 8.2, ISTM that a higher shared_buffers setting wastes memory with
many connected users, since the PrivRefCount array uses memory that could
otherwise have been used as filesystem cache.
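For reference, the knob under discussion lives in postgresql.conf. A sketch only, not a recommendation: as the thread notes, the right value is workload-dependent, and in these releases it is specified in 8kB buffer pages.

```
# postgresql.conf -- illustrative starting point only
shared_buffers = 16384     # 16384 x 8kB pages = 128MB
# leave the rest of RAM to the OS filesystem cache and per-backend sort memory
```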

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[PERFORM] Slow SELECT on three or more clients

2006-11-15 Thread AMIR FRANCO D. JOVEN
Hi!

I'm new to PostgreSQL.

My current project uses PostgreSQL 7.3.4.

The problem is like this:

I have a table with 94 fields, and a select with only one resultset, from only one client, consumes about 0.86 seconds.
The client executes three 'select' statements to perform the task, which consumes 2.58 seconds.
With only one client this is acceptable, but the real problem is that as I add more clients, it gets slower and slower.

For a single select with one field in one resultset, is 0.86 seconds normal?

I tried vacuuming and reindexing, but to no avail.
The total record count in that particular table is 456,541.

Thanks in advance.



Re: [PERFORM] Slow SELECT on three or more clients

2006-11-15 Thread Andreas Kostyrka
* AMIR FRANCO D. JOVEN <[EMAIL PROTECTED]> [061115 12:44]:
> Hi!
> 
> Im new to PostgreSQL.
> 
> My current project uses PostgreSQL 7.3.4.
Ancient. Upgrade it, especially if it's a new database.

> 
> the problem is like this:
> 
> I have a table with 94 fields and a select with only one resultset in only
> one client consumes about 0.86 seconds.
> The client executes three 'select' statements to perform the task which
> consumes 2.58 seconds.
> With only one client this is acceptable, but the real problem is as i add
> more clients, it goes more and more slower.
That depends upon:
a) your table schema.
b) the data in the tables, e.g. how big the rows are and how many there are.
c) the size of the result sets.
d) your indexes.

Andreas



Re: [PERFORM] Slow SELECT on three or more clients

2006-11-15 Thread Russell Smith

AMIR FRANCO D. JOVEN wrote:

> Hi!
>
> Im new to PostgreSQL.
>
> My current project uses PostgreSQL 7.3.4.

Upgrading your version of PostgreSQL to 8.1 will give you significant 
benefits to performance.


> the problem is like this:
>
> I have a table with 94 fields and a select with only one resultset in
> only one client consumes about 0.86 seconds.
> The client executes three 'select' statements to perform the task
> which consumes 2.58 seconds.
> With only one client this is acceptable, but the real problem is as i
> add more clients, it goes more and more slower.
>
> for a single select with one field in one resultset, is 0.86 seconds
> normal?

You will need to attach the query.
EXPLAIN ANALYZE SELECT ...

where SELECT ... is your query.  That will help us work out what the 
problem is. 

0.86 seconds might be slow for a query that returns 1 row, it might be 
fast for a query that returns a large set with complex joins and where 
conditions.  Fast and slow are not objective terms.  They are very 
dependent on the query.
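Concretely, that means running something like this against the slow statement (the table and column names below are placeholders for your own):

```sql
EXPLAIN ANALYZE
SELECT some_field
FROM big_table
WHERE some_condition = 42;
-- Post the complete output: it shows the plan the planner chose,
-- plus the actual row counts and timings for each step.
```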




> I tried vacuuming and reindexing but to no avail.
> the total record count in that particular table is 456,541.

456,541 is not all that many records.  But again you will need to post 
more information for us to be able to assist.

> Thanks in advance.






Re: [PERFORM] Slow SELECT on three or more clients

2006-11-15 Thread Gregory S. Williamson
Operating system and some of the basic PostgreSQL config settings would be 
helpful, plus any info you have on your disks, the size of the relevant tables, 
their structure and indexes, and vacuum/analyze status ... plus what others have 
said:

Upgrade!

There have been considerable improvements in, well, *everything*, since 7.3 (we 
have some databases at 7.4.x and I consider them out of date). Hopefully this 
list can provide help to get you through whatever your immediate crisis is, but 
do consider planning for this as soon as time and resources permit.

Data integrity is a _good_ thing!

Greg Williamson
DBA
GlobeXplorer LLC


-Original Message-
From:   [EMAIL PROTECTED] on behalf of Russell Smith
Sent:   Wed 11/15/2006 5:31 AM
To: AMIR FRANCO D. JOVEN
Cc: pgsql-performance@postgresql.org
Subject:Re: [PERFORM] Slow SELECT on three or more clients

AMIR FRANCO D. JOVEN wrote:
> Hi!
>
> Im new to PostgreSQL.
>
> My current project uses PostgreSQL 7.3.4.
Upgrading your version of PostgreSQL to 8.1 will give you significant 
benefits to performance.
>
> the problem is like this:
>
> I have a table with 94 fields and a select with only one resultset in 
> only one client consumes about 0.86 seconds.
> The client executes three 'select' statements to perform the task 
> which consumes 2.58 seconds.
> With only one client this is acceptable, but the real problem is as i 
> add more clients, it goes more and more slower.
>
> for a single select with one field in one resultset, is 0.86 seconds 
> normal?
You will need to attach the query.
EXPLAIN ANALYZE SELECT ...

where SELECT ... is your query.  That will help us work out what the 
problem is. 

0.86 seconds might be slow for a query that returns 1 row, it might be 
fast for a query that returns a large set with complex joins and where 
conditions.  Fast and slow are not objective terms.  They are very 
dependent on the query.

>
> I tried vacuuming and reindexing but to no avail.
> the total record count in that particular table is 456,541.
>
456,541 is not all that many records.  But again you will need to post 
more information for us to be able to assist.
> Thanks in advance.
>












Re: [PERFORM] Slow SELECT on three or more clients

2006-11-15 Thread Merlin Moncure

On 11/15/06, AMIR FRANCO D. JOVEN <[EMAIL PROTECTED]> wrote:

> Hi!
>
> Im new to PostgreSQL.
>
> My current project uses PostgreSQL 7.3.4.
>
> the problem is like this:
>
> I have a table with 94 fields and a select with only one resultset in only
> one client consumes about 0.86 seconds.
> The client executes three 'select' statements to perform the task which
> consumes 2.58 seconds.
> With only one client this is acceptable, but the real problem is as i add
> more clients, it goes more and more slower.
>
> for a single select with one field in one resultset, is 0.86 seconds
> normal?
>
> I tried vacuuming and reindexing but to no avail.
> the total record count in that particular table is 456,541.


Returning 450k rows in around 1 second is about right for a result set
with one field.  IMO, your best bet is to try to break up your table
and reorganize it so you don't have to query the whole thing every
time.  Why do you need to return all the rows over and over?
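One hedged sketch of what "breaking up" a 94-column table could look like, i.e. vertical partitioning. All table and column names below are hypothetical:

```sql
-- Keep the columns the hot query needs in a narrow table...
CREATE TABLE item_core (
    id      integer PRIMARY KEY,
    name    text,
    price   numeric
);
-- ...and move the rarely-read columns aside, joined on demand.
CREATE TABLE item_detail (
    id               integer PRIMARY KEY REFERENCES item_core (id),
    long_description text
    -- , ... the remaining ~90 columns
);
-- The frequent SELECT now scans far fewer bytes per row;
-- join to item_detail USING (id) only when the wide columns are needed.
```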

merlin



Re: [PERFORM] Slow SELECT on three or more clients

2006-11-15 Thread Markus Schaber
Hi, Amir,

AMIR FRANCO D. JOVEN wrote:

> My current project uses PostgreSQL 7.3.4.

By all means, please upgrade.

The newest 7.3 series version is 7.3.16, which fixes lots of critical
bugs, and can be used as a drop-in replacement for 7.3.4 (see the Release
Notes at http://www.postgresql.org/docs/7.3/interactive/release.html ).

The newest stable release is 8.1.5, and 8.2 is just around the corner...

> I have a table with 94 fields and a select with only one resultset in
> only one client consumes about 0.86 seconds.

"with only on resultset"?

You mean "with only one returned row", I presume.

Each SELECT has exactly one resultset, which can contain zero to many rows.

Please check the following:

- Did you create the appropriate indices?

- Version 7.3.X may suffer from index bloat, so REINDEX might help.

- Did you VACUUM and ANALYZE the table properly?

- Are your free space map settings, statistics targets, and other
config options tuned to fit your environment?

- Maybe a VACUUM FULL or a CLUSTER command would help.
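The checklist above translates into commands along these lines (table and index names are placeholders; VACUUM FULL and CLUSTER take exclusive locks and rewrite data, so run them in a maintenance window):

```sql
REINDEX TABLE my_table;               -- rebuild indexes bloated under 7.3
VACUUM ANALYZE my_table;              -- reclaim space, refresh planner stats
-- heavier options:
VACUUM FULL my_table;                 -- compact the table (exclusive lock)
CLUSTER my_table_pkey ON my_table;    -- rewrite table in index order (7.3 syntax)
```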

> for a single select with one field in one resultset, is 0.86 seconds normal?

That depends on the circumstances.

Markus

-- 
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in Europe! www.ffii.org
www.nosoftwarepatents.org



[PERFORM] Hundreds of database and FSM

2006-11-15 Thread Craig A. James

A few months ago a couple of guys got "bragging rights" for having the most 
separate databases.  A couple of guys claimed several hundred databases and one said he had 
several thousand databases.  The consensus was that Postgres has no problem handling many 
separate databases.

I took that to heart and redesigned our system; we now have about 150 "primary data 
sources" that are used to build a couple of "warehouses" that our customers actually 
search.  Each database has about 20 tables.  The total size (all databases and all tables together) 
is not huge, about 40 million rows.  Eventually the warehouse (customer-accessible) databases will 
be moved to separate servers, configured and indexed specifically for the task.

The only problem I've encountered is messages in the log:

  NOTICE:  number of page slots needed (131904) exceeds max_fsm_pages (100000)
  HINT:  Consider increasing the configuration parameter "max_fsm_pages" to a 
value over 131904.

So I dutifully followed this advice:

  max_fsm_pages = 320000
  max_fsm_relations = 20000

This is based on our current 150 databases times 20 tables, or 3000 tables total.  But I 
wasn't sure if sequences count as "relations", which would double the number.  
So I set it at 20K relations to allow for growth.
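As a sanity check on the relation count, a sketch against the 8.1 system catalogs (note that max_fsm_relations covers the whole cluster, so you would repeat this per database and sum):

```sql
-- How many relations the FSM actually has to track in this database:
-- ordinary tables ('r'), indexes ('i'), and TOAST tables ('t').
-- Sequences ('S') are excluded -- they have no free space to map.
SELECT count(*) FROM pg_class WHERE relkind IN ('r', 'i', 't');
```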

Is there anything else I need to worry about?  What happens if I go to, say, 
500 databases (aside from increasing the FSM numbers even more)?  1000 
databases?

The servers are 4 GB, dual Xeon, Postgres 8.1.4 on Linux FC4.

Thanks,
Craig




Re: [PERFORM] Hundreds of database and FSM

2006-11-15 Thread Alvaro Herrera
Craig A. James wrote:

> This is based on our current 150 databases times 20 tables, or 3000 tables 
> total.  But I wasn't sure if sequences count as "relations", which would 
> double the number.

They don't because they don't have free space.

-- 
Alvaro Herrera          http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [PERFORM] Hundreds of database and FSM

2006-11-15 Thread Steinar H. Gunderson
On Wed, Nov 15, 2006 at 02:31:45PM -0300, Alvaro Herrera wrote:
>> This is based on our current 150 databases times 20 tables, or 3000 tables 
>> total.  But I wasn't sure if sequences count as "relations", which would 
>> double the number.
> They don't because they don't have free space.

OTOH, indexes do.

/* Steinar */
-- 
Homepage: http://www.sesse.net/



[PERFORM] Postgres server crash

2006-11-15 Thread Craig A. James

For the third time today, our server has crashed, or frozen, actually something 
in between.  Normally there are about 30-50 connections because of mod_perl 
processes that keep connections open.  After the crash, there are three 
processes remaining:

# ps -ef | grep postgres
postgres 23832     1  0 Nov11 pts/1 00:02:53 /usr/local/pgsql/bin/postmaster -D /postgres/main
postgres  1200 23832 20 14:28 pts/1 00:58:14 postgres: pubchem pubchem 66.226.76.106(58882) SELECT
postgres  4190 23832 25 14:33 pts/1 01:09:12 postgres: asinex asinex 66.226.76.106(56298) SELECT

But they're not doing anything: No CPU time consumed, no I/O going on, no 
progress.  If I try to connect with psql(1), it says:

  psql: FATAL:  the database system is in recovery mode

And the server log has:

LOG:  background writer process (PID 23874) was terminated by signal 9
LOG:  terminating any other active server processes
LOG:  statistics collector process (PID 23875) was terminated by signal 9
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the 
current transaction and exit, because another server process exited 
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat 
your command.
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the 
current transaction and exit, because another server process exited 
abnormally ...
... repeats about 50 times, one per process.

Questions:
 1. Any idea what happened and how I can avoid this?  It's a *big* problem.
 2. Why didn't the database recover?  Why are there two processes
that couldn't be killed?
 3. Where did the "signal 9" come from?  (Nobody but me ever logs
in to the server machine.)

Help!

Thanks,
Craig




Re: [PERFORM] Postgres server crash

2006-11-15 Thread Russell Smith

Craig A. James wrote:
> For the third time today, our server has crashed, or frozen, actually
> something in between.  Normally there are about 30-50 connections
> because of mod_perl processes that keep connections open.  After the
> crash, there are three processes remaining:
>
> # ps -ef | grep postgres
> postgres 23832     1  0 Nov11 pts/1 00:02:53 /usr/local/pgsql/bin/postmaster -D /postgres/main
> postgres  1200 23832 20 14:28 pts/1 00:58:14 postgres: pubchem pubchem 66.226.76.106(58882) SELECT
> postgres  4190 23832 25 14:33 pts/1 01:09:12 postgres: asinex asinex 66.226.76.106(56298) SELECT
>
> But they're not doing anything: No CPU time consumed, no I/O going on,
> no progress.  If I try to connect with psql(1), it says:
>
>   psql: FATAL:  the database system is in recovery mode
>
> And the server log has:
>
> LOG:  background writer process (PID 23874) was terminated by signal 9
> LOG:  terminating any other active server processes
> LOG:  statistics collector process (PID 23875) was terminated by signal 9
> WARNING:  terminating connection because of crash of another server process
> DETAIL:  The postmaster has commanded this server process to roll back
> the current transaction and exit, because another server process
> exited abnormally and possibly corrupted shared memory.
> HINT:  In a moment you should be able to reconnect to the database and
> repeat your command.
> WARNING:  terminating connection because of crash of another server process
> DETAIL:  The postmaster has commanded this server process to roll back
> the current transaction and exit, because another server process
> exited abnormally ...
>
> ... repeats about 50 times, one per process.
>
> Questions:
>  1. Any idea what happened and how I can avoid this?  It's a *big* problem.
>  2. Why didn't the database recover?  Why are there two processes
> that couldn't be killed?
>  3. Where did the "signal 9" come from?  (Nobody but me ever logs
> in to the server machine.)

I would guess it's the Linux OOM killer, if you are running Linux.  You 
need to stop the kernel from killing processes when it runs out of 
memory.  Are you getting close to running out of memory?
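On Linux, the usual mitigation (and what the PostgreSQL manual suggests) is to turn off memory overcommit, so the kernel fails allocations up front instead of SIGKILLing a backend later. A sketch for a 2.6 kernel; verify against your distribution's documentation:

```
# /etc/sysctl.conf
vm.overcommit_memory = 2    # strict accounting: no overcommit, no OOM killer
# apply immediately with: sysctl -w vm.overcommit_memory=2
```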



> Help!
>
> Thanks,
> Craig






