[ADMIN] Creating users in a second postgres server.

2003-11-05 Thread skennedy
Hi..

I have a Solaris 2.8 machine on which I have a PostgreSQL 7.2.4 postmaster 
listening on port 5432.  This works fine.

I have a user who needs a newer version and has asked specifically for 
7.3.  I have compiled and installed 7.3.4
successfully, using
./configure --prefix=/usr/local/p_sql734 --with-pg_port=5433

I created a new os user, p_sql734, group postgres (same group as the 
7.2.4) and initdb works fine.
pg_ctl start shows the following output
bash-2.03$ more p_sql734.log
LOG:  database system was shut down at 2003-11-05 10:23:33 WET
LOG:  checkpoint record is at 0/83B238
LOG:  redo record is at 0/83B238; undo record is at 0/0; shutdown TRUE
LOG:  next transaction id: 480; next oid: 16976
LOG:  database system is ready

which seems to indicate that everything is fine.

The processes are alive

ps -ef | grep post
p_sql734 26503 26502  0 10:25:44 ?        0:00 /usr/local/p_sql734/bin/postmaster
postgres 27012 26760  0 10:36:26 pts/1    0:00 -bash
postgres 27386 27385  0   Oct 22 ?        0:01 /usr/local/pgsql/bin/postmaster
p_sql734 26502 26501  0 10:25:44 ?        0:00 /usr/local/p_sql734/bin/postmaster
postgres 27383     1  0   Oct 22 ?        0:02 /usr/local/pgsql/bin/postmaster
postgres 27385 27383  0   Oct 22 ?        0:00 /usr/local/pgsql/bin/postmaster
p_sql734 26501     1  0 10:25:44 ?        0:00 /usr/local/p_sql734/bin/postmaster
postgres 27231 27012  0 10:39:45 pts/1    0:00 -bash

however, when I try and create a user  I get
bash-2.03$ createuser idip_734
Shall the new user be allowed to create databases? (y/n) y
Shall the new user be allowed to create more new users? (y/n) n
psql: FATAL 1:  user "p_sql734" does not exist
createuser: creation of user "idip_734" failed
I tried the same thing with p_sql734 (though I don't remember having had 
to create the owner in the database before) and got the same message

bash-2.03$ id
uid=122(p_sql734) gid=202(postgres)
bash-2.03$ createuser p_sql734
Shall the new user be allowed to create databases? (y/n) y
Shall the new user be allowed to create more new users? (y/n) y
psql: FATAL 1:  user "p_sql734" does not exist
createuser: creation of user "p_sql734" failed
I've edited ${PGDATA}/postgresql.conf and set port=5433 to make sure 
that I'm on the correct port.

I'm sure I've missed something fairly basic, but I can't find it.

Any ideas?

Thanks

Stephen.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] Microsoft access verses postgresql

2003-11-05 Thread Geoffrey
Michelle Murrain wrote:
On Tue, 2003-11-04 at 21:39, Geoffrey wrote:


Okay, so how do you approach this?  Access front end will talk to 
postgresql?  Pointers to any docs would be appreciated.


Access front end talks with Postgresql via ODBC. Can be a little flakey
at times, but it works well in my experience, although it does depend
some on the complexity you need.
There are a couple of different docs at: http://techdocs.postgresql.org/
- one good one on the step by step under "Technical Guides".
Thanks for the pointer, I'll check it out.

--
Until later, Geoffrey   [EMAIL PROTECTED]
Building secure systems inspite of Microsoft

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] Forcing pg_dump NOT to use "INSERT..."

2003-11-05 Thread Stef

=> This is extremely hard to believe.  I can see no way that pg_dump will
=> do that unless you explicitly ask for it (-d or -D switch, or one of the
=> long variants of same).

I know a lot of people who confuse the -d switch with an option
indicating that the database name follows. It is also usually very hard
to convince people that they dumped INSERT statements instead
of COPY, once you sit with a restore that takes a little longer than
usual...





Re: [ADMIN] 7.3.5

2003-11-05 Thread Donald Fraser

- Original Message -
> >> Will there be a 7.3.5 version released, containing relevant patches,
before the release of the 7.4 version?
>
> > That seems unlikely. It seems likely that there will be such a release, but
> > most likely it will be after the 7.4 release.
>
> Yes.  I'd expect we'd wait till a little while after 7.4, since often
> interesting bugs are first reported just after a release --- there may
> be things we learn that ought to be back-patched into 7.3.5.  There is
> not currently anything in the 7.3 branch that looks like an immediate
> release forcer to me.  (I posted a summary of recent 7.3 commit messages
> to pgsql-hackers a couple days ago, if you're interested.)

Hi Tom,
I systematically went through every message you posted on pgsql-hackers
dating back to 28 Oct.
I didn't find the summary you mentioned - there were a lot of messages
(how you manage to get any work done between messages must be a small
miracle), so there is a chance I missed it.

The fix I am looking for in 7.3.5, which is in the 7.4 branch, is for the
regproc-reverse-conversion problem (the error comes when the planner tries
to convert that string back to OID form).

Regards
Donald Fraser


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Marek Florianczyk
Hi all

I asked for help a week ago. The performance tests took me some more
time because of other things I had to do.
The problem was: tune PostgreSQL to work with 10.000 databases.
Tom Lane (thanks) suggested a solution: one database and 10.000 schemas.
From now on I will write "database" in quotes - that means one database,
but a different schema.
I don't know how much traffic there will be on the production site, so I
tested 3.000 schemas and 400 clients connected simultaneously.
Each "database" has 4 tables (int,text,int) with 1.000 records and no
indexes.
Each client was doing:

10 x connect,"select * from table[rand(1-4)] where
number=[rand(1-1000)]",disconnect--(fetch one row)

5 x connect,"select * from table[rand(1-4)] where
position=[rand(1-5)]","update table[rand(1-4)] set text='some text',
position='integer[rand(1-5)] where number=[rand(1-1000)]",disconnect(
fetch 250 row/update one row)

1 x connect,"update table[rand(1-4)] set text='some text',
position='[rand(1-5)]'",disconnect(update 250 rows)

1 x connect,"select * from table1 where position in (select position
from table2 where number in (select number from table3))",disconnect

after that, the client switches to another "database" and starts testing
from the beginning.
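The "fetch one row" case above can be sketched like this (a sketch only; it assumes the layout described in the post - tables table1..table4 with an integer "number" column holding 1..1000):

```shell
# Build one client's "fast" query with random table and row picks
# (table/column names taken from the test description above).
tbl=$(( RANDOM % 4 + 1 ))      # table[rand(1-4)]
num=$(( RANDOM % 1000 + 1 ))   # number=[rand(1-1000)]
query="select * from table${tbl} where number = ${num}"
echo "$query"
# each client then runs it in a fresh connection, e.g.:
#   psql -U testN -c "$query" onedb
```

The connect/disconnect around every single query is part of what is being measured here.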

During this test I was changing some parameters in postgres and sending
kill -HUP ( pg_ctl reload ). I still don't know which settings will be
best for me, except "shared_buffers" and some kernel and shell
settings.

I noticed that queries like "\d table1", "\di" and "\dp" are extremely slow:
with 400 clients connected, not even the postgres user can run them.
Why is that, and how can I improve it? Will it be a problem?

Below are some results of this test, and the postgresql.conf settings.
I didn't change random_page_cost because on this test machine I've got
only one SCSI disk for the /data directory.

Postgres uses 90% of the 4 CPUs and takes 2GB RAM, but the load average
doesn't jump over 10-20, so it works better with a lot of schemas than
with a lot of databases.

Any suggestions about my postgresql.conf settings? And why are the
"\d" queries so extremely slow?

Thank You
Marek

[PostgreSQL]
max_connections = 512
shared_buffers = 65536
max_fsm_relations = 1
max_fsm_pages = 10
max_locks_per_transaction = 512
wal_buffers = 32   
sort_mem = 16384  
vacuum_mem = 8192
effective_cache_size = 100 
random_page_cost = 4  

[kernel]
kernel/shmmni = 8192
kernel/shmall = 134217728
kernel/shmmax = 536870912

[test] times in sec.
(dbname) (connect time) (q_fast: 1 row) (q_med: 250 rows)
(q_slow: triple join) (q_upd: update 250 rows) (q_upd_all: update 1000 rows)
test2374: connect:1 q_fast:4 q_med:0 q_slow:46 q_upd:0 q_upd_all:33
test2347: connect:1 q_fast:4 q_med:1 q_slow:48 q_upd:1 q_upd_all:32
test2351: connect:0 q_fast:4 q_med:2 q_slow:49 q_upd:0 q_upd_all:31
test2373: connect:0 q_fast:5 q_med:0 q_slow:46 q_upd:0 q_upd_all:25

[PostgreSQL]
max_connections = 512
shared_buffers = 4096  
max_fsm_relations = 1000
max_fsm_pages = 1 
max_locks_per_transaction = 512
wal_buffers = 32 
sort_mem = 16384
vacuum_mem = 8192 
effective_cache_size = 100
random_page_cost = 4

[test]
test2430: connect:0 q_fast:2 q_med:1 q_slow:40 q_upd:0 q_upd_all:17
test2425: connect:0 q_fast:2 q_med:0 q_slow:45 q_upd:0 q_upd_all:20
test2434: connect:0 q_fast:2 q_med:0 q_slow:44 q_upd:0 q_upd_all:23
test2435: connect:1 q_fast:2 q_med:0 q_slow:50 q_upd:0 q_upd_all:18

[PostgreSQL]
max_fsm_relations = 2000
max_fsm_pages = 2

[test]
test2171: connect:0 q_fast:3 q_med:1 q_slow:42 q_upd:1 q_upd_all:20
test2177: connect:1 q_fast:3 q_med:0 q_slow:43 q_upd:0 q_upd_all:21
test2166: connect:1 q_fast:3 q_med:1 q_slow:39 q_upd:0 q_upd_all:20
test2165: connect:1 q_fast:3 q_med:1 q_slow:42 q_upd:0 q_upd_all:24
test2162: connect:1 q_fast:3 q_med:1 q_slow:39 q_upd:1 q_upd_all:23






---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Jeff
On 05 Nov 2003 14:33:33 +0100
Marek Florianczyk <[EMAIL PROTECTED]> wrote:

> 
> During this test I was changing some parameters in postgres, and send
> kill -HUP ( pg_ctl reload ). I still don't know what settings will be
> best for me, except "shared buffers", and some kernel and shell
> settings.
> 

as far as I know, -HUP won't make things like shared buffer changes
take. you need a full restart of PG.
..
but your numbers are different... I guess it did take. huh.
..

how much disk IO is going on during these tests? (vmstat 1)
Any swapping (also shown in vmstat)

Were any of these tables analyzed?
I see you used no indexes, so on each of your tables it must do a seq
scan. Try adding an index to your test tables and rerun..
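Jeff's suggestion could be applied across the four test tables with something like this (a sketch; the table and column names come from the test description, the *_number_idx index names are made up):

```shell
# Generate CREATE INDEX statements for the four test tables
# (hypothetical index names; column "number" per the test setup).
stmts=""
for t in 1 2 3 4; do
  stmts="${stmts}CREATE INDEX table${t}_number_idx ON table${t} (number);
"
done
printf '%s' "$stmts"
# feed to the server with something like:
#   printf '%s' "$stmts" | psql -U testN onedb
```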



-- 
Jeff Trout <[EMAIL PROTECTED]>
http://www.jefftrout.com/
http://www.stuarthamm.net/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [ADMIN] Creating users in a second postgres server.

2003-11-05 Thread Tom Lane
"skennedy" <[EMAIL PROTECTED]> writes:
> however, when I try and create a user  I get
> bash-2.03$ createuser idip_734
> Shall the new user be allowed to create databases? (y/n) y
> Shall the new user be allowed to create more new users? (y/n) n
> psql: FATAL 1:  user "p_sql734" does not exist

Looks to me like you are talking to the old server.  Perhaps you have
PGPORT set to 5432 in your environment?  Or you got your PATH wrong,
so that you are invoking the older copy of createuser, which will
naturally default to port 5432?
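The two failure modes Tom describes can be checked quickly (a hypothetical debugging session; -p is the standard client flag to pin the port explicitly):

```shell
# 1. Is PGPORT forcing clients back to the old server?
msg="PGPORT=${PGPORT-unset}"
echo "$msg"
# 2. Is the 7.2.4 copy of createuser first on PATH?  (run on the server)
#    which createuser
# While debugging, target the new server explicitly:
#    createuser -p 5433 idip_734
```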

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Marek Florianczyk
In the message of Wed, 05-11-2003, at 14:48, Jeff writes:
> On 05 Nov 2003 14:33:33 +0100
> Marek Florianczyk <[EMAIL PROTECTED]> wrote:
> 
> > 
> > During this test I was changing some parameters in postgres, and send
> > kill -HUP ( pg_ctl reload ). I still don't know what settings will be
> > best for me, except "shared buffers", and some kernel and shell
> > settings.
> > 
> 
> as far as I know, -HUP won't make things like shared buffer changes
> take. you need a full restart of PG.
> ..
> but your numbers are different... I guess it did take. huh.

Well, I'm not sure, but I only did pg_ctl reload

> ..
> 
> how much disk IO is going on during these tests? (vmstat 1)
> Any swapping (also shown in vmstat)

I was watching iostat 1 and it showed about 600 tps, so it's not much,
and with RAID(1+0) on the production machine the disks should be fine.

> 
> Where any of these tables analyze'd?
> I see you used no indexes, so on each of your tables it must do a seq
> scan. Try adding an index to your test tables and rerun..

No, they weren't analyzed, and I deliberately created no indexes.
I'm testing postgres as the SQL engine for a hosting environment;
these databases will be used by users = lamers, so many of them will not
create any indexes. I wanted to make a test really close to reality and
see how many databases I can put on a single machine.

One database with 3.000 schemas works better than 3.000 databases, but
there is a REAL, BIG problem, and I won't be able to use this solution:
every query like "\d table" or "\di" takes a veeery long time.
Users have to have phpPgAdmin, which I modified to suit our needs, but now
it doesn't work - not even log-in. If I rewrite phpPgAdmin to log users in
without checking all schemas and the tables within schemas, no user
will be able to examine the structure of a table. A query like "\d table"
from the psql monitor takes about 2-5 MINUTES :(

I see that the only option is to create one database for every user, and
to monitor traffic and machine load to see when we need another PC and
another PostgreSQL...

Marek


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [ADMIN] Creating users in a second postgres server.

2003-11-05 Thread skennedy




Thanks Tom,

PGPORT was unset in the environment.
I set it to 5433 and things started to work.

If I configured the server to use 5433 before building (using
--with-pg_port=5433 in configure) shouldn't the clients, etc.
automatically look for port 5433?

The executables ARE the ones I compiled using 5433, (according to
"which" anyway) :)

Everything seems to work now anyway but it would be interesting/useful
to know what I did wrong.  Is there a trace/log
of some sort which might hold the answer?

Stephen Kennedy.

Tom Lane wrote:

"skennedy" <[EMAIL PROTECTED]> writes:

however, when I try and create a user  I get
bash-2.03$ createuser idip_734
Shall the new user be allowed to create databases? (y/n) y
Shall the new user be allowed to create more new users? (y/n) n
psql: FATAL 1:  user "p_sql734" does not exist

Looks to me like you are talking to the old server.  Perhaps you have
PGPORT set to 5432 in your environment?  Or you got your PATH wrong,
so that you are invoking the older copy of createuser, which will
naturally default to port 5432?

			regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] 7.3.5

2003-11-05 Thread Tom Lane
"Donald Fraser" <[EMAIL PROTECTED]> writes:
> The fix I am looking for in 7.3.5, which is in the 7.4 branch, is for the
> regproc-reverse-conversion problem (the error comes when the planner tries
> to convert that string back to OID form).

IIRC there is no solution to that without moving to 7.4 --- the fix
required a change in the representation of data values in pg_statistic.

The best workaround I can recommend in 7.3 is to delete the pg_statistic
rows that relate to regproc columns.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] PGSQL Mailing List Reply-To field

2003-11-05 Thread Andrew Sullivan
On Tue, Nov 04, 2003 at 03:40:21PM -0600, Bruno Wolff III wrote:
> On Tue, Nov 04, 2003 at 15:27:23 +,
>   Rob Fielding <[EMAIL PROTECTED]> wrote:

> > I keep doing this because I keep forgetting the Reply-To field isn't set 
> > to the pgsql- mailing list.
> 
> Its not. You should probably just get used to doing reply to all.

Or you can get a mail client that support "reply-to-list"
functionality.  I seem to recall hearing rumours that pine will do
that.  I know mutt does, because it's what I use.

A


Andrew Sullivan                         204-4141 Yonge Street
Afilias Canada                          Toronto, Ontario Canada
<[EMAIL PROTECTED]>                     M2P 2A8
                                        +1 416 646 3304 x110


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [ADMIN] Creating users in a second postgres server.

2003-11-05 Thread Tom Lane
"skennedy" <[EMAIL PROTECTED]> writes:
> If I configured the server to user 5433 before building (using 
> --with-pg_port=5433 in configure) shouldn't the clients, etc
> automatically look for port 5433?

They should, and they do for me (I routinely run several different
versions of PG on different ports on my development machines).

Hmm ... are you sure you spelled the configure option correctly?
It should be --with-pgport=PORTNUM.  One of the less endearing aspects
of GNU autoconf is that it won't complain about unrecognized --with-foo
and --enable-foo switches.
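For the record, the invocation that actually compiles in port 5433 (note pgport with no underscore; as described above, autoconf silently ignores the misspelt --with-pg_port, so the default 5432 gets built in):

```shell
# The corrected configure line (prefix taken from the original post).
cfg="./configure --prefix=/usr/local/p_sql734 --with-pgport=5433"
echo "$cfg"
```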

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] Creating users in a second postgres server.

2003-11-05 Thread skennedy
Bang on Tom.

As you can see from the snippet of my mail below, I had it as pg_port.
Oh well, make distclean again, I suppose...

Thanks.

Tom Lane wrote:

"skennedy" <[EMAIL PROTECTED]> writes:

If I configured the server to user 5433 before building (using 
--with-pg_port=5433 in configure) shouldn't the clients, etc
automatically look for port 5433?

They should, and they do for me (I routinely run several different
versions of PG on different ports on my development machines).
Hmm ... are you sure you spelled the configure option correctly?
It should be --with-pgport=PORTNUM.  One of the less endearing aspects
of GNU autoconf is that it won't complain about unrecognized --with-foo
and --enable-foo switches.
			regards, tom lane




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [ADMIN] data format problem for upgrede from 7.1->7.3

2003-11-05 Thread Oli Sennhauser
Hello admins

Tom Lane once wrote, if I remember correctly, that the biggest PostgreSQL DB 
he knows of is about 4 TB!!! In Oracle we normally say it is "not possible" 
(= useful) to exp/imp more than, let's say, 20 to 50 GB of data. This is 
more or less the same thing we do with pg_dumpall???
Oracle has not changed its file format since version 8.0 (or does it do the 
conversion itself in the background?).

Now I would like to know:

* Experience of upgrading PostgreSQL DBs bigger than 100 GB???
* Recommendations, resp. what do the pg-kernel hackers think, when they 
"always" change the file format?
* Is there an upgrade path other than pg_dump?
* How can we be a "professional highend" database if we have to 
pg_dump/load the data every year because of a new release?
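For reference, the dump/initdb/reload route amounts to this sequence, in outline (a sketch; the binary paths are illustrative, and the dump must be taken with the OLD version's pg_dumpall):

```shell
# Outline of the dump/reload upgrade path (hypothetical paths).
old_bin=/usr/local/pgsql-old/bin   # old version's binaries
new_bin=/usr/local/pgsql/bin       # new version's binaries
steps="$old_bin/pg_dumpall > all.sql
$new_bin/initdb -D \$PGDATA
$new_bin/psql -f all.sql template1"
echo "$steps"
```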

Thanks for feedback and discussion
Oli
Tom Lane wrote:

Gautam Saha <[EMAIL PROTECTED]> writes:

Any idea what might be the resolution here?

Put back your 7.1 executables and make a pg_dumpall run before you
upgrade.  Then upgrade, initdb, reload the dump.
			regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings



--
---
Oli Sennhauser
Database-Engineer (Oracle & PostgreSQL)
Rebenweg 6
CH - 8610 Uster / Switzerland
Phone (+41) 1 940 24 82 or Mobile (+41) 79 450 49 14
e-Mail [EMAIL PROTECTED]
Website http://mypage.bluewin.ch/shinguz/PostgreSQL/
Secure (signed/encrypted) e-Mail with a Free Personal SwissSign ID: http://www.swisssign.ch

Import the SwissSign Root Certificate: http://swisssign.net/cgi-bin/trust/import





[ADMIN] directing stdout to log file

2003-11-05 Thread Kris Kiger
How do I direct logging output from stdout to a file?  I'm using PG 
7.3.4.  Thanks for the help!
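One common answer (hedged - this is not from the thread itself): have pg_ctl redirect the postmaster's stdout/stderr into a file at start time with its -l option, or use a plain shell redirect.

```shell
# Start the postmaster with its stdout/stderr going to a file
# (the log path is illustrative).
logfile=/var/log/pgsql/server.log
cmd="pg_ctl start -l $logfile"
echo "$cmd"
# or start the postmaster directly with a shell redirect:
#   postmaster -D $PGDATA >> $logfile 2>&1 &
```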

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [ADMIN] Application monitoring

2003-11-05 Thread Oli Sennhauser
Hi Steve

In my old company we had some shell scripts monitoring all the Oracle 
stuff; I plan to port them to pg later. It was a simple but very stable 
concept: two servers watched each other, each with the same 
functionality. The master checked all databases for availability, 
free space in tablespaces (which do not (yet) exist in PostgreSQL), free 
disk space on the Oracle mountpoints, the listener, Apache, etc...
If there was a problem, after 3 alarms (3x5 minutes) the on-call DBA was 
paged.

If this helps you, please let me know.

Oli

Steve Lane wrote:

Hi all:

We maintain a number of web-based applications that use postgres as 
the back end. We recently had an unfortunate situation where the 
hosted server was hacked and the client had some significant downtime. 
We proposed a custom monitoring app, written in PHP, that would 
periodically monitor (for example) the web server (Apache) and 
database (postgres) so that we’d know more quickly when something 
happened.

The client responded that surely this problem of monitoring a 
database-backed web app was a known, solved problem, and wanted to 
know what other people did to solve the problem.

So my question, hopefully not too off-topic: if you administer a 
mission-critical postgres install that needs high availability, what 
do you do for monitoring? Commercial, freeware or open source tool? 
Custom scripts? Anything I haven’t thought of?

-- sgl

===
Steve Lane
Vice President
The Moyer Group
14 North Peoria St Suite 2H
Chicago, IL 60607
Voice: (312) 433-2421 Email: [EMAIL PROTECTED]
Fax: (312) 850-3930 Web: http://www.moyergroup.com
===


--
---
Oli Sennhauser
Database-Engineer (Oracle & PostgreSQL)
Rebenweg 6
CH - 8610 Uster / Switzerland
Phone (+41) 1 940 24 82 or Mobile (+41) 79 450 49 14
e-Mail [EMAIL PROTECTED]
Website http://mypage.bluewin.ch/shinguz/PostgreSQL/
Secure (signed/encrypted) e-Mail with a Free Personal SwissSign ID: http://www.swisssign.ch

Import the SwissSign Root Certificate: http://swisssign.net/cgi-bin/trust/import





Re: [ADMIN] Creating users in a second postgres server.

2003-11-05 Thread Oli Sennhauser
Hello Stephen

I am currently working on a concept for exactly this problem: many clusters
with different versions on one server.
It consists of 3 docs and some scripts:

* Optimal Flexible Architecture (OFA) for PostgreSQL
* Flexible Environment for PostgreSQL in a Multi-Cluster-Multi-Database
(MCMD) system
* Install-Guide for PostgreSQL with pg_env
* pg_env, the PostgreSQL environment for MCMD and normal PostgreSQL systems

Unfortunately I am not yet finished. I plan to release everything under the
GPL/GDL and would like to ask the PostgreSQL community (later) for comments
(ev. a standard environment for PostgreSQL???). If you like, I can send you
a draft.
Regards Oli

---

Oli Sennhauser
Database-Engineer (Oracle & PostgreSQL)
Rebenweg 6
CH - 8610 Uster / Switzerland
Phone (+41) 1 940 24 82 or Mobile (+41) 79 450 49 14
e-Mail [EMAIL PROTECTED]
Website http://mypage.bluewin.ch/shinguz/PostgreSQL/
Secure (signed/encrypted) e-Mail with a Free Personal SwissSign ID: 
http://www.swisssign.ch

Import the SwissSign Root Certificate: 
http://swisssign.net/cgi-bin/trust/import





Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Bruno Wolff III
On Wed, Nov 05, 2003 at 16:14:59 +0100,
  Marek Florianczyk <[EMAIL PROTECTED]> wrote:
> One database with 3.000 schemas works better than 3.000 databases, but
> there is REAL, BIG problem, and I won't be able to use this solution:
> Every query, like "\d table" "\di" takes veeery long time.
> Users have to have phpPgAdmin wich I modified to suit our needs, but now
> it doesn't work, not even log-in. If I rewrite phpPgAdmin to log users
> without checking all schemas, and tables within schemas, none of users
> will be able to examine structure of table. Query like "\d table" from
> psql monitor takes about 2-5 MINUTES :(

Analyzing the system tables will likely make these queries go faster.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Andrew Sullivan
On Wed, Nov 05, 2003 at 08:48:52AM -0500, Jeff wrote:
> 
> as far as I know, -HUP won't make things like shared buffer changes
> take. you need a full restart of PG.

It definitely will not.  Anything that can only be set on "startup"
actually means startup.

> but your numbers are different... I guess it did take. huh.

No, that probably just means that there is more variability to runs
than people like to imagine.

A

-- 

Andrew Sullivan                         204-4141 Yonge Street
Afilias Canada                          Toronto, Ontario Canada
<[EMAIL PROTECTED]>                     M2P 2A8
                                        +1 416 646 3304 x110


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Marek Florianczyk
In the message of Wed, 05-11-2003, at 17:24, Andrew Sullivan writes:
> On Wed, Nov 05, 2003 at 08:48:52AM -0500, Jeff wrote:
> > 
> > as far as I know, -HUP won't make things like shared buffer changes
> > take. you need a full restart of PG.
> 
> It definitely will not.  Anything that can only be set on "startup"
> actually means startup.
> 
> > but your numbers are different... I guess it did take. huh.
> 
> No, that probably just means that there is more variability to runs
> than people like to imagine.

It looks like I will have to run these tests again. It's no big deal,
just once I solve the slow "\d table" problem.
VACUUM ANALYZE in progress...

thanks,

Marek


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Tom Lane
Marek Florianczyk <[EMAIL PROTECTED]> writes:
> Each client was doing:

> 10 x connect,"select * from table[rand(1-4)] where
> number=[rand(1-1000)]",disconnect--(fetch one row)

Seems like this is testing the cost of connect and disconnect to the
exclusion of nearly all else.  PG is not designed to process just one
query per connection --- backend startup is too expensive for that.
Consider using a connection-pooling module if your application wants
short-lived connections.
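The cost Tom is pointing at can be amortized even without a pooling module, by batching queries into one session instead of paying backend startup per query (a sketch, reusing the test tables from the post):

```shell
# Build one batch of queries to run over a single connection instead of
# one connection per query (table names from the test setup).
batch=""
for t in 1 2 3 4; do
  batch="${batch}select * from table${t} where number = ${t};
"
done
printf '%s' "$batch"
# run the whole batch over ONE connection:
#   printf '%s' "$batch" | psql -U testN onedb
```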

> I noticed that queries like: "\d table1" "\di" "\dp" are extremly slow,

I thought maybe you'd uncovered a performance issue with lots of
schemas, but I can't reproduce it here.  I made 1 schemas each
containing a table "mytab", which is about the worst case for an
unqualified "\d mytab", but it doesn't seem excessively slow --- maybe
about a quarter second to return the one mytab that's actually in my
search path.  In realistic conditions where the users aren't all using
the exact same table names, I don't think there's an issue.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Marek Florianczyk
In the message of Wed, 05-11-2003, at 17:18, Bruno Wolff III writes:
> On Wed, Nov 05, 2003 at 16:14:59 +0100,
>   Marek Florianczyk <[EMAIL PROTECTED]> wrote:
> > One database with 3.000 schemas works better than 3.000 databases, but
> > there is REAL, BIG problem, and I won't be able to use this solution:
> > Every query, like "\d table" "\di" takes veeery long time.
> > Users have to have phpPgAdmin wich I modified to suit our needs, but now
> > it doesn't work, not even log-in. If I rewrite phpPgAdmin to log users
> > without checking all schemas, and tables within schemas, none of users
> > will be able to examine structure of table. Query like "\d table" from
> > psql monitor takes about 2-5 MINUTES :(
> 
> Analyzing the system tables will likely make these queries go faster.

I've run:
VACUUM FULL;
ANALYZE;

and it works better, but it's no revelation. When I do "\d schemaname.table"
it's better, but I still have to wait about 10-30 sec., and only 100
clients are connected now. :(


Marek


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Jeff
On 05 Nov 2003 19:01:38 +0100
Marek Florianczyk <[EMAIL PROTECTED]> wrote:

> and it works better, but no revelation, when I do "\d
> schemaname.table" it's better. I've to still wait about 10-30 sec. and
> now it's only 100 clients connected. :(
> 

So it only goes slow with hundred(s) of clients connecting and
disconnecting?

I'm guessing the CPU is pretty pegged on the box.


-- 
Jeff Trout <[EMAIL PROTECTED]>
http://www.jefftrout.com/
http://www.stuarthamm.net/

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Marek Florianczyk
In the message of Wed, 05-11-2003, at 18:59, Tom Lane writes:
> Marek Florianczyk <[EMAIL PROTECTED]> writes:
> > Each client was doing:
> 
> > 10 x connect,"select * from table[rand(1-4)] where
> > number=[rand(1-1000)]",disconnect--(fetch one row)
> 
> Seems like this is testing the cost of connect and disconnect to the
> exclusion of nearly all else.  PG is not designed to process just one
> query per connection --- backend startup is too expensive for that.
> Consider using a connection-pooling module if your application wants
> short-lived connections.

You're right, a typical php page will probably run more queries "per view".
But how good is a connection-pooling module when the connection for each
virtual site is unique? Different user and password, and different schemas
and permissions, so this connection-pooling module would have to switch
between users without reconnecting to the database. Impossible?

> 
> > I noticed that queries like: "\d table1" "\di" "\dp" are extremly slow,
> 
> I thought maybe you'd uncovered a performance issue with lots of
> schemas, but I can't reproduce it here.  I made 1 schemas each
> containing a table "mytab", which is about the worst case for an
> unqualified "\d mytab", but it doesn't seem excessively slow --- maybe
> about a quarter second to return the one mytab that's actually in my
> search path.  In realistic conditions where the users aren't all using
> the exact same table names, I don't think there's an issue.

But did you do that under some database load? E.g. 100 clients
connected, like in my example? When I run these "\d" queries without any
clients connected, and after ANALYZE, they are fast, but only 100 connected
clients is enough to lengthen the query time to 30 sec. :(

I have 3000 schemas named test[1-3000] and 3000 users named test[1-3000];
in each schema there are four tables (table1 table2 table3 table4),
each table has 3 columns (int,text,int), and some of them also have
indexes.

If you want, I will send the perl script that forks 100 processes and
performs my queries.

greetings
Marek

> 
>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
>http://www.postgresql.org/docs/faqs/FAQ.html
> 


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Marek Florianczyk
In the message of Wed, 05-11-2003, at 19:23, Jeff writes:
> On 05 Nov 2003 19:01:38 +0100
> Marek Florianczyk <[EMAIL PROTECTED]> wrote:
> 
> > and it works better, but no revelation, when I do "\d
> > schemaname.table" it's better. I've to still wait about 10-30 sec. and
> > now it's only 100 clients connected. :(
> > 
> 
> So it only goes slow with hundred(s) of clients connecting and
> disconnecting?
> 
> I'm guessing the CPU is pretty pegged on the box.


That's not exactly what you think; my test script (see an earlier post) is
doing queries. Look at the results below: the queries are very fast, just
the ones with "\d" are slow:

test2364: connect:0 q_fast:0 q_med:0 q_slow:15 q_upd:0 q_upd_all:0
test2346: connect:0 q_fast:0 q_med:0 q_slow:17 q_upd:0 q_upd_all:7
test2347: connect:0 q_fast:0 q_med:0 q_slow:15 q_upd:0 q_upd_all:9
test2350: connect:0 q_fast:0 q_med:0 q_slow:16 q_upd:0 q_upd_all:8
test2349: connect:0 q_fast:1 q_med:0 q_slow:15 q_upd:0 q_upd_all:8
test2343: connect:0 q_fast:1 q_med:0 q_slow:17 q_upd:0 q_upd_all:7
test2351: connect:0 q_fast:0 q_med:0 q_slow:17 q_upd:0 q_upd_all:9

output from command: top

 02:20:00 up 1 day,  6:19,  2 users,  load average: 6.62, 6.67, 6.60
130 processes: 123 sleeping, 7 running, 0 zombie, 0 stopped
CPU states:  82.4% user,  16.2% system,   0.0% nice,   1.4% idle
Mem:   2070084K total,  2052932K used,17152K free,78048K buffers
Swap:   995988K total, 6272K used,   989716K free,  1771140K cached

  PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
18775 postgres  18   0  5532 5440  4688 S 8.7  0.2   0:01 postmaster
19001 postgres  15   0  5484 5392  4664 S 8.3  0.2   0:00 postmaster
19085 postgres  13   0  5492 5400  4680 S 8.3  0.2   0:00 postmaster
18831 postgres  16   0  5532 5440  4680 S 7.9  0.2   0:01 postmaster
19114 postgres  19   0  5436 5344  4624 S 7.9  0.2   0:00 postmaster







Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Tom Lane
Marek Florianczyk <[EMAIL PROTECTED]> writes:
> But did you run that under some database load? E.g. with 100 clients
> connected, like in my example? When I run these "\d" queries without any
> clients connected and after ANALYZE it's fast, but just 100 clients are
> enough to stretch the query time to 30 sec. :(

Then it's not \d's fault --- you simply don't have enough horsepower to
support 100 concurrent clients, regardless of what specific query you're
testing.

You might find that not reconnecting so often would improve matters;
I'm sure that a lot of your cycles are being taken by backend startup.
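A toy illustration of that arithmetic (all costs here are made-up numbers, not measurements):

```python
# Hypothetical per-connection and per-query costs, in ms, to show why
# reconnecting for every query dominates total time.
CONNECT_COST = 50  # assumed backend startup cost
QUERY_COST = 2     # assumed cost of one simple query

def total_cost(queries, reconnect_each_time):
    if reconnect_each_time:
        # pay backend startup once per query
        return queries * (CONNECT_COST + QUERY_COST)
    # pay backend startup once, then run all queries on one connection
    return CONNECT_COST + queries * QUERY_COST

print(total_cost(100, True))   # reconnecting every time: 5200
print(total_cost(100, False))  # one persistent connection: 250
```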

regards, tom lane



Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Marek Florianczyk
On Wed, 05-11-2003 at 19:34, Tom Lane wrote: 
> Marek Florianczyk <[EMAIL PROTECTED]> writes:
> > But did you run that under some database load? E.g. with 100 clients
> > connected, like in my example? When I run these "\d" queries without any
> > clients connected and after ANALYZE it's fast, but just 100 clients are
> > enough to stretch the query time to 30 sec. :(
> 
> Then it's not \d's fault --- you simply don't have enough horsepower to
> support 100 concurrent clients, regardless of what specific query you're
> testing.
> 
> You might find that not reconnecting so often would improve matters;
> I'm sure that a lot of your cycles are being taken by backend startup.

Maybe I reconnect too often, but then how do you explain that regular
queries like select * from table1 are much faster than the \d's? (see my
post to Jeff)

Marek




Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Tom Lane
Marek Florianczyk <[EMAIL PROTECTED]> writes:
> Maybe I reconnect too often, but then how do you explain that regular
> queries like select * from table1 are much faster than the \d's? (see my
> post to Jeff)

[ further experimentation... ]  Ah-hah, I see the problem in 7.3, though
not in 7.4 which is what I was testing to begin with.  7.4 is smarter
about optimizing the LEFT JOINs that are used in \d's queries.
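For context, the kind of catalog query psql's \d runs looks roughly like this (a simplified sketch, not the exact 7.3 query text):

```sql
-- \d joins pg_class against other system catalogs with LEFT JOINs; it is
-- this join shape that the 7.4 planner optimizes better.
SELECT c.relname, n.nspname
  FROM pg_class c
  LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relname = 'mytab';
```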

regards, tom lane



Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread Marek Florianczyk
On Wed, 05-11-2003 at 19:52, Tom Lane wrote: 
> Marek Florianczyk <[EMAIL PROTECTED]> writes:
> > Maybe I reconnect too often, but then how do you explain that regular
> > queries like select * from table1 are much faster than the \d's? (see my
> > post to Jeff)
> 
> [ further experimentation... ]  Ah-hah, I see the problem in 7.3, though
> not in 7.4 which is what I was testing to begin with.  7.4 is smarter
> about optimizing the LEFT JOINs that are used in \d's queries.
> 

So what do you think:
should I try v7.4 on the production machine and wait for the stable
version? Or better to use v7.3 with 3000 databases?
I have to do this by December.
Will this optimization solve the problem with "\d" queries, or just
speed them up by a few seconds?

greetings
Marek




[ADMIN] why can't select for update be used with join

2003-11-05 Thread Kris Kiger
I'm trying to do a query:

   select ta.x from ta join tb using (y) where z = 'somevalue' FOR UPDATE

Why can't this be executed without error in 7.3.x?  It worked just fine
in 7.2.x.  Thanks for the input.



Re: [ADMIN] why can't select for update be used with join

2003-11-05 Thread Tom Lane
Kris Kiger <[EMAIL PROTECTED]> writes:
> I'm trying to do a query:  select ta.x from ta join tb using (y) where z 
> = 'somevalue' FOR UPDATE

> Why can't this be executed without error in 7.3.x?  It worked just fine 
> in 7.2.x.   Thanks for the input

Try saying "FOR UPDATE OF ta, tb".

I agree there's no real reason the system should be that pedantic,
however.  Will see about fixing this for 7.4.
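Spelled out against the query from the original post (ta, tb, x, y, z are the poster's placeholder names):

```sql
-- 7.3.x insists that FOR UPDATE name its target relations explicitly
-- when join syntax is used.
SELECT ta.x
  FROM ta JOIN tb USING (y)
 WHERE z = 'somevalue'
   FOR UPDATE OF ta, tb;
```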

regards, tom lane



Re: [ADMIN] performance problem - 10.000 databases

2003-11-05 Thread scott.marlowe
On 5 Nov 2003, Marek Florianczyk wrote:

> On Wed, 05-11-2003 at 19:52, Tom Lane wrote: 
> > Marek Florianczyk <[EMAIL PROTECTED]> writes:
> > > Maybe I reconnect too often, but then how do you explain that regular
> > > queries like select * from table1 are much faster than the \d's? (see
> > > my post to Jeff)
> > 
> > [ further experimentation... ]  Ah-hah, I see the problem in 7.3, though
> > not in 7.4 which is what I was testing to begin with.  7.4 is smarter
> > about optimizing the LEFT JOINs that are used in \d's queries.
> > 
> 
> So what do you think:
> should I try v7.4 on the production machine and wait for the stable
> version? Or better to use v7.3 with 3000 databases?
> I have to do this by December.
> Will this optimization solve the problem with "\d" queries, or just
> speed them up by a few seconds?

7.4 just went RC1 and is looking good for release in under 2 weeks.  I'd 
test it to see if it works for you.  I've found the .0 releases of 
PostgreSQL to be good enough for production, even if they do have one or 
two minor bugs that pop up.  Take frequent backups (you should anyway) 
and you'll probably be all right. 

Heck, you're already pushing the performance envelope with 3,000 users; 
you might as well go for the faster of the two, and you'll have one less 
scheduled upgrade ahead of you.

When do you need to go live?  If it's >1 month, then I'd definitely 
recommend 7.4.




Re: [ADMIN] Microsoft access verses postgresql

2003-11-05 Thread Juan Miguel
kbd wrote:

I have many years' experience with several database products: FoxBASE,
FoxPro, MS SQL, Access, PostgreSQL, AS400 DB2.  Compared to any of the
above, Access is a toy.  Case in point: in my office we have a
"multi-user" product based on Access.  When the users, there are only
four of them, start pumping data into the database all at once, the
product fails.  We then contacted the vendor, who replied with a TechNet
article from M$ explaining that Access should not be used as a
multi-user database.
Also, take note that many, many projects start as just a simple little
job and then take on a life of their own.
Oh, and what about security and user management?  And what about
backups?  Who is going to hang around until all the users are finished
in order to back up the database?

Considering PostgreSQL is free and actually works, there is no decision.
 

First, sorry about my poor command of English.

OK, this is true, but there is a problem when you are developing an
application that must be easy to install, because the person who will
probably install the system is a PC user, not a software engineer nor a
DB admin.  In these cases PostgreSQL, MySQL, SQL Server, and other DBMSs
need the involvement of an experienced person who knows how to install
and configure the DBMS for the application.  Often, in these scenarios,
the application may be single-user and self-installing.  In those
circumstances, Access can be an option.  In the 80s dBase was the poor
man's database; since the 90s it has been Access.

I know a lot of commercial applications that today use dBase and Access
database file structures.

Therefore, do you know a better open source DBMS than Access that is
easy to install and integrate with your applications?  I mean, is there
an open source DBMS whose features sit between Access or dBase and
PostgreSQL/MySQL?

Thanks.



[ADMIN] Postgres Web Page

2003-11-05 Thread Juan Miguel
What is happening with http://www.postgresql.org ?

When I try to connect to this URL, the download stops.

The only available version of the Postgres website that I can find is
http://www.postgresql.com, which is a commercial site.

Thanks.



Re: [ADMIN] Microsoft access verses postgresql

2003-11-05 Thread David F. Skoll
On Wed, 5 Nov 2003, Juan Miguel wrote:

> Therefore, do you know a better Open Source DBMS than Access, that is
> easy to install and integrate with your applications ?

What is your target environment?

We sell a commercial program (http://www.canit.ca) that uses PostgreSQL
internally.  For our Red Hat 8 and 9 Linux targets, installation is
extremely simple; it goes something like this:

./install
./init-database

I would imagine that you could fairly easily write comparable wrapper
installation programs for PostgreSQL on other platforms.

Regards,

David.



Re: [ADMIN] data format problem for upgrede from 7.1->7.3

2003-11-05 Thread Oli Sennhauser
Thanks Jonathan and Kaolin fire

Technically it is clear to me how it works.  But on a Sun E1 it took us
hours to exp/imp 20-30 GB of data, and I do not think we would be able
to exp/imp 1 TB.

If somebody tells me now that I have to do this several times a year
(how can I sell that to a customer???)...  It is not a problem when
handling a Mickey Mouse database, but a dump/load of 100 GB I would
guess takes more than 12 h; doing it 2 or 4 times a year makes for an
availability of less than 99%.  And if I have 1 TB...???

Regards Oli

PS: It is not a technical question/problem its a strategical/marketing 
question/problem.

7.0.0   2000-05-08   A dump/restore using pg_dump is required
7.1.0   2001-04-13   A dump/restore using pg_dump is required   11 months
7.2.0   2002-02-04   A dump/restore using pg_dump is required   10 months
7.3.0   2002-11-27   A dump/restore using pg_dump is required    9 months
7.4.0   2003-11-xx   A dump/restore using pg_dump is required   12 months
---

Oli Sennhauser
Database-Engineer (Oracle & PostgreSQL)
Rebenweg 6
CH - 8610 Uster / Switzerland
Phone (+41) 1 940 24 82 or Mobile (+41) 79 450 49 14
e-Mail [EMAIL PROTECTED]
Website http://mypage.bluewin.ch/shinguz/PostgreSQL/
Secure (signed/encrypted) e-Mail with a Free Personal SwissSign ID: http://www.swisssign.ch

Import the SwissSign Root Certificate: http://swisssign.net/cgi-bin/trust/import





Re: [ADMIN] Postgres Web Page

2003-11-05 Thread Marc G. Fournier

works fine for me from here, and they are all on the same set of servers,
so it isn't network related ... browser?

On Wed, 5 Nov 2003, Juan Miguel wrote:

> What happends with http://www.postgresql.org ?
>
> When I try to connect to this URL, the download stops.
>
> The only avalaible version of Postgres website that I can find is
> http://www.postgresql.com, that is a comercial site.
>
> Thanks.
>
>



Re: [ADMIN] Microsoft access verses postgresql

2003-11-05 Thread Geoffrey
Juan Miguel wrote:
kbd wrote:

I have many years' experience with several database products: FoxBASE,
FoxPro, MS SQL, Access, PostgreSQL, AS400 DB2.  Compared to any of the
above, Access is a toy.  Case in point: in my office we have a
"multi-user" product based on Access.  When the users, there are only
four of them, start pumping data into the database all at once, the
product fails.  We then contacted the vendor, who replied with a TechNet
article from M$ explaining that Access should not be used as a
multi-user database.
Also, take note that many, many projects start as just a simple little
job and then take on a life of their own.
Oh, and what about security and user management?  And what about
backups?  Who is going to hang around until all the users are finished
in order to back up the database?

Considering PostgreSQL is free and actually works, there is no decision.
 

First, sorry about my poor command of English.
Not a problem, it's better than some who use it natively...

OK, this is true, but there is a problem when you are developing an
application that must be easy to install, because the person who will
probably install the system is a PC user, not a software engineer nor a
DB admin.
I understand what you are saying, but I would submit that convenience is 
never a substitute for data integrity.

--
Until later, Geoffrey   [EMAIL PROTECTED]
Building secure systems inspite of Microsoft



[ADMIN] i dont wamt more mails

2003-11-05 Thread lekkala venkata praveen
Hi admin,

I've been receiving a lot of emails about PostgreSQL administration, and
I don't want to receive them anymore; please remove my account from your
database records.

praveen

_
Download ringtones, logos and picture messages from MSN Malaysia 
http://www.msn.com.my/mobile/ringtones/default.asp



Re: [ADMIN] i dont wamt more mails

2003-11-05 Thread ezrapostgres
Do it yourself.

lekkala venkata praveen wrote:

Hi admin,

I've been receiving a lot of emails about PostgreSQL administration, and
I don't want to receive them anymore; please remove my account from your
database records.

praveen

_
Download ringtones, logos and picture messages from MSN Malaysia 
http://www.msn.com.my/mobile/ringtones/default.asp

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html





Re: [ADMIN] data format problem for upgrede from 7.1->7.3

2003-11-05 Thread Bruno Wolff III
On Thu, Nov 06, 2003 at 00:27:03 +0100,
  Oli Sennhauser <[EMAIL PROTECTED]> wrote:
> 
> If somebody tells me now that I have to do this several times a year
> (how can I sell that to a customer???)...  It is not a problem when
> handling a Mickey Mouse database, but a dump/load of 100 GB I would
> guess takes more than 12 h; doing it 2 or 4 times a year makes for an
> availability of less than 99%.  And if I have 1 TB...???

More like once a year. Only upgrades between major releases (e.g. 7.3.x
to 7.4.x) require dump and reloads.

People are looking at trying to do something about this, but it won't
happen for 7.4. Maybe by the time 7.5 is released there will be a better
way to upgrade.



[ADMIN] Process Files

2003-11-05 Thread Rudi Starcevic
Hi,

I'm moving a PostgreSQL database onto a co-located server, a Linux
Virtual Server.

Today I got some errors in my logs:

ERROR:  _mdfd_getrelnfd: cannot open relation virtualusertable: Cannot allocate memory

If I reboot, the error goes away, but it has been reappearing.

I'd like to try limiting the number of open files per process.
By default, I believe Postgres is configured to permit 1000 open files
per process, and my server currently has 9 postgres processes running.

Is there an option I can configure to do this?
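(The knob being asked about is presumably max_files_per_process in postgresql.conf; a sketch, with the value chosen only for illustration:)

```
# postgresql.conf -- caps the number of file descriptors each backend
# will try to keep open (the default is 1000).
max_files_per_process = 100
```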

Many thanks
Regards
Rudi Starcevic