Re: [PERFORM] Questions about 2 databases.

2005-03-20 Thread Mirko Zeibig
jelle wrote:
 The insert-heavy sessions average 175 page hits generating XML, with 1000 
 insert/updates comprising 90% of the insert/update load, of which
 200 inserts need to be transferred to the master db. The other 
 sessions are read/cache bound. I'm hoping to get a speed-up from moving
  the temporary stuff off the master db and using 1 transaction
 instead of 175 to the disk-based master db.

Just a thought:
Wouldn't it be sufficient to have the temporary, fast session-table
in a RAM-disk? I suspect you could do this rather easily using a TABLESPACE.
All the indices could be in this TABLESPACE as well (at least after
having a quick look at the short help for CREATE TABLE and assuming you are
using PostgreSQL >= 8.0).
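For what it's worth, a minimal sketch of that setup (the mount point, tablespace, and table names are invented for illustration; assumes PostgreSQL >= 8.0 and root access for the tmpfs mount):

```shell
# Hypothetical example: mount a RAM disk and put a tablespace on it.
# /mnt/pg_ramdisk, "ramspace" and "session_scratch" are made-up names.
mount -t tmpfs -o size=256m tmpfs /mnt/pg_ramdisk
chown postgres:postgres /mnt/pg_ramdisk

psql -U postgres sessiondb <<'SQL'
CREATE TABLESPACE ramspace LOCATION '/mnt/pg_ramdisk';
CREATE TABLE session_scratch (
    sid   integer,
    page  text
) TABLESPACE ramspace;
-- the indices can live in the RAM tablespace as well:
CREATE INDEX session_scratch_sid ON session_scratch (sid) TABLESPACE ramspace;
SQL
```

Bear in mind anything in that tablespace is gone after a reboot, which matches the "session data is expendable" assumption.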

Regards
Mirko

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[PERFORM] Questions about 2 databases.

2005-03-11 Thread jelle
Hello All,
I have a couple of questions about running 2 databases:
1) on a single 7.4.6 postgres instance does each database have its own WAL
   file or is that shared? Is it the same on 8.0.x?
2) what's the high performance way of moving 200 rows between similar
   tables on different databases? Does it matter if the databases are
   on the same or separate postgres instances?
Background:
My web app does lots of inserts that aren't read until a session is 
complete. The plan is to put the heavy insert session onto a ramdisk based 
pg-db and transfer the relevant data to the master pg-db upon session 
completion. Currently running 7.4.6.

Individual session data is not as critical as the master pg-db so the risk 
associated with running the session pg-db on a ramdisk is acceptable. 
All this is to get past the I/O bottleneck: I've already tweaked the config 
files, run on multiple RAID-1 spindles, profiled the queries, and maxed out 
the CPU/RAM. Migrating to 64-bit Fedora soon.

Thanks, this mailing list has been invaluable.
Jelle


Re: [PERFORM] Questions about 2 databases.

2005-03-11 Thread Tom Lane
jelle [EMAIL PROTECTED] writes:
 1) on a single 7.4.6 postgres instance does each database have its own WAL
 file or is that shared? Is it the same on 8.0.x?

Shared.

 2) what's the high performance way of moving 200 rows between similar
 tables on different databases? Does it matter if the databases are
 on the same or separate postgres instances?

COPY would be my recommendation.  For a no-programming-effort solution
you could just pipe the output of pg_dump --data-only -t mytable
into psql.  Not sure if it's worth developing a custom application to
replace that.
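Concretely, the pipe Tom describes might look like this (database and table names are invented; a roughly equivalent COPY-via-psql route is shown for comparison):

```shell
# pg_dump route: dump only one table's rows and replay them elsewhere.
pg_dump --data-only -t session_results sessiondb | psql master_db

# COPY route: stream the rows directly between the two databases.
psql sessiondb   -c "COPY session_results TO STDOUT" \
  | psql master_db -c "COPY session_results FROM STDIN"
```

Either way the 200 rows arrive in a single COPY rather than 200 individual INSERTs.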

 My web app does lots of inserts that aren't read until a session is 
 complete. The plan is to put the heavy insert session onto a ramdisk based 
 pg-db and transfer the relevant data to the master pg-db upon session 
 completion. Currently running 7.4.6.

Unless you have a large proportion of sessions that are abandoned and
hence never need be transferred to the main database at all, this seems
like a dead waste of effort :-(.  The work to put the data into the main
database isn't lessened at all; you've just added extra work to manage
the buffer database.

regards, tom lane



Re: [PERFORM] Questions about 2 databases.

2005-03-11 Thread Richard_D_Levine
 this seems
 like a dead waste of effort :-(.  The work to put the data into the main
 database isn't lessened at all; you've just added extra work to manage
 the buffer database.

True from the view point of the server, but not from the throughput in the
client session (client viewpoint).  The client will have a blazingly fast
session with the buffer database.  I'm assuming the buffer database table
size is zero or very small.  Constraints will be a problem if there are
PKs or FKs that need to be satisfied on the server but are not adequately
testable in the buffer.  It might not be a problem if the full table fits
on the RAM disk, but you still have to worry about two clients inserting
the same PK.

Rick



 


Re: [PERFORM] Questions about 2 databases.

2005-03-11 Thread PFC

My web app does lots of inserts that aren't read until a session is  
complete. The plan is to put the heavy insert session onto a ramdisk  
based pg-db and transfer the relevant data to the master pg-db upon  
session completion. Currently running 7.4.6.
From what you say I'd think you want to avoid making one write
transaction to the main database on each page view, right?
You could simply store the data in a file and, at the end of the session,
read the file and do all the writes in one transaction.
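A rough sketch of that spool-file idea (file names, table, and values are invented; the final psql call is left commented out since it needs a live server):

```shell
#!/bin/sh
# Append one INSERT per page view to a per-session spool file instead of
# writing to the master database on every hit.
SPOOL=/tmp/session_1234.sql
: > "$SPOOL"                     # start the session with an empty spool

# ...during the session, one line appended per page view:
echo "INSERT INTO session_log (sid, page) VALUES (1234, '/cart');"     >> "$SPOOL"
echo "INSERT INTO session_log (sid, page) VALUES (1234, '/checkout');" >> "$SPOOL"

# On session completion, wrap the whole spool in one transaction...
{ echo "BEGIN;"; cat "$SPOOL"; echo "COMMIT;"; } > /tmp/session_1234.tx.sql
# ...and hand it to the master database in a single call:
# psql master_db -f /tmp/session_1234.tx.sql
```

This gets you the single-transaction replay without a second database to administer.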
