> this seems
> like a dead waste of effort :-(.  The work to put the data into the main
> database isn't lessened at all; you've just added extra work to manage
> the buffer database.

True from the viewpoint of the server, but not from the client's: the
client will have a blazingly fast session with the buffer database,
assuming the buffer tables stay empty or very small.  Constraints will
be a problem if there are PKs or FKs that need to be satisfied on the
server but can't be adequately tested in the buffer.  That might not
matter if the full table fits on the RAM disk, but you still have to
worry about two clients inserting the same PK.
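
A minimal sketch of one workaround for the PK collision, assuming a
hypothetical table mytable with a serial PK and databases named
masterdb and bufferdb: have each client reserve its key values from
the master's sequence before buffering, so no two clients can collide:

    # against the master database: reserve a key value
    psql -t -c "SELECT nextval('mytable_id_seq')" masterdb

    # against the buffer database, using the reserved value (say 42)
    psql -c "INSERT INTO mytable (id, payload) VALUES (42, 'session data')" bufferdb

Table, sequence, column, and database names above are illustrative,
not from the original poster's setup.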

Rick


                                                                                
                                                                 
Tom Lane <[EMAIL PROTECTED]> wrote on 03/11/2005 03:33 PM:
To: [EMAIL PROTECTED]
cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Questions about 2 databases.

jelle <[EMAIL PROTECTED]> writes:
> 1) on a single 7.4.6 postgres instance does each database have it own WAL
>     file or is that shared? Is it the same on 8.0.x?

Shared.

> 2) what's the high-performance way of moving 200 rows between similar
>     tables on different databases? Does it matter if the databases are
>     on the same or separate postgres instances?

COPY would be my recommendation.  For a no-programming-effort solution
you could just pipe the output of pg_dump --data-only -t mytable
into psql.  Not sure if it's worth developing a custom application to
replace that.
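
Concretely, that pipeline might look like the sketch below (the
database names bufferdb and masterdb are placeholders):

    # dump just mytable's rows from the buffer database and load
    # them into the master database in one step
    pg_dump --data-only -t mytable bufferdb | psql masterdb

A COPY-based variant that skips pg_dump entirely:

    psql -c 'COPY mytable TO STDOUT' bufferdb \
        | psql -c 'COPY mytable FROM STDIN' masterdb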

> My web app does lots of inserts that aren't read until a session is
> complete. The plan is to put the heavy insert session onto a
> ramdisk-based pg-db and transfer the relevant data to the master pg-db
> upon session completion. Currently running 7.4.6.

Unless you have a large proportion of sessions that are abandoned and
hence never need be transferred to the main database at all, this seems
like a dead waste of effort :-(.  The work to put the data into the main
database isn't lessened at all; you've just added extra work to manage
the buffer database.

                                     regards, tom lane
