On 12/19/05, Matthew Jarvis <[EMAIL PROTECTED]> wrote:
> Looking at the following code in a sh file on one of our servers (I've
> snipped nonrelevant stuff):
>
> #!/bin/sh
> OUT_TO=/home/postgres/tmp/webtrans.sql
>
> # load the data to the destination DB
> DST_DB=webbikedata
> DST_IP=bikefriday.com
> DST_USER=bikefriday
>
> /usr/local/pgsql/bin/psql $DST_DB -U $DST_USER --host=$DST_IP < $OUT_TO
>
> Am I understanding this properly if I say that:
>
> running psql on a local server
> using webbikedata as the db
> user is bikefriday
> data is going to bikefriday.com
> and postgres is importing the contents of $OUT_TO (in this case,
> webtrans.sql)
>
> webtrans.sql is 9.2 MB... is the above a good way to deal with this? I'd
> think uploading webtrans.sql to the remote box THEN having postgres deal
> with it would be faster.
You're running the postgres client (psql) locally, using the database webbikedata as user bikefriday on host bikefriday.com, and feeding the file webtrans.sql to the client.

Speed probably isn't an issue in most cases: copying the file in bulk and then executing its contents on the remote server is not faster than what psql is already doing, which is streaming the statements over the connection and executing them as they arrive.

What you do need to be concerned about is security: by default, connections to the database are NOT ENCRYPTED. Take a look at pg_hba.conf on the server and make sure the connection type for remote clients is "hostssl" rather than plain "host", and that you are requiring authentication before access.
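For the server side, a sketch of what such a pg_hba.conf entry might look like (the address range and auth method here are assumptions; narrow the CIDR to the actual client machine's address):

```
# pg_hba.conf on the server (illustrative entry, adjust ADDRESS to taste)
# TYPE     DATABASE     USER         ADDRESS        METHOD
hostssl    webbikedata  bikefriday   0.0.0.0/0      md5
```

With a `hostssl` line (and no matching plain `host` line), the server refuses unencrypted connections for that database/user combination. Recent libpq clients can also insist on encryption from their end via the sslmode connection setting.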
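On the "upload it first" idea: where transfer time does matter, the win usually comes from compression rather than from where the file executes, since SQL dumps are highly redundant. A hypothetical local demonstration (file paths and table contents are made up, not from the original script):

```shell
# Generate a sample SQL dump of repetitive INSERT statements,
# then compare its size before and after gzip compression.
printf "INSERT INTO webtrans VALUES (%d, 'web order');\n" $(seq 1 1000) > /tmp/sample.sql
gzip -kf /tmp/sample.sql

echo "original: $(wc -c < /tmp/sample.sql) bytes"
echo "gzipped:  $(wc -c < /tmp/sample.sql.gz) bytes"
```

If you did want to run the load remotely, you could scp the .gz file to the server, gunzip it there, and feed it to psql over a local socket; for a 9.2 MB file on a reasonable link, though, the difference is unlikely to be worth the extra steps.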
_______________________________________________ EUGLUG mailing list [email protected] http://www.euglug.org/mailman/listinfo/euglug
