Hi All,

PostgreSQL version: 9.6
On Mon, May 6, 2019, 7:14 AM Sathish Kumar <satcs...@gmail.com> wrote:
> Hi,
>
> I am trying to export our database on a GCE instance to Google Cloud SQL.
>
> Below are the commands used to export/import the database. I am exporting
> only the one database which is required.
>
> Export:
>
> pg_dump -h olddbserver -U dbuser --format=plain --no-owner --no-acl
> production | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1
> EXTENSION/g' > prod.sql
>
> Import:
>
> psql -h newcloudsqldbserver -U dbuser -d production -W < prod.sql
>
> On Mon, May 6, 2019, 1:32 AM Vijaykumar Jain <vj...@opentable.com> wrote:
>
>> Yes.
>> I do bump up maintenance_work_mem temporarily during a restore;
>> it helps rebuild indexes a little faster.
>> Turning fsync off during the restore will speed it up a little too,
>> but in case of a crash you may have to restart the restore from scratch.
>> Also, do you have the option to take a pg_dump and run pg_restore, or do
>> you only have the raw SQL dump to work with?
>> If you can take a dump again, try pg_dump and pg_restore with the -Fc
>> (custom format) and -j n (parallel) options, along with a temporary bump
>> in maintenance_work_mem.
>> That should make the restore a little faster than a raw SQL dump.
>> If you are on pg10 or above, you can use logical replication to mirror
>> the database.
>> There are blogs by several people explaining how to do that, which may
>> be helpful.
>>
>> On Sun, 5 May 2019 at 10:29 PM Ravi Krishna <ravi_kris...@aol.com> wrote:
>>
>>> IMO you are using the slowest tool to import.
>>>
>>> Just one quick question: why can't you take a cluster backup using any
>>> of the tools available and then drop all the unwanted databases after
>>> you import the cluster?
>>>
>>> pg_basebackup will do a good job.
>>>
>> --
>> Regards,
>> Vijay
>
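For reference, a rough sketch of the custom-format dump and parallel restore Vijay describes might look like the following. The hosts, user, and database name are reused from the commands above; the job count, memory value, and file names are placeholders, and the maintenance_work_mem bump assumes the target allows per-session settings (managed Cloud SQL instances typically do not expose fsync at all, so that tweak is left out):

# Dump only the production database, in custom format this time
# (sketch; hosts and users reused from the commands earlier in the thread).
pg_dump -h olddbserver -U dbuser -Fc --no-owner --no-acl \
    -f prod.dump production

# The sed trick above only works on a plain-text dump. For a custom-format
# dump, extension entries can be filtered out of the TOC listing instead:
pg_restore -l prod.dump | grep -v 'EXTENSION' > prod.list

# Restore with parallel jobs, using the edited TOC list, and raise
# maintenance_work_mem for the restore sessions only via PGOPTIONS
# (read by libpq). The 4 jobs and 1GB are illustrative, not recommendations.
PGOPTIONS='-c maintenance_work_mem=1GB' \
    pg_restore -h newcloudsqldbserver -U dbuser -d production \
    -j 4 -L prod.list --no-owner --no-acl prod.dump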
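For Ravi's pg_basebackup suggestion, a minimal sketch is below. It takes a physical copy of the whole cluster, so it generally only applies when the target is a server you manage yourself rather than a managed Cloud SQL instance; the replication role, data directory, and database name are hypothetical placeholders:

# Physical base backup of the entire cluster from the old server
# (sketch; repl_user and the target directory are placeholders, and
# pg_hba.conf must permit a replication connection for that role).
pg_basebackup -h olddbserver -U repl_user \
    -D /var/lib/postgresql/9.6/copy -X stream -P

# After starting a new cluster from this copy, drop whatever you don't need:
psql -U dbuser -d postgres -c 'DROP DATABASE unwanted_db;'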