Re: Writing 1100 rows per second

2020-02-06 Thread Ogden Brash
On Wed, Feb 5, 2020 at 9:12 AM Laurenz Albe wrote:
> One idea I can come up with is a table that is partitioned by a column that
> appears in a selective search condition, but has no indexes on the table, so
> that you always get away with a sequential scan of a single partition.
>
> This
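
A minimal sketch of that idea in PostgreSQL DDL (the table, column, and partition names are illustrative, not from the thread):

    -- Partition by a column that appears in a selective search condition,
    -- and create no indexes: a query filtering on the partition key is
    -- pruned to a single partition and satisfied by one sequential scan.
    CREATE TABLE results (
        batch_id integer NOT NULL,
        payload  jsonb
    ) PARTITION BY LIST (batch_id);

    CREATE TABLE results_1 PARTITION OF results FOR VALUES IN (1);
    CREATE TABLE results_2 PARTITION OF results FOR VALUES IN (2);

    -- Pruned to results_1 at plan time; no index maintenance slows writes.
    SELECT * FROM results WHERE batch_id = 1;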

Modification of data in base folder and very large tables

2019-10-09 Thread Ogden Brash
I have a question about the files in .../data/postgresql/11/main/base, specifically in relation to very large tables and how they are written. I have been attempting to restore a relatively large database with pg_restore, and it has been running for more than a week. (I also have another thread
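
For relating a table to its files under base/, two built-in functions are enough (a sketch; the table name is a placeholder):

    -- Path of the table's first segment file, relative to the data directory.
    -- Tables larger than 1 GB are split into 1 GB segment files named
    -- <relfilenode>, <relfilenode>.1, <relfilenode>.2, and so on.
    SELECT pg_relation_filepath('my_big_table');

    -- On-disk size of the table, including its TOAST data:
    SELECT pg_size_pretty(pg_table_size('my_big_table'));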

Re: Some observations on very slow pg_restore operations

2019-10-03 Thread Ogden Brash
…mmit off
> wal_level         minimal
> max_wal_senders   0
> full_page_writes  off     during DML bulk loading, restore operations
> wal_buffers       16MB    during DML bulk loading, restore operations
>
> Regards,
> Michael Vitale

Ogden Brash
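
Expressed as commands, the quoted advice amounts to roughly the following sketch (wal_level, max_wal_senders, and wal_buffers take effect only after a server restart; revert everything once the restore finishes):

    ALTER SYSTEM SET wal_level = 'minimal';
    ALTER SYSTEM SET max_wal_senders = 0;     -- required when wal_level = minimal
    ALTER SYSTEM SET full_page_writes = off;  -- bulk loads only: risks torn pages on a crash
    ALTER SYSTEM SET wal_buffers = '16MB';
    SELECT pg_reload_conf();                  -- applies full_page_writes immediately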

Some observations on very slow pg_restore operations

2019-10-03 Thread Ogden Brash
I recently performed a pg_dump (data-only) of a relatively large database where we store intermediate results of calculations. It is approximately 3 TB on disk and has about 20 billion rows. We do the dump/restore about once a month, and as the dataset has grown, the restores have gotten very
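
A common lever for speeding up this cycle is the directory dump format with parallel workers (a sketch; the database name, output path, and job count are placeholders):

    # Data-only dump in directory format, 8 parallel worker jobs:
    pg_dump -Fd -j 8 -a -f /backups/calc_dump mydb

    # Parallel restore into an existing schema:
    pg_restore -j 8 -d mydb /backups/calc_dump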