Re: Writing 1100 rows per second

2020-02-06 Thread Ogden Brash
On Wed, Feb 5, 2020 at 9:12 AM Laurenz Albe wrote:
> One idea I can come up with is a table that is partitioned by a column
> that appears in a selective search condition, but has no indexes on the
> table, so that you always get away with a sequential scan of a single
> partition.
>
> This is
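To make the quoted idea concrete, here is a minimal sketch of such a layout; the table and column names are placeholders of mine, not from the thread:

  -- Partitioned by the column the selective queries filter on; no indexes,
  -- so inserts pay no index-maintenance cost.
  CREATE TABLE results (
      batch_id integer NOT NULL,
      payload  jsonb
  ) PARTITION BY LIST (batch_id);

  CREATE TABLE results_batch_42 PARTITION OF results FOR VALUES IN (42);

  -- Partition pruning confines this to a sequential scan of one partition.
  SELECT * FROM results WHERE batch_id = 42;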

Re: Modification of data in base folder and very large tables

2019-10-10 Thread Ogden Brash
re close to finishing and has been churning for 10 days. On Wed, Oct 9, 2019 at 4:05 PM Jerry Sievers wrote:
> Ogden Brash writes:
>
> > I have a question about the files in .../data/postgresql/11/main/base,
> > specifically in relation to very large tables and how they are

Modification of data in base folder and very large tables

2019-10-09 Thread Ogden Brash
I have a question about the files in .../data/postgresql/11/main/base, specifically in relation to very large tables and how they are written. I have been attempting to restore a relatively large database with pg_restore and it has been running for more than a week. (I also have another thread ab
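For readers who want to see the mapping the question is about, PostgreSQL can report which file under base/ backs a given table; the table name below is just a placeholder:

  -- Returns a path such as base/16384/24576, relative to the data directory.
  SELECT pg_relation_filepath('my_big_table');

Relations larger than 1 GB are split into additional segment files (24576.1, 24576.2, ...) alongside the first one, which is why a single very large table shows up as many files in the base folder.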

Re: Some observations on very slow pg_restore operations

2019-10-03 Thread Ogden Brash
> min_wal_size        Increase significantly, like you would to checkpoint_segments
> checkpoint_timeout  Increase to at least 30min
> archive_mode        off
> autovacuum          off
> synchronous_commit  off
> wal_level           minimal
> max_wal_senders
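The thread does not show the exact commands, but assuming one wants to apply these bulk-load settings temporarily, ALTER SYSTEM is one way to do it (the min_wal_size value below is purely illustrative):

  ALTER SYSTEM SET min_wal_size = '4GB';          -- "increase significantly"
  ALTER SYSTEM SET checkpoint_timeout = '30min';
  ALTER SYSTEM SET archive_mode = off;
  ALTER SYSTEM SET autovacuum = off;
  ALTER SYSTEM SET synchronous_commit = off;
  ALTER SYSTEM SET wal_level = minimal;
  ALTER SYSTEM SET max_wal_senders = 0;           -- required when wal_level = minimal
  -- archive_mode, wal_level and max_wal_senders only change after a server
  -- restart; the others take effect with: SELECT pg_reload_conf();

Remember to revert these once the restore finishes; minimal WAL and disabled autovacuum are not settings you want in normal operation.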

Some observations on very slow pg_restore operations

2019-10-03 Thread Ogden Brash
I recently performed a pg_dump (data-only) of a relatively large database where we store intermediate results of calculations. It is approximately 3 TB on disk and has about 20 billion rows. We do the dump/restore about once a month, and as the dataset has grown, the restores have gotten very slow.
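For context on figures like the 3 TB / 20 billion rows above, both numbers can be pulled from the catalogs; this is a generic sketch, not a query from the thread:

  -- Total on-disk size of the current database.
  SELECT pg_size_pretty(pg_database_size(current_database()));

  -- Largest tables with their approximate row counts (planner statistics).
  SELECT relname,
         reltuples::bigint AS approx_rows,
         pg_size_pretty(pg_total_relation_size(oid)) AS total_size
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY pg_total_relation_size(oid) DESC
  LIMIT 10;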