On Thu, Oct 10, 2019 at 3:40 AM Ogden Brash wrote:
> If each of the tables has about 3+ billion rows, the index is still going
> to be pretty large and spread over many files. In the source database that
> was backed up, the primary key sequence was sequentially assigned and
> written, but as var[...]
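
As a point of reference, one way to see how large that primary-key
index really is, and how closely the heap still follows key order, is
something like the following (table, index, and column names here are
hypothetical):

    -- total on-disk size of the index (stored as a series of 1GB
    -- segment files under base/)
    SELECT pg_size_pretty(pg_relation_size('mytable_pkey'));

    -- correlation between the key column and physical row order;
    -- values near 1.0 mean the heap is laid out in key order
    SELECT correlation
      FROM pg_stats
     WHERE tablename = 'mytable' AND attname = 'id';
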
> "Ogden" == Ogden Brash writes:
Ogden> I did the restore as a data only restore so that it would not
Ogden> try to recreate any tables.
Doing data-only restores is almost always a mistake.
pg_dump/pg_restore are very careful to create things in an order that
allows the data part of the restore[...]
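
For context, pg_restore can be driven through that order explicitly
with its --section flag; a minimal sketch of a full (schema + data)
restore, with a hypothetical dump file and database name:

    pg_restore -d mydb --section=pre-data  dump.custom   # create tables, no indexes
    pg_restore -d mydb --section=data      dump.custom   # bulk-load the data
    pg_restore -d mydb --section=post-data dump.custom   # build indexes, add constraints

Running all three sections (or just a plain pg_restore with no section
flags) lets the data load into unindexed tables, which is usually far
faster than loading into tables whose indexes already exist.
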
First of all, thanks to Jeff for pointing out strace. I had not used it
before and it is quite informative. This is the rather depressing
one-minute summary for the pg_restore:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
[...]
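
That table is strace's syscall-counting summary; to reproduce it
against a running restore, something like this works (the pid being
whatever pg_restore or its backend shows in ps):

    strace -c -p 6600    # attach, let it run for about a minute, then Ctrl-C
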
Ogden Brash writes:
> I have a question about the files in .../data/postgresql/11/main/
> base, specifically in relation to very large tables and how they are
> written.
>
> I have been attempting to restore a relatively large database with
> pg_restore and it has been running for more than a week[...]
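
For what it's worth, each relation under base/ is stored by numeric
filenode and split into 1GB segment files (12345, 12345.1, 12345.2,
...), so a 3-billion-row table will span many files. A table can be
mapped to its file with, for a hypothetical table name:

    SELECT pg_relation_filepath('mytable');
    -- e.g. base/16384/12345
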
On Wed, Oct 9, 2019 at 4:33 AM Ogden Brash wrote:
> # lsof -p 6600 | wc -l;
> 840
>
> # lsof -p 6601 | wc -l;
> 906
>
> Is that normal, that there are so many open file pointers? ~900 open
> file pointers for each of the processes?
>
I don't think PostgreSQL makes any effort to conserve file handles[...]
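
Indeed: each backend happily caches file descriptors up to the
max_files_per_process limit (1000 by default), so ~900 open
descriptors is unremarkable for a restore touching many 1GB segment
files:

    SHOW max_files_per_process;   -- 1000 by default
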
> "Ogden" == Ogden Brash writes:
Ogden> I have a question about the files in
Ogden> .../data/postgresql/11/main/base, specifically in relation to
Ogden> very large tables and how they are written.
Ogden> I have been attempting to restore a relatively large database
Ogden> with pg_restor