1. The tables have no indexes at the time of load.
2. The CREATE TABLE and COPY are in the same transaction.

So I guess that's pretty much it. I understand the long time it takes, as
some of the tables have 400+ million rows. Also, the env is a container,
and since this is currently a POC system, …
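For reference, points 1 and 2 describe the classic bulk-load pattern. A
minimal sketch, assuming hypothetical table, column, and file names and a
$PG_URI connection string:

    #!/usr/bin/env bash
    # Sketch of the load pattern above; names are hypothetical.
    # Creating the table and COPYing into it in the same transaction
    # lets PostgreSQL skip WAL for the load when wal_level = minimal,
    # and with no indexes defined there is no index maintenance work
    # during the COPY.
    psql "$PG_URI" <<'SQL'
    BEGIN;
    CREATE TABLE orders (
        id     bigint,
        placed timestamptz,
        amount numeric
    );
    \copy orders FROM 'orders.csv' WITH (FORMAT csv)
    COMMIT;
    SQL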
Maybe he just has a large file that needs to be loaded into a table...
On 08/20/2018 11:47 AM, Vijaykumar Jain wrote:
Hey Ravi,
What is the goal you are trying to achieve here?
To make pg_dump/restore faster?
To make replication faster?
To make backup faster?
Also, no matter how small you split the files into, if network is your
bottleneck then I am not sure you can attain n times the benefit by simply
sending…
On Mon, 20 Aug 2018 at 12:53, Ravi Krishna wrote:
> > What is the goal you are trying to achieve here?
> > To make pg_dump/restore faster?
> > To make replication faster?
> > To make backup faster?
>
> None of the above.
>
> We got csv files from external vendor which are 880GB in total size, in 44
> files. Some of the large tables had COPY running for…
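If splitting is tried anyway, one possible shape for it is sketched below,
with hypothetical file and table names. COPY appends rows, so concurrent
sessions can load into the same table safely; per the point above, whether
this helps depends on where the real bottleneck is (network, disk, or CPU):

    #!/usr/bin/env bash
    # Hypothetical sketch: split one large CSV (assumed to have no
    # header row) into 8 line-aligned chunks with GNU split, then COPY
    # each chunk from its own psql session in the background.
    split -n l/8 --additional-suffix=.csv big_table.csv chunk_
    for f in chunk_*.csv; do
        psql "$PG_URI" -c "\copy big_table FROM '$f' WITH (FORMAT csv)" &
    done
    wait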