Re: [GENERAL] Insert large number of records

2017-09-22 Thread Alban Hertroys
On 20 September 2017 at 22:55, Job wrote: > One further question: within a query launched on the MASTER table where I > need to scan every table, for example to search rows located in multiple partitions. > Is there a way to improve "parallel scans" across multiple tables at the same > time or not? > I no…
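For context, a hedged sketch of what inspecting parallelism on such a query could look like; master_table, created_at, and the setting value are illustrative assumptions. Note that 9.6 has no Parallel Append (that arrived in PostgreSQL 11), so the children of the parent table are not scanned concurrently:

    -- Hypothetical inheritance parent; all names are for illustration only.
    SET max_parallel_workers_per_gather = 4;   -- allow up to 4 workers (9.6+)

    EXPLAIN (ANALYZE, VERBOSE)
    SELECT *
    FROM   master_table                        -- children appear under an Append node
    WHERE  created_at >= DATE '2017-01-01'
    AND    created_at <  DATE '2017-07-01';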

Re: [GENERAL] Insert large number of records

2017-09-20 Thread Alban Hertroys
On 20 September 2017 at 07:42, Job wrote: > We use a "temporary" table, populated by pg_bulkload - it takes a few minutes > in this first step. > Then, from the temporary table, data are transferred by a trigger that copies > each record into the production table. > But *this step* takes really lots…
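A hedged sketch of the set-based alternative to per-row trigger copying; staging_table, production_table, and the column list are hypothetical names:

    -- One set-based statement instead of one trigger call per row.
    INSERT INTO production_table (id, payload, created_at)
    SELECT id, payload, created_at
    FROM   staging_table;

    -- Optionally clear the staging area once the transfer commits.
    TRUNCATE staging_table;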

Re: [GENERAL] Insert large number of records

2017-09-20 Thread David G. Johnston
On Tuesday, September 19, 2017, Job wrote: > and would not care about table partitioning (the COPY command fires > partitioned-table triggers). You might want to write a script that inserts directly into the partitions and bypasses routing altogether. Insert into ... select from ... is your only opti…
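A minimal sketch of that direct-to-partition approach, assuming hypothetical month-based child tables keyed on created_at:

    -- Targeting the child table directly means no routing trigger fires.
    INSERT INTO measurements_2017_09 (id, payload, created_at)
    SELECT id, payload, created_at
    FROM   staging_table
    WHERE  created_at >= DATE '2017-09-01'
    AND    created_at <  DATE '2017-10-01';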

Re: [GENERAL] Insert large number of records

2017-09-19 Thread Alban Hertroys
> On 19 Sep 2017, at 15:47, Job wrote: > > Hi guys, > > we need to insert from one table into another (PostgreSQL 9.6.1) a large amount > of data (about 10-20 million rows) without locking the destination table. > Pg_bulkload is the fastest way, but it locks the table. > > Are there other ways? …
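Worth noting for this thread: a plain INSERT ... SELECT takes only a RowExclusiveLock on the destination, so concurrent reads and writes continue to work. A hedged sketch with hypothetical table names:

    -- RowExclusiveLock only: concurrent SELECTs and INSERTs are not blocked,
    -- unlike pg_bulkload, which locks the destination table.
    INSERT INTO destination_table
    SELECT * FROM source_table;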