On Mon, Jul 13, 2020 at 8:05 PM Jeff Janes wrote:
> On Mon, Jul 13, 2020 at 10:23 AM Henrique Montenegro wrote:
>
>> insert into users_no_dups (
>> created_ts,
>> user_id,
>> name,
>> url
>> ) (
>> select
>> created_ts,
>> user_id,
>> name,
>> url
>> from
>> users
>> ) on conflict do nothing
>
Once
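The insert statement quoted in this thread copies rows from the staging table and silently skips any row whose key already exists in `users_no_dups`. A minimal, self-contained sketch of that behavior, using Python's `sqlite3` (SQLite supports the same `ON CONFLICT DO NOTHING` clause); the table and column names come from the thread, but the primary key on `user_id`, the column types, and the sample rows are assumptions:

```python
import sqlite3

# Staging table `users` and deduplicated table `users_no_dups`, as in the
# thread.  The DDL is not shown in the thread, so the types and the unique
# key on user_id are guesses for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (created_ts INTEGER, user_id INTEGER, name TEXT, url TEXT);
    CREATE TABLE users_no_dups (
        created_ts INTEGER,
        user_id    INTEGER PRIMARY KEY,  -- unique key the conflict clause relies on
        name       TEXT,
        url        TEXT
    );
""")

conn.executemany(
    "INSERT INTO users VALUES (?, ?, ?, ?)",
    [(1, 100, "alice", "http://a.example"),
     (2, 100, "alice", "http://a.example"),   # duplicate user_id
     (3, 200, "bob",   "http://b.example")],
)

# Same shape as the statement in the thread; rows that would violate the
# unique constraint are skipped.  (The `WHERE true` is only a SQLite parser
# requirement when SELECT is combined with ON CONFLICT; PostgreSQL does not
# need it.)
conn.execute("""
    INSERT INTO users_no_dups (created_ts, user_id, name, url)
    SELECT created_ts, user_id, name, url FROM users WHERE true
    ON CONFLICT DO NOTHING
""")
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM users_no_dups").fetchone()[0])  # 2
```

Note that with DO NOTHING the first row for a given key wins; the later duplicate is discarded rather than updating the existing row.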
On Mon, Jul 13, 2020 at 12:50 PM Sebastian Dressler wrote:
> Hi Henrique,
>
> On 13. Jul 2020, at 18:42, Henrique Montenegro wrote:
>
>> On Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler wrote:
>>
>> Running the above loop worked fine for about 12 hours. Each file was
>> taking about 30 seconds to be processed.
On Mon, Jul 13, 2020 at 12:28 PM Michael Lewis wrote:
Is this an insert-only table and perhaps not being picked up by autovacuum?
If so, try a manual "vacuum analyze" before/after each batch run. You don't
mention updates, but you have also been adjusting fillfactor, so I am
not sure.
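The suggestion above is to interleave the batch inserts with a manual statistics refresh rather than waiting for autovacuum. The shape of that loop can be sketched as follows; this uses Python's `sqlite3` and SQLite's `ANALYZE` purely so the example is runnable on its own, and the table layout and batches are illustrative assumptions. On PostgreSQL, the statement between batches would be `VACUUM ANALYZE users_no_dups;`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users_no_dups (user_id INTEGER PRIMARY KEY, name TEXT)")

# Stand-ins for the 1M-row CSV files described in the thread.
batches = [
    [(1, "alice"), (2, "bob")],
    [(2, "bob"), (3, "carol")],   # overlaps the previous batch
]

for batch in batches:
    conn.executemany(
        "INSERT INTO users_no_dups VALUES (?, ?) ON CONFLICT DO NOTHING",
        batch,
    )
    # Refresh planner statistics after each batch.  On PostgreSQL the thread
    # suggests a manual `VACUUM ANALYZE users_no_dups;` here, since an
    # insert-only table may not be picked up by autovacuum.
    conn.execute("ANALYZE users_no_dups")
    conn.commit()

print(conn.execute("SELECT COUNT(*) FROM users_no_dups").fetchone()[0])  # 3
```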
On Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler wrote:
> Hi Henrique,
>
> On 13. Jul 2020, at 16:23, Henrique Montenegro wrote:
>
> [...]
>
> * Insert the data from the `users` table into the `users_no_dups` table
>
> ```
> insert into users_no_dups (
> created_ts,
> user_id,
> name,
> url
> ) (
> select
> created_ts,
> user_id,
> name,
> url
> from
> users
> ) on conflict do nothing
> ```
On Mon, Jul 13, 2020 at 10:23 AM Henrique Montenegro wrote:
Hello list,

I am having issues with performance inserting data in Postgres and would
like to ask for help figuring out the problem, as I have run out of ideas.

I have a process that generates a CSV file with 1 million records in it
every 5 minutes, and each file is about 240MB. I need this data to be