Total RAM on your host is 5GB, really? Before touching anything else, increase your RAM. That will be your big performance boost right there. Then, you can "up" your effective_cache_size and maintenance_work_mem.
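For instance, assuming the host were bumped to something like 16GB of RAM (a number picked purely for illustration), those two settings could be raised along these lines:

    -- Illustrative values only, assuming the host RAM is increased (e.g. to 16GB).
    ALTER SYSTEM SET effective_cache_size = '12GB';  -- planner's estimate of OS + PG cache, roughly 50-75% of RAM
    ALTER SYSTEM SET maintenance_work_mem = '2GB';   -- used by CREATE INDEX, VACUUM, ALTER TABLE ADD FOREIGN KEY
    SELECT pg_reload_conf();                         -- new sessions will pick up the values

Both settings can also go straight into postgresql.conf if ALTER SYSTEM isn't available on your version.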

Regards,
Michael Vitale

Mariel Cherkassky <mariel.cherkas...@gmail.com>
Monday, August 14, 2017 9:24 AM

I have performance issues with two big tables. Those tables are located on a remote Oracle database. I'm running the query: insert into local_postgresql_table select * from oracle_remote_table.
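For context, the post doesn't say how the Oracle table is exposed to PostgreSQL; assuming a foreign data wrapper such as oracle_fdw, the setup would look roughly like this (host, schema, credentials and columns below are placeholders, not values from the thread):

    -- A minimal sketch, assuming oracle_fdw; every name here is hypothetical.
    CREATE EXTENSION oracle_fdw;

    CREATE SERVER oracle_srv FOREIGN DATA WRAPPER oracle_fdw
        OPTIONS (dbserver '//oracle-host:1521/ORCL');

    CREATE USER MAPPING FOR CURRENT_USER SERVER oracle_srv
        OPTIONS (user 'SCOTT', password 'secret');

    CREATE FOREIGN TABLE oracle_remote_table (
        id   numeric,
        data text
        -- ...remaining columns matching the Oracle table...
    ) SERVER oracle_srv OPTIONS (schema 'SCOTT', table 'ORACLE_REMOTE_TABLE');

    -- The statement from the post.
    INSERT INTO local_postgresql_table SELECT * FROM oracle_remote_table;

With this kind of setup every row is fetched over the FDW connection, which is usually much slower than a bulk transfer of a flat file.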

The first table has 45M records and its size is 23GB. Importing the data from the remote Oracle database takes 1 hour and 38 minutes. After that I create 13 regular indexes on the table, at roughly 10 minutes per index, so about 2 hours and 10 minutes in total.

The second table has 29M records and its size is 26GB. Importing the data from the remote Oracle database takes 2 hours and 30 minutes. Creating the indexes takes 1 hour and 30 minutes (single-column indexes take about 5 minutes each and multi-column indexes about 11 minutes each).
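For what it's worth, each CREATE INDEX reads the whole loaded table, so the totals above roughly track the number of indexes times the per-index build time. A minimal sketch of the post-load builds (index and column names are made up; only the maintenance_work_mem value is taken from the settings listed below):

    -- Hypothetical index names and columns; each build scans the full table.
    SET maintenance_work_mem = '1500MB';  -- per-session bump for the sorts done by CREATE INDEX

    CREATE INDEX idx_big_table_col1      ON local_postgresql_table (col1);
    CREATE INDEX idx_big_table_col2_col3 ON local_postgresql_table (col2, col3);
    -- ...and so on for the remaining indexes...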

These operations are very problematic for me and I'm looking for a way to improve the performance. The parameters I assigned:

min_parallel_relation_size = 200MB
max_parallel_workers_per_gather = 5
max_worker_processes = 8
effective_cache_size = 2500MB
work_mem = 16MB
maintenance_work_mem = 1500MB
shared_buffers = 2000MB
RAM: 5GB
CPU cores: 8
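(These values can be double-checked from psql with a query along these lines; nothing in it is specific to my setup.)

    -- Show the currently active values for the settings listed above.
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('min_parallel_relation_size', 'max_parallel_workers_per_gather',
                   'max_worker_processes', 'effective_cache_size',
                   'work_mem', 'maintenance_work_mem', 'shared_buffers');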

- I tried running select count(*) on the table in both Oracle and PostgreSQL; the running times are almost equal.

- Before importing the data I drop the indexes and the constraints (the drop/recreate pattern is sketched after this list).

- I tried copying a 23GB file from the Oracle server to the PostgreSQL server and it took 12 minutes.
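The drop-before/recreate-after pattern from the second point above looks roughly like this (constraint and index names are hypothetical; the point is only the ordering):

    -- Hypothetical names; drop before the load, recreate afterwards.
    ALTER TABLE local_postgresql_table DROP CONSTRAINT IF EXISTS local_table_pkey;
    DROP INDEX IF EXISTS idx_big_table_col1;

    INSERT INTO local_postgresql_table SELECT * FROM oracle_remote_table;

    ALTER TABLE local_postgresql_table ADD CONSTRAINT local_table_pkey PRIMARY KEY (id);
    CREATE INDEX idx_big_table_col1 ON local_postgresql_table (col1);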

Please advise how I can continue. How can I improve this operation?

