From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Mariel Cherkassky
Sent: Monday, August 21, 2017 10:20 AM
To: MichaelDBA <michael...@sqlexec.com>
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] performance problem on big tables

I had a system that consisted of many objects (procedures, functions, etc.) on an 
oracle database. We decided to migrate that system to postgresql. That system 
copied a lot of big tables from a different read-only oracle database and 
ran a lot of queries against them to produce reports. The part that gets the 
data belongs to some procedures, so I can't change it freely. I'm searching for a 
way to improve the performance of the database because I'm sure that I didn't 
configure something well. Moreover, when I run complicated queries (joins 
between 4 big tables plus filtering) it takes a lot of time and I see that the 
server is using all my RAM for caching.


Probably your joins are being done on the Postgres side.

Maybe instead of Postgres pulling data from Oracle, you should try pushing data 
from Oracle to Postgres using Oracle’s Heterogeneous Services and the Postgres ODBC 
driver. In this case you do your joins and filtering on the Oracle side and just 
push the result set to Postgres.
That’s how I did the migration from Oracle to Postgres.
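As a rough sketch of that push approach (the DSN, link name, credentials, and table/column names below are all illustrative, not from the original thread), the Oracle side would look something like:

```sql
-- Prerequisite: DG4ODBC (Heterogeneous Services) configured on the Oracle
-- host, i.e. an initPGDSN.ora with HS_FDS_CONNECT_INFO pointing at the
-- Postgres ODBC DSN, plus matching listener.ora/tnsnames.ora entries.

-- Create a database link through HS to Postgres
-- (link name "pglink" and credentials are assumptions):
CREATE DATABASE LINK pglink
  CONNECT TO "pg_user" IDENTIFIED BY "pg_password"
  USING 'PGDSN';

-- Do the joins and filtering on the Oracle side, then push only the
-- (much smaller) result set across the link into a Postgres table:
INSERT INTO "report_results"@pglink ("id", "customer", "total")
SELECT t1.id, t2.customer, SUM(t3.amount)
FROM   big_table1 t1
JOIN   big_table2 t2 ON t2.id = t1.id
JOIN   big_table3 t3 ON t3.id = t1.id
WHERE  t1.created_at >= DATE '2017-01-01'
GROUP BY t1.id, t2.customer;

COMMIT;
```

The point is that the four-way join never crosses the wire; only the aggregated rows travel from Oracle to Postgres.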

Regards,
Igor Neyman
