I have the newest version:

select oracle_diag();
                          oracle_diag
------------------------------------------------------------------
 oracle_fdw 1.5.0, PostgreSQL 9.6.3, Oracle client 11.2.0.4.0,
On Sun, Aug 27, 2017 at 1:34 PM, Mariel Cherkassky
wrote:
> Hi, yes indeed I'm using Laurenz's oracle_fdw extension. I tried to run it
> but I'm getting an error
>
> dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240
> );
> ERROR: syntax error
Hi, yes indeed I'm using Laurenz's oracle_fdw extension. I tried to run it
but I'm getting an error:
dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch
10240 );
ERROR: syntax error at or near "10240"
LINE 1: ...N TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 );
dbch=#
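For what it's worth, the syntax error above is most likely because foreign-table option values must be single-quoted string literals, and SET only applies to an option that is already defined on the table (otherwise ADD is needed). A sketch of the corrected statement, using the table name from the thread:

```sql
-- Option values are string constants, so 10240 must be quoted.
-- Use ADD if prefetch has never been set on this table, SET if it has.
ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS (ADD prefetch '10240');
```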
Hi Claudio, how can I do that? Can you explain what this option is?
2017-08-24 2:15 GMT+03:00 Claudio Freire :
> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky
> wrote:
> > To summarize, I still have performance problems. My current
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Mariel Cherkassky
Sent: Monday, August 21, 2017 10:20 AM
To: MichaelDBA <michael...@sqlexec.com>
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] performance problem on big tables
I had a system that consists of many objects (procedures, functions, ...) on
an Oracle database. We decided to migrate that system to PostgreSQL. That
system copied a lot of big tables from a different read-only Oracle
database and performed a lot of queries on them to produce reports. The part
of
Maybe I missed it in this continuous thread activity, but have you tried
"ora2pg"? You can export from Oracle and import to Postgres in
parallel jobs. The import commands use the efficient COPY command by
default (unless you override it in the ora2pg configuration file). You
can do the
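As a rough sketch of what the ora2pg route could look like (the connection strings, credentials, and job counts below are illustrative assumptions, not values from the thread):

```
# ora2pg.conf (excerpt, values are placeholders)
ORACLE_DSN      dbi:Oracle:host=oracle-host;sid=ORCL
ORACLE_USER     scott
ORACLE_PWD      tiger
PG_DSN          dbi:Pg:dbname=dbch;host=localhost
TYPE            COPY

# then run the migration with parallel jobs:
# -J: parallel Oracle data-extraction connections, -j: parallel load processes
ora2pg -c ora2pg.conf -J 4 -j 4
```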
I'm searching for a way to improve the current performance. I'm not
interested in using a different tool or writing something new, because I'm
trying to migrate a system from an Oracle database to a PostgreSQL database.
2017-08-21 14:53 GMT+03:00 Daniel Blanch Bataller <
> On 21 Aug 2017, at 13:27, Mariel Cherkassky wrote:
I guess you
All this operation runs as part of a big transaction. How can I create a
dump on the Oracle server and copy it to the PostgreSQL server from within
a PostgreSQL transaction? Chopping the table is optional when I use
copy, but when I use copy from the remote Oracle table it takes longer to create
> On 21 Aug 2017, at 10:00, Mariel Cherkassky wrote:
To summarize, I still have performance problems. My current situation:
I'm trying to copy the data of many tables in the Oracle database into my
PostgreSQL tables. I'm doing so by running insert into
local_postgresql_temp select * from remote_oracle_table. The performance of
this operation is
When I run COPY from a local table the write speed is 22 MB/s. When I
use COPY from the remote_oracle_table it writes 3 MB/s. SCP between the
servers copies very fast. How should I continue?
2017-08-20 14:00 GMT+03:00 Mariel Cherkassky :
I realized something weird. When I'm using the copy utility of
PostgreSQL in order to create a dump from a local table in my PostgreSQL db,
it takes 20 minutes for a 32G table. When I try to use copy for a foreign
table (on an Oracle database) it takes more than 2 hours. During the copy
operation
This server is dedicated to being a PostgreSQL production database, so
PostgreSQL is the only thing that runs on the server. The fs that I'm using
is xfs. I'll add two different disks - one for the WALs and one for the
temp tablespace. Regarding the disks, what size should they be considering
On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky
wrote:
> I checked with the storage team in the company and they saw that I have a lot
> of I/O on the server. How should I reduce the I/O that PostgreSQL uses?
Do you have concurrent activity on that server?
What
I would just check how long it takes to copy 3GB using a standard copy command;
on my computer it took 10 secs.
> On 17 Aug 2017, at 11:00, Mariel Cherkassky wrote:
I checked with the storage team in the company and they saw that I have
a lot of I/O on the server. How should I reduce the I/O that PostgreSQL
uses?
2017-08-17 9:25 GMT+03:00 Mariel Cherkassky :
Hi Daniel,
I already tried setting the destination table to unlogged - it improved the
performance slightly. Is there a way to make sure that I/O is the problem?
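For reference, the unlogged experiment mentioned above can be done at creation time or on an existing table (the ALTER forms exist since PostgreSQL 9.5); the table names here are placeholders:

```sql
CREATE UNLOGGED TABLE local_postgresql_temp (LIKE template_table INCLUDING ALL);

-- or switch an existing table (this rewrites the table):
ALTER TABLE local_postgresql_temp SET UNLOGGED;

-- and back once the load is done, to regain crash safety:
ALTER TABLE local_postgresql_temp SET LOGGED;
```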
2017-08-17 0:46 GMT+03:00 Daniel Blanch Bataller <
daniel.blanch.batal...@gmail.com>:
Seems your disks are too slow. On my laptop (nothing special, just one disk),
using COPY I can dump 3G in ~20 secs; loading takes 120 secs; bare copying 3G
takes 10 secs.
A similar proportion to yours, but much faster.
confirm I/O is your bottleneck, and tell us how you solved your problem
My server is virtual and it has a virtual HD from a VNX storage machine. The
logs and the data are on the same disk.
2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <
daniel.blanch.batal...@gmail.com>:
Considering it has to write logs and data at checkpoints, I don't see it as
particularly slow compared to the extract phase. What kind of disks do you
have, SSD or regular disks? Different disks for transaction logs and data?
> On 16 Aug 2017, at 15:54, Mariel Cherkassky wrote:
I ran the copy command via psql to create a local dump of a 3G table and it
took 134059.732 ms =~ 2 minutes. After that I imported the data via copy
and it took 458648.677 ms =~ 7 minutes. So the copy command works, but pretty
slowly.
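For anyone reproducing these measurements, a psql session along these lines reports the elapsed time of each command (the file and table names are placeholders):

```sql
\timing on
\copy (SELECT * FROM big_table) TO '/tmp/big_table.dump' WITH BINARY
\copy big_table_restore FROM '/tmp/big_table.dump' WITH BINARY
```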
2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <
See if the copy command is actually working; copy should be very fast from
your local disk.
> On 16 Aug 2017, at 14:26, Mariel Cherkassky wrote:
After all the changes to the memory parameters, the same operation (without
the copy utility) didn't run much faster - it took one minute less. I made
a test with the copy command (without the 'with binary') and it took 1.5
hours to create the dump file on my local PostgreSQL server. Then I tried to
2017-08-15 18:13 GMT+02:00 Jeff Janes :
On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky
wrote:
> Hi,
> So I ran the checks that Jeff mentioned:
> \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour
> and 35 minutes
So 26G takes 95 minutes, or about 274 MB/minute or 4.6 MB/second? Sound
Hi,
So I ran the checks that Jeff mentioned:
\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour
and 35 minutes
\copy local_postresql_table from /tmp/tmp with binary - didn't run because
the remote oracle database is currently under maintenance work.
So I decided to follow
On Mon, Aug 14, 2017 at 6:24 AM, Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> I have performance issues with two big tables. Those tables are located on
> a remote Oracle database. I'm running the query: insert into
> local_postgresql_table select * from oracle_remote_table.
>
>
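For context, the setup being discussed presumably looks something like the following (server, credentials, and table names are placeholders; the thread later confirms oracle_fdw is the FDW in use):

```sql
CREATE EXTENSION oracle_fdw;

CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//oracle-host:1521/ORCL');

CREATE USER MAPPING FOR CURRENT_USER SERVER oradb
    OPTIONS (user 'scott', password 'tiger');

-- pull in the foreign table definition instead of typing every column
IMPORT FOREIGN SCHEMA "SCOTT" LIMIT TO ("ORACLE_REMOTE_TABLE")
    FROM SERVER oradb INTO public;

-- the statement whose performance is under discussion
INSERT INTO local_postgresql_table SELECT * FROM oracle_remote_table;
```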
Moving that many gigs of data across your network could also take a long
time simply depending on your network configuration. Before spending a
huge amount of energy tuning postgresql, I'd probably look at how long it
takes to simply copy 20 or 30 G of data between the two machines.
Hi.
In general using COPY is *much faster* than anything else. You can even split
the data load and run it in parallel; start with as many jobs as processors
you have. Same with indexes: build them in parallel. By parallel I mean
several psql sessions running at the same time.
Tuning postgres will
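A minimal sketch of that parallel-load idea, assuming the data has already been exported to a CSV file (all file, database, and table names are placeholders):

```shell
# split the dump into 4 chunks on line boundaries (GNU split)
split -n l/4 big_table.csv chunk_

# load the chunks through 4 concurrent psql sessions
for f in chunk_*; do
  psql -d dbch -c "\copy local_postgresql_table FROM '$f' WITH (FORMAT csv)" &
done
wait
```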
Total RAM on your host is 5GB, really? Before touching anything else,
increase your RAM. That will be your big performance boost right
there. Then, you can "up" your effective_cache_size and
maintenance_work_mem.
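Those two settings can be raised without a restart, for example (the values are illustrative and should be sized against the new RAM):

```sql
ALTER SYSTEM SET effective_cache_size = '4GB';
ALTER SYSTEM SET maintenance_work_mem = '1GB';
SELECT pg_reload_conf();  -- new values take effect on reload
```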
Regards,
Michael Vitale