On 19/08/17 13:49, Mark Kirkwood wrote:
On 19/08/17 02:21, Jeremy Finzel wrote:
On Tue, Aug 15, 2017 at 12:07 PM, Scott Marlowe
mailto:scott.marl...@gmail.com>> wrote:
So do iostat or iotop show you if / where your disks are working
hardest? Or is this CPU overhead that's killing performance?
On 19 August 2017 at 04:46, kimaidou wrote:
> When we call the WHERE on the view:
>
> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
> SELECT *
> FROM "qgep"."vw_qgep_reach"
> WHERE "progression_geometry" &&
> st_makeenvelope(1728327.03249295568093657,8240789.26074041239917278,1728608.1098757213912904
Hi all
I also tried to change the values of join_collapse_limit and
from_collapse_limit to higher values than the default: 12, 50 or even 100, with
no improvement on the query plan.
Is this typical behavior, or is there something particular in my query
that causes this big difference between the raw
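For reference, a minimal sketch of the kind of session-level experiment described above; the limit value is one of those quoted, and the envelope coordinates are placeholders rather than the ones from the original query:

-- Raise the planner's collapse limits for this session only, then re-run
-- the query to see whether the join order (and therefore the plan) changes.
SET join_collapse_limit = 50;
SET from_collapse_limit = 50;

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT *
FROM "qgep"."vw_qgep_reach"
WHERE "progression_geometry" && st_makeenvelope(0, 0, 1000, 1000);  -- placeholder envelope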
On Sat, Aug 19, 2017 at 10:37 AM, anand086 wrote:
Your email is very hard to read; the formatting and line wrapping are
heavily mangled. You might want to attach the plans as file attachments
instead of, or in addition to, putting them in the body.
> -> Index Only Scan using ui_nkey_test_tab on
I think your query is a bit confusing and has many subqueries, so I tried
to simplify it.
If you can't have more than one row with import_num = 0 for the same login, try this:
SELECT count(*)
FROM test_tab tab1
LEFT JOIN test_tab tab2
  ON tab1.login = tab2.login AND tab2.import_num = '0'
WHERE tab2.login IS NULL
Do you have an index on the login column?
If not, try creating one and taking off those DISTINCTs.
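A sketch of the suggested index; the index name is made up, while test_tab and login are the names from the query above:

-- Hypothetical index name; the table and column come from the thread.
CREATE INDEX idx_test_tab_login ON test_tab (login);

With an index on login, the anti-join form above can be driven by index scans on both sides of the self-join, which is what the advice about dropping the DISTINCTs is aiming at.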
On Mon, Aug 21, 2017 at 3:33 PM, Justin Pryzby wrote:
> On Sat, Aug 19, 2017 at 10:37:56AM -0700, anand086 wrote:
Any thoughts on this?
If the part of your procedures that gets the data is a query predicate, then
you can still use ora2pg.
Sent from my iPhone
> On Aug 21, 2017, at 10:35 AM, Igor Neyman wrote:
>
> From: pgsql-performance-ow...@postgresql.org On Behalf Of Mariel Cherkassky
I already finished migrating the system from oracle to postgresql. Right
now I'm trying to improve its performance: I'm bringing data from another
read-only database that is updated every minute. I can't push data from the
oracle side to the postgresql side because the oracle database is read only.
From: pgsql-performance-ow...@postgresql.org On Behalf Of Mariel Cherkassky
Sent: Monday, August 21, 2017 10:20 AM
To: MichaelDBA
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] performance problem on big tables
I had a system that consisted of many objects (procedures, functions, ...) on an
oracle database. We decided to migrate that system to postgresql. That
system copied a lot of big tables from a different read-only oracle
database and performed a lot of queries on them to produce reports. The part
of getting the data
Maybe I missed it in this continuous thread activity, but have you tried
"ora2pg"? You can export from Oracle and import to Postgres in
parallel jobs. The import commands use the efficient COPY command by
default (unless you override it in the ora2pg configuration file). You
can do the export
I'm searching for a way to improve the current performance. I'm not
interested in using a different tool or writing something new, because I'm
trying to migrate a system on an oracle database to a postgresql database.
2017-08-21 14:53 GMT+03:00 Daniel Blanch Bataller <
daniel.blanch.batal...@gmail.co
> On Aug 21, 2017, at 13:27, Mariel Cherkassky wrote:
>
> All of this runs as part of a big transaction. How can I create a dump on
> the oracle server and copy it to the postgresql server from within a
> postgresql transaction?
I guess you could create a user defined function
All of this runs as part of a big transaction. How can I create a dump on
the oracle server and copy it to the postgresql server from within a
postgresql transaction? Chopping the table is optional when I use copy,
but when I use copy to the remote oracle table it takes longer to create the
> On Aug 21, 2017, at 10:00, Mariel Cherkassky wrote:
>
> To summarize, I still have performance problems. My current situation :
> I'm trying to copy the data of many tables in the oracle database into my
> postgresql tables. I'm doing so by running insert into local_postgresql_temp
To summarize, I still have performance problems. My current situation:
I'm trying to copy the data of many tables in the oracle database into my
postgresql tables. I'm doing so by running insert into
local_postgresql_temp select * from remote_oracle_table. The performance of
this operation is very slow.
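For readers of the archive, a minimal sketch of the pattern described above. The two table names come from the message; the foreign-table setup is an assumption (the thread does not say how remote_oracle_table is exposed, so oracle_fdw is used purely as an illustration), and the connection details and columns are made up:

-- Assumed setup: the Oracle table is exposed through oracle_fdw.
CREATE EXTENSION IF NOT EXISTS oracle_fdw;

CREATE SERVER oracle_src FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//oracle-host:1521/ORCL');      -- placeholder DSN

CREATE USER MAPPING FOR CURRENT_USER SERVER oracle_src
    OPTIONS (user 'app_reader', password 'secret');    -- placeholder credentials

CREATE FOREIGN TABLE remote_oracle_table (
    id      integer,
    payload text                                        -- placeholder columns
) SERVER oracle_src OPTIONS (schema 'APP', table 'BIG_TABLE');

-- The statement the poster describes: pull the whole table across in one pass.
CREATE TEMP TABLE local_postgresql_temp (id integer, payload text);
INSERT INTO local_postgresql_temp SELECT * FROM remote_oracle_table;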