On Mon, Aug 14, 2017 at 1:53 PM, Jeremy Finzel wrote:
This particular db is on 9.3.15. Recently we had a serious performance
degradation related to a batch job that creates 4-5 temp tables and 5
indexes. It is a really badly written job, but what really confuses us is
that this job has been running for years with no issue remotely approaching
this one.
Scott Marlowe writes:
Also, if you're using newly loaded data, the db could be setting hint
bits on the first select, etc.
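One way to avoid paying that hint-bit write cost on the first read is to set the bits eagerly right after the load. A minimal sketch (the table name and file path are hypothetical placeholders):

```sql
-- Option 1: run VACUUM right after the bulk load so that it, rather than
-- the first SELECT, performs the hint-bit writes (ANALYZE refreshes stats):
VACUUM ANALYZE my_batch_table;

-- Option 2 (9.3+): COPY ... FREEZE into a table created in the same
-- transaction writes tuples that are already frozen and hinted:
BEGIN;
CREATE TABLE my_batch_table (id int, payload text);
COPY my_batch_table FROM '/tmp/batch.csv' WITH (FORMAT csv, FREEZE);
COMMIT;
```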
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On Mon, Aug 14, 2017 at 6:24 AM, Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> I have performance issues with two big tables. Those tables are located on
> a remote Oracle database. I'm running the query: insert into
> local_postgresql_table select * from oracle_remote_table.
> The first table has 45M records and its size is 23G. The import of the data
> from the oracle remote
Total RAM on your host is 5GB, really? Before touching anything else,
increase your RAM; that will be your biggest performance boost right
there. Then you can raise effective_cache_size and
maintenance_work_mem.
Regards,
Michael Vitale
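For illustration, the settings Michael mentions might look like the following postgresql.conf fragment. The values assume a hypothetical host upgraded to 32GB of RAM and are starting points, not tuned numbers:

```
effective_cache_size = 24GB    # rough estimate of memory available for caching (planner hint, not an allocation)
maintenance_work_mem = 1GB     # memory for CREATE INDEX, VACUUM, and similar maintenance operations
```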
Hi.
In general, using COPY is *much faster* than anything else. You can even split
the data load and run it in parallel, starting with as many jobs as processors
you have. Same with indexes: build them in parallel. By parallel I mean several
psql sessions running at the same time.
Tuning postgres will
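A rough sketch of that parallel load, assuming the data has already been exported to a local CSV (the table name, file names, and database name are placeholders; the psql invocations are printed rather than executed, since they assume a reachable database):

```shell
set -e

# Stand-in data file for illustration (1000 rows of "id,payload").
seq 1 1000 | awk '{print $1 ",row" $1}' > bigtable.csv

# Split into 4 chunks along line boundaries, one chunk per parallel job.
split -n l/4 bigtable.csv chunk_

# One backgrounded psql COPY per chunk; echoed here instead of run,
# since it assumes a reachable database called "mydb".
for f in chunk_*; do
  printf '%s\n' "psql mydb -c \"\\copy local_postgresql_table FROM '$f' CSV\" &"
done
# wait   # uncomment along with the psql lines above to run the jobs for real
```

Each chunk loads through its own backend, so the work spreads across CPUs; keep the job count at or below the number of processors, as suggested above.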
Moving that many gigs of data across your network could also take a long
time simply depending on your network configuration. Before spending a
huge amount of energy tuning postgresql, I'd probably look at how long it
takes to simply copy 20 or 30 G of data between the two machines.
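To put a number on that: a back-of-the-envelope estimate for the 23G table, assuming a 1 Gbit/s link at roughly 70% effective throughput (both figures are assumptions, not measurements):

```shell
# seconds = size_in_bits / effective_bits_per_second
awk 'BEGIN {
  size_gb = 23            # table size from the original post
  gbps    = 1.0           # assumed link speed
  eff     = 0.7           # assumed protocol/stack efficiency
  secs = size_gb * 8e9 / (gbps * 1e9 * eff)
  printf "~%.0f seconds (%.1f minutes)\n", secs, secs / 60
}'
# prints: ~263 seconds (4.4 minutes)
```

On a 100 Mbit/s link the same transfer is ten times longer, which is why measuring the raw copy time first is worthwhile before tuning anything in postgresql.conf.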