Thank you for the clarification.
On Nov 20, 2017 at 14:28, "Michael Paquier"
wrote:
> On Mon, Nov 20, 2017 at 6:02 PM, Mariel Cherkassky
> wrote:
> > This morning, I set wal_keep_segments to 100 and I set the
> > archive_timeout to 6 minutes. Now, afte
Hi,
I'm trying to understand the WAL behavior in my PostgreSQL environment.
My WAL settings are:
wal_keep_segments = 200
max_wal_size = 3GB
min_wal_size = 80MB
archive_command = 'cp %p /PostgreSQL-wal/9.6/pg_xlog/wal_archives/%f'
archive_timeout = 10
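As an aside, the PostgreSQL documentation recommends that archive_command refuse to overwrite an already-archived file; a minimal sketch using the same paths as the settings above:

```
archive_command = 'test ! -f /PostgreSQL-wal/9.6/pg_xlog/wal_archives/%f && cp %p /PostgreSQL-wal/9.6/pg_xlog/wal_archives/%f'
```

With the guard, a re-sent segment makes the command fail instead of silently clobbering the existing archive copy.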
#checkpoint_flush_afte
2017-10-02 16:45 GMT+03:00 Gerardo Herzig :
>
>
> ----- Original Message -----
> > From: "Mariel Cherkassky"
> > To: "Andreas Kretschmer"
> > CC: pgsql-performance@postgresql.org
> > Sent: Monday, October 2, 2017 10:25:19
> >
On 01.10.2017 at 14:41, Mariel Cherkassky wrote:
>
>> Hi,
>> I need to use the max function in my query. I had very bad performance
>> when I used max():
>>
>>SELECT Ma.User_Id,
>> COUNT(*) COUNT
>>
Hi,
I need to use the max function in my query. I had very bad performance when
I used max():
SELECT Ma.User_Id,
COUNT(*) COUNT
FROM Manuim Ma
WHERE Ma.Bb_Open_Date =
(SELECT max(Bb_Open_Da
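One common rewrite of this pattern (assuming the truncated subquery is `SELECT max(Bb_Open_Date) FROM Manuim`, and that the query groups by `User_Id`): fetch the maximum via `ORDER BY ... LIMIT 1` so an index on `Bb_Open_Date` can be used. A sketch, with an illustrative index name:

```
CREATE INDEX manuim_bb_open_date_idx ON Manuim (Bb_Open_Date);

SELECT Ma.User_Id,
       COUNT(*) AS cnt
FROM   Manuim Ma
WHERE  Ma.Bb_Open_Date = (SELECT Bb_Open_Date
                          FROM   Manuim
                          ORDER  BY Bb_Open_Date DESC NULLS LAST
                          LIMIT  1)
GROUP  BY Ma.User_Id;
```

`DESC NULLS LAST` matches the default ascending-index sort order, which lets the planner satisfy the subquery with a single index probe instead of a full scan.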
16:24 GMT+03:00 George Neuner :
> On Thu, 24 Aug 2017 16:15:19 +0300, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
> >I'm trying to understand what PostgreSQL is doing in an issue that I'm
> having.
> >Our app team wrote a function that runs with
Anyone?
2017-08-24 16:15 GMT+03:00 Mariel Cherkassky :
> I'm trying to understand what PostgreSQL is doing in an issue that I'm
> having. Our app team wrote a function that runs with a cursor over the
> results of a query and via the utl_file func they write some columns
Sun, Aug 27, 2017 at 1:34 PM, Mariel Cherkassky
> wrote:
> > Hi, yes indeed I'm using Laurenz's oracle_fdw extension. I tried to run
> it
> > but I'm getting error
> >
> > dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch
> 10240
>
dbch=# alter foreign table tc_sub_rate_ver_prod OPTIONS (SET prefetch
'10240');
ERROR: option "prefetch" not found
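In passing: `OPTIONS (SET option value)` only modifies an option that is already defined on the foreign table, which is why the command fails with `option "prefetch" not found`. If the option was never defined, `ADD` is the right verb (a sketch, assuming oracle_fdw's table-level `prefetch` option):

```
ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS (ADD prefetch '10240');
```

Once the option exists, later adjustments can use `SET`, and `DROP prefetch` removes it again.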
2017-08-24 19:14 GMT+03:00 Claudio Freire :
> On Thu, Aug 24, 2017 at 4:51 AM, Mariel Cherkassky
> wrote:
> > Hi Claudio, how ca
I'm trying to understand what PostgreSQL is doing in an issue that I'm having.
Our app team wrote a function that runs with a cursor over the results of a
query and via the utl_file func they write some columns to a file. I don't
understand why, but PostgreSQL writes the data into the file in the fs in
Hi Claudio, how can I do that? Can you explain what this option is?
2017-08-24 2:15 GMT+03:00 Claudio Freire :
> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky
> wrote:
> > To summarize, I still have performance problems. My current situation :
> >
> > I'
Hi, I have a query that I run on my PostgreSQL 9.6 database; it runs for
more than 24 hours and doesn't finish.
My select consists of a few joins:
SELECT a.inst_prod_id,
product_id,
nap_area2,
only.
2017-08-21 17:35 GMT+03:00 Igor Neyman :
>
>
> *From:* pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-
> ow...@postgresql.org] *On Behalf Of *Mariel Cherkassky
> *Sent:* Monday, August 21, 2017 10:20 AM
> *To:* MichaelDBA
> *Cc:* pgsql-performance
import commands use the efficient COPY command by default
> (unless you override it in the ora2pg configuration file). You can do the
> export and subsequent import in memory, but I would suggest the actual file
> export and import so you can take advantage of the parallel feature.
>
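For the parallel feature mentioned above, ora2pg exposes parallelism through configuration directives; a sketch of an `ora2pg.conf` fragment (directive names from ora2pg, values illustrative):

```
# ora2pg.conf fragment -- parallel data export
JOBS            4    # number of parallel processes writing/importing data
ORACLE_COPIES   4    # number of parallel connections extracting from Oracle
```

Combined with the default COPY-based import, this lets the extract and load phases run concurrently instead of one table at a time.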
.blanch.batal...@gmail.com>:
>
> On Aug 21, 2017, at 13:27, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
> All this operation runs as part of a big transaction that I run.
>
> How can I create a dump in the oracle server and copy it to the postgresql
>
the dump.
2017-08-21 11:37 GMT+03:00 Daniel Blanch Bataller <
daniel.blanch.batal...@gmail.com>:
>
> On Aug 21, 2017, at 10:00, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
> To summarize, I still have performance problems. My current situation :
>
ize = 12GB
work_mem = 128MB
maintenance_work_mem = 4GB
shared_buffers = 2000MB
RAM : 16G
CPU CORES : 8
How can I increase the writes? How can I get the data faster from the
Oracle database to my PostgreSQL database?
2017-08-20 14:00 GMT+03:00 Mariel Cherkassky :
> I realized something we
When I run COPY from a local table the write speed is 22 MB/s. When I
use COPY from remote_oracle_Table it writes 3 MB/s. SCP between the
servers copies very fast. How should I continue?
2017-08-20 14:00 GMT+03:00 Mariel Cherkassky :
> I realized something weird. When I'm preform
from the foreign table I don't see a lot of write operations; with
iotop I see that it writes 3 MB/s. What else can I check?
2017-08-20 9:39 GMT+03:00 Mariel Cherkassky :
> This server is dedicated to being a PostgreSQL production database, therefore
> PostgreSQL is the only thing that runs on the
table. Should I
run vacuum before or after the operation ?
2017-08-17 19:37 GMT+03:00 Claudio Freire :
> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky
> wrote:
> > I checked with the storage team in the company and they saw that I have
> a lot
> > of I/O on the server. Ho
I checked with the storage team in the company and they saw that I have a
lot of I/O on the server. How should I reduce the I/O that PostgreSQL
uses?
2017-08-17 9:25 GMT+03:00 Mariel Cherkassky :
> Hi Daniel,
> I already tried to set the destination table to unlogged - it improv
ou can cut import time by half if you set your destination table
> to unlogged (postgres will write half the data, it will save the
> transaction log writing). Remember to set it to logged when finished!!
>
>
> Regards,
>
> Daniel
>
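The unlogged-table trick quoted above can be sketched as follows (on PostgreSQL 9.5 or later, where `SET UNLOGGED`/`SET LOGGED` exist; the table name is illustrative):

```
ALTER TABLE local_postgresql_table SET UNLOGGED;
-- ... run the bulk INSERT ... SELECT or COPY here ...
ALTER TABLE local_postgresql_table SET LOGGED;
```

Note the trade-offs: an unlogged table is truncated after a crash, and switching back to LOGGED rewrites the whole table into the WAL, so the final step is not free.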
> On Aug 16, 2017, at 16:32, Mariel
cularly slow compared to the extract phase. What kind of disks do you
> have, SSD or regular disks? Different disks for transaction logs and data?
>
>
> On Aug 16, 2017, at 15:54, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
> I run the copy command via ps
<
daniel.blanch.batal...@gmail.com>:
> See if the copy command is actually working, copy should be very fast from
> your local disk.
>
>
> On Aug 16, 2017, at 14:26, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>
> After all the changes of the memory para
roving the
memory parameters and the memory on the server didn't help, and for now the
copy command doesn't help either.
2017-08-15 20:14 GMT+03:00 Scott Marlowe :
> On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky
> wrote:
> > Hi,
> > So I ran the checks that Jeff mentione
about the new memory parameters that I configured?
2017-08-14 16:24 GMT+03:00 Mariel Cherkassky :
> I have performance issues with two big tables. Those tables are located on
> a remote Oracle database. I'm running the query: insert into
> local_postgresql_table
I have performance issues with two big tables. Those tables are located on
a remote Oracle database. I'm running the query: insert into
local_postgresql_table select * from oracle_remote_table.
The first table has 45M records and its size is 23G. The import of the data
from the remote Oracle dat