Re: [PERFORM] performance problem on big tables

2017-08-24 Thread Mariel Cherkassky
Hi Claudio, how can I do that? Can you explain what this option is?

2017-08-24 2:15 GMT+03:00 Claudio Freire :

> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky
>  wrote:
> > To summarize, I still have performance problems. My current situation:
> >
> > I'm trying to copy the data of many tables in the Oracle database into my
> > PostgreSQL tables. I'm doing so by running insert into
> > local_postgresql_temp select * from remote_oracle_table. The performance
> > of this operation is very slow, so I tried to find the reason for that
> > and maybe choose a different alternative.
> >
> > 1) First method - insert into local_postgresql_table select * from
> > remote_oracle_table. This generated a total disk write of 7 MB/s and an
> > actual disk write of 4 MB/s (iotop). For a 32 GB table it took 2 hours
> > and 30 minutes.
> >
> > 2) Second method - copy (select * from oracle_remote_table) to /tmp/dump
> > generates a total disk write of 4 MB/s and an actual disk write of
> > 100 KB/s. The copy utility is supposed to be very fast, but it seems
> > very slow.
>
> Have you tried increasing the prefetch option on the remote table?
>
> If you left it at its default, latency could be hurting your ability
> to saturate the network.
>


[PERFORM] printing results of query to file in different times

2017-08-24 Thread Mariel Cherkassky
I'm trying to understand what PostgreSQL is doing in an issue that I'm having.
Our app team wrote a function that loops with a cursor over the results of a
query and writes some columns to a file via the utl_file function. I don't
understand why, but PostgreSQL writes the data to the file on the filesystem
in parts. I mean that it runs the query, it takes time to get the results
back, and once the results are back PostgreSQL writes the data to the file and
then suddenly stops for X minutes. After those X minutes it starts writing the
data again, and it continues that way until it's done. The query returns
*100* rows in total. I want to understand why it stops suddenly. There aren't
any locks in the database during this operation.

My function looks like this:

create or replace function func(a, b, c, ...)  -- parameter types elided in the original
returns void as $$
declare
    cr cursor for select a, b, c, d, e ...;    -- query elided in the original
    cr_record record;
    file utl_file.file_type;                   -- assumed opened earlier with utl_file.fopen()
begin
    raise notice 'starting loop time - %', timeofday();

    for cr_record in cr loop
        raise notice 'print to file - %', timeofday();
        -- orafce's utl_file has no write(); put_line() is the closest call
        perform utl_file.put_line(file, cr_record::text);
    end loop;
end;
$$ language plpgsql;

Running it, I see the following output in the log:

starting loop 16:00

print to file : 16:03

print to file : 16:03

print to file : 16:07

print to file : 16:07

print to file : 16:07

print to file : 16:10

..



Can somebody explain this kind of behavior to me? Why is it taking so much
time to write, and at different minutes, after the query has already been
executed and finished? Maybe I'm getting only part of the rows from the
cursor?
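One way to narrow this down is to time each fetch separately. Below is a
minimal, self-contained sketch (generate_series stands in for the real query,
and the utl_file call is elided); clock_timestamp(), unlike now(), keeps
advancing inside a transaction, which is what we need here:

do $$
declare
    r  record;
    t0 timestamptz := clock_timestamp();
begin
    for r in select g as val from generate_series(1, 100) g loop
        raise notice 'fetched row % after %', r.val, clock_timestamp() - t0;
        -- the utl_file.put_line() call would go here
        t0 := clock_timestamp();
    end loop;
end $$;

If the long gaps show up between the notices, the time is going into fetching
from the cursor (for example a slow underlying query or foreign scan) rather
than into the file writes.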


Re: [PERFORM] performance problem on big tables

2017-08-24 Thread Claudio Freire
On Thu, Aug 24, 2017 at 4:51 AM, Mariel Cherkassky
 wrote:
> Hi Claudio, how can I do that? Can you explain what this option is?
>
> 2017-08-24 2:15 GMT+03:00 Claudio Freire :
>>
>> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky
>>  wrote:
>> > To summarize, I still have performance problems. My current situation:
>> >
>> > I'm trying to copy the data of many tables in the Oracle database into
>> > my
>> > PostgreSQL tables. I'm doing so by running insert into
>> > local_postgresql_temp select * from remote_oracle_table. The performance
>> > of this operation is very slow, so I tried to find the reason for that
>> > and maybe choose a different alternative.
>> >
>> > 1) First method - insert into local_postgresql_table select * from
>> > remote_oracle_table. This generated a total disk write of 7 MB/s and an
>> > actual disk write of 4 MB/s (iotop). For a 32 GB table it took 2 hours
>> > and 30 minutes.
>> >
>> > 2) Second method - copy (select * from oracle_remote_table) to /tmp/dump
>> > generates a total disk write of 4 MB/s and an actual disk write of
>> > 100 KB/s. The copy utility is supposed to be very fast, but it seems
>> > very slow.
>>
>> Have you tried increasing the prefetch option on the remote table?
>>
>> If you left it at its default, latency could be hurting your ability
>> to saturate the network.
>
>

Please don't top-post.

I'm assuming you're using this: http://laurenz.github.io/oracle_fdw/

If you check the docs, you'll see this:
https://github.com/laurenz/oracle_fdw#foreign-table-options

So I'm guessing you could:

ALTER FOREIGN TABLE remote_table OPTIONS (SET prefetch '10240');
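To see what the option is currently set to first (a quick sketch; it assumes
the foreign table really is named remote_table), the per-table options are
visible in the pg_foreign_table catalog:

SELECT ftoptions
FROM pg_foreign_table
WHERE ftrelid = 'remote_table'::regclass;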




[PERFORM] latest perf tuning info

2017-08-24 Thread Sam Gendler
Many years ago, I considered myself fairly expert when it came to
performance tuning and query optimization in PostgreSQL, but I've been
hiding out inside (insert big company that doesn't use PostgreSQL here) for
a long while as PostgreSQL has continued to evolve, and much has changed
since Greg's book shipped in 2010. I'm wondering if there is anything more
up-to-date than that excellent reference, especially anything that touches
on tuning and performance in AWS - RDS or running it directly on an EC2
instance.

But really, I'm mostly looking for documentation on query optimization and
tuning for developers, rather than administration stuff.  I've got a lot of
fairly inexperienced engineers bumping into their first performance
problems resulting from very naive schema and query design as datasets have
started to get large, and I'd rather point them at good documentation than
just do their tuning for them. It's all low-hanging fruit right now, but it
won't stay that way for long.

Thanks,

--sam


Re: [PERFORM] Hi

2017-08-24 Thread Daulat Ram
Hello,

I would like to request your help with this query.

SELECT 'Inspection Completed' AS "ALL Status", count(*) AS "Number of Count"
  FROM ud_document WHERE status = 'Inspection Completed'
UNION
SELECT 'Pending', count(*)
  FROM ud_document WHERE status = 'Pending'
UNION
SELECT 'Approved', count(*)
  FROM ud_document WHERE status = 'Approved'
UNION
SELECT 'Rejected', count(*)
  FROM ud_document WHERE status = 'Rejected'
UNION
SELECT 'Payment Due', count(*)
  FROM ud_document WHERE payment_status = 'Payment Due'
UNION
SELECT 'Payment Done', count(*)
  FROM ud_document WHERE payment_status = 'Payment Done'

And now I want to exclude uniqueid = '201708141701018' from the above query.
How can it be done?
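The simplest fix is to append AND uniqueid <> '201708141701018' to the WHERE
clause of each branch. A more compact rewrite (a sketch, assuming the column
is named uniqueid as above, and PostgreSQL 9.4 or later for FILTER) applies
the exclusion once and replaces the six scans of ud_document with one,
returning a single row of six counts instead of six rows:

SELECT count(*) FILTER (WHERE status = 'Inspection Completed') AS "Inspection Completed",
       count(*) FILTER (WHERE status = 'Pending')              AS "Pending",
       count(*) FILTER (WHERE status = 'Approved')             AS "Approved",
       count(*) FILTER (WHERE status = 'Rejected')             AS "Rejected",
       count(*) FILTER (WHERE payment_status = 'Payment Due')  AS "Payment Due",
       count(*) FILTER (WHERE payment_status = 'Payment Done') AS "Payment Done"
  FROM ud_document
 WHERE uniqueid <> '201708141701018';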


Regards,
Daulat


