Hey Kevin,

Thanks for the great help!
I analyzed one query before changing the parameters:

 explain select count(distinct a.subsno )  from subsexpired a where
a.subsno not in (select b.subsno from subs b  where b.subsno>75043 and
b.subsno<=112565) and a.subsno>75043 and a.subsno<=112565;
                                               QUERY PLAN
--------------------------------------------------------------------------------------------------------
 Aggregate  (cost=99866998.67..99866998.68 rows=1 width=4)
   ->  Index Only Scan using ind_sub_new on subsexpired a
(cost=0.00..99866908.74 rows=35969 width=4)
         Index Cond: ((subsno > 75043) AND (subsno <= 112565))
         Filter: (NOT (SubPlan 1))
         SubPlan 1
           ->  Materialize  (cost=0.00..2681.38 rows=37977 width=4)
                 ->  Index Only Scan using subs_pkey on subs b
(cost=0.00..2342.49 rows=37977 width=4)
                       Index Cond: ((subsno > 75043) AND (subsno <= 112565))


*AFTER APPLYING YOUR SUGGESTED SETTINGS:*

explain select count(distinct a.subsno )  from subsexpired a where a.subsno
not in (select b.subsno from subs b  where b.subsno>75043 and
b.subsno<=112565) and a.subsno>75043 and a.subsno<=112565;
                                              QUERY PLAN
------------------------------------------------------------------------------------------------------
 Aggregate  (cost=7990.70..7990.71 rows=1 width=4)
   ->  Index Only Scan using ind_sub_new on subsexpired a
(cost=2437.43..7900.78 rows=35969 width=4)
         Index Cond: ((subsno > 75043) AND (subsno <= 112565))
         Filter: (NOT (hashed SubPlan 1))
         SubPlan 1
           ->  Index Only Scan using subs_pkey on subs b
(cost=0.00..2342.49 rows=37977 width=4)
                 Index Cond: ((subsno > 75043) AND (subsno <= 112565))

*PERFORMANCE IMPROVED DRASTICALLY* when I set work_mem to
100MB --- just look at the difference between the two plans.
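For anyone else following the thread: work_mem can also be raised for a single session before running a heavy query, so 100MB need not become the default for every backend. A minimal sketch (same query as above):

```sql
-- Raise work_mem for the current session only; other backends keep
-- the server-wide default.
SET work_mem = '100MB';

EXPLAIN
SELECT count(DISTINCT a.subsno)
FROM subsexpired a
WHERE a.subsno NOT IN (SELECT b.subsno FROM subs b
                       WHERE b.subsno > 75043 AND b.subsno <= 112565)
  AND a.subsno > 75043 AND a.subsno <= 112565;
```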

One more thing, Kevin: could you please help me understand how you
calculated those parameter values?

> Without more info, there's a bit of guesswork, but...

What extra info is required? Please let me know.
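For the record, here is how your suggestions translate into a postgresql.conf fragment. (So far I have only confirmed the work_mem change on our server; the other values simply follow your numbers and may need tuning for our workload.)

```
# Consolidated from Kevin's suggestions; tune per workload.
maintenance_work_mem = 1GB
effective_cache_size = 48GB
work_mem = 100MB
wal_buffers = 16MB
checkpoint_segments = 128          # "higher, probably not more than 128"
shared_buffers = 16GB              # reduce if latency spikes appear
checkpoint_completion_target = 0.9
# synchronous_commit = off         # only until a BBU RAID controller
#                                  # is in place; commits may be lost
#                                  # on a crash
```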

Thanks again...

On Sat, Dec 15, 2012 at 12:20 AM, Kevin Grittner <kgri...@mail.com> wrote:

> Shams Khan wrote:
>
> > *Need to increase the response time of running queries on
> > server...*
>
> > 8 CPU's and 16 cores
>
> > [64GB RAM]
>
> > HDD 200GB
> > Database size = 40GB
>
> Without more info, there's a bit of guesswork, but...
>
> > maintenance_work_mem = Not initialised
>
> I would say probably 1GB
>
> > effective_cache_size = Not initialised
>
> 48GB
>
> > work_mem = Not initialised
>
> You could probably go 100MB on this.
>
> > wal_buffers = 8MB
>
> 16MB
>
> > checkpoint_segments = 16
>
> Higher. Probably not more than 128.
>
> > shared_buffers = 32MB (have read should 20% of Physical memory)
>
> 16GB to start. If you have episodes of high latency, where even
> queries which normally run very quickly all pause and then all
> complete close together after a delay, you may need to reduce this
> and/or increase the aggressiveness of the background writer. I've
> had to go as low as 1GB to overcome such latency spikes.
>
> > max_connections = 100
>
> Maybe leave alone, possibly reduce. You should be aiming to use a
> pool to keep about 20 database connections busy. If you can't do
> that in the app, look at pgbouncer.
>
> >  checkpoint_completion_target = Not initialised
>
> It is often wise to increase this to 0.8 or 0.9
>
> If I read this right, you have one 200GB drive for writes? That's
> going to be your bottleneck if you write much data. You need a RAID
> for both performance and reliability, with a good controller with
> battery-backed cache configured for write-back. Until you have one
> you can mitigate the performance hit by setting
> synchronous_commit = off. The trade-off is that there will be a
> slight delay between when PostgreSQL acknowledges a commit and when
> the data is actually persisted.
>
> -Kevin
>
