[PERFORM] Strange behavior of child table.

2011-05-31 Thread Jenish
Hi All,

I have created a partitioned table, Round_Action, which has 3 child partition
tables.


When I fire a simple select query with a limit on the parent table, it takes a
huge amount of time to execute. But when I fire the same query directly on a
child table, it takes only a few milliseconds.


Example:
select * from Round_Action where action_id = 5 limit 100 → execution time
80 sec

select * from Round_Action_CH1 where action_id = 5 limit 100 → execution time
0.1 sec

Round_Action is the parent table and holds no records itself; all the records
are in the child tables.

The table has an index on action_id.

Partitioning is trigger-based.
Postgres Version : (PostgreSQL) 8.4.6

Why is there such a difference in execution time? What am I doing wrong?
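A first diagnostic step (a sketch, assuming the tables and queries above) is to compare the plans PostgreSQL chooses for the two queries:

```sql
-- Compare the plan for the parent table with the plan for the child table.
EXPLAIN ANALYZE SELECT * FROM Round_Action     WHERE action_id = 5 LIMIT 100;
EXPLAIN ANALYZE SELECT * FROM Round_Action_CH1 WHERE action_id = 5 LIMIT 100;
```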



-- 
Thanks & regards,
  JENISH


Re: [PERFORM] Strange behavior of child table.

2011-06-02 Thread Jenish
Hi Jeff,

Thanks for the help.

This is my first post, and I made the mistake unknowingly. I will
take care of it next time.

Again thanks a lot for the help.

--
Thanks & regards,
JENISH VYAS


On Thu, Jun 2, 2011 at 10:04 AM, Jeff Davis  wrote:
>
> In the future, please remember to CC the list when replying unless you
> have a reason not to. This thread is already off-list by now.
>
> Also, I just noticed that this plan has a sort, and the slow query in
> the previous email did not. That looks like it might have been a mistake
> when running the regular EXPLAIN (without ANALYZE), because the slow
> plan does not look correct without a sort. Anyway...
>
> On Thu, 2011-06-02 at 09:23 +0300, Jenish wrote:
> > Hi Jeff,
> >
> > This table is growing rapidly. Now the parent table is taking much
> > more time for the same query. Below are the complete details.
>
>
> > "                    ->  Bitmap Heap Scan on game_round_actions_old
> > game_round_actions  (cost=73355.48..7277769.30 rows=2630099 width=65)
> > (actual time=78319.248..302586.235 rows=2304337 loops=1)"
> > "                          Recheck Cond: (table_id = 1)"
> > "                          ->  Bitmap Index Scan on
> > "PK_game_round_actions"  (cost=0.00..72697.95 rows=2630099 width=0)
> > (actual time=78313.095..78313.095 rows=2304337 loops=1)"
> > "                                Index Cond: (table_id = 1)"
>
> That is the part of the plan that is taking time. Compare that to the
> other plan:
>
> > 2)      Child query
> >  explain analyse Select * from game_round_actions_old where table_id =
> > 1 order by table_id,round_id limit 100
> >  "Limit  (cost=0.00..335.97 rows=100 width=65) (actual
> > time=0.035..0.216 rows=100 loops=1)"
> >  "  ->  Index Scan using "PK_game_round_actions" on
> > game_round_actions_old  (cost=0.00..8836452.71 rows=2630099 width=65)
> > (actual time=0.033..0.110 rows=100 loops=1)"
> >  "        Index Cond: (table_id = 1)"
>
> Notice that it's actually using the same index, but the slow plan is
> using a bitmap index scan, and the fast plan is using a normal (ordered)
> index scan.
>
> What's happening is that the top-level query is asking to ORDER BY
> table_id, round_id LIMIT 100. Querying the child table can get that
> order directly from the index, so it scans the index in order, fetches
> only 100 tuples, and then it's done.
>
> But when querying the parent table, it's getting tuples from two tables,
> and so the tuples aren't automatically in the right order to satisfy the
> ORDER BY. So, it's collecting all of the matching tuples, which is about
> 2.6M, then sorting them, then returning the first 100 -- much slower!
>
> A smarter approach is to scan both tables in the correct order
> individually, and merge the results until you get 100 tuples. That would
> make both queries run fast. 9.1 is smart enough to do that, but it's
> still in beta right now.
>
> The only answer right now is to rewrite your slow query to be more like
> the fast one. I think if you manually push down the ORDER BY ... LIMIT,
> it will do the job. Something like:
>
>  select * from
>    ((select * from game_round_actions_old
>      where table_id = 1
>      order by table_id, round_id limit 100)
>     UNION ALL
>     (select * from game_round_actions_new
>      where table_id = 1
>      order by table_id, round_id limit 100)) AS combined
>  order by table_id, round_id limit 100;
>
> might work. I haven't actually tested that query though.
>
> Regards,
>        Jeff Davis
>
>

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Performance issue with Insert

2011-06-27 Thread Jenish
Hi All,

I am facing a performance issue with inserts into a table.

I am using postgres 8.4.x

The table has 3 BEFORE INSERT triggers and one AFTER INSERT trigger.

With all triggers enabled it inserts only 4-5 records per second.

But if I disable the AFTER INSERT trigger, it is able to insert 667 records per
second.

The AFTER INSERT trigger is a recursive trigger.
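The with/without-trigger comparison described here can be reproduced by disabling just that trigger during a test run (the table and trigger names below are hypothetical):

```sql
-- Temporarily disable the AFTER INSERT trigger to isolate its cost.
ALTER TABLE my_table DISABLE TRIGGER my_after_insert_trg;
-- ... run the insert benchmark ...
ALTER TABLE my_table ENABLE TRIGGER my_after_insert_trg;
```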


My questions:

How can I avoid the bottleneck?

Is parallel processing possible in Postgres? How?


Please give your suggestions.

-- 
Thanks & regards,
JENISH VYAS


Re: [PERFORM] Performance issue with Insert

2011-06-27 Thread Jenish
Hi,

DB : POSTGRES 8.4.8
OS  : Debian
HD : SAS 10k rpm

shared_buffers is 4096 (25% of RAM); effective_cache_size is 8 GB (75% of RAM).

The AFTER INSERT trigger in turn calls 2 more triggers and inserts records into
another table depending on a condition.

With all triggers enabled there are 8 inserts and 32 updates (approximately; the
number of updates depends on the hierarchy).

Please explain multiple connections. In the current scenario, the application
server is sending all requests.

-- 
Thanks & regards,
JENISH VYAS

On Mon, Jun 27, 2011 at 5:37 PM, Kevin Grittner  wrote:

> Jenish  wrote:
>
> > I am using postgres 8.4.x
>
> With x being what?  On what OS and hardware?
>
> > Table is having 3 before insert trigger and one after insert
> > trigger.
> >
> > With all triggers enable it is inserting only 4-5 record per
> > second.
> >
> > But if I disable after insert trigger it is able to insert 667
> > records per second.
>
> http://wiki.postgresql.org/wiki/SlowQueryQuestions
>
> > After insert trigger is recursive trigger.
>
> So are you counting only the top level inserts or also the ones
> generated by the recursive inserts?
>
> > My question.
> >
> > How to avoid the bottleneck?
>
> First you need to find out what the bottleneck is.
>
> > Parallel processing is possible in Postgres? How?
>
> To achieve parallel processing in PostgreSQL you need to use
> multiple connections.
>
> -Kevin
>


Re: [PERFORM] Performance issue with Insert

2011-06-27 Thread Jenish
Hi,

I have already checked all the statements present in the trigger; none of them
takes more than 20 ms.

I am using an 8-processor, quad-core server; CPU utilization is more than 90-95%
for all cores (htop result).

DB has 960 concurrent users.

I/O: writing 3-4 MB per second or less (iotop result).

Scenario: all inserts are waiting for the previous insert to complete. Can't
we avoid this situation?
What is the maximum "max_connections" PostgreSQL supports?

Please help.


-- 
Thanks & regards,
JENISH VYAS






On Mon, Jun 27, 2011 at 6:32 PM,  wrote:

> > Hi,
> >
> > DB : POSTGRES 8.4.8
> > OS  : Debian
> > HD : SAS 10k rpm
> >
> > Shared_buffer is 4096 25 % of RAM , effective_cache is 8GB 75% of RAM
> >
> > After insert trigger is again calling 2 more trigger and insert record in
> > another table depends on condition.
> >
> > with all trigger enable there are 8 insert and 32 updates(approx. update
> > is
> > depends on hierarchy)
>
> Hi,
>
> it's very difficult to give you reliable recommendations with this little
> info, but the triggers are obviously the bottleneck. We have no idea what
> queries are executed in them, but I guess there are some slow queries.
>
> Find out what queries are executed in the triggers, benchmark each of them
> and make them faster. Just don't forget that those SQL queries are
> executed as prepared statements, so they may behave a bit differently than
> plain queries. So use 'PREPARE' and 'EXPLAIN EXECUTE' to tune them.
>
> > Plz explain multiple connections. Current scenario application server is
> > sending all requests.
>
> PostgreSQL does not support parallel queries (i.e. a query distributed on
> multiple CPUs) so each query may use just a single CPU. If you're CPU
> bound (one CPU is 100% utilized but the other CPUs are idle), you can
> usually parallelize the workload on your own - just use multiple
> connections.
>
> But if you're using an application server and there are multiple
> connections used, this is not going to help you. How many connections are
> active at the same time? Are the CPUs idle or utilized?
>
> Tomas
>
>
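Tomas's PREPARE / EXPLAIN EXECUTE suggestion can be sketched as follows (the table and predicate are taken from earlier in this thread; the statement name is made up):

```sql
-- Prepared statements are planned with parameter placeholders, so the
-- plan can differ from an EXPLAIN of the same query with a literal value.
PREPARE trigger_query (integer) AS
  SELECT * FROM game_round_actions_old WHERE table_id = $1;

EXPLAIN EXECUTE trigger_query (1);

DEALLOCATE trigger_query;
```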


Re: [PERFORM] Performance issue with Insert

2011-06-27 Thread Jenish
Hi,

This server is the dedicated database server.

I am testing the limit for concurrent active users. When I run my test with 400
concurrent users (i.e. active connections) I get good performance, but when I
run the same test with 950 concurrent users I get very bad performance.



>> Scenario: all inserts are waiting for the previous insert to complete.

I don't know whether these inserts come from the same session or from different
sessions.



The DB is huge, but the triggers are not touching the whole database.

I'll provide the output of vmstat and iostat tomorrow.


-- 
Thanks & regards,
JENISH VYAS



On Mon, Jun 27, 2011 at 10:48 PM, Tomas Vondra  wrote:

> Dne 27.6.2011 17:01, Jenish napsal(a):
> > Hi,
> >
> > DB : POSTGRES 8.4.8
> > OS  : Debian
> > HD : SAS 10k rpm
> >
> > Shared_buffer is 4096 25 % of RAM , effective_cache is 8GB 75% of RAM
>
> How much data are we talking about? Does that fit into the shared
> buffers or is it significantly larger? Do the triggers touch the whole
> database or just a small part of it (active part)?
>
> regards
> Tomas
>
>