Re: [PERFORM] Partitions and max_locks_per_transaction

2009-11-20 Thread Hrishikesh Mehendale
It was Thursday 19 November 2009 11:08:10 pm that the wise Tom Lane thus 
wrote:
 Hrishikesh (हृषीकेश मेहेंदळे) hashincl...@gmail.com writes:
  To make the retrieval faster, I'm using a
  partitioning scheme as follows:
 
  stats_300: data gathered every 5 minutes, child tables named
  stats_300_t1_t2 (where t2 - t1 = 2 hrs), i.e. 12 tables in one day
  stats_3600: data gathered / calculated over 1 hour, child tables
  similar to the above - stats_3600_t1_t2, where (t2 - t1) is 2 days
  (i.e. 15 tables a month)
  stats_86400: data gathered / calculated over 1 day, stored as
  stats_86400_t1_t2 where (t2 - t1) is 30 days (i.e. 12 tables a year).
 
 So you've got, um, something less than a hundred rows in any one child
 table?  This is carrying partitioning to an insane degree, and your
 performance is NOT going to be improved by it.

Sorry I forgot to mention - in the normal case, each of those tables will 
have a few hundred thousand records, and in the worst case (the tables store 
info on up to 2000 endpoints) it can be around 5 million.

Also, the partitioning is not final yet (we might move it to 6 hours / 12 
hours per partition) - which is why I need to run the load test :)
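
For reference, the children are set up in the usual inheritance style,
roughly like the sketch below (the "ts" / "endpoint_id" columns are
illustrative placeholders, not the exact schema):

  -- parent for the 5-minute stats (columns are placeholders)
  CREATE TABLE stats_300 (
      ts          timestamptz NOT NULL,
      endpoint_id integer     NOT NULL,
      value       bigint
  );

  -- one two-hour child; the CHECK constraint is what lets
  -- constraint_exclusion prune the other children at plan time
  CREATE TABLE stats_300_t1_t2 (
      CHECK (ts >= '2009-11-19 00:00' AND ts < '2009-11-19 02:00')
  ) INHERITS (stats_300);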

 I'd suggest partitioning on boundaries that will give you on the order
 of a million rows per child.  That could be argued an order of magnitude
 or two either way, but what you've got is well outside the useful range.
 
  I'm running into the error:
  ERROR:  out of shared memory
  HINT:  You might need to increase max_locks_per_transaction.
 
 No surprise given the number of tables and indexes you're forcing
 the system to deal with ...

How many locks does PG require per table / index? Even in my current
state (50 tables, 250 relations counting tables + indexes), is it
reasonable for 2000 locks to run out?
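
From what I can tell from the docs, the shared lock table has room for
roughly max_locks_per_transaction * (max_connections +
max_prepared_transactions) entries, shared across all sessions, with one
entry per table or index a query touches. Here's a rough sketch of what
I've been running against pg_locks to watch consumption during the load
test:

  -- total entries currently held in the shared lock table
  SELECT count(*) AS total_lock_entries FROM pg_locks;

  -- per backend, to spot the heavy lockers
  SELECT pid, count(*) AS entries
  FROM pg_locks
  GROUP BY pid
  ORDER BY entries DESC
  LIMIT 10;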

Thanks,
Hrishi



Re: [PERFORM] Partitions and max_locks_per_transaction

2009-11-19 Thread Tom Lane
Hrishikesh (हृषीकेश मेहेंदळे) hashincl...@gmail.com writes:
 To make the retrieval faster, I'm using a
 partitioning scheme as follows:

 stats_300: data gathered every 5 minutes, child tables named
 stats_300_t1_t2 (where t2 - t1 = 2 hrs), i.e. 12 tables in one day
 stats_3600: data gathered / calculated over 1 hour, child tables
 similar to the above - stats_3600_t1_t2, where (t2 - t1) is 2 days
 (i.e. 15 tables a month)
 stats_86400: data gathered / calculated over 1 day, stored as
 stats_86400_t1_t2 where (t2 - t1) is 30 days (i.e. 12 tables a year).

So you've got, um, something less than a hundred rows in any one child
table?  This is carrying partitioning to an insane degree, and your
performance is NOT going to be improved by it.

I'd suggest partitioning on boundaries that will give you on the order
of a million rows per child.  That could be argued an order of magnitude
or two either way, but what you've got is well outside the useful range.
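
A sketch of what that might look like for the 5-minute data: one child
per day instead of one per two hours (table and column names are
illustrative):

  -- at 5-minute samples for ~2000 endpoints, one day is about
  -- 288 * 2000 = ~576K rows, i.e. in the right ballpark
  CREATE TABLE stats_300_20091119 (
      CHECK (ts >= '2009-11-19' AND ts < '2009-11-20')
  ) INHERITS (stats_300);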

 I'm running into the error:
 ERROR:  out of shared memory
 HINT:  You might need to increase max_locks_per_transaction.

No surprise given the number of tables and indexes you're forcing
the system to deal with ...
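
One way to see the per-query footprint is to count the children plus
their indexes, since a scan over a parent locks every one of them (a
sketch; the parent table names here are assumptions):

  -- each child table and each of its indexes costs one lock-table
  -- entry when a query scans the parent
  SELECT count(DISTINCT i.inhrelid) AS child_tables,
         count(x.indexrelid)        AS child_indexes
  FROM pg_inherits i
  JOIN pg_class p ON p.oid = i.inhparent
  LEFT JOIN pg_index x ON x.indrelid = i.inhrelid
  WHERE p.relname IN ('stats_300', 'stats_3600', 'stats_86400');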

regards, tom lane
