Hrishikesh (हृषीकेश मेहेंदळे) <hashincl...@gmail.com> writes:
> To make the retrieval faster, I'm using a
> partitioning scheme as follows:

> stats_300: data gathered at 5 mins, child tables named stats_300_t1_t2
> (where t2 - t1 = 2 hrs), i.e. 12 tables in one day
> stats_3600: data gathered / calculated over 1 hour, child tables
> similar to the above - stats_3600_t1_t2, where (t2 - t1) is 2 days
> (i.e. 15 tables a month)
> stats_86400: data gathered / calculated over 1 day, stored as
> stats_86400_t1_t2 where (t2 - t1) is 30 days (i.e. 12 tables a year).

So you've got, um, something less than a hundred rows in any one child
table?  (Two hours of 5-minute samples is only 24 rows per measured
object.)  This is carrying partitioning to an insane degree, and your
performance is NOT going to be improved by it.

I'd suggest partitioning on boundaries that will give you on the order
of a million rows per child.  That could be argued an order of
magnitude or two either way, but what you've got is well outside the
useful range.
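
For illustration, a minimal sketch of one coarser child for the
5-minute table, using the same inheritance approach (the "ts" column
name and the epoch boundaries here are invented; widen or narrow the
span until each child lands near a million rows for your actual data
volume):

    -- one child covering one day of epoch seconds (2009-01-01 UTC)
    CREATE TABLE stats_300_1230768000_1230854400 (
        CHECK (ts >= 1230768000 AND ts < 1230854400)
    ) INHERITS (stats_300);
    CREATE INDEX stats_300_1230768000_1230854400_ts_idx
        ON stats_300_1230768000_1230854400 (ts);

With constraint_exclusion enabled, a query constrained on ts will then
scan only the children whose ranges overlap it.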

> I'm running into the error "ERROR:  out of shared memory HINT:  You
> might need to increase max_locks_per_transaction."

No surprise given the number of tables and indexes you're forcing
the system to deal with ...
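
As a stopgap, the lock table can be enlarged: it holds roughly
max_locks_per_transaction * (max_connections + max_prepared_transactions)
entries, and every table and index a transaction touches takes one.
A sketch with an illustrative value (shown via ALTER SYSTEM, which
newer servers support; otherwise set it in postgresql.conf, and either
way the parameter only takes effect after a restart):

    -- raise the per-transaction lock allowance (the default is 64)
    ALTER SYSTEM SET max_locks_per_transaction = 256;

But that only buys headroom; cutting down the number of child tables
is the real fix.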

                        regards, tom lane
