kelvan wrote:
Hi, I need to know all the database overhead sizes, block header sizes,
etc., as I have a very complex database to build and it needs to be
speed-tuned beyond reckoning
[snip]
I am using postgres 8.1 if anyone can post links to pages containing over
head information and
Piotr Gasidło [EMAIL PROTECTED] writes:
I've just hit a problem that is unusual for me.
View definition:
SELECT users.id, users.user_name, users.extra IS NOT NULL AS has_extra
FROM users;
What you've got here is a non-nullable target list, which creates an
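The computed has_extra column in the view above can be reproduced outside PostgreSQL. A minimal sketch, using Python's bundled sqlite3 as a stand-in engine (the sample rows and the view name v_users are made up for illustration):

```python
# Sketch: a view whose target list computes "extra IS NOT NULL" as a
# boolean-like column, mirroring the quoted view definition.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, user_name TEXT, extra TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'vip'), (2, 'bob', NULL)")
conn.execute("""
    CREATE VIEW v_users AS
    SELECT id, user_name, extra IS NOT NULL AS has_extra
    FROM users
""")
# SQLite evaluates IS NOT NULL to 1/0; PostgreSQL yields true/false.
for row in conn.execute("SELECT id, user_name, has_extra FROM v_users ORDER BY id"):
    print(row)
```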
Hello,
I've just hit problem, that is unusual for me.
quaker=> \d sites
       Table "public.sites"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
Hi,
I'm currently trying to tune the Cost-Based Vacuum Delay on an
8.2.5 server. The aim is to reduce the performance impact of
vacuums on application queries as much as possible, with the
background idea of running autovacuum as much as possible[1].
My test involves vacuuming a large table, and
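To reason about what those two knobs trade off, here is a back-of-the-envelope sketch, assuming the documented defaults (vacuum_cost_page_miss = 10, 8 kB blocks) and ignoring the time the work itself takes; the function name is my own:

```python
# Cost-based vacuum delay: vacuum sleeps vacuum_cost_delay ms each time it
# has accumulated vacuum_cost_limit cost points, which caps its I/O rate.
def max_vacuum_read_mb_per_s(cost_delay_ms, cost_limit,
                             cost_page_miss=10, block_kb=8):
    if cost_delay_ms == 0:          # delay disabled: no throttling at all
        return float("inf")
    pages_per_round = cost_limit / cost_page_miss   # worst case: all misses
    rounds_per_s = 1000.0 / cost_delay_ms           # sleeps per second
    return pages_per_round * rounds_per_s * block_kb / 1024.0

for delay, limit in [(20, 200), (40, 200), (100, 1000), (300, 1000)]:
    print(f"{delay}/{limit}: ~{max_vacuum_read_mb_per_s(delay, limit):.1f} MB/s")
```

Note that 20/200 and 100/1000 allow the same rate, which matches the near-identical 112 s and 109 s timings reported later in the thread.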
On Fri, 2007-12-07 at 12:45 +1200, kelvan wrote:
Hi, I need to know all the database overhead sizes, block header sizes,
etc., as I have a very complex database to build and it needs to be
speed-tuned beyond reckoning
If your need-for-speed is so high, I would suggest using 8.3 or at
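For the overhead question itself, the arithmetic can be sketched from the figures in the "Database Page Layout" chapter of the PostgreSQL manual: 8 kB pages, a roughly 24-byte page header, a 4-byte line pointer per row, and a heap tuple header padded to 24 bytes. Exact values vary by version and build, so treat these constants as assumptions to verify:

```python
# Rough rows-per-page estimate from page-layout overheads (assumed values).
PAGE_SIZE = 8192      # default block size
PAGE_HEADER = 24      # PageHeaderData, approximate
LINE_POINTER = 4      # ItemId per tuple
TUPLE_HEADER = 24     # 23 bytes, MAXALIGN-padded

def rows_per_page(row_data_bytes, align=8):
    # pad the user data up to the platform's MAXALIGN boundary
    padded = -(-row_data_bytes // align) * align
    per_row = LINE_POINTER + TUPLE_HEADER + padded
    return (PAGE_SIZE - PAGE_HEADER) // per_row

print(rows_per_page(4))    # e.g. a single integer column
print(rows_per_page(100))  # a wider row
```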
On Dec 7, 2007, at 4:50 AM, Guillaume Cottenceau wrote:
Hi,
I'm currently trying to tune the Cost-Based Vacuum Delay on an
8.2.5 server. The aim is to reduce the performance impact of
vacuums on application queries as much as possible, with the
background idea of running autovacuum as much as
On Thursday 06 December 2007 04:38, Simon Riggs wrote:
Robert,
On Wed, 2007-12-05 at 15:07 -0500, Robert Treat wrote:
If the whole performance of your system depends upon indexed access, then
maybe you need a database that gives you a way to force index access at
the query level?
That
Simon Riggs [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
On Fri, 2007-12-07 at 12:45 +1200, kelvan wrote:
Hi, I need to know all the database overhead sizes, block header sizes,
etc., as I have a very complex database to build and it needs to be
speed-tuned beyond
One of the things that comes up regularly on this list in particular is
people whose performance issues relate to bloated tables or indexes.
What I've always found curious is that I've never seen a good way
suggested to actually measure said bloat in any useful numeric
terms--until today.
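One hedged way to turn "bloat" into a number (a sketch of the general idea, not the method the post refers to): compare the table's actual on-disk size, e.g. from pg_relation_size(), with the minimum size its live rows could occupy under the same page-layout assumptions as above. The function and its default overheads are illustrative:

```python
# Ratio of actual size to the theoretical minimum; ~1.0 means no bloat.
def bloat_ratio(actual_bytes, live_rows, avg_row_bytes,
                page_size=8192, page_header=24, per_row_overhead=28):
    usable = page_size - page_header
    per_row = avg_row_bytes + per_row_overhead      # tuple header + line pointer
    rows_per_page = max(usable // per_row, 1)
    min_pages = -(-live_rows // rows_per_page)      # ceiling division
    return actual_bytes / (min_pages * page_size)

# e.g. an 80 MB table whose 500k 60-byte rows would fit in ~42 MB
print(round(bloat_ratio(80 * 1024**2, 500_000, 60), 2))
```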
Tom Lane wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Tom Lane wrote:
There's something fishy about this --- given that that plan has a lower
cost estimate, it should've picked it without any artificial
constraints.
One final thing I find curious about this is that the estimated
number of rows
On Dec 7, 2007, at 10:44 AM, Guillaume Cottenceau wrote:
Erik Jones erik 'at' myemma.com writes:
vacuum_cost_delay/vacuum_cost_limit   (deactivated)  20/200  40/200  100/1000  150/1000  200/1000  300/1000
VACUUM ANALYZE time                   54 s           112 s   188 s   109 s
Erik Jones erik 'at' myemma.com writes:
vacuum_cost_delay/vacuum_cost_limit   (deactivated)  20/200  40/200  100/1000  150/1000  200/1000  300/1000
VACUUM ANALYZE time                   54 s           112 s   188 s   109 s     152 s     190 s     274 s
SELECT time