> > That's exactly my point.  We cannot provide enough documentation in
> > the CONF file without septupling its length.  If we remove all
> > commentary, and instead provide a pointer to the documentation, more
> > DBAs will read it.
> Which I don't think would happen, and why I think the terse bits
> that are included are worthwhile.  :)

Depressingly enough, you are probably correct, unless we assemble a more 
user-friendly "getting started" guide.

> *) concurrent disk activity
> A disk/database activity metric is different than the cost of a seek
> on the platters.  :) The fact that PostgreSQL doesn't currently
> support such a disk concurrency metric doesn't mean that its
> definition should get rolled into a different number to compensate
> for the lack thereof.

I was talking about concurrent activity by *other* applications.  For example, 
if a DBA has a Java app that is accessing XML on the same array as postgres 
500 times/minute, then you'd need to adjust random_page_cost upwards to allow 
for the resource contention.
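Concretely, that adjustment is just a conf tweak along these lines (the value 
6.0 is purely illustrative, not a recommendation -- the right number depends 
on how busy the shared array actually is):

```
# postgresql.conf -- illustrative only
# The default random_page_cost is 4.0.  When another application keeps
# the same array busy, each random fetch also waits behind that
# workload's I/O, so the relative cost of a seek goes up.
random_page_cost = 6.0
```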

> An "ideal" value isn't obtained via guess and check.  Checking is only
> the verification of some calculable set of settings... though right
> now those calculated settings are guessed, unfortunately.

> Works for me, though a benchmark will be less valuable than adding a
> disk concurrency stat, improving data trend/distribution analysis, and
> using numbers that are concrete and obtainable through the OS kernel
> API or an admin manually plunking numbers in.  I'm still recovering
> from my move from Cali to WA so with any luck, I'll be settled in by
> then.

The idea is that for a lot of statistics, we're only going to be able to 
obtain valid numbers if we have something constant to check them against.
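To sketch what "something constant" means here (a minimal illustration, with 
made-up timings -- not real measurements): random_page_cost is defined 
relative to a sequential page fetch, so the sequential timing is the constant 
baseline you check the random-read measurement against, rather than guessing 
the cost outright.

```python
# Hypothetical sketch: derive a random_page_cost estimate by comparing
# a measured random-read time per page against a constant reference,
# the sequential-read time per page.

def estimate_random_page_cost(seq_ms_per_page, rand_ms_per_page):
    """random_page_cost is expressed relative to a sequential page
    fetch, so the sequential timing is the fixed baseline."""
    if seq_ms_per_page <= 0:
        raise ValueError("sequential timing must be positive")
    return rand_ms_per_page / seq_ms_per_page

# Illustrative numbers only: 0.05 ms/page sequential, 0.2 ms/page random.
print(estimate_random_page_cost(0.05, 0.2))  # -> 4.0
```

With real timings from a benchmark run, the same ratio would replace the 
guessed default instead of these invented figures.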

Talk to you later this month!

Josh Berkus
Aglio Database Solutions
San Francisco
