On Thu, 2004-04-22 at 10:37 -0700, Josh Berkus wrote:
> Tom,
>
> > The tricky
> > part is that a slow adaptation rate means we can't have every backend
> > figuring this out for itself --- the right value would have to be
> > maintained globally, and I'm not sure how to do that without adding a
>
It is set at max_fsm_pages = 150.
We are running a DELL PowerEdge 6650 with 4 CPUs.
Mem: 3611320k av from top.
The database is on a shared device (SAN), RAID 5, 172 GB, with QLogic
fibre optic cards (desc: "QLogic Corp.|QLA2312 Fibre Channel Adapter")
connected to the Dell version of an EMC SAN (FC
On Mon, 19 Apr 2004 12:00:10 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
>A possible compromise is to limit the number of pages sampled to
>something a bit larger than n, perhaps 2n or 3n. I don't have a feeling
>for the shape of the different-pages probability function; would this
>make a signific
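For intuition on that probability function, here is a hedged sketch (mine, not from the thread): if ANALYZE draws n rows uniformly at random from a table of B pages, and rows are spread evenly across pages, the expected number of distinct pages touched is B * (1 - (1 - 1/B)^n). When B is much larger than n, nearly every sampled row lands on its own page, so a cap of 2n or 3n pages read would rarely bind.

```python
# Hedged sketch (my own illustration, not from the thread): expected number
# of distinct pages touched when sampling n_rows uniformly at random from a
# table of n_pages pages, assuming rows are spread evenly across pages.
def expected_distinct_pages(n_rows: int, n_pages: int) -> float:
    # A given page is missed by one sampled row with probability (1 - 1/B),
    # hence missed by all n rows with probability (1 - 1/B)^n.
    return n_pages * (1.0 - (1.0 - 1.0 / n_pages) ** n_rows)

# A 3000-row sample on a 100,000-page table touches close to 3000 distinct
# pages, so limiting the pages read to 2n or 3n would change little there;
# the cap only starts to matter when the table is not much bigger than n.
print(expected_distinct_pages(3000, 100_000))
print(expected_distinct_pages(3000, 2_000))
```

On the small table, many sampled rows share pages, which is exactly the regime where a 2n/3n limit on pages read would kick in.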
On Sun, 25 Apr 2004 09:05:11 -0400, "Shea,Dan [CIS]" <[EMAIL PROTECTED]>
wrote:
>It is set at max_fsm_pages = 150.
This might be too low. Your index has ca. 5 M pages; you are going to
delete half of its entries, and what you delete is a contiguous range of
values. So up to 2.5 M index page