On Tue, 2003-02-11 at 13:01, Tom Lane wrote:
> "Jon Griffin" <[EMAIL PROTECTED]> writes:
> > So it appears that linux at least is way above your 8 meg point, unless I
> > am missing something.
>
> Yeah, AFAIK all recent Linuxen are well above the range of parameters
that I was suggesting (and ev ...
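The Linux limit under discussion can be checked directly. A minimal sketch (the /proc path is Linux-specific; on other platforms the function simply returns None):

```python
# Read the kernel's SHMMAX limit -- the parameter the thread is sizing
# shared_buffers against -- from /proc on Linux.
def read_shmmax(path="/proc/sys/kernel/shmmax"):
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None   # not Linux, or no /proc mounted

limit = read_shmmax()
print(limit if limit is not None else "no /proc/sys/kernel/shmmax here")
```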
"Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> ... So we ship postgresql.conf with 32M of
> shared memory and auto_shared_mem_reduction = true. With a comment that
> the administrator might want to turn this off for production.
This really doesn't address Justin's point about clueless benchmarkers ...
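The proposal above could work as a retry loop at startup: ask for the shipped default, and halve the request until the kernel grants it. A sketch under stated assumptions -- the name auto_shared_mem_reduction comes from the proposal, not from any real GUC, and try_alloc is a hypothetical stand-in for the shmget call:

```python
# Sketch of the proposed automatic shared-memory reduction: halve the
# request until allocation succeeds, down to a 1MB floor.
def allocate_shared(request_kb, try_alloc):
    """try_alloc(size_kb) -> bool; returns the size that succeeded."""
    size = request_kb
    while size >= 1024:          # refuse to run below 1MB
        if try_alloc(size):
            return size
        size //= 2               # kernel said no; ask for half as much
    raise MemoryError("could not allocate shared memory")

# demo: a fake kernel that only grants requests up to 8MB
print(allocate_shared(32 * 1024, lambda kb: kb <= 8 * 1024))
```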
> If I thought that pgbench was representative of anything, or even
> capable of reliably producing repeatable numbers, then I might subscribe
> to results derived this way. But I have little or no confidence in
> pgbench. Certainly I don't see how you'd use it to produce
recommendations for a ...
It's interesting that people focus on shared_buffers. From my
experience the most dominating parameter for performance is
wal_sync_method. It sometimes makes ~20% performance difference. On
the other hand, shared_buffers does very little for
performance. Moreover, too many shared_buffers cause performance ...
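The cost being discussed is the per-commit flush of the WAL. A rough, illustrative micro-benchmark of that write+flush cycle (the method names mirror wal_sync_method settings; absolute numbers depend entirely on the hardware and are not comparable across machines):

```python
# Time repeated write+flush cycles, roughly what the WAL does per commit.
import os
import tempfile
import time

def time_sync(sync_fn, iterations=50):
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, b"x" * 8192)   # one 8KB WAL block
            sync_fn(fd)                 # force it to stable storage
        return time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)

methods = {"fsync": os.fsync}
if hasattr(os, "fdatasync"):            # not available on every platform
    methods["fdatasync"] = os.fdatasync

for name, fn in methods.items():
    print(f"{name}: {time_sync(fn):.3f}s")
```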
I hate to poo-poo this, but this "web of trust" sounds more like a "web
of confusion". I liked the idea of mentioning the MD5 in the email
announcement. It doesn't require much extra work, and doesn't require a
'web of %$*&" to be set up to check things. Yea, it isn't as secure as
going through ...
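What publishing the MD5 in the announcement buys is that anyone can recompute the digest of the downloaded tarball and compare. A self-contained sketch (the stand-in file is empty, so the "announced" value is the well-known MD5 of zero bytes):

```python
# Verify a downloaded file against a digest published in an announcement.
import hashlib
import os
import tempfile

def md5_of_file(path, chunk=8192):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# demo: an empty stand-in tarball; its MD5 is the empty-input digest
fd, path = tempfile.mkstemp()
os.close(fd)
announced = "d41d8cd98f00b204e9800998ecf8427e"
print("MD5 OK" if md5_of_file(path) == announced else "MD5 MISMATCH")
os.unlink(path)
```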
On Tue, 11 Feb 2003, Bruce Momjian wrote:
>
> I hate to poo-poo this, but this "web of trust" sounds more like a "web
> of confusion". I liked the idea of mentioning the MD5 in the email
> announcement. It doesn't require much extra work, and doesn't require a
> 'web of %$*&" to be set up to check things. Yea, it isn't as secure as
> going through ...
Hannu Krosing <[EMAIL PROTECTED]> writes:
> Relying on hash aggregation will become essential if we are ever going
> to implement the "other" groupings (CUBE, ROLLUP, (), ...), so it would
> be nice if hash aggregation could also overflow to disk
I did not make this happen, but it sounds like Joe ...
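The technique Hannu is asking for -- hash aggregation that overflows to disk -- can be sketched in miniature: aggregate in memory until a budget is hit, route overflow rows to temp-file partitions by hash, then aggregate each partition in a second pass. This is an illustrative sketch of the general idea, not PostgreSQL's implementation:

```python
# Hash aggregation with spill-to-disk partitions (sum per key).
import pickle
import tempfile
from collections import defaultdict

def hash_aggregate(rows, mem_budget=1024, partitions=4):
    """rows: iterable of (key, value); returns {key: sum(values)}."""
    table = defaultdict(int)
    spill = [tempfile.TemporaryFile() for _ in range(partitions)]
    spilled = False
    for key, val in rows:
        if key in table or len(table) < mem_budget:
            table[key] += val                 # fits in memory
        else:
            spilled = True                    # table full: spill the row
            pickle.dump((key, val), spill[hash(key) % partitions])
    result = dict(table)
    if spilled:
        for f in spill:                       # second pass, per partition
            f.seek(0)
            part = defaultdict(int)
            while True:
                try:
                    key, val = pickle.load(f)
                except EOFError:
                    break
                part[key] += val
            for key, val in part.items():
                result[key] = result.get(key, 0) + val
    for f in spill:
        f.close()
    return result
```

A key that spills never also lives in the in-memory table, so the second pass cannot double-count; each partition fits independently in memory.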
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Tom Lane writes:
>> We could retarget to try to stay under SHMMAX=4M, which I think is
>> the next boundary that's significant in terms of real-world platforms
>> (isn't that the default SHMMAX on some BSDen?). That would allow us
>> 350 or so shared_buffers, which is better, but still not really a
>> se ...
> >> We could retarget to try to stay under SHMMAX=4M, which I think is
> >> the next boundary that's significant in terms of real-world platforms
> >> (isn't that the default SHMMAX on some BSDen?). That would allow us
> >> 350 or so shared_buffers, which is better, but still not really a
> >> se ...
> A separate line of investigation is "what is the lowest common
> denominator nowadays?" I think we've established that SHMMAX=1M
> is obsolete, but what replaces it as the next LCD? 4M seems to be
> correct for some BSD flavors, and I can confirm that that's the
> current default for Mac OS X ...
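The "350 or so" figure follows from simple arithmetic. A sketch, assuming the default 8KB block size of a stock build; the overhead allowance for other shared structures is a rough placeholder, not a measured number:

```python
# Back-of-the-envelope sizing against the SHMMAX=4MB lowest common denominator.
BLCKSZ = 8192                  # bytes per shared buffer (default build)
SHMMAX = 4 * 1024 * 1024       # 4MB kernel limit being targeted
overhead = 1 * 1024 * 1024     # rough allowance for non-buffer shared memory

max_buffers = (SHMMAX - overhead) // BLCKSZ
print(max_buffers)             # on the order of the "350 or so" in the thread
```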