Some corrections:
On Thu, Jul 10, 2008 at 6:11 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
SNIP
> If you commonly have 100 transactions doing that at once, then you
> multiply how much memory they use times 100 to get total buffer >> SPACE <<
> in use, and the rest is likely NEVER going to get used.
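To make that arithmetic concrete, here is a minimal postgresql.conf sketch;
the 4MB work_mem and 100 connections are assumed, illustrative numbers, not
recommendations:

    # postgresql.conf -- illustrative values only
    shared_buffers = 1GB     # one pool shared by all backends
    work_mem = 4MB           # private memory, per sort/hash step, per backend
    max_connections = 100    # worst case: 100 backends x 4MB = 400MB+ of
                             # private memory on top of shared_buffers

The point is that per-backend memory scales with concurrency, which is why
shared memory cannot safely claim all of RAM.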
I just wanted to add to my previous post that shared_memory generally
has a performance envelope: performance climbs quickly as you first
increase shared_memory, then each further increase buys a smaller
performance step. Once all of the working set of your data fits, the
return starts to drop off.
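As a rough way to see whether the working set already fits, you can watch
the buffer cache hit ratio from pg_stat_database; a minimal sketch (the
"close to 1.0" reading is a heuristic, and note that blks_read may still be
served by the OS page cache rather than disk):

    -- Fraction of block reads served from shared_buffers rather than
    -- requested from the kernel. If this stays very close to 1.0,
    -- growing shared memory further is unlikely to buy much.
    SELECT datname,
           round(blks_hit::numeric / NULLIF(blks_hit + blks_read, 0), 4)
             AS buffer_hit_ratio
    FROM pg_stat_database;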
On Thu, Jul 10, 2008 at 4:53 AM, Jessica Richard <[EMAIL PROTECTED]> wrote:
> On a Linux system, if the total memory is 4G and the shmmax is set to 4G, I
> know it is bad, but how bad can it be? Just trying to understand the impact
> the "shmmax" parameter can have on Postgres and the entire sys
In response to Jessica Richard <[EMAIL PROTECTED]>:
> On a Linux system, if the total memory is 4G and the shmmax is set to 4G, I
> know it is bad, but how bad can it be? Just trying to understand the impact
> the "shmmax" parameter can have on Postgres and the entire system after
> Postgres
On a Linux system, if the total memory is 4G and the shmmax is set to 4G, I
know it is bad, but how bad can it be? Just trying to understand the impact the
"shmmax" parameter can have on Postgres and the entire system after Postgres
comes up on this number.
What is a reasonable setting for shmmax?
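For context, shmmax only caps the largest single System V shared memory
segment the kernel will hand out; Postgres needs its shared memory segment
to fit under it at startup. A minimal sketch of inspecting and raising the
limit on Linux (the 2GB value is an assumed example, not a recommendation):

    # Current limit, in bytes
    cat /proc/sys/kernel/shmmax
    # Raise it for the running kernel (example: 2GB)
    sysctl -w kernel.shmmax=2147483648
    # Persist the setting across reboots
    echo "kernel.shmmax = 2147483648" >> /etc/sysctl.conf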