On Tue, Aug 23, 2011 at 5:25 AM, Robert Haas <robertmh...@gmail.com> wrote:

> I've been giving this quite a bit more thought, and have decided to
> abandon the scheme described above, at least for now.  It has the
> advantage of avoiding virtually all locking, but it's extremely
> inefficient in its use of memory in the presence of long-running
> transactions.  For example, if there's an open transaction that's been
> sitting around for 10 million transactions or so and has an XID
> assigned, any new snapshot is going to need to probe into the big
> array for any XID in that range.  At 8 bytes per entry, that means
> we're randomly accessing about 80MB of memory-mapped data.  That
> seems problematic both in terms of blowing out the cache and (on small
> machines) possibly even blowing out RAM.  Nor is that the worst case
> scenario: a transaction could sit open for 100 million transactions.
>
First, I respectfully disagree with you on the 80MB point. I would say it is
very rare that a small system (with <1 GB RAM) would have a long-running
transaction sitting idle while 10 million other transactions complete. Should
an optimization that helps achieve high enterprise workloads be abandoned for
the sake of very small systems?

Second, if we make use of memory-mapped files, why should we assume that all
80MB of data will always reside in memory? Won't cold pages get paged out by
the operating system when it is in need of memory? Or do you have some
specific OS in mind?

Thanks,
Gokul.
