On Tue, Sep 13, 2011 at 7:49 AM, Amit Kapila amit.kap...@huawei.com wrote:
Yep, that's pretty much what it does, although xmax is actually
defined as the XID *following* the last one that ended, and I think
xmin needs to also be in xip, so in this case you'd actually end up
with xmin = 15, xmax =
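A minimal sketch of how those fields fit together, using hypothetical XIDs (not the values cut off above), invented names, and ignoring XID wraparound: xmax is the XID one past the last completed transaction, xmin is the oldest XID still running and is itself listed in xip, and anything between the two is checked against xip.

/* Hedged illustration only (invented helper names, hypothetical XIDs,
 * wraparound ignored).  Suppose transactions 15, 17 and 19 are still
 * running and 20 was the last XID to complete.  Then, per the definition
 * above:  xmin = 15, xmax = 21, xip = {15, 17, 19}. */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

typedef struct
{
    TransactionId  xmin;  /* every XID < xmin had already completed */
    TransactionId  xmax;  /* XID following the last completed one   */
    TransactionId *xip;   /* XIDs in [xmin, xmax) still in progress */
    int            xcnt;
} ExampleSnapshot;

/* Is xid treated as still in progress by this snapshot? */
static bool
xid_in_progress(const ExampleSnapshot *snap, TransactionId xid)
{
    if (xid < snap->xmin)
        return false;            /* completed before the snapshot */
    if (xid >= snap->xmax)
        return true;             /* had not completed at snapshot time */
    for (int i = 0; i < snap->xcnt; i++)
        if (snap->xip[i] == xid)
            return true;
    return false;
}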
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux
On Mon, Sep 12, 2011 at 11:07 AM, Amit Kapila amit.kap...@huawei.com
wrote:
If you know what transactions were running the last time a snapshot
summary
was written and what transactions have ended since then, you can work
On Sun, Sep 11, 2011 at 11:08 PM, Amit Kapila amit.kap...@huawei.com wrote:
In the approach mentioned in your idea, once a snapshot has been taken,
only committed XIDs will be updated and sometimes the snapshot itself.
So when will the xmin be updated, according to your idea, as
On Mon, Sep 12, 2011 at 11:07 AM, Amit Kapila amit.kap...@huawei.com wrote:
If you know what transactions were running the last time a snapshot summary
was written and what transactions have ended since then, you can work out
the new xmin on the fly. I have working code for this and it's
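A minimal sketch of working out the new xmin on the fly, with illustrative names rather than Robert's actual working code: take the XIDs that were running when the summary was written, drop the ones recorded as having ended since then, and the smallest survivor is the new xmin.

/* Hedged sketch of deriving xmin from a snapshot summary plus the XIDs
 * that have ended since it was written.  Names are illustrative, not
 * PostgreSQL APIs; XID wraparound is ignored for brevity. */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

static bool
xid_has_ended(TransactionId xid, const TransactionId *ended, int nended)
{
    for (int i = 0; i < nended; i++)
        if (ended[i] == xid)
            return true;
    return false;
}

/* running_at_summary: XIDs in progress when the summary was written
 * ended_since: XIDs recorded as committed/aborted after the summary
 * fallback: value to use if nothing is still running */
static TransactionId
compute_xmin(const TransactionId *running_at_summary, int nrunning,
             const TransactionId *ended_since, int nended,
             TransactionId fallback)
{
    TransactionId xmin = fallback;
    bool          found = false;

    for (int i = 0; i < nrunning; i++)
    {
        if (xid_has_ended(running_at_summary[i], ended_since, nended))
            continue;                          /* no longer running */
        if (!found || running_at_summary[i] < xmin)
        {
            xmin = running_at_summary[i];
            found = true;
        }
    }
    return xmin;
}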
-----Original Message-----
From: Robert Haas [mailto:robertmh...@gmail.com]
Sent: Monday, September 12, 2011 7:39 PM
To: Amit Kapila
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux
On Sun, Sep 11, 2011 at 11:08 PM, Amit Kapila amit.kap...@huawei.com
wrote:
In the approach mentioned
-----Original Message-----
From: Robert Haas [mailto:robertmh...@gmail.com]
Sent: Thursday, September 08, 2011 7:50 PM
To: Amit Kapila
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux
On Tue, Sep 6, 2011 at 11:06 PM, Amit Kapila amit.kap...@huawei.com wrote:
4. Won't it affect things if we don't update xmin every time and just note the
committed XIDs? The reason I am asking is that it is used in the tuple
visibility check, so with the new idea, in some cases instead of just returning
from the beginning by checking xmin it has to go through the committed XID
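A compact illustration of the trade-off behind this question (my own sketch, with invented names and assuming a list of committed XIDs kept since the last summary): with an up-to-date xmin the visibility check can often return after one comparison, whereas with a stale xmin the same XIDs fall into the range that has to be searched in the committed-XID list.

/* Illustrative only: contrast of the xmin fast path with a committed-XID
 * scan.  Not PostgreSQL code; wraparound and locking are ignored. */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Fast path: a freshly maintained xmin answers "definitely finished" in O(1). */
static bool
definitely_finished_fast(TransactionId xid, TransactionId xmin)
{
    return xid < xmin;
}

/* Slow path: with a stale xmin, the same question may require scanning the
 * XIDs recorded as committed since the last snapshot summary was written. */
static bool
definitely_finished_slow(TransactionId xid, TransactionId stale_xmin,
                         const TransactionId *committed_since, int ncommitted)
{
    if (xid < stale_xmin)
        return true;                     /* still cheap when it applies */
    for (int i = 0; i < ncommitted; i++) /* O(n) otherwise */
        if (committed_since[i] == xid)
            return true;
    return false;
}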
On Tue, Sep 6, 2011 at 11:06 PM, Amit Kapila amit.kap...@huawei.com wrote:
1. With the above, you want to reduce/remove the concurrency issue between
GetSnapshotData() [used at the beginning of SQL command execution] and
ProcArrayEndTransaction() [used at end of transaction]. The concurrency issue
Sent: Sunday, August 28, 2011 7:17 AM
To: Gokulakannan Somasundaram
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] cheaper snapshots redux
On Sat, Aug 27, 2011 at 1:38 AM, Gokulakannan Somasundaram
gokul...@gmail.com wrote:
First, I respectfully disagree with you on the point of 80MB. I would say
No, I don't think it will all be in memory - but that's part of the
performance calculation. If you need to check on the status of an XID
and find that you need to read a page of data in from disk, that's
going to be many orders of magnitude slower than anything we do with a
snapshot now.
On Sun, Aug 28, 2011 at 4:33 AM, Gokulakannan Somasundaram
gokul...@gmail.com wrote:
No, I don't think it will all be in memory - but that's part of the
performance calculation. If you need to check on the status of an XID
and find that you need to read a page of data in from disk, that's
On Sat, Aug 27, 2011 at 1:38 AM, Gokulakannan Somasundaram
gokul...@gmail.com wrote:
First, I respectfully disagree with you on the point of 80MB. I would say
that it's very rare that a small system (with 1 GB RAM) might have a
long-running transaction sitting idle, while 10 million transactions
On Thu, Aug 25, 2011 at 6:24 PM, Jim Nasby j...@nasby.net wrote:
On Aug 25, 2011, at 8:24 AM, Robert Haas wrote:
My hope (and it might turn out that I'm an optimist) is that even with
a reasonably small buffer it will be very rare for a backend to
experience a wraparound condition. For
On Thu, Aug 25, 2011 at 6:29 PM, Jim Nasby j...@nasby.net wrote:
Actually, I wasn't thinking about the system dynamically sizing shared memory
on its own... I was only thinking of providing the ability for a user to
change something like shared_buffers and allow that change to take effect
On Tue, Aug 23, 2011 at 5:25 AM, Robert Haas robertmh...@gmail.com wrote:
I've been giving this quite a bit more thought, and have decided to
abandon the scheme described above, at least for now. It has the
advantage of avoiding virtually all locking, but it's extremely
inefficient in its
On Thu, Aug 25, 2011 at 1:55 AM, Markus Wanner mar...@bluegap.ch wrote:
One difference with snapshots is that only the latest snapshot is of
any interest.
Theoretically, yes. But as far as I understood, you proposed the
backends copy that snapshot to local memory. And copying takes some
Robert,
On 08/25/2011 03:24 PM, Robert Haas wrote:
My hope (and it might turn out that I'm an optimist) is that even with
a reasonably small buffer it will be very rare for a backend to
experience a wraparound condition.
It certainly seems less likely than with the ring-buffer for imessages,
On Thu, Aug 25, 2011 at 10:19 AM, Markus Wanner mar...@bluegap.ch wrote:
Note, however, that for imessages, I've also had the policy in place
that a backend *must* consume its message before sending any. And that
I took great care for all receivers to consume their messages as early
as
Robert Haas robertmh...@gmail.com writes:
Well, one long-running transaction that only has a single XID is not
really a problem: the snapshot is still small. But one very old
transaction that also happens to have a large number of
subtransactions all of which have XIDs assigned might be a
Robert,
On 08/25/2011 04:48 PM, Robert Haas wrote:
What's a typical message size for imessages?
Most message types in Postgres-R are just a couple bytes in size.
Others, especially change sets, can be up to 8k.
However, I think you'll have an easier job guaranteeing that backends
consume their
Tom,
On 08/25/2011 04:59 PM, Tom Lane wrote:
That's a good point. If the ring buffer size creates a constraint on
the maximum number of sub-XIDs per transaction, you're going to need a
fallback path of some sort.
I think Robert envisions the same fallback path we already have:
On Thu, Aug 25, 2011 at 11:15 AM, Markus Wanner mar...@bluegap.ch wrote:
On 08/25/2011 04:59 PM, Tom Lane wrote:
That's a good point. If the ring buffer size creates a constraint on
the maximum number of sub-XIDs per transaction, you're going to need a
fallback path of some sort.
I think
On Aug 25, 2011, at 8:24 AM, Robert Haas wrote:
My hope (and it might turn out that I'm an optimist) is that even with
a reasonably small buffer it will be very rare for a backend to
experience a wraparound condition. For example, consider a buffer
with ~6500 entries, approximately 64 *
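For rough scale (my own arithmetic, with the multiplier in the quoted text cut off and assuming 4-byte XID-sized entries): a ring buffer of ~6500 entries would occupy about 6500 × 4 bytes ≈ 26 kB of shared memory, i.e. small relative to typical shared_buffers settings.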
On Aug 22, 2011, at 6:22 PM, Robert Haas wrote:
With respect to a general-purpose shared memory allocator, I think
that there are cases where that would be useful to have, but I don't
think there are as many of them as many people seem to think. I
wouldn't choose to implement this using a
Hello Dimitri,
On 08/23/2011 06:39 PM, Dimitri Fontaine wrote:
I'm far from familiar with the detailed concepts here, but allow me to
comment. I have two open questions:
- is it possible to use a distributed algorithm to produce XIDs,
something like Vector Clocks?
Then each
Robert, Jim,
thanks for thinking out loud about dynamic allocation of shared memory.
Very much appreciated.
On 08/23/2011 01:22 AM, Robert Haas wrote:
With respect to a general-purpose shared memory allocator, I think
that there are cases where that would be useful to have, but I don't
think
On Wed, Aug 24, 2011 at 4:30 AM, Markus Wanner mar...@bluegap.ch wrote:
I'm in respectful disagreement regarding the ring-buffer approach and
think that dynamic allocation can actually be more efficient if done
properly, because there don't need to be head and tail pointers, which
might turn
Robert,
On 08/25/2011 04:59 AM, Robert Haas wrote:
True; although there are some other complications. With a
sufficiently sophisticated allocator you can avoid mutex contention
when allocating chunks, but then you have to store a pointer to the
chunk somewhere or other, and that then
On Mon, Aug 22, 2011 at 10:25 PM, Robert Haas robertmh...@gmail.com wrote:
I've been giving this quite a bit more thought, and have decided to
abandon the scheme described above, at least for now.
I liked your goal of O(1) snapshots and think you should go for that.
I didn't realise you were
Robert Haas robertmh...@gmail.com writes:
With respect to the first problem, what I'm imagining is that we not
do a complete rewrite of the snapshot in shared memory on every
commit. Instead, when a transaction ends, we'll decide whether to (a)
write a new snapshot or (b) just record the XIDs
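A hedged sketch of that commit-time decision, with an invented threshold, invented structure names, and none of the locking the real design would need: append the ending XIDs after the most recent snapshot summary while the incremental list stays small, and rewrite a full summary once it would grow past the threshold.

/* Illustration only, not the actual proposal. */
#include <stdint.h>

typedef uint32_t TransactionId;

#define XIDS_PER_SUMMARY_LIMIT 32      /* illustrative threshold */

typedef struct
{
    TransactionId summary_xids[1024];  /* XIDs running at the last summary */
    int           summary_count;
    TransactionId ended_xids[XIDS_PER_SUMMARY_LIMIT];
    int           ended_count;
} SnapshotSummary;

static void
record_transaction_end(SnapshotSummary *s,
                       const TransactionId *ending, int nending,
                       void (*write_new_summary)(SnapshotSummary *))
{
    if (s->ended_count + nending > XIDS_PER_SUMMARY_LIMIT)
    {
        /* (a) too much incremental state: write a fresh summary (assumed to
         * already reflect the transactions ending now) and reset the list */
        write_new_summary(s);
        s->ended_count = 0;
    }
    else
    {
        /* (b) cheap case: just record the XIDs that ended */
        for (int i = 0; i < nending; i++)
            s->ended_xids[s->ended_count++] = ending[i];
    }
}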
On Tue, Aug 23, 2011 at 12:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm a bit concerned that this approach is trying to optimize the heavy
contention situation at the cost of actually making things worse anytime
that you're not bottlenecked by contention for access to this shared
data
Robert Haas robertmh...@gmail.com writes:
I think the real trick is figuring out a design that can improve
concurrency.
I'm far from familiar with the detailed concepts here, but allow me to
comment. I have two open questions:
- is it possible to use a distributed algorithm to produce XIDs,
Robert Haas robertmh...@gmail.com writes:
That's certainly a fair concern, and it might even be worse than
O(n^2). On the other hand, the current approach involves scanning the
entire ProcArray for every snapshot, even if nothing has changed and
90% of the backends are sitting around playing
On Aug 22, 2011, at 4:25 PM, Robert Haas wrote:
What I'm thinking about
instead is using a ring buffer with three pointers: a start pointer, a
stop pointer, and a write pointer. When a transaction ends, we
advance the write pointer, write the XIDs or a whole new snapshot into
the buffer, and
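One plausible reading of the three-pointer ring buffer, sketched below with invented names; this is an illustration of the idea only (my interpretation, not necessarily the intended semantics), and it ignores the locking, memory-barrier, and wraparound handling the real design would have to deal with.

#include <stdint.h>

typedef uint32_t TransactionId;

#define RING_ENTRIES 6500        /* size of the kind mentioned in the thread */

typedef struct
{
    uint64_t      start;         /* oldest entry still considered valid      */
    uint64_t      stop;          /* end of the data readers are allowed to see */
    uint64_t      write;         /* next slot a writer will fill             */
    TransactionId entries[RING_ENTRIES];
} XidRing;

/* A committing backend copies its XIDs (or a whole new snapshot) into the
 * buffer, advances the write pointer past them, and publishes them to
 * readers by moving the stop pointer. */
static void
ring_append(XidRing *ring, const TransactionId *xids, int n)
{
    for (int i = 0; i < n; i++)
        ring->entries[(ring->write + i) % RING_ENTRIES] = xids[i];
    ring->write += n;            /* advance write pointer past the new data */
    ring->stop = ring->write;    /* make the new data visible to readers    */
    if (ring->write - ring->start > RING_ENTRIES)
        ring->start = ring->write - RING_ENTRIES;   /* reclaim old entries  */
}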
On Mon, Aug 22, 2011 at 6:45 PM, Jim Nasby j...@nasby.net wrote:
Something that would be really nice to fix is our reliance on a fixed size of
shared memory, and I'm wondering if this could be an opportunity to start in
a new direction. My thought is that we could maintain two distinct shared