On Wed, May 14, 2014 at 8:46 PM, Jeff Janes jeff.ja...@gmail.com wrote:
+1. I can't think of many things we might do that would be more
important.
Can anyone guess how likely this approach is to make it into 9.5? I've been
pondering some incremental improvements over what we have now, but
On Monday, January 27, 2014, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jan 27, 2014 at 4:16 PM, Simon Riggs
si...@2ndquadrant.com
wrote:
On 26 January 2014 12:58, Andres Freund
and...@2ndquadrant.com
wrote:
On 2014-01-25 20:26:16 -0800, Peter Geoghegan
On Wed, May 14, 2014 at 05:46:49PM -0700, Jeff Janes wrote:
On Monday, January 27, 2014, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jan 27, 2014 at 4:16 PM, Simon Riggs
si...@2ndquadrant.com
wrote:
On 26 January 2014 12:58, Andres Freund
On 26 January 2014 12:58, Andres Freund and...@2ndquadrant.com wrote:
On 2014-01-25 20:26:16 -0800, Peter Geoghegan wrote:
Shouldn't this patch be in the January commitfest?
I think we previously concluded that there wasn't much chance to get
this into 9.4 and there's significant work to be
On Mon, Jan 27, 2014 at 4:16 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 26 January 2014 12:58, Andres Freund and...@2ndquadrant.com wrote:
On 2014-01-25 20:26:16 -0800, Peter Geoghegan wrote:
Shouldn't this patch be in the January commitfest?
I think we previously concluded that there
On 2014-01-25 20:26:16 -0800, Peter Geoghegan wrote:
Shouldn't this patch be in the January commitfest?
I think we previously concluded that there wasn't much chance to get
this into 9.4 and there's significant work to be done on the patch
before new reviews are required, so not submitting it
Shouldn't this patch be in the January commitfest?
--
Peter Geoghegan
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Wed, 2013-09-25 at 12:31 +0300, Heikki Linnakangas wrote:
On 19.09.2013 16:24, Andres Freund wrote:
...
There's probably more.
I think _bt_check_unique is also a problem.
Hmm, some of those are trivial, but others, like rewrite_heap_tuple(), are
currently only passed the HeapTuple, with no
On 2013-09-25 12:31:20 +0300, Heikki Linnakangas wrote:
Hmm, some of those are trivial, but others, like rewrite_heap_tuple(), are
currently only passed the HeapTuple, with no reference to the page where the
tuple came from. Instead of changing code to always pass that along with a
tuple, I think we
On 2013-10-01 04:47:42 +0300, Ants Aasma wrote:
I still think we should have a macro for the volatile memory accesses.
As a rule, each one of those needs a memory barrier, and if we
consolidate them, or optimize them out, the considerations why this is
safe should be explained in a comment.
On Tue, Oct 1, 2013 at 2:13 PM, Andres Freund and...@2ndquadrant.com wrote:
Agreed. The wait free LW_SHARED thing[1] I posted recently had a simple
#define pg_atomic_read(atomic) (*(volatile uint32 *)(atomic))
That should be sufficient and easily greppable, right?
Looks good enough for me. I
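For reference, the one-line macro quoted above is self-contained enough to try out. Here is a compilable sketch, with stdint's uint32_t standing in for PostgreSQL's uint32 typedef and an illustrative caller:

```c
#include <assert.h>
#include <stdint.h>

/* The macro Andres posted: a single volatile load. The volatile cast forces
 * a real memory read on every use, so the compiler cannot cache the value
 * in a register or hoist the load out of a polling loop, and the name is
 * trivially greppable. */
#define pg_atomic_read(atomic) (*(volatile uint32_t *)(atomic))

static uint32_t shared_flag = 0;

/* Illustrative reader of a shared-memory flag; without the volatile read
 * the compiler could legally reuse a stale cached value. */
static uint32_t
read_shared_flag(void)
{
    return pg_atomic_read(&shared_flag);
}
```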
Just found this in my drafts folder. Sorry for the late response.
On Fri, Sep 20, 2013 at 3:32 PM, Robert Haas robertmh...@gmail.com wrote:
I am entirely unconvinced that we need this. Some people use acquire
+ release fences, some people use read + write fences, and there are
all
On 18.09.2013 22:55, Jeff Janes wrote:
On Mon, Sep 16, 2013 at 6:59 AM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
Here's a rebased version of the patch, including the above-mentioned
fixes. Nothing else new.
I've applied this to 0892ecbc015930d, the last commit to which it applies
On 9/25/13 5:31 AM, Heikki Linnakangas wrote:
Attached is a new version, which adds that field to HeapTupleData. Most
of the issues you listed above have been fixed, plus a bunch of other
bugs I found myself. The bug that Jeff ran into with his count.pl script
has also been fixed.
This
On 25.09.2013 15:48, Peter Eisentraut wrote:
On 9/25/13 5:31 AM, Heikki Linnakangas wrote:
Attached is a new version, which adds that field to HeapTupleData. Most
of the issues you listed above have been fixed, plus a bunch of other
bugs I found myself. The bug that Jeff ran into with his
On Fri, Sep 20, 2013 at 11:11 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-09-20 16:47:24 +0200, Andres Freund wrote:
I think we should go through the various implementations and make sure
they are actual compiler barriers and then change the documented policy.
From a quick look
On 2013-09-23 11:50:05 -0400, Robert Haas wrote:
On Fri, Sep 20, 2013 at 11:11 AM, Andres Freund and...@2ndquadrant.com
wrote:
On 2013-09-20 16:47:24 +0200, Andres Freund wrote:
I think we should go through the various implementations and make sure
they are actual compiler barriers and
Just some notes, before I forget them again.
On 2013-09-23 11:50:05 -0400, Robert Haas wrote:
On Fri, Sep 20, 2013 at 11:11 AM, Andres Freund and...@2ndquadrant.com
wrote:
On 2013-09-20 16:47:24 +0200, Andres Freund wrote:
I think we should go through the various implementations and make
On Thu, Sep 19, 2013 at 6:27 PM, Ants Aasma a...@cybertec.at wrote:
I'm tackling similar issues in my patch. What I'm thinking is that we should
change our existing supposedly atomic accesses to be more explicit and make
the API compatible with C11 atomics[1]. For now I think the changes should
Hi,
I agree with most of what you said - I think that's a little bit too much
change for too little benefit.
On 2013-09-20 08:32:29 -0400, Robert Haas wrote:
Personally, I think the biggest change that would help here is to
mandate that spinlock operations serve as compiler fences. That would
On Fri, Sep 20, 2013 at 8:40 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-09-20 08:32:29 -0400, Robert Haas wrote:
Personally, I think the biggest change that would help here is to
mandate that spinlock operations serve as compiler fences. That would
eliminate the need for scads of
On 2013-09-20 08:54:26 -0400, Robert Haas wrote:
On Fri, Sep 20, 2013 at 8:40 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-09-20 08:32:29 -0400, Robert Haas wrote:
Personally, I think the biggest change that would help here is to
mandate that spinlock operations serve as compiler
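A minimal sketch of what "spinlock operations serve as compiler fences" could look like, using GCC builtins. The names and shape are illustrative, not PostgreSQL's actual s_lock.h; the empty asm with a memory clobber is the standard GCC idiom for a pure compiler barrier:

```c
#include <assert.h>

/* Tell the compiler not to reorder or cache memory accesses across this
 * point; generates no instructions. */
#define COMPILER_BARRIER() __asm__ __volatile__("" ::: "memory")

typedef volatile int slock_t;

static void
s_lock(slock_t *lock)
{
    /* GCC builtin: atomic test-and-set with acquire semantics. */
    while (__sync_lock_test_and_set(lock, 1))
        ;                       /* spin */
    COMPILER_BARRIER();         /* protected accesses can't move above the lock */
}

static void
s_unlock(slock_t *lock)
{
    COMPILER_BARRIER();         /* protected accesses can't move below the unlock */
    __sync_lock_release(lock);  /* atomic store of 0 with release semantics */
}
```

With this mandate, code between s_lock() and s_unlock() no longer needs scads of volatile-qualified pointers just to keep the compiler honest.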
Andres Freund wrote:
Hi,
On 2013-09-19 14:42:19 +0300, Heikki Linnakangas wrote:
On 18.09.2013 16:22, Andres Freund wrote:
* Why can we do a GetOldestXmin(allDbs = false) in
BeginXidLSNRangeSwitch()?
Looks like a bug. I think I got the arguments backwards, was supposed to be
On 2013-09-20 16:47:24 +0200, Andres Freund wrote:
I think we should go through the various implementations and make sure
they are actual compiler barriers and then change the documented policy.
From a quick look
* S_UNLOCK for PPC isn't a compiler barrier
* S_UNLOCK for MIPS isn't a compiler
Hi
On Fri, Sep 20, 2013 at 5:11 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-09-20 16:47:24 +0200, Andres Freund wrote:
I think we should go through the various implementations and make sure
they are actual compiler barriers and then change the documented policy.
From a quick
Hi,
IMO it's a bug if S_UNLOCK is not a compiler barrier.
Moreover for volatile remember:
https://www.securecoding.cert.org/confluence/display/seccode/DCL17-C.+Beware+of+miscompiled+volatile-qualified+variables
Who is double checking compiler output? :)
regards
Didier
On Fri, Sep 20, 2013
On 18.09.2013 16:22, Andres Freund wrote:
On 2013-09-16 16:59:28 +0300, Heikki Linnakangas wrote:
Here's a rebased version of the patch, including the above-mentioned fixes.
Nothing else new.
* We need some higher-level description of the algorithm somewhere in the
source. I don't think
Hi,
On 2013-09-19 14:42:19 +0300, Heikki Linnakangas wrote:
On 18.09.2013 16:22, Andres Freund wrote:
* Why can we do a GetOldestXmin(allDbs = false) in
BeginXidLSNRangeSwitch()?
Looks like a bug. I think I got the arguments backwards, was supposed to be
allDbs = true and ignoreVacuum =
On 2013-09-19 14:40:35 +0200, Andres Freund wrote:
* I think heap_lock_tuple() needs to unset all-visible, otherwise we
won't vacuum that page again which can lead to problems since we
don't do full-table vacuums again?
It's OK if the page is never vacuumed again, the whole point
On Thu, Sep 19, 2013 at 2:42 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
* switchFinishXmin and nextSwitchXid should probably be either volatile
or have a compiler barrier between accessing shared memory and
checking them. The compiler very well could optimize them away and
On 2013-09-16 16:59:28 +0300, Heikki Linnakangas wrote:
Here's a rebased version of the patch, including the above-mentioned fixes.
Nothing else new.
* We need some higher-level description of the algorithm somewhere in the
source. I don't think I've understood the concept from the patch alone
On Mon, Sep 16, 2013 at 6:59 AM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
Here's a rebased version of the patch, including the above-mentioned
fixes. Nothing else new.
I've applied this to 0892ecbc015930d, the last commit to which it applies
cleanly.
When I test this by repeatedly
On 9/16/13 9:59 AM, Heikki Linnakangas wrote:
Here's a rebased version of the patch, including the above-mentioned
fixes. Nothing else new.
It still fails to apply. You might need to rebase it again.
On 2013-09-17 09:41:47 -0400, Peter Eisentraut wrote:
On 9/16/13 9:59 AM, Heikki Linnakangas wrote:
Here's a rebased version of the patch, including the above-mentioned
fixes. Nothing else new.
It still fails to apply. You might need to rebase it again.
FWIW, I don't think it's too
Heikki Linnakangas wrote:
Here's a rebased version of the patch, including the above-mentioned
fixes. Nothing else new.
Nice work. I apologize for the conflicts I created yesterday.
I would suggest renaming varsup_internal.h to varsup_xlog.h.
You added a FIXME comment to
On 27.08.2013 19:37, Heikki Linnakangas wrote:
On 27.08.2013 18:56, Heikki Linnakangas wrote:
Here's an updated patch.
Ah, forgot one thing:
Here's a little extension I've been using to test this. It contains two
functions; one to simply consume N xids, making it faster to hit the
creation
On Mon, Sep 2, 2013 at 3:16 PM, Jeff Davis pg...@j-davis.com wrote:
On Fri, 2013-08-30 at 20:34 +0200, Andres Freund wrote:
I have a quick question: The reason I'd asked about the status of the
patch was that I was thinking about the state of the forensic freezing
patch. After a quick look at
On Fri, 2013-08-30 at 20:34 +0200, Andres Freund wrote:
I have a quick question: The reason I'd asked about the status of the
patch was that I was thinking about the state of the forensic freezing
patch. After a quick look at your proposal, we still need to freeze in
some situations (old new
Hi Heikki,
On 2013-08-27 18:56:15 +0300, Heikki Linnakangas wrote:
Here's an updated patch. The race conditions I mentioned above have been
fixed.
Thanks for posting the new version!
I have a quick question: The reason I'd asked about the status of the
patch was that I was thinking about the
On 10.06.2013 21:58, Heikki Linnakangas wrote:
On 01.06.2013 23:21, Robert Haas wrote:
On Sat, Jun 1, 2013 at 2:48 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
We define a new page-level bit, something like PD_RECENTLY_FROZEN.
When this bit is set, it means there are no unfrozen
On 27.08.2013 18:56, Heikki Linnakangas wrote:
Here's an updated patch.
Ah, forgot one thing:
Here's a little extension I've been using to test this. It contains two
functions; one to simply consume N xids, making it faster to hit the
creation of new XID ranges and wraparound. The other,
On 01.06.2013 23:21, Robert Haas wrote:
On Sat, Jun 1, 2013 at 2:48 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
We define a new page-level bit, something like PD_RECENTLY_FROZEN.
When this bit is set, it means there are no unfrozen tuples on the
page with XIDs that predate the
On 10 June 2013 19:58, Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 01.06.2013 23:21, Robert Haas wrote:
On Sat, Jun 1, 2013 at 2:48 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
We define a new page-level bit, something like PD_RECENTLY_FROZEN.
When this bit is set, it
On Mon, Jun 10, 2013 at 4:48 PM, Simon Riggs si...@2ndquadrant.com wrote:
Well done, looks like good progress.
+1.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Thu, Jun 6, 2013 at 6:28 PM, Greg Stark st...@mit.edu wrote:
On Thu, Jun 6, 2013 at 1:39 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
That will keep OldestXmin from advancing. Which will keep vacuum from
advancing relfrozenxid/datfrozenxid. Which will first trigger the warnings
On 07.06.2013 20:54, Robert Haas wrote:
On Thu, Jun 6, 2013 at 6:28 PM, Greg Stark st...@mit.edu wrote:
On Thu, Jun 6, 2013 at 1:39 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
That will keep OldestXmin from advancing. Which will keep vacuum from
advancing relfrozenxid/datfrozenxid.
On 7 June 2013 19:08, Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 07.06.2013 20:54, Robert Haas wrote:
On Thu, Jun 6, 2013 at 6:28 PM, Greg Stark st...@mit.edu wrote:
On Thu, Jun 6, 2013 at 1:39 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
That will keep OldestXmin from
On 07.06.2013 21:33, Simon Riggs wrote:
Now that I consider Greg's line of thought, the idea we focused on
here was about avoiding freezing. But Greg makes me think that we may
also wish to look at allowing queries to run longer than one epoch as
well, if the epoch wrap time is likely to come
On 7 June 2013 19:56, Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 07.06.2013 21:33, Simon Riggs wrote:
Now that I consider Greg's line of thought, the idea we focused on
here was about avoiding freezing. But Greg makes me think that we may
also wish to look at allowing queries to run
On Fri, Jun 7, 2013 at 3:10 PM, Simon Riggs si...@2ndquadrant.com wrote:
The long running query problem hasn't ever been looked at, it seems,
until here and now.
For what it's worth (and that may not be much), I think most people
will die a horrible death due to bloat after holding a
On 2013-06-07 20:10:55 +0100, Simon Riggs wrote:
On 7 June 2013 19:56, Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 07.06.2013 21:33, Simon Riggs wrote:
Now that I consider Greg's line of thought, the idea we focused on
here was about avoiding freezing. But Greg makes me think
On 7 June 2013 20:16, Andres Freund and...@2ndquadrant.com wrote:
On 2013-06-07 20:10:55 +0100, Simon Riggs wrote:
On 7 June 2013 19:56, Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 07.06.2013 21:33, Simon Riggs wrote:
Now that I consider Greg's line of thought, the idea we focused
On 07.06.2013 22:15, Robert Haas wrote:
On Fri, Jun 7, 2013 at 3:10 PM, Simon Riggs si...@2ndquadrant.com wrote:
The long running query problem hasn't ever been looked at, it seems,
until here and now.
For what it's worth (and that may not be much), I think most people
will die a horrible
On 06/07/2013 08:56 PM, Heikki Linnakangas wrote:
On 07.06.2013 21:33, Simon Riggs wrote:
Now that I consider Greg's line of thought, the idea we focused on
here was about avoiding freezing. But Greg makes me think that we may
also wish to look at allowing queries to run longer than one epoch
On 06/07/2013 09:16 PM, Andres Freund wrote:
On 2013-06-07 20:10:55 +0100, Simon Riggs wrote:
On 7 June 2013 19:56, Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 07.06.2013 21:33, Simon Riggs wrote:
Now that I consider Greg's line of thought, the idea we focused on
here was about
On Fri, May 31, 2013 at 3:04 AM, Robert Haas robertmh...@gmail.com wrote:
Even at a more modest 10,000 tps, with default
settings, you'll do anti-wraparound vacuums of the entire cluster
about every 8 hours. That's not fun.
I've forgotten now. What happens if you have a long-lived transaction
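The back-of-the-envelope arithmetic behind estimates like the "every 8 hours" above is just the freeze threshold divided by the transaction rate. A sketch with the threshold left as a parameter (the exact figure depends on which freeze-age setting is assumed; the default autovacuum_freeze_max_age of 200 million at 10,000 tps works out to roughly 5.5 hours, the same order of magnitude as the quoted number):

```c
#include <assert.h>

/* An anti-wraparound vacuum is forced once relfrozenxid falls
 * freeze_max_age XIDs behind; at a constant rate of tps XIDs consumed
 * per second, that age is reached after freeze_max_age / tps seconds. */
static double
hours_between_wraparound_vacuums(double freeze_max_age, double tps)
{
    return freeze_max_age / tps / 3600.0;
}
```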
On 06.06.2013 15:16, Greg Stark wrote:
On Fri, May 31, 2013 at 3:04 AM, Robert Haas robertmh...@gmail.com wrote:
Even at a more modest 10,000 tps, with default
settings, you'll do anti-wraparound vacuums of the entire cluster
about every 8 hours. That's not fun.
I've forgotten now. What
On Thu, Jun 6, 2013 at 1:39 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
That will keep OldestXmin from advancing. Which will keep vacuum from
advancing relfrozenxid/datfrozenxid. Which will first trigger the warnings
about wrap-around, then stops new XIDs from being generated, and
On 30 May 2013 14:33, Heikki Linnakangas hlinnakan...@vmware.com wrote:
Since we're bashing around ideas around freezing, let me write down the idea
I've been pondering and discussing with various people for years. I don't
think I invented this myself, apologies to whoever did for not giving
On 30 May 2013 19:39, Robert Haas robertmh...@gmail.com wrote:
On Thu, May 30, 2013 at 9:33 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
The reason we have to freeze is that otherwise our 32-bit XIDs wrap around
and become ambiguous. The obvious solution is to extend XIDs to 64 bits,
On 31.05.2013 06:02, Robert Haas wrote:
On Thu, May 30, 2013 at 2:39 PM, Robert Haas robertmh...@gmail.com wrote:
Random thought: Could you compute the reference XID based on the page
LSN? That would eliminate the storage overhead.
After mulling this over a bit, I think this is definitely
On 1 June 2013 19:48, Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 31.05.2013 06:02, Robert Haas wrote:
On Thu, May 30, 2013 at 2:39 PM, Robert Haas robertmh...@gmail.com
wrote:
Random thought: Could you compute the reference XID based on the page
LSN? That would eliminate the
On Sat, Jun 1, 2013 at 2:48 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
We define a new page-level bit, something like PD_RECENTLY_FROZEN.
When this bit is set, it means there are no unfrozen tuples on the
page with XIDs that predate the current half-epoch. Whenever we know
this to
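The half-epoch scheme discussed in this sub-thread keeps the 32-bit xids in tuple headers and reconstructs the full 64-bit XID from a per-page 64-bit reference. A sketch of that reconstruction (names are hypothetical; the invariant that all xids on a page lie within one half-epoch of the reference is exactly what forces the FrozenXid replacement mentioned elsewhere in the thread):

```c
#include <assert.h>
#include <stdint.h>

/* A new half-epoch begins every 2^31 XIDs. */
#define HALF_EPOCH_BITS 31
#define HALF_EPOCH_SIZE ((uint64_t) 1 << HALF_EPOCH_BITS)

/* Reconstruct the full 64-bit XID of a tuple, given the page's 64-bit
 * reference XID (the start of its half-epoch) and the 32-bit xid stored
 * in the tuple header. Since every xid on the page is within 2^31 of the
 * reference, the modulo-2^32 distance from the reference disambiguates. */
static uint64_t
full_xid(uint64_t reference, uint32_t tuple_xid)
{
    uint32_t offset = tuple_xid - (uint32_t) reference; /* mod-2^32 distance */

    assert(offset < HALF_EPOCH_SIZE);   /* page invariant */
    return reference + offset;
}
```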
On Sat, Jun 1, 2013 at 3:22 PM, Simon Riggs si...@2ndquadrant.com wrote:
If we set a bit, surely we need to write the page. Isn't that what we
were trying to avoid?
No, the bit only gets set in situations when we were going to dirty
the page for some other reason anyway. Specifically, if a
On 1 June 2013 21:26, Robert Haas robertmh...@gmail.com wrote:
On Sat, Jun 1, 2013 at 3:22 PM, Simon Riggs si...@2ndquadrant.com wrote:
If we set a bit, surely we need to write the page. Isn't that what we
were trying to avoid?
No, the bit only gets set in situations when we were going to
On 31.05.2013 00:06, Bruce Momjian wrote:
On Thu, May 30, 2013 at 04:33:50PM +0300, Heikki Linnakangas wrote:
This would also be the first step in allowing the clog to grow
larger than 2 billion transactions, eliminating the need for
anti-wraparound freezing altogether. You'd still want to
On Thu, May 30, 2013 at 10:04:23PM -0400, Robert Haas wrote:
Hm. Why? If freezing gets notably cheaper I don't really see much need
for keeping that much clog around? If we still run into anti-wraparound
areas, there has to be some major operational problem.
That is true, but we have a
On Fri, May 31, 2013 at 1:26 PM, Bruce Momjian br...@momjian.us wrote:
On Thu, May 30, 2013 at 10:04:23PM -0400, Robert Haas wrote:
Hm. Why? If freezing gets notably cheaper I don't really see much need
for keeping that much clog around? If we still run into anti-wraparound
areas, there has
Heikki,
This sounds a lot like my idea for 9.3, which didn't go anywhere.
You've worked out the issues I couldn't, I think.
Another method is
to store the 32-bit xid values in tuple headers as offsets from the
per-page 64-bit value, but then you'd always need to have the 64-bit
value at hand
On Thu, May 30, 2013 at 9:33 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
The reason we have to freeze is that otherwise our 32-bit XIDs wrap around
and become ambiguous. The obvious solution is to extend XIDs to 64 bits, but
that would waste a lot space. The trick is to add a field to
On Thu, May 30, 2013 at 1:39 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, May 30, 2013 at 9:33 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
The reason we have to freeze is that otherwise our 32-bit XIDs wrap around
and become ambiguous. The obvious solution is to extend XIDs
On 30.05.2013 21:46, Merlin Moncure wrote:
On Thu, May 30, 2013 at 1:39 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, May 30, 2013 at 9:33 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
The reason we have to freeze is that otherwise our 32-bit XIDs wrap around
and become
On 2013-05-30 14:39:46 -0400, Robert Haas wrote:
Since we're not storing 64-bit wide XIDs on every tuple, we'd still need to
replace the XIDs with FrozenXid whenever the difference between the smallest
and largest XID on a page exceeds 2^31. But that would only happen when
you're updating
On Thu, May 30, 2013 at 04:33:50PM +0300, Heikki Linnakangas wrote:
This would also be the first step in allowing the clog to grow
larger than 2 billion transactions, eliminating the need for
anti-wraparound freezing altogether. You'd still want to truncate
the clog eventually, but it would be
On Thu, May 30, 2013 at 3:22 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-05-30 14:39:46 -0400, Robert Haas wrote:
Since we're not storing 64-bit wide XIDs on every tuple, we'd still need to
replace the XIDs with FrozenXid whenever the difference between the
smallest
and
On Thu, May 30, 2013 at 2:39 PM, Robert Haas robertmh...@gmail.com wrote:
Random thought: Could you compute the reference XID based on the page
LSN? That would eliminate the storage overhead.
After mulling this over a bit, I think this is definitely possible.
We begin a new half-epoch every 2
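Robert's "compute the reference XID from the page LSN" idea amounts to keeping a small map from the LSN at which each half-epoch began to the half-epoch number, and looking a page's LSN up in it. A simplified sketch (structure and names are illustrative, not the patch's actual data structures):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One entry per half-epoch switch: the first LSN written after the switch,
 * and the half-epoch number it began. Kept sorted by ascending start_lsn. */
typedef struct
{
    uint64_t start_lsn;
    uint64_t half_epoch;
} XidLsnRange;

/* Return the half-epoch that was current when page_lsn was written: the
 * last entry whose start_lsn does not exceed it. */
static uint64_t
half_epoch_for_lsn(const XidLsnRange *map, size_t n, uint64_t page_lsn)
{
    uint64_t result = map[0].half_epoch;
    size_t i;

    for (i = 1; i < n; i++)
    {
        if (map[i].start_lsn <= page_lsn)
            result = map[i].half_epoch;
        else
            break;
    }
    return result;
}
```

Because the map only grows at half-epoch switches (every 2^31 XIDs), it stays tiny, which is what lets the scheme avoid any per-tuple storage overhead.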