On Sat, May 7, 2016 at 5:34 AM, Robert Haas <[email protected]> wrote:
> On Mon, May 2, 2016 at 8:25 PM, Andres Freund <[email protected]> wrote:
>> + * heap_tuple_needs_eventual_freeze
>> + *
>> + * Check to see whether any of the XID fields of a tuple (xmin, xmax, xvac)
>> + * will eventually require freezing. Similar to heap_tuple_needs_freeze,
>> + * but there's no cutoff, since we're trying to figure out whether freezing
>> + * will ever be needed, not whether it's needed now.
>> + */
>> +bool
>> +heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
>>
>> Wouldn't redefining this to heap_tuple_is_frozen() and then inverting the
>> checks be easier to understand?
>
> I thought it much safer to keep this as close to a copy of
> heap_tuple_needs_freeze() as possible. Copying a function and
> inverting all of the return values is much more likely to introduce
> bugs, IME.
I agree.
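(For reference, the function is just a chain of early returns along the
following lines; this is a paraphrased, abbreviated sketch, not the exact
committed text. An inverted heap_tuple_is_frozen() would mean flipping every
one of these returns, which is exactly where a bug could slip in.)

bool
heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
{
	TransactionId xid;

	/* A normal xmin means the tuple will eventually need freezing. */
	xid = HeapTupleHeaderGetXmin(tuple);
	if (TransactionIdIsNormal(xid))
		return true;

	/* Likewise for a valid xmax, whether plain or multixact. */
	if (tuple->t_infomask & HEAP_XMAX_IS_MULTI)
	{
		MultiXactId multi = HeapTupleHeaderGetRawXmax(tuple);

		if (MultiXactIdIsValid(multi))
			return true;
	}
	else
	{
		xid = HeapTupleHeaderGetRawXmax(tuple);
		if (TransactionIdIsNormal(xid))
			return true;
	}

	/* Tuples moved by old-style VACUUM FULL carry an xvac as well. */
	if (tuple->t_infomask & HEAP_MOVED)
	{
		xid = HeapTupleHeaderGetXvac(tuple);
		if (TransactionIdIsNormal(xid))
			return true;
	}

	return false;
}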
>> + /*
>> + * If xmax is a valid xact or multixact, this tuple is also not
>> frozen.
>> + */
>> + if (tuple->t_infomask & HEAP_XMAX_IS_MULTI)
>> + {
>> + MultiXactId multi;
>> +
>> + multi = HeapTupleHeaderGetRawXmax(tuple);
>> + if (MultiXactIdIsValid(multi))
>> + return true;
>> + }
>>
>> Hm. What's the test inside the if() for? There shouldn't be any case
>> where xmax is invalid if HEAP_XMAX_IS_MULTI is set. Now there's a
>> check like that outside of this commit, but it seems strange to me
>> (Alvaro, perhaps you could comment on this?).
>
> Here again I was copying existing code, with appropriate simplifications.
>
>> + *
>> + * Clearing both visibility map bits is not separately WAL-logged. The
>> callers
>> * must make sure that whenever a bit is cleared, the bit is cleared on WAL
>> * replay of the updating operation as well.
>>
>> I think including "both" here makes things less clear, because it
>> differentiates clearing one bit from clearing both. There's no practical
>> difference atm, but still.
>
> I agree.
Fixed.
>> *
>> * VACUUM will normally skip pages for which the visibility map bit is set;
>> * such pages can't contain any dead tuples and therefore don't need
>> vacuuming.
>> - * The visibility map is not used for anti-wraparound vacuums, because
>> - * an anti-wraparound vacuum needs to freeze tuples and observe the latest
>> xid
>> - * present in the table, even on pages that don't have any dead tuples.
>> *
>>
>> I think the remaining sentence isn't entirely accurate: there's now more
>> than one bit, and they're different with regard to scan_all/!scan_all
>> vacuums (or will be - maybe this is updated further in a later commit? But
>> if so, that sentence shouldn't yet be removed...).
>
> We can adjust the language, but I don't really see a big problem here.
This comment is not addressed in this patch yet.
>> -/* Number of heap blocks we can represent in one byte. */
>> -#define HEAPBLOCKS_PER_BYTE 8
>> -
>> Hm, why was this moved to the header? Sounds like something the outside
>> shouldn't care about.
>
> Oh... yeah. Let's undo that.
Fixed.
>> #define HEAPBLK_TO_MAPBIT(x) (((x) % HEAPBLOCKS_PER_BYTE) *
>> BITS_PER_HEAPBLOCK)
>>
>> Hm. This isn't really a mapping to an individual bit anymore - but I
>> don't really have a better name in mind. Maybe TO_OFFSET?
>
> Well, it sorta is... but we could change it, I suppose.
>
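FWIW, the attached patch renames it to HEAPBLK_TO_OFFSET. As a concrete
example of what it computes: with BITS_PER_HEAPBLOCK = 2 there are 4 heap
blocks per map byte, so heap block 10 lands in map byte 10 / 4 = 2 of its
map page, at bit offset (10 % 4) * 2 = 4, i.e. the start of that block's
two-bit group rather than a single bit.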
>> +static const uint8 number_of_ones_for_visible[256] = {
>> ...
>> +};
>> +static const uint8 number_of_ones_for_frozen[256] = {
>> ...
>> };
>>
>> Did somebody verify the new contents are correct?
>
> I admit that I didn't. It seemed like an unlikely place for a goof,
> but I guess we should verify.
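If it helps, the two tables can be cross-checked mechanically. A throwaway
program along these lines (hypothetical, not part of the patch) regenerates
both counts from the 2-bits-per-heap-block layout, so its output can be
diffed against the arrays in visibilitymap.c:

#include <stdio.h>

#define BITS_PER_HEAPBLOCK			2
#define VISIBILITYMAP_ALL_VISIBLE	0x01
#define VISIBILITYMAP_ALL_FROZEN	0x02

int
main(void)
{
	int			b;

	for (b = 0; b < 256; b++)
	{
		int			off;
		int			visible = 0;
		int			frozen = 0;

		/* walk the four two-bit groups packed into this byte value */
		for (off = 0; off < 8; off += BITS_PER_HEAPBLOCK)
		{
			if ((b >> off) & VISIBILITYMAP_ALL_VISIBLE)
				visible++;
			if ((b >> off) & VISIBILITYMAP_ALL_FROZEN)
				frozen++;
		}
		printf("%3d: visible=%d frozen=%d\n", b, visible, frozen);
	}
	return 0;
}

As a quick sanity check, 0x55 should come out as 4 visible / 0 frozen, 0xAA
as 0 / 4, and 0xFF as 4 / 4.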
>> /*
>> - * visibilitymap_clear - clear a bit in visibility map
>> + * visibilitymap_clear - clear all bits in visibility map
>> *
>>
>> This seems rather easy to misunderstand, as this really only clears all
>> the bits for one page, not actually all the bits.
>
> We could change "in" to "for one page in the".
Fixed.
>> * the bit for heapBlk, or InvalidBuffer. The caller is responsible for
>> - * releasing *buf after it's done testing and setting bits.
>> + * releasing *buf after it's done testing and setting bits, and must pass
>> flags
>> + * for which it needs to check the value in visibility map.
>> *
>> * NOTE: This function is typically called without a lock on the heap page,
>> * so somebody else could change the bit just after we look at it. In fact,
>> @@ -327,17 +351,16 @@ visibilitymap_set(Relation rel, BlockNumber heapBlk,
>> Buffer heapBuf,
>>
>> I'm not seeing what flags the above comment change is referring to?
>
> Ugh. I think that's leftover cruft from an earlier patch version that
> should have been excised from what got committed.
Fixed.
>> /*
>> - * A single-bit read is atomic. There could be memory-ordering
>> effects
>> + * A single byte read is atomic. There could be memory-ordering
>> effects
>> * here, but for performance reasons we make it the caller's job to
>> worry
>> * about that.
>> */
>> - result = (map[mapByte] & (1 << mapBit)) ? true : false;
>> -
>> - return result;
>> + return ((map[mapByte] >> mapBit) & VISIBILITYMAP_VALID_BITS);
>> }
>>
>> Not a new issue, and *very* likely to be irrelevant in practice (given
>> the value is only referenced once): But there's really no guarantee
>> map[mapByte] is only read once here.
>
> Meh. But we can fix if you want to.
Fixed.
>> -BlockNumber
>> -visibilitymap_count(Relation rel)
>> +void
>> +visibilitymap_count(Relation rel, BlockNumber *all_visible, BlockNumber
>> *all_frozen)
>>
>> Not really a new issue again: The parameter types (previously return
>> type) to this function seem wrong to me.
>
> Not this patch's job to tinker.
This comment is not addressed in this patch yet.
>> @@ -1934,5 +1992,14 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
>> TransactionId *visibility_cut
>> }
>> + /*
>> + * We don't bother clearing *all_frozen when the page is discovered
>> not
>> + * to be all-visible, so do that now if necessary. The page might
>> fail
>> + * to be all-frozen for other reasons anyway, but if it's not
>> all-visible,
>> + * then it definitely isn't all-frozen.
>> + */
>> + if (!all_visible)
>> + *all_frozen = false;
>> +
>>
>> Why don't we just set *all_frozen to false when appropriate? It'd be
>> just as many lines and probably easier to understand?
>
> I thought that looked really easy to mess up, either now or down the
> road. This way seemed more solid to me. That's a judgement call, of
> course.
To make it easier to understand, I changed it that way.
>> + /*
>> + * If the page is marked as all-visible but not all-frozen,
>> we should
>> + * so mark it. Note that all_frozen is only valid if
>> all_visible is
>> + * true, so we must check both.
>> + */
>>
>> This kinda seems to imply that all-visible implies all_frozen. Also, why
>> has that block been added to the end of the if/else if chain? Seems like
>> it belongs below the (all_visible && !all_visible_according_to_vm) block.
>
> We can adjust the comment a bit to make it more clear, if you like,
> but I doubt it's going to cause serious misunderstanding. As for the
> placement, the reason I put it at the end is because I figured that we
> did not want to mark it all-frozen if any of the "oh crap, emit a
> warning" cases applied.
>
Fixed the comment.
I think that we should deal with the all-visible problems first, and then
with the all-frozen one, so this patch doesn't change the placement.
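For context, the end of that chain in lazy_scan_heap() looks roughly like
this (a heavily abbreviated paraphrase, with the bodies elided), with the
all-frozen-only case staying last, after the warning cases:

	if (!all_visible_according_to_vm && all_visible)
	{
		/* page newly all-visible: set the VM bit(s) */
	}
	else if (all_visible_according_to_vm && !PageIsAllVisible(page) &&
			 VM_ALL_VISIBLE(onerel, blkno, &vmbuffer))
	{
		/* VM bit set but page header not marked: warn and clear */
	}
	else if (PageIsAllVisible(page) && has_dead_tuples)
	{
		/* all-visible page containing dead tuples: warn and clear */
	}
	else if (all_visible_according_to_vm && all_visible && all_frozen &&
			 !VM_ALL_FROZEN(onerel, blkno, &vmbuffer))
	{
		/* already all-visible, now also all-frozen: set just that bit */
	}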
The attached patch fixes only the above comments; the others are being addressed now.
--
Regards,
--
Masahiko Sawada
diff --git a/src/backend/access/heap/visibilitymap.c b/src/backend/access/heap/visibilitymap.c
index eaab4be..05422f1 100644
--- a/src/backend/access/heap/visibilitymap.c
+++ b/src/backend/access/heap/visibilitymap.c
@@ -33,7 +33,7 @@
* is set, we know the condition is true, but if a bit is not set, it might or
* might not be true.
*
- * Clearing both visibility map bits is not separately WAL-logged. The callers
+ * Clearing visibility map bits is not separately WAL-logged. The callers
* must make sure that whenever a bit is cleared, the bit is cleared on WAL
* replay of the updating operation as well.
*
@@ -104,13 +104,16 @@
*/
#define MAPSIZE (BLCKSZ - MAXALIGN(SizeOfPageHeaderData))
+/* Number of heap blocks we can represent in one byte */
+#define HEAPBLOCKS_PER_BYTE (BITS_PER_BYTE / BITS_PER_HEAPBLOCK)
+
/* Number of heap blocks we can represent in one visibility map page. */
#define HEAPBLOCKS_PER_PAGE (MAPSIZE * HEAPBLOCKS_PER_BYTE)
/* Mapping from heap block number to the right bit in the visibility map */
#define HEAPBLK_TO_MAPBLOCK(x) ((x) / HEAPBLOCKS_PER_PAGE)
#define HEAPBLK_TO_MAPBYTE(x) (((x) % HEAPBLOCKS_PER_PAGE) / HEAPBLOCKS_PER_BYTE)
-#define HEAPBLK_TO_MAPBIT(x) (((x) % HEAPBLOCKS_PER_BYTE) * BITS_PER_HEAPBLOCK)
+#define HEAPBLK_TO_OFFSET(x) (((x) % HEAPBLOCKS_PER_BYTE) * BITS_PER_HEAPBLOCK)
/* tables for fast counting of set bits for visible and frozen */
static const uint8 number_of_ones_for_visible[256] = {
@@ -156,7 +159,7 @@ static void vm_extend(Relation rel, BlockNumber nvmblocks);
/*
- * visibilitymap_clear - clear all bits in visibility map
+ * visibilitymap_clear - clear all bits for one page in visibility map
*
* You must pass a buffer containing the correct map page to this function.
* Call visibilitymap_pin first to pin the right one. This function doesn't do
@@ -167,8 +170,8 @@ visibilitymap_clear(Relation rel, BlockNumber heapBlk, Buffer buf)
{
BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
int mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
- int mapBit = HEAPBLK_TO_MAPBIT(heapBlk);
- uint8 mask = VISIBILITYMAP_VALID_BITS << mapBit;
+ int mapOffset = HEAPBLK_TO_OFFSET(heapBlk);
+ uint8 mask = VISIBILITYMAP_VALID_BITS << mapOffset;
char *map;
#ifdef TRACE_VISIBILITYMAP
@@ -267,7 +270,7 @@ visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,
{
BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
uint32 mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
- uint8 mapBit = HEAPBLK_TO_MAPBIT(heapBlk);
+ uint8 mapOffset = HEAPBLK_TO_OFFSET(heapBlk);
Page page;
uint8 *map;
@@ -291,11 +294,11 @@ visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,
map = (uint8 *)PageGetContents(page);
LockBuffer(vmBuf, BUFFER_LOCK_EXCLUSIVE);
- if (flags != (map[mapByte] >> mapBit & VISIBILITYMAP_VALID_BITS))
+ if (flags != (map[mapByte] >> mapOffset & VISIBILITYMAP_VALID_BITS))
{
START_CRIT_SECTION();
- map[mapByte] |= (flags << mapBit);
+ map[mapByte] |= (flags << mapOffset);
MarkBufferDirty(vmBuf);
if (RelationNeedsWAL(rel))
@@ -338,8 +341,7 @@ visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,
* earlier call to visibilitymap_pin or visibilitymap_get_status on the same
* relation. On return, *buf is a valid buffer with the map page containing
* the bit for heapBlk, or InvalidBuffer. The caller is responsible for
- * releasing *buf after it's done testing and setting bits, and must pass flags
- * for which it needs to check the value in visibility map.
+ * releasing *buf after it's done testing and setting bits.
*
* NOTE: This function is typically called without a lock on the heap page,
* so somebody else could change the bit just after we look at it. In fact,
@@ -353,8 +355,9 @@ visibilitymap_get_status(Relation rel, BlockNumber heapBlk, Buffer *buf)
{
BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
uint32 mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
- uint8 mapBit = HEAPBLK_TO_MAPBIT(heapBlk);
+ uint8 mapOffset = HEAPBLK_TO_OFFSET(heapBlk);
char *map;
+ uint8 result;
#ifdef TRACE_VISIBILITYMAP
elog(DEBUG1, "vm_get_status %s %d", RelationGetRelationName(rel), heapBlk);
@@ -384,7 +387,8 @@ visibilitymap_get_status(Relation rel, BlockNumber heapBlk, Buffer *buf)
* here, but for performance reasons we make it the caller's job to worry
* about that.
*/
- return ((map[mapByte] >> mapBit) & VISIBILITYMAP_VALID_BITS);
+ result = ((map[mapByte] >> mapOffset) & VISIBILITYMAP_VALID_BITS);
+ return result;
}
/*
@@ -456,7 +460,7 @@ visibilitymap_truncate(Relation rel, BlockNumber nheapblocks)
/* last remaining block, byte, and bit */
BlockNumber truncBlock = HEAPBLK_TO_MAPBLOCK(nheapblocks);
uint32 truncByte = HEAPBLK_TO_MAPBYTE(nheapblocks);
- uint8 truncBit = HEAPBLK_TO_MAPBIT(nheapblocks);
+ uint8 truncOffset = HEAPBLK_TO_OFFSET(nheapblocks);
#ifdef TRACE_VISIBILITYMAP
elog(DEBUG1, "vm_truncate %s %d", RelationGetRelationName(rel), nheapblocks);
@@ -478,7 +482,7 @@ visibilitymap_truncate(Relation rel, BlockNumber nheapblocks)
* because we don't get a chance to clear the bits if the heap is extended
* again.
*/
- if (truncByte != 0 || truncBit != 0)
+ if (truncByte != 0 || truncOffset != 0)
{
Buffer mapBuffer;
Page page;
@@ -511,7 +515,7 @@ visibilitymap_truncate(Relation rel, BlockNumber nheapblocks)
* ((1 << 7) - 1) = 01111111
*----
*/
- map[truncByte] &= (1 << truncBit) - 1;
+ map[truncByte] &= (1 << truncOffset) - 1;
MarkBufferDirty(mapBuffer);
UnlockReleaseBuffer(mapBuffer);
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 426e756..e39321b 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1192,9 +1192,9 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
}
/*
- * If the page is marked as all-visible but not all-frozen, we should
- * so mark it. Note that all_frozen is only valid if all_visible is
- * true, so we must check both.
+ * If the all-visible page turns out to be all-frozen but is not yet marked
+ * as such, we should so mark it. Note that all_frozen is only valid if
+ * all_visible is true, so we must check both.
*/
else if (all_visible_according_to_vm && all_visible && all_frozen &&
!VM_ALL_FROZEN(onerel, blkno, &vmbuffer))
@@ -2068,6 +2068,7 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
if (ItemIdIsDead(itemid))
{
all_visible = false;
+ *all_frozen = false;
break;
}
@@ -2087,6 +2088,7 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
if (!HeapTupleHeaderXminCommitted(tuple.t_data))
{
all_visible = false;
+ *all_frozen = false;
break;
}
@@ -2098,6 +2100,7 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
if (!TransactionIdPrecedes(xmin, OldestXmin))
{
all_visible = false;
+ *all_frozen = false;
break;
}
@@ -2116,23 +2119,16 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
case HEAPTUPLE_RECENTLY_DEAD:
case HEAPTUPLE_INSERT_IN_PROGRESS:
case HEAPTUPLE_DELETE_IN_PROGRESS:
- all_visible = false;
- break;
-
+ {
+ all_visible = false;
+ *all_frozen = false;
+ break;
+ }
default:
elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result");
break;
}
} /* scan along page */
- /*
- * We don't bother clearing *all_frozen when the page is discovered not to
- * be all-visible, so do that now if necessary. The page might fail to be
- * all-frozen for other reasons anyway, but if it's not all-visible, then
- * it definitely isn't all-frozen.
- */
- if (!all_visible)
- *all_frozen = false;
-
return all_visible;
}
diff --git a/src/include/access/visibilitymap.h b/src/include/access/visibilitymap.h
index b8dc54c..65e78ec 100644
--- a/src/include/access/visibilitymap.h
+++ b/src/include/access/visibilitymap.h
@@ -19,8 +19,8 @@
#include "storage/buf.h"
#include "utils/relcache.h"
+/* Number of bits for one heap page */
#define BITS_PER_HEAPBLOCK 2
-#define HEAPBLOCKS_PER_BYTE (BITS_PER_BYTE / BITS_PER_HEAPBLOCK)
/* Flags for bit map */
#define VISIBILITYMAP_ALL_VISIBLE 0x01