> Generally, this patch looks like a pretty straightforward adaptation
> of the similar btree mechanism to hash indexes, so if it works for
> btree it ought to work here, too.  But I noticed a few things while
> reading through it.
> +        /* Get RelfileNode from relation OID */
> +        rel = relation_open(htup->t_tableOid, NoLock);
> +        rnode = rel->rd_node;
> +        relation_close(rel, NoLock);
>          itup->t_tid = htup->t_self;
> -        _hash_doinsert(index, itup);
> +        _hash_doinsert(index, itup, rnode);
> This is an awfully low-level place to be doing something like this.
> I'm not sure exactly where this should be happening, but not in the
> per-tuple callback.

Okay. I have now done this inside _hash_doinsert() instead of in the
callback function. Please have a look at the attached v7 patch.
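To illustrate the shape of that change, here is a toy stand-in model (not the real PostgreSQL code; the types are simplified and the exact signature in the patch may differ): the per-tuple build callback no longer opens the heap relation itself, and _hash_doinsert() resolves the RelFileNode it needs.

```c
#include <assert.h>

/* Simplified stand-ins for the real PostgreSQL types. */
typedef struct { unsigned int relNode; } RelFileNode;
typedef struct { RelFileNode rd_node; } Relation;
typedef struct { int t_tid; } IndexTuple;

static unsigned int inserted_from = 0;   /* records which heap was used */

static void
_hash_doinsert(Relation *index, IndexTuple *itup, Relation *heap)
{
    /* RelFileNode resolved here, once per insert, not in the callback */
    RelFileNode hnode = heap->rd_node;

    inserted_from = hnode.relNode;
    (void) index; (void) itup;
}

static void
build_callback(Relation *index, IndexTuple *itup, Relation *heap)
{
    /* no relation_open()/relation_close() pair here any more */
    _hash_doinsert(index, itup, heap);
}
```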

> +    /*
> +     * If there's nothing running on the standby we don't need to derive a
> +     * full latestRemovedXid value, so use a fast path out of here.  This
> +     * returns InvalidTransactionId, and so will conflict with all HS
> +     * transactions; but since we just worked out that that's zero people,
> +     * it's OK.
> +     */
> +    if (CountDBBackends(InvalidOid) == 0)
> +        return latestRemovedXid;
> I see that this comment (and most of what surrounds it) was just
> copied from the existing btree example, but isn't there a discrepancy
> between the comment and the code?  It says it returns
> InvalidTransactionId, but it doesn't.  Also, you dropped the XXX from
> the btree original, and the following reachedConsistency check.

It does return InvalidTransactionId if there are no backends running
across any database on the standby. As shown below, 'latestRemovedXid'
is initialised to InvalidTransactionId:

TransactionId   latestRemovedXid = InvalidTransactionId;

So, if there are no backend processes running across any database on
the standby, latestRemovedXid is returned unchanged.
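To make that control flow concrete, here is a minimal self-contained stand-in (not the real function; CountDBBackends is mocked as an idle standby, and InvalidTransactionId is 0, matching its definition in transam.h) showing that the fast path does return InvalidTransactionId:

```c
#include <assert.h>

typedef unsigned int TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/* Stand-in for CountDBBackends(InvalidOid): pretend the standby is idle. */
static int CountDBBackends(void) { return 0; }

static TransactionId
hash_get_latestRemovedXid(void)
{
    TransactionId latestRemovedXid = InvalidTransactionId;

    /*
     * Fast path: nothing is running on the standby, so the initial
     * InvalidTransactionId value is returned unchanged.
     */
    if (CountDBBackends() == 0)
        return latestRemovedXid;

    /* ... otherwise a real latestRemovedXid would be derived here ... */
    return latestRemovedXid;
}
```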

I have also added the XXX note to the above comment. Please check the
v7 patch attached to this mail.

> +     * Hash Index delete records can conflict with standby queries. You might
> +     * think that vacuum records would conflict as well, but we've handled
> But they're not called delete records in a hash index.  The function
> is called hash_xlog_vacuum_one_page.  The corresponding btree function
> is btree_xlog_delete.  So this comment needs a little more updating.

Okay, I have tried to rephrase it to avoid the confusion.

> +            if (IsBufferCleanupOK(bucket_buf))
> +            {
> +                _hash_vacuum_one_page(rel, metabuf, buf, bucket_buf,
> +                                      (buf == bucket_buf) ? true : false,
> +                                      hnode);
> +                if (bucket_buf != buf)
> +                    LockBuffer(bucket_buf, BUFFER_LOCK_UNLOCK);
> +
> +                if (PageGetFreeSpace(page) >= itemsz)
> +                    break;              /* OK, now we have enough space */
> +            }
> I might be missing something, but I don't quite see why this needs a
> cleanup lock on the primary bucket page.  I would think a cleanup lock
> on the page we're actually modifying would suffice, and I think if
> that's correct it would be a better way to go.  If that's not correct,
> then I think the comments needs some work.

Thanks for that suggestion. I spent a lot of time thinking about this
and also had a short discussion with Amit, but could not find any
issue with taking the cleanup lock on the modified page instead of on
the primary bucket page. This required some fairly substantial code
changes. The attached v7 patch has them.
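A toy model of the locking change (all names here are simplified stand-ins, not the real PostgreSQL buffer-manager API): the cleanup lock is now attempted on the page actually being modified, 'buf', rather than on the primary bucket page.

```c
#include <assert.h>
#include <stdbool.h>

typedef int Buffer;

static int vacuumed_page = -1;   /* records which page got vacuumed */

/* Stand-in: pretend the cleanup lock is always available. */
static bool IsBufferCleanupOK(Buffer buf) { (void) buf; return true; }

static void _hash_vacuum_one_page(Buffer buf) { vacuumed_page = buf; }

static void
try_microvacuum(Buffer bucket_buf, Buffer buf)
{
    /* v7 behaviour: the cleanup-lock check targets 'buf' itself */
    if (IsBufferCleanupOK(buf))
        _hash_vacuum_one_page(buf);
    (void) bucket_buf;   /* primary bucket page no longer needs cleanup lock */
}
```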

With Regards,
Ashutosh Sharma

Attachment: microvacuum_hash_index_v7.patch
Description: application/download

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)