On Sat, Mar 18, 2017 at 4:35 AM, Amit Kapila wrote:
> This version looks good to me.
Committed.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
On Fri, Mar 17, 2017 at 8:34 PM, Ashutosh Sharma wrote:
> On Fri, Mar 17, 2017 at 6:13 PM, Amit Kapila wrote:
>> On Fri, Mar 17, 2017 at 12:27 PM, Ashutosh Sharma
>> wrote:
>>> On Fri, Mar 17, 2017 at 8:20 AM, Amit Kapila
On Wed, Mar 15, 2017 at 07:26:45PM -0700, Andres Freund wrote:
> On 2017-03-15 16:31:11 -0400, Robert Haas wrote:
> > On Wed, Mar 15, 2017 at 3:54 PM, Ashutosh Sharma
> > wrote:
> > > Changed as per suggestions. Attached v9 patch. Thanks.
> >
> > Wow, when do you sleep?
>
On Fri, Mar 17, 2017 at 6:13 PM, Amit Kapila wrote:
> On Fri, Mar 17, 2017 at 12:27 PM, Ashutosh Sharma
> wrote:
>> On Fri, Mar 17, 2017 at 8:20 AM, Amit Kapila wrote:
>>
>> As I said in my previous e-mail, I think you
On Fri, Mar 17, 2017 at 12:27 PM, Ashutosh Sharma wrote:
> On Fri, Mar 17, 2017 at 8:20 AM, Amit Kapila wrote:
>
> As I said in my previous e-mail, I think you need
>> to record clearing of this flag in WAL record XLOG_HASH_DELETE as you
>> are not
On Fri, Mar 17, 2017 at 8:20 AM, Amit Kapila wrote:
> On Thu, Mar 16, 2017 at 9:39 PM, Ashutosh Sharma
> wrote:
>>>
>>> Don't you think, we should also clear it during the replay of
>>> XLOG_HASH_DELETE? We might want to log the clear of
On Thu, Mar 16, 2017 at 9:39 PM, Ashutosh Sharma wrote:
>>>
>>
>> Don't you think, we should also clear it during the replay of
>> XLOG_HASH_DELETE? We might want to log the clear of flag along with
>> WAL record for XLOG_HASH_DELETE.
>>
>
> Yes, it should be cleared. I
>>> Sure, but we are not clearing it unconditionally. I am not sure how,
>>> after recovery, it will be cleared if it gets set during normal operation.
>>> Moreover, btree already clears similar flag during replay (refer
>>> btree_xlog_delete).
>>
>> You were right. In case data checksums are enabled or
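In code form, the clearing being discussed would look roughly like this on the replay side (a sketch only, PostgreSQL backend context assumed; `clear_dead_marking` is the field the thread proposes adding to the XLOG_HASH_DELETE record):

```c
/* In hash_xlog_delete(), after replaying the item deletions: */
if (xldata->clear_dead_marking)
{
    HashPageOpaque pageopaque;

    pageopaque = (HashPageOpaque) PageGetSpecialPointer(page);
    pageopaque->hasho_flag &= ~LH_PAGE_HAS_DEAD_TUPLES;
}
```

This mirrors what btree_xlog_delete already does when it clears BTP_HAS_GARBAGE during replay.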
On Thu, Mar 16, 2017 at 1:02 PM, Ashutosh Sharma wrote:
> On Thu, Mar 16, 2017 at 11:11 AM, Amit Kapila wrote:
>> On Wed, Mar 15, 2017 at 9:23 PM, Ashutosh Sharma
>> wrote:
>>>
Few other comments:
1.
+
On Thu, Mar 16, 2017 at 11:11 AM, Amit Kapila wrote:
> On Wed, Mar 15, 2017 at 9:23 PM, Ashutosh Sharma
> wrote:
>>
>>>
>>> Few other comments:
>>> 1.
>>> + if (ndeletable > 0)
>>> + {
>>> + /* No ereport(ERROR) until changes are logged */
>>> +
On Wed, Mar 15, 2017 at 9:23 PM, Ashutosh Sharma wrote:
>
>>
>> Few other comments:
>> 1.
>> + if (ndeletable > 0)
>> + {
>> + /* No ereport(ERROR) until changes are logged */
>> + START_CRIT_SECTION();
>> +
>> + PageIndexMultiDelete(page, deletable, ndeletable);
>> +
>> +
On Mar 16, 2017 7:49 AM, "Robert Haas" wrote:
On Wed, Mar 15, 2017 at 4:31 PM, Robert Haas wrote:
> On Wed, Mar 15, 2017 at 3:54 PM, Ashutosh Sharma
wrote:
>> Changed as per suggestions. Attached v9 patch. Thanks.
>
> Wow,
On 2017-03-15 16:31:11 -0400, Robert Haas wrote:
> On Wed, Mar 15, 2017 at 3:54 PM, Ashutosh Sharma
> wrote:
> > Changed as per suggestions. Attached v9 patch. Thanks.
>
> Wow, when do you sleep?
I think that applies to a bunch of people, including yourself ;)
On Wed, Mar 15, 2017 at 4:31 PM, Robert Haas wrote:
> On Wed, Mar 15, 2017 at 3:54 PM, Ashutosh Sharma
> wrote:
>> Changed as per suggestions. Attached v9 patch. Thanks.
>
> Wow, when do you sleep? Will have a look.
Committed with a few
On Wed, Mar 15, 2017 at 3:54 PM, Ashutosh Sharma wrote:
> Changed as per suggestions. Attached v9 patch. Thanks.
Wow, when do you sleep? Will have a look.
> +action = XLogReadBufferForRedo(record, 0, &buffer);
> +
> +if (!IsBufferCleanupOK(buffer))
> +elog(PANIC, "hash_xlog_vacuum_one_page: failed to acquire
> cleanup lock");
>
> That could fail, I think, because of a pin from a Hot Standby backend.
> You want to call
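The usual way to avoid that PANIC is to let the redo machinery take the cleanup lock itself, as the btree redo code does (a sketch only, PostgreSQL backend context assumed):

```c
/*
 * Request the cleanup lock up front, so replay waits for any Hot
 * Standby backend holding a pin instead of PANICking.
 */
action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL,
                                       true,    /* get_cleanup_lock */
                                       &buffer);
```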
On Wed, Mar 15, 2017 at 2:10 PM, Ashutosh Sharma wrote:
> Okay, I have done the changes as suggested by you. Please refer to the
> attached v8 patch. Thanks.
Cool, but this doesn't look right:
+action = XLogReadBufferForRedo(record, 0, &buffer);
+
+if
On Wed, Mar 15, 2017 at 9:28 PM, Robert Haas wrote:
> On Wed, Mar 15, 2017 at 11:37 AM, Ashutosh Sharma
> wrote:
>>> +/* Get RelfileNode from relation OID */
>>> +rel = relation_open(htup->t_tableOid, NoLock);
>>> +rnode =
On Wed, Mar 15, 2017 at 11:37 AM, Ashutosh Sharma wrote:
>> +/* Get RelfileNode from relation OID */
>> +rel = relation_open(htup->t_tableOid, NoLock);
>> +rnode = rel->rd_node;
>> +relation_close(rel, NoLock);
>> itup->t_tid =
>
> I think one possibility is to get it using
> indexrel->rd_index->indrelid in _hash_doinsert().
>
Thanks. I have tried the same in the v7 patch shared upthread.
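For reference, the suggested lookup would look roughly like this inside _hash_doinsert(), where `rel` is the index relation (a sketch only, PostgreSQL backend context assumed):

```c
/*
 * The heap relation's OID is available from the index's cached
 * pg_index tuple, so there is no need to rely on htup->t_tableOid.
 */
Relation    heapRel;
RelFileNode rnode;

heapRel = relation_open(rel->rd_index->indrelid, NoLock);
rnode = heapRel->rd_node;
relation_close(heapRel, NoLock);
```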
>
>>
>> But they're not called delete records in a hash index. The function
>> is called hash_xlog_vacuum_one_page. The
> Generally, this patch looks like a pretty straightforward adaptation
> of the similar btree mechanism to hash indexes, so if it works for
> btree it ought to work here, too. But I noticed a few things while
> reading through it.
>
> +/* Get RelfileNode from relation OID */
> +
On Wed, Mar 15, 2017 at 1:15 AM, Robert Haas wrote:
> On Tue, Mar 14, 2017 at 8:02 AM, Ashutosh Sharma
> wrote:
>> Attached is the v6 patch for microvacuum in hash index rebased on top
>> of 'v10 patch for WAL in hash index - [1]' and 'v1 patch for
On Tue, Mar 14, 2017 at 8:02 AM, Ashutosh Sharma wrote:
> Attached is the v6 patch for microvacuum in hash index rebased on top
> of 'v10 patch for WAL in hash index - [1]' and 'v1 patch for WAL
> consistency check for hash index - [2]'.
>
> [1] -
>
Hi,
Attached is the v6 patch for microvacuum in hash index rebased on top
of 'v10 patch for WAL in hash index - [1]' and 'v1 patch for WAL
consistency check for hash index - [2]'.
[1] -
On Sat, Jan 28, 2017 at 8:02 PM, Amit Kapila wrote:
> On Fri, Jan 27, 2017 at 5:15 PM, Ashutosh Sharma
> wrote:
>>>
>>> Don't you think we should try to identify the reason of the deadlock
>>> error reported by you up thread [1]? I know that you
On Fri, Jan 27, 2017 at 5:15 PM, Ashutosh Sharma wrote:
>>
>> Don't you think we should try to identify the reason of the deadlock
>> error reported by you up thread [1]? I know that you and Ashutosh are
>> not able to reproduce it, but still I feel some investigation is
>>> I have done some more testing with this, and have moved the patch
>>> back to 'Needs Review' pending Amit's comments.
>>>
>>
>> Moved to "Ready for Committer".
>>
>
> Don't you think we should try to identify the reason of the deadlock
> error reported by you up thread [1]? I know that you
On Thu, Jan 26, 2017 at 6:38 PM, Jesper Pedersen
wrote:
> On 01/23/2017 02:53 PM, Jesper Pedersen wrote:
>>
>> I have done some more testing with this, and have moved the patch
>> back to 'Needs Review' pending Amit's comments.
>>
>
> Moved to "Ready for Committer".
On 01/23/2017 02:53 PM, Jesper Pedersen wrote:
I have done some more testing with this, and have moved the patch
back to 'Needs Review' pending Amit's comments.
Moved to "Ready for Committer".
Best regards,
Jesper
Hi Ashutosh,
On 01/20/2017 03:24 PM, Jesper Pedersen wrote:
Yeah, those are the steps; just with a Skylake laptop.
However, I restarted with a fresh master, with WAL v8 and MV v5, and
can't reproduce the issue.
I have done some more testing with this, and have moved the patch
back to
Hi Ashutosh,
On 01/20/2017 04:18 AM, Ashutosh Sharma wrote:
Okay, thanks for confirming that.
I would like to update you that I am not able to reproduce this issue
at my end. I suspect that the steps I am following might be slightly
different from yours. Could you please have a look at the steps
Hi Jesper,
> I'm not seeing this deadlock with just the WAL v8 patch applied.
>
Okay, thanks for confirming that.
I would like to update you that I am not able to reproduce this issue
at my end. I suspect that the steps I am following might be slightly
different from yours. Could you please
Hi Ashutosh,
On 01/10/2017 08:40 AM, Jesper Pedersen wrote:
On 01/10/2017 05:24 AM, Ashutosh Sharma wrote:
Thanks for reporting this problem. It is basically coming because I
forgot to unpin the bucketbuf in hash_xlog_vacuum_one_page(). Please
find the attached v5 patch that fixes the issue.
Hi Ashutosh,
On 01/10/2017 05:24 AM, Ashutosh Sharma wrote:
Thanks for reporting this problem. It is basically coming because I
forgot to unpin the bucketbuf in hash_xlog_vacuum_one_page(). Please
find the attached v5 patch that fixes the issue.
The crash is now fixed, but the
--- test.sql
Hi Jesper,
> However (master / WAL v7 / MV v4),
>
> --- ddl.sql ---
> CREATE TABLE test AS SELECT generate_series(1, 10) AS id, 0 AS val;
> CREATE INDEX IF NOT EXISTS idx_id ON test USING hash (id);
> CREATE INDEX IF NOT EXISTS idx_val ON test USING hash (val);
> ANALYZE;
> --- ddl.sql ---
>
>
Hi Ashutosh,
On 01/06/2017 12:54 AM, Ashutosh Sharma wrote:
using pgbench -M prepared -c 10 -j 10 -T 600 -f test.sql test
crashes after a few minutes with
TRAP: FailedAssertion("!(LWLockHeldByMeInMode(((LWLock*)
(&(bufHdr)->content_lock)), LW_EXCLUSIVE))", File: "bufmgr.c", Line: 3781)
Hi,
> using pgbench -M prepared -c 10 -j 10 -T 600 -f test.sql test
>
> crashes after a few minutes with
>
> TRAP: FailedAssertion("!(LWLockHeldByMeInMode(((LWLock*)
> (&(bufHdr)->content_lock)), LW_EXCLUSIVE))", File: "bufmgr.c", Line: 3781)
Attached v4 patch fixes this assertion failure.
>
Hi Ashutosh,
On 01/04/2017 06:13 AM, Ashutosh Sharma wrote:
Attached is the v3 patch rebased on PostgreSQL HEAD and the WAL v7 patch.
It also takes care of all the previous comments from Jesper - [1].
With an --enable-cassert build (master / WAL v7 / MV v3) and
-- ddl.sql --
CREATE TABLE test
Hi,
> This can be rebased on the WAL v7 patch [1]. In addition to the previous
> comments you need to take commit 7819ba into account.
>
Attached is the v3 patch rebased on PostgreSQL HEAD and the WAL v7 patch.
It also takes care of all the previous comments from Jesper - [1].
Also, I have changed
On 11/11/2016 12:11 AM, Ashutosh Sharma wrote:
Hi Jesper,
Some initial comments.
_hash_vacuum_one_page:
+ END_CRIT_SECTION();
+ _hash_chgbufaccess(rel, metabuf, HASH_READ, HASH_NOLOCK);
The _hash_chgbufaccess() needs a comment.
You also need a place where you
On 11/11/2016 12:11 AM, Ashutosh Sharma wrote:
Thanks for reviewing this patch. I would like to update you that this
patch has a dependency on a patch for concurrent hash index and WAL
logging in hash index. So, until those two patches for hash index are
stable, I won't be able to share a
Hi Jesper,
> Some initial comments.
>
> _hash_vacuum_one_page:
>
> + END_CRIT_SECTION();
> + _hash_chgbufaccess(rel, metabuf, HASH_READ, HASH_NOLOCK);
>
> The _hash_chgbufaccess() needs a comment.
>
> You also need a place where you call pfree for so->killedItems -
Hi,
On 11/02/2016 01:38 AM, Ashutosh Sharma wrote:
While replaying the delete/vacuum record on standby, it can conflict
with some already-running queries. Basically, the replay can remove
some rows which are still visible on standby. You need to resolve
conflicts similar to what we do in btree
Hi,
> While replaying the delete/vacuum record on standby, it can conflict
> with some already-running queries. Basically, the replay can remove
> some rows which are still visible on standby. You need to resolve
> conflicts similar to what we do in btree delete records (refer
> btree_xlog_delete).
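Resolving the conflict the way btree_xlog_delete does would look roughly like this at the top of the redo routine (a sketch only, PostgreSQL backend context assumed; a latestRemovedXid field would have to be carried in the new WAL record):

```c
if (InHotStandby)
{
    RelFileNode rnode;

    XLogRecGetBlockTag(record, 0, &rnode, NULL, NULL);
    ResolveRecoveryConflictWithSnapshot(xldata->latestRemovedXid, rnode);
}
```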
Hi Amit,
Thanks for showing your interest and reviewing my patch. I have
started looking into your review comments. I will share the updated
patch in a day or two.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Fri, Oct 28, 2016 at 4:42 PM, Amit Kapila
On Mon, Oct 24, 2016 at 2:21 PM, Ashutosh Sharma wrote:
> Hi All,
>
> I have added a microvacuum support for hash index access method and
> attached is the v1 patch for the same.
>
This is an important functionality for hash index as we already do
have same functionality
Hi All,
I have added a microvacuum support for hash index access method and
attached is the v1 patch for the same. The patch basically takes care
of the following things:
1. Firstly, it changes the marking of dead tuples from
'tuple-at-a-time' to 'page-at-a-time' during hash index scan. For this