On Wed, Mar 15, 2017 at 7:50 PM, Robert Haas wrote:
> On Wed, Mar 15, 2017 at 9:18 AM, Stephen Frost wrote:
>>> I think we have sufficient comments in code especially on top of
>>> function _hash_alloc_buckets().
>>
>> I don't see any comments regarding
On Wed, Mar 15, 2017 at 11:02 AM, Tom Lane wrote:
> Robert Haas writes:
>> That theory seems inconsistent with how mdextend() works. My
>> understanding is that we zero-fill the new blocks before populating
>> them with actual data precisely to avoid
Tom,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Robert Haas writes:
> > That theory seems inconsistent with how mdextend() works. My
> > understanding is that we zero-fill the new blocks before populating
> > them with actual data precisely to avoid running out of disk
Robert Haas writes:
> That theory seems inconsistent with how mdextend() works. My
> understanding is that we zero-fill the new blocks before populating
> them with actual data precisely to avoid running out of disk space due
> to deferred allocation at the OS level. If
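The zero-fill-versus-deferred-allocation point under discussion can be sketched outside PostgreSQL. This is an illustrative Python sketch, not the actual mdextend() C code; `BLCKSZ` matches PostgreSQL's default block size but the helper names are made up:

```python
import os
import tempfile

BLCKSZ = 8192  # PostgreSQL's default block size

def extend_zero_filled(path, nblocks):
    """Extend by physically writing zeroed blocks, mdextend()-style:
    any out-of-disk-space error surfaces here, at extension time,
    not later when real data is written into the blocks."""
    with open(path, "ab") as f:
        for _ in range(nblocks):
            f.write(b"\x00" * BLCKSZ)
        f.flush()
        os.fsync(f.fileno())

def extend_sparse(path, nblocks):
    """Extend by only setting the file length; most filesystems defer
    allocation for the hole, so ENOSPC can strike much later, mid-write."""
    with open(path, "ab") as f:
        f.truncate(nblocks * BLCKSZ)

tmp = tempfile.mkdtemp()
dense = os.path.join(tmp, "dense")
sparse = os.path.join(tmp, "sparse")
extend_zero_filled(dense, 4)
extend_sparse(sparse, 4)
# Both report the same logical size, but only the zero-filled file is
# guaranteed to have storage physically reserved for every block.
print(os.path.getsize(dense), os.path.getsize(sparse))
```

Whether the truncated file is actually stored sparsely depends on the filesystem; the logical sizes are identical either way.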
Robert,
* Robert Haas (robertmh...@gmail.com) wrote:
> On Wed, Mar 15, 2017 at 10:34 AM, Tom Lane wrote:
> > FWIW, I'm not certain that Stephen is correct to claim that we have
> > some concrete problem with sparse files. We certainly don't *depend*
> > on sparse storage
On Tue, Mar 14, 2017 at 10:30 PM, Amit Kapila wrote:
> On Tue, Mar 14, 2017 at 10:59 PM, Robert Haas wrote:
>> On Mon, Mar 13, 2017 at 11:48 PM, Amit Kapila
>> wrote:
>>> We didn't find any issue with the above testing.
On Wed, Mar 15, 2017 at 10:34 AM, Tom Lane wrote:
> FWIW, I'm not certain that Stephen is correct to claim that we have
> some concrete problem with sparse files. We certainly don't *depend*
> on sparse storage anyplace else, nor write data in a way that would be
> likely to
Tom,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> FWIW, I'm not certain that Stephen is correct to claim that we have
> some concrete problem with sparse files. We certainly don't *depend*
> on sparse storage anyplace else, nor write data in a way that would be
> likely to trigger it; but I'm not
Stephen Frost writes:
> I do see that mdwrite() should handle an out-of-disk-space case, though
> that just makes me wonder what's different here compared to normal
> relations, such that we don't have an issue with a sparse WAL'd hash index but
> we can't handle it if a normal
Robert Haas writes:
> Now, that having been said, I'm not sure it's a good idea to tinker
> with the behavior for v10. We could change the new-splitpoint code so
> that it loops over all the pages in the new splitpoint and zeroes them
> all, instead of just the last one.
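The loop-and-zero alternative described above can be sketched as follows. This is a hedged illustration under assumed names: the real code would operate on shared buffers through the storage manager, not on a plain file, and `zero_fill_splitpoint` is invented for this sketch:

```python
import os
import tempfile

BLCKSZ = 8192  # PostgreSQL's default block size

def zero_fill_splitpoint(path, first_block, nblocks):
    """Write every page of the new splitpoint as zeroes, not just the
    last one, so the filesystem must allocate the whole range now and
    any out-of-space error surfaces at split time rather than during a
    later write of a real page."""
    zero_page = b"\x00" * BLCKSZ
    with open(path, "r+b") as f:
        f.seek(first_block * BLCKSZ)
        for _ in range(nblocks):
            f.write(zero_page)
        f.flush()
        os.fsync(f.fileno())

# Usage: pretend blocks 4..7 are the new splitpoint of a 4-block "index".
idx = tempfile.NamedTemporaryFile(delete=False)
idx.write(b"\x01" * (4 * BLCKSZ))  # pre-existing pages
idx.close()
zero_fill_splitpoint(idx.name, 4, 4)
```

The trade-off raised in the thread applies here too: zeroing the whole splitpoint costs extra I/O up front in exchange for failing early on ENOSPC.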
On Wed, Mar 15, 2017 at 9:18 AM, Stephen Frost wrote:
>> I think we have sufficient comments in code especially on top of
>> function _hash_alloc_buckets().
>
> I don't see any comments regarding how we have to be sure to handle
> an out-of-space case properly in the middle of
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
> On Wed, Mar 15, 2017 at 12:53 AM, Stephen Frost wrote:
> > If that's the case then
> > this does seem to at least be less of an issue, though I hope we put in
> > appropriate comments about it.
>
> I think we have
On Wed, Mar 15, 2017 at 12:53 AM, Stephen Frost wrote:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> Stephen Frost writes:
>> > * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> >> It's true that as soon as we need another overflow page, that's going to
>> >> get
On Tue, Mar 14, 2017 at 10:59 PM, Robert Haas wrote:
> On Mon, Mar 13, 2017 at 11:48 PM, Amit Kapila wrote:
>> We didn't find any issue with the above testing.
>
> Great! I've committed the latest version of the patch, with some
> cosmetic
On 15/03/17 06:29, Robert Haas wrote:
Great! I've committed the latest version of the patch, with some
cosmetic changes.
It would be astonishing if there weren't a bug or two left, but I
think overall this is very solid work, and I think it's time to put
this out there and see how things go.
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost writes:
> > * Tom Lane (t...@sss.pgh.pa.us) wrote:
> >> It's true that as soon as we need another overflow page, that's going to
> >> get dropped beyond the 2^{N+1}-1 point, and the *apparent* size of the
> >> index will
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> It's true that as soon as we need another overflow page, that's going to
>> get dropped beyond the 2^{N+1}-1 point, and the *apparent* size of the
>> index will grow quite a lot. But any modern filesystem
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Robert Haas writes:
> > On Tue, Mar 14, 2017 at 2:14 PM, Tom Lane wrote:
> >> Robert Haas writes:
> >>> It's become pretty clear to me that there are a bunch of other things
> >>> about
Robert Haas writes:
> On Tue, Mar 14, 2017 at 2:14 PM, Tom Lane wrote:
>> Robert Haas writes:
>>> It's become pretty clear to me that there are a bunch of other things
>>> about hash indexes which are not exactly great, the worst
On Tue, Mar 14, 2017 at 2:14 PM, Tom Lane wrote:
> Robert Haas writes:
>> It's become pretty clear to me that there are a bunch of other things
>> about hash indexes which are not exactly great, the worst of which is
>> the way they grow by DOUBLING IN
Robert Haas writes:
> It's become pretty clear to me that there are a bunch of other things
> about hash indexes which are not exactly great, the worst of which is
> the way they grow by DOUBLING IN SIZE.
Uh, what? Growth should happen one bucket-split at a time.
> Other
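Both statements are consistent at different levels: buckets split one at a time, but the bucket pages for an entire splitpoint are reserved at once by _hash_alloc_buckets(), so the apparent file size grows in power-of-two jumps. A hedged sketch of that arithmetic, as the thread describes it for the code of that era (the function name is illustrative):

```python
def buckets_added_at_splitpoint(s):
    """Bucket pages reserved when splitpoint s begins.  Splitpoint 0 holds
    the single initial bucket; at splitpoint s >= 1 the bucket count
    doubles from 2**(s-1) to 2**s, so 2**(s-1) new bucket pages are set
    aside in one go."""
    return 1 if s == 0 else 2 ** (s - 1)

total = 0
for s in range(6):
    total += buckets_added_at_splitpoint(s)
    print(s, buckets_added_at_splitpoint(s), total)
```

The running total after splitpoint s is 2**s, which is the doubling Robert refers to, even though individual bucket splits happen one at a time as Tom says.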
On Tue, Mar 14, 2017 at 1:40 PM, Tom Lane wrote:
> Robert Haas writes:
>> Great! I've committed the latest version of the patch, with some
>> cosmetic changes.
>
> Woo hoo! That's been a bee in the bonnet for, um, decades.
Yeah. I'm pretty happy to
Robert Haas writes:
> Great! I've committed the latest version of the patch, with some
> cosmetic changes.
Woo hoo! That's been a bee in the bonnet for, um, decades.
regards, tom lane
--
Sent via pgsql-hackers mailing list
On Mon, Mar 13, 2017 at 11:48 PM, Amit Kapila wrote:
> We didn't find any issue with the above testing.
Great! I've committed the latest version of the patch, with some
cosmetic changes.
It would be astonishing if there weren't a bug or two left, but I
think overall
On Mon, Mar 13, 2017 at 6:26 PM, Amit Kapila wrote:
> On Thu, Mar 9, 2017 at 3:11 AM, Robert Haas wrote:
>> On Tue, Mar 7, 2017 at 6:41 PM, Robert Haas wrote:
> Great, thanks. 0001 looks good to me now, so committed.
On Sun, Mar 12, 2017 at 8:06 AM, Robert Haas wrote:
> On Sat, Mar 11, 2017 at 12:20 AM, Amit Kapila wrote:
>>> /*
>>> + * Change the shared buffer state in critical section,
>>> + * otherwise any
On Thu, Mar 9, 2017 at 3:11 AM, Robert Haas wrote:
> On Tue, Mar 7, 2017 at 6:41 PM, Robert Haas wrote:
Great, thanks. 0001 looks good to me now, so committed.
>>>
>>> Committed 0002.
>>
>> Here are some initial review thoughts on 0003 based on
On Sat, Mar 11, 2017 at 12:20 AM, Amit Kapila wrote:
>> /*
>> + * Change the shared buffer state in critical section,
>> + * otherwise any error could make it unrecoverable after
>> + * recovery.
>> +
On Thu, Mar 9, 2017 at 3:11 AM, Robert Haas wrote:
> On Tue, Mar 7, 2017 at 6:41 PM, Robert Haas wrote:
Great, thanks. 0001 looks good to me now, so committed.
>>>
>>> Committed 0002.
>>
>> Here are some initial review thoughts on 0003 based on
On Fri, Mar 10, 2017 at 8:02 AM, Amit Kapila wrote:
> I was thinking that we will use REGBUF_NO_IMAGE flag as is used in
> XLOG_HEAP2_VISIBLE record for heap buffer, that will avoid any extra
> I/O and will make it safe as well. I think that makes registering the
>
On Fri, Mar 10, 2017 at 6:21 PM, Robert Haas wrote:
> On Fri, Mar 10, 2017 at 7:08 AM, Amit Kapila wrote:
>> On Fri, Mar 10, 2017 at 8:49 AM, Robert Haas wrote:
>>> On Thu, Mar 9, 2017 at 9:34 PM, Amit Kapila
On Fri, Mar 10, 2017 at 7:08 AM, Amit Kapila wrote:
> On Fri, Mar 10, 2017 at 8:49 AM, Robert Haas wrote:
>> On Thu, Mar 9, 2017 at 9:34 PM, Amit Kapila wrote:
>>> Do we really need to set LSN on this page (or mark it
On Fri, Mar 10, 2017 at 8:49 AM, Robert Haas wrote:
> On Thu, Mar 9, 2017 at 9:34 PM, Amit Kapila wrote:
>> Do we really need to set LSN on this page (or mark it dirty), if so
>> why? Are you worried about restoration of FPI or something else?
>
>
On Thu, Mar 9, 2017 at 9:34 PM, Amit Kapila wrote:
> Do we really need to set LSN on this page (or mark it dirty), if so
> why? Are you worried about restoration of FPI or something else?
I haven't thought through all of the possible consequences and am a
bit too tired
On Thu, Mar 9, 2017 at 11:15 PM, Robert Haas wrote:
> On Thu, Mar 9, 2017 at 10:23 AM, Amit Kapila wrote:
>> Right, if we use XLogReadBufferForRedoExtended() instead of
>> XLogReadBufferExtended()/LockBufferForCleanup during replay routine,
>> then
On Thu, Mar 9, 2017 at 10:23 AM, Amit Kapila wrote:
>> +mode anyway). It would seem natural to complete the split in VACUUM, but
>> since
>> +splitting a bucket might require allocating a new page, it might fail if you
>> +run out of disk space. That would be bad
On Thu, Mar 9, 2017 at 12:25 AM, Robert Haas wrote:
> On Wed, Mar 8, 2017 at 7:45 AM, Amit Kapila wrote:
>> Okay, I can try, but note that currently there is no test related to
>> "snapshot too old" for any other indexes.
>
> Wow, that's
On Thu, Mar 9, 2017 at 3:11 AM, Robert Haas wrote:
> On Tue, Mar 7, 2017 at 6:41 PM, Robert Haas wrote:
Great, thanks. 0001 looks good to me now, so committed.
>>>
>>> Committed 0002.
>>
>> Here are some initial review thoughts on 0003 based on
On Tue, Mar 7, 2017 at 6:41 PM, Robert Haas wrote:
>>> Great, thanks. 0001 looks good to me now, so committed.
>>
>> Committed 0002.
>
> Here are some initial review thoughts on 0003 based on a first read-through.
More thoughts on the main patch:
The text you've added to
On Wed, Mar 8, 2017 at 7:45 AM, Amit Kapila wrote:
>> I still think this is a bad idea. Releasing and reacquiring the lock
>> on the master doesn't prevent the standby from seeing intermediate
>> states; the comment, if I understand correctly, is just plain wrong.
> I
On Wed, Mar 8, 2017 at 5:11 AM, Robert Haas wrote:
>
> Here are some initial review thoughts on 0003 based on a first read-through.
>
>
> +/*
> + * we need to release and reacquire the lock on overflow buffer to ensure
> + * that standby shouldn't see an
On Wed, Mar 8, 2017 at 3:38 AM, Robert Haas wrote:
> On Wed, Mar 1, 2017 at 4:18 AM, Robert Haas wrote:
>> On Tue, Feb 28, 2017 at 7:31 PM, Amit Kapila wrote:
>>> Yeah, actually those were added later in Enable-WAL-for-Hash*
On Tue, Mar 7, 2017 at 5:08 PM, Robert Haas wrote:
> On Wed, Mar 1, 2017 at 4:18 AM, Robert Haas wrote:
>> On Tue, Feb 28, 2017 at 7:31 PM, Amit Kapila wrote:
>>> Yeah, actually those were added later in Enable-WAL-for-Hash*
On Wed, Mar 1, 2017 at 4:18 AM, Robert Haas wrote:
> On Tue, Feb 28, 2017 at 7:31 PM, Amit Kapila wrote:
>> Yeah, actually those were added later in Enable-WAL-for-Hash* patch,
>> but I think as this patch is standalone, so we should not remove it
On Tue, Feb 28, 2017 at 7:31 PM, Amit Kapila wrote:
> Yeah, actually those were added later in Enable-WAL-for-Hash* patch,
> but I think as this patch is standalone, so we should not remove it
> from their existing usage, I have added those back and rebased the
>
On Thu, Feb 16, 2017 at 8:16 PM, Amit Kapila wrote:
> Attached are refactoring patches. WAL patch needs some changes based
> on above comments, so will post it later.
After some study, I have committed 0001, and also committed 0002 and
0003 as a single commit, with only
On Thu, Feb 16, 2017 at 8:16 PM, Amit Kapila wrote:
> On Thu, Feb 16, 2017 at 7:15 AM, Robert Haas wrote:
>
> Attached are refactoring patches. WAL patch needs some changes based
> on above comments, so will post it later.
>
Attached is a rebased
On Thu, Feb 16, 2017 at 7:15 AM, Robert Haas wrote:
> On Mon, Feb 13, 2017 at 10:22 AM, Amit Kapila wrote:
>> As discussed, attached are refactoring patches and a patch to enable
>> WAL for the hash index on top of them.
>
> Thanks. I think that
On Mon, Feb 13, 2017 at 10:22 AM, Amit Kapila wrote:
> As discussed, attached are refactoring patches and a patch to enable
> WAL for the hash index on top of them.
Thanks. I think that the refactoring patches shouldn't add
START_CRIT_SECTION() and END_CRIT_SECTION()
On Mon, Feb 13, 2017 at 8:52 PM, Amit Kapila wrote:
> As discussed, attached are refactoring patches and a patch to enable
> WAL for the hash index on top of them.
0006-Enable-WAL-for-Hash-Indexes.patch needs to be rebased after
commit 8da9a226369e9ceec7cef1.
--
On Thu, Jan 12, 2017 at 10:23 PM, Amit Kapila wrote:
> On Fri, Jan 13, 2017 at 1:04 AM, Jesper Pedersen
> wrote:
>> On 12/27/2016 01:58 AM, Amit Kapila wrote:
>>> After recent commits 7819ba1e and 25216c98, this patch requires a
>>> rebase.
On Sat, Feb 4, 2017 at 4:37 AM, Jeff Janes wrote:
> On Thu, Jan 12, 2017 at 7:23 PM, Amit Kapila
> wrote:
>>
>> On Fri, Jan 13, 2017 at 1:04 AM, Jesper Pedersen
>> wrote:
>> > On 12/27/2016 01:58 AM, Amit Kapila wrote:
On Thu, Jan 12, 2017 at 7:23 PM, Amit Kapila
wrote:
> On Fri, Jan 13, 2017 at 1:04 AM, Jesper Pedersen
> wrote:
> > On 12/27/2016 01:58 AM, Amit Kapila wrote:
> >>
> >> After recent commits 7819ba1e and 25216c98, this patch requires a
> >>
On Fri, Jan 13, 2017 at 12:23 PM, Amit Kapila wrote:
> On Fri, Jan 13, 2017 at 1:04 AM, Jesper Pedersen
> wrote:
>> On 12/27/2016 01:58 AM, Amit Kapila wrote:
>>>
>>> After recent commits 7819ba1e and 25216c98, this patch requires a
>>>
On 12/27/2016 01:58 AM, Amit Kapila wrote:
After recent commits 7819ba1e and 25216c98, this patch requires a
rebase. Attached is the rebased patch.
This needs a rebase after commit e898437.
Best regards,
Jesper
On Thu, Dec 22, 2016 at 9:56 PM, Robert Haas wrote:
> On Mon, Dec 5, 2016 at 2:46 AM, Amit Kapila wrote:
>>> I'll review after that, since I have other things to review meanwhile.
>>
>> Attached, please find the rebased patch attached with this
On Mon, Dec 5, 2016 at 2:46 AM, Amit Kapila wrote:
>> I'll review after that, since I have other things to review meanwhile.
>
> Attached, please find the rebased patch attached with this e-mail.
> There is no fundamental change in patch except for adapting the new
>
On Thu, Dec 1, 2016 at 6:51 PM, Amit Kapila wrote:
>> Thanks. I am thinking that it might make sense to try to get the
>> "microvacuum support for hash index" and "cache hash index meta page"
>> patches committed before this one, because I'm guessing they are much
>>
On Thu, Dec 1, 2016 at 9:44 PM, Robert Haas wrote:
> On Thu, Dec 1, 2016 at 1:03 AM, Amit Kapila wrote:
>> On Wed, Nov 9, 2016 at 7:40 PM, Amit Kapila wrote:
>>> On Tue, Nov 8, 2016 at 10:56 PM, Jeff Janes
On Thu, Dec 1, 2016 at 1:03 AM, Amit Kapila wrote:
> On Wed, Nov 9, 2016 at 7:40 PM, Amit Kapila wrote:
>> On Tue, Nov 8, 2016 at 10:56 PM, Jeff Janes wrote:
>>> Unless we want to wait until that work is committed before
On Wed, Nov 9, 2016 at 7:40 PM, Amit Kapila wrote:
> On Tue, Nov 8, 2016 at 10:56 PM, Jeff Janes wrote:
>>
>> Unless we want to wait until that work is committed before doing more review
>> and testing on this.
>>
>
> The concurrent hash index patch
On Tue, Nov 8, 2016 at 10:56 PM, Jeff Janes wrote:
> On Sat, Sep 24, 2016 at 10:00 PM, Amit Kapila
> wrote:
>>
>> On Fri, Sep 23, 2016 at 5:34 PM, Amit Kapila
>> wrote:
>> >
>> > I think here I am slightly wrong. For the
On Sat, Sep 24, 2016 at 10:00 PM, Amit Kapila
wrote:
> On Fri, Sep 23, 2016 at 5:34 PM, Amit Kapila
> wrote:
> >
> > I think here I am slightly wrong. For the full page writes, it does use
> > RBM_ZERO_AND_LOCK mode to read the page and for such
On 09/25/2016 01:00 AM, Amit Kapila wrote:
Attached patch fixes the problem, now we do perform full page writes
for bitmap pages. Apart from that, I have rebased the patch based on
latest concurrent index patch [1]. I have updated the README as well
to reflect the WAL logging related
Hi All,
> I forgot to mention that Ashutosh has tested this patch for a day
> using Jeff's tool and he didn't find any problem. Also, he has found
> a way to easily reproduce the problem. Ashutosh, can you share your
> changes to the script using which you have reproduce the problem?
I made
On Sun, Sep 25, 2016 at 10:30 AM, Amit Kapila wrote:
> On Fri, Sep 23, 2016 at 5:34 PM, Amit Kapila wrote:
>>
>> I think here I am slightly wrong. For the full page writes, it does use
>> RBM_ZERO_AND_LOCK mode to read the page and for such mode
On Thu, Sep 22, 2016 at 10:16 AM, Amit Kapila wrote:
> On Thu, Sep 22, 2016 at 8:51 AM, Jeff Janes wrote:
>>
>>
>> Correct. But any torn page write must be covered by the restoration of a
>> full page image during replay, shouldn't it? And that
On Thu, Sep 22, 2016 at 8:51 AM, Jeff Janes wrote:
> On Tue, Sep 20, 2016 at 10:27 PM, Amit Kapila
> wrote:
>>
>> On Tue, Sep 20, 2016 at 10:24 PM, Jeff Janes wrote:
>> > On Thu, Sep 15, 2016 at 11:42 PM, Amit Kapila
On Tue, Sep 20, 2016 at 10:27 PM, Amit Kapila
wrote:
> On Tue, Sep 20, 2016 at 10:24 PM, Jeff Janes wrote:
> > On Thu, Sep 15, 2016 at 11:42 PM, Amit Kapila
> > wrote:
> >>
> >>
> >> Okay, Thanks for pointing out the same.
On Tue, Sep 20, 2016 at 10:24 PM, Jeff Janes wrote:
> On Thu, Sep 15, 2016 at 11:42 PM, Amit Kapila
> wrote:
>>
>>
>> Okay, Thanks for pointing out the same. I have fixed it. Apart from
>> that, I have changed _hash_alloc_buckets() to initialize
On 09/16/2016 02:42 AM, Amit Kapila wrote:
Okay, thanks for pointing out the same. I have fixed it. Apart from
that, I have changed _hash_alloc_buckets() to initialize the page
instead of making it completely zero because of problems discussed in
another related thread [1]. I have also
On Thu, Sep 15, 2016 at 11:42 PM, Amit Kapila
wrote:
>
> Okay, Thanks for pointing out the same. I have fixed it. Apart from
> that, I have changed _hash_alloc_buckets() to initialize the page
> instead of making it completely Zero because of problems discussed in
>
On Wed, Sep 14, 2016 at 4:36 PM, Ashutosh Sharma wrote:
> Hi All,
>
> Below is the backtrace for the issue reported in my earlier mail [1].
> From the callstack it looks like we are trying to release lock on a
> meta page twice in _hash_expandtable().
>
Thanks for the
Hi All,
Below is the backtrace for the issue reported in my earlier mail [1].
From the callstack it looks like we are trying to release lock on a
meta page twice in _hash_expandtable().
(gdb) bt
#0 0x007b01cf in LWLockRelease (lock=0x7f55f59d0570) at lwlock.c:1799
#1
Hi All,
I am getting following error when running the test script shared by
Jeff -[1] . The error is observed upon executing the test script for
around 3-4 hrs.
57869 INSERT XX000 2016-09-14 07:58:01.211 IST:ERROR: lock
buffer_content 1 is not held
57869 INSERT XX000 2016-09-14 07:58:01.211
On 09/13/2016 07:41 AM, Amit Kapila wrote:
README:
+in_complete split flag. The reader algorithm works correctly, as it will
scan
What flag?
The in-complete-split flag, which indicates that the split has to be finished
for that particular bucket. The values of these flags are
On Mon, Sep 12, 2016 at 11:29 AM, Jeff Janes wrote:
>
>
> My test program (as posted) injects crashes and then checks the
> post-crash-recovery system for consistency, so it cannot be run as-is
> without the WAL patch. I also ran the test with crashing turned off (just
>
Hi,
On 09/07/2016 05:58 AM, Amit Kapila wrote:
Okay, I have fixed this issue as explained above. Apart from that, I
have fixed another issue reported by Mark Kirkwood upthread and a few
other issues found during internal testing by Ashutosh Sharma.
The locking issue reported by Mark and
On Sun, Sep 11, 2016 at 7:40 PM, Amit Kapila
wrote:
> On Mon, Sep 12, 2016 at 7:00 AM, Jeff Janes wrote:
> > On Thu, Sep 8, 2016 at 12:09 PM, Jeff Janes
> wrote:
> >
> >>
> >> I plan to do testing using my own testing harness
On Sun, Sep 11, 2016 at 3:01 PM, Mark Kirkwood
wrote:
> On 11/09/16 19:16, Mark Kirkwood wrote:
>
>>
>>
>> On 11/09/16 17:01, Amit Kapila wrote:
>>>
>>> ...Do you think we can do some read-only
>>> workload benchmarking using this server? If yes, then probably you
On Mon, Sep 12, 2016 at 7:00 AM, Jeff Janes wrote:
> On Thu, Sep 8, 2016 at 12:09 PM, Jeff Janes wrote:
>
>>
>> I plan to do testing using my own testing harness after changing it to
>> insert a lot of dummy tuples (ones with negative values in the
On Thu, Sep 8, 2016 at 12:09 PM, Jeff Janes wrote:
> I plan to do testing using my own testing harness after changing it to
> insert a lot of dummy tuples (ones with negative values in the pseudo-pk
> column, which are never queried by the core part of the harness) and
>
On 11/09/16 19:16, Mark Kirkwood wrote:
On 11/09/16 17:01, Amit Kapila wrote:
...Do you think we can do some read-only
workload benchmarking using this server? If yes, then probably you
can use concurrent hash index patch [1] and cache the metapage patch
[2] (I think Mithun needs to rebase
On 11/09/16 17:01, Amit Kapila wrote:
On Sun, Sep 11, 2016 at 4:10 AM, Mark Kirkwood
wrote:
performed several 10 hour runs on size 100 database using 32 and 64 clients.
For the last run I rebuilt with assertions enabled. No hangs or assertion
failures.
On Sun, Sep 11, 2016 at 4:10 AM, Mark Kirkwood
wrote:
>
>
> performed several 10 hour runs on size 100 database using 32 and 64 clients.
> For the last run I rebuilt with assertions enabled. No hangs or assertion
> failures.
>
Thanks for verification. Do you think
On 09/09/16 14:50, Mark Kirkwood wrote:
Yeah, good suggestion about replacing (essentially) all the indexes
with hash ones and testing. I did some short runs with this type of
schema yesterday (actually to get a feel for if hash performance vs
btree was comparable - does seem to be) - but
On Fri, Sep 9, 2016 at 12:39 AM, Jeff Janes wrote:
>
> I plan to do testing using my own testing harness after changing it to
> insert a lot of dummy tuples (ones with negative values in the pseudo-pk
> column, which are never queried by the core part of the harness) and
>
On 09/09/16 07:09, Jeff Janes wrote:
On Wed, Sep 7, 2016 at 3:29 AM, Ashutosh Sharma wrote:
> Thanks to Ashutosh Sharma for doing the testing of the patch and
> helping me in analyzing some of the above issues.
Hi All,
I
On Wed, Sep 7, 2016 at 3:29 AM, Ashutosh Sharma
wrote:
> > Thanks to Ashutosh Sharma for doing the testing of the patch and
> > helping me in analyzing some of the above issues.
>
> Hi All,
>
> I would like to summarize the test-cases that I have executed for
> validating
On Thu, Sep 8, 2016 at 10:02 AM, Mark Kirkwood
wrote:
>
> Repeating my tests with these new patches applied points to the hang issue
> being solved. I tested several 10 minute runs (any of which was enough to
> elicit the hang previously). I'll do some longer ones,
On 07/09/16 21:58, Amit Kapila wrote:
On Wed, Aug 24, 2016 at 10:32 PM, Jeff Janes wrote:
On Tue, Aug 23, 2016 at 10:05 PM, Amit Kapila
wrote:
On Wed, Aug 24, 2016 at 2:37 AM, Jeff Janes wrote:
After an intentionally
> Thanks to Ashutosh Sharma for doing the testing of the patch and
> helping me in analyzing some of the above issues.
Hi All,
I would like to summarize the test-cases that I have executed for
validating WAL logging in the hash index feature.
1) I have mainly ran the pgbench test with read-write
On Wed, Sep 7, 2016 at 3:28 PM, Amit Kapila wrote:
>
> Okay, I have fixed this issue as explained above. Apart from that, I
> have fixed another issue reported by Mark Kirkwood upthread and a few
> other issues found during internal testing by Ashutosh Sharma.
>
Forgot to
On Tue, Aug 30, 2016 at 3:40 AM, Alvaro Herrera
wrote:
> Amit Kapila wrote:
>
>> How about attached?
>
> That works; pushed.
Thanks.
> (I removed a few #includes from the new file.)
>
oops, copied from hash.h and forgot to remove those.
>> If you want, I think we can
Amit Kapila wrote:
> How about attached?
That works; pushed. (I removed a few #includes from the new file.)
> If you want, I think we can one step further and move hash_redo to a
> new file hash_xlog.c which is required for main patch, but we can
> leave it for later as well.
I think that can
On Thu, Aug 25, 2016 at 6:54 PM, Alvaro Herrera
wrote:
> Amit Kapila wrote:
>> On Wed, Aug 24, 2016 at 11:46 PM, Alvaro Herrera
>> wrote:
>
>> > Can you split the new xlog-related stuff to a new file, say hash_xlog.h,
>> > instead of cramming
Amit Kapila wrote:
> On Wed, Aug 24, 2016 at 11:46 PM, Alvaro Herrera
> wrote:
> > Can you split the new xlog-related stuff to a new file, say hash_xlog.h,
> > instead of cramming it in hash.h? Removing the existing #include
> > "xlogreader.h" from hash.h would be
On Wed, Aug 24, 2016 at 11:46 PM, Alvaro Herrera
wrote:
> Amit Kapila wrote:
>> $SUBJECT will make hash indexes reliable and usable on standby.
>
> Nice work.
>
> Can you split the new xlog-related stuff to a new file, say hash_xlog.h,
> instead of cramming it in hash.h?
Amit Kapila wrote:
> $SUBJECT will make hash indexes reliable and usable on standby.
Nice work.
Can you split the new xlog-related stuff to a new file, say hash_xlog.h,
instead of cramming it in hash.h? Removing the existing #include
"xlogreader.h" from hash.h would be nice. I volunteer for
On Tue, Aug 23, 2016 at 10:05 PM, Amit Kapila
wrote:
> On Wed, Aug 24, 2016 at 2:37 AM, Jeff Janes wrote:
>
> >
> > After an intentionally created crash, I get an Assert triggering:
> >
> > TRAP: FailedAssertion("!(((freep)[(bitmapbit)/32] &
> >
1 - 100 of 113 matches