On Sun, Oct 1, 2017 at 8:36 PM, Daniel Gustafsson wrote:
>> On 18 Aug 2017, at 13:39, Claudio Freire wrote:
>>
>> On Fri, Apr 7, 2017 at 10:51 PM, Claudio Freire
>> wrote:
>>> Indeed they do, and that's what motivated this patch.
> On 18 Aug 2017, at 13:39, Claudio Freire wrote:
>
> On Fri, Apr 7, 2017 at 10:51 PM, Claudio Freire
> wrote:
>> Indeed they do, and that's what motivated this patch. But I'd need
>> TB-sized tables to set up something like that. I don't have
On Fri, Apr 7, 2017 at 10:51 PM, Claudio Freire wrote:
> Indeed they do, and that's what motivated this patch. But I'd need
> TB-sized tables to set up something like that. I don't have the
> hardware or time available to do that (vacuum on bloated TB-sized
> tables can
On Fri, Apr 21, 2017 at 6:24 AM, Claudio Freire wrote:
> On Wed, Apr 12, 2017 at 4:35 PM, Robert Haas wrote:
>> On Tue, Apr 11, 2017 at 4:38 PM, Claudio Freire
>> wrote:
>>> In essence, the patch as it is proposed, doesn't
On Mon, Apr 24, 2017 at 3:57 PM, Claudio Freire wrote:
> I wouldn't fret over the slight slowdown vs the old patch, it could be
> noise (the script only completed a single run at scale 400).
Yeah, seems fine.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Sun, Apr 23, 2017 at 12:41 PM, Robert Haas wrote:
>> That's after inlining the compare on both the linear and sequential
>> code, and it seems it lets the compiler optimize the binary search to
>> the point where it outperforms the sequential search.
>>
>> That's not the
On Thu, Apr 20, 2017 at 5:24 PM, Claudio Freire wrote:
>> What's not clear to me is how sensitive the performance of vacuum is
>> to the number of cycles used here. For a large index, the number of
>> searches will presumably be quite large, so it does seem worth
>>
On Wed, Apr 12, 2017 at 4:35 PM, Robert Haas wrote:
> On Tue, Apr 11, 2017 at 4:38 PM, Claudio Freire
> wrote:
>> In essence, the patch as it is proposed, doesn't *need* a binary
>> search, because the segment list can only grow up to 15 segments
On Tue, Apr 11, 2017 at 4:38 PM, Claudio Freire wrote:
> In essence, the patch as it is proposed, doesn't *need* a binary
> search, because the segment list can only grow up to 15 segments at
> its biggest, and that's a size small enough that linear search will
>
On Tue, Apr 11, 2017 at 4:17 PM, Robert Haas wrote:
> On Tue, Apr 11, 2017 at 2:59 PM, Claudio Freire
> wrote:
>> On Tue, Apr 11, 2017 at 3:53 PM, Robert Haas wrote:
>>> 1TB / 8kB per page * 60 tuples/page * 20% * 6
On Tue, Apr 11, 2017 at 2:59 PM, Claudio Freire wrote:
> On Tue, Apr 11, 2017 at 3:53 PM, Robert Haas wrote:
>> 1TB / 8kB per page * 60 tuples/page * 20% * 6 bytes/tuple = 9216MB of
>> maintenance_work_mem
>>
>> So we'll allocate
On Tue, Apr 11, 2017 at 3:59 PM, Claudio Freire wrote:
> On Tue, Apr 11, 2017 at 3:53 PM, Robert Haas wrote:
>> 1TB / 8kB per page * 60 tuples/page * 20% * 6 bytes/tuple = 9216MB of
>> maintenance_work_mem
>>
>> So we'll allocate
On Tue, Apr 11, 2017 at 3:53 PM, Robert Haas wrote:
> 1TB / 8kB per page * 60 tuples/page * 20% * 6 bytes/tuple = 9216MB of
> maintenance_work_mem
>
> So we'll allocate 128MB+256MB+512MB+1GB+2GB+4GB which won't be quite
> enough so we'll allocate another 8GB, for a total of
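Robert's arithmetic above is easy to check mechanically. The following is an illustrative sketch (not PostgreSQL code) of the dead-tuple memory requirement and the doubling allocation schedule he describes; all figures come from the quoted message:

```python
# 1 TB table of 8 kB pages, 60 tuples/page, 20% dead, 6 bytes per TID.
pages = (1 << 40) // 8192
dead_tuples = pages * 60 // 5                 # 20% of all tuples
needed_mb = dead_tuples * 6 // (1 << 20)      # 9216 MB of TID storage

# Doubling segments starting at 128 MB, allocated until the need is met.
allocated_mb, chunk, chunks = 0, 128, []
while allocated_mb < needed_mb:
    chunks.append(chunk)
    allocated_mb += chunk
    chunk *= 2
```

The first six segments (128 MB through 4 GB) sum to 8064 MB, just short of the 9216 MB needed, so a seventh 8 GB segment is allocated, for a 16256 MB total.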
On Fri, Apr 7, 2017 at 9:12 PM, Andres Freund wrote:
>> Why do you say exponential growth fragments memory? AFAIK, all those
>> allocations are well beyond the point where malloc starts mmaping
>> memory, so each of those segments should be a mmap segment,
>> independently
On 4/7/17 10:19 PM, Claudio Freire wrote:
>
> I rebased the early free patch (patch 3) to apply on top of the v9
> patch 2 (it needed some changes). I recognize the early free patch
> didn't get nearly as much scrutiny, so I'm fine with committing only 2
> if that one's ready to go but 3 isn't.
>
On Fri, Apr 7, 2017 at 10:06 PM, Claudio Freire wrote:
>>> >> +       if (seg->num_dead_tuples >= seg->max_dead_tuples)
>>> >> +       {
>>> >> +               /*
>>> >> +                * The segment is overflowing, so we must allocate a
On Fri, Apr 7, 2017 at 10:12 PM, Andres Freund wrote:
> On 2017-04-07 22:06:13 -0300, Claudio Freire wrote:
>> On Fri, Apr 7, 2017 at 9:56 PM, Andres Freund wrote:
>> > Hi,
>> >
>> >
>> > On 2017-04-07 19:43:39 -0300, Claudio Freire wrote:
>> >> On Fri,
On 2017-04-07 22:06:13 -0300, Claudio Freire wrote:
> On Fri, Apr 7, 2017 at 9:56 PM, Andres Freund wrote:
> > Hi,
> >
> >
> > On 2017-04-07 19:43:39 -0300, Claudio Freire wrote:
> >> On Fri, Apr 7, 2017 at 5:05 PM, Andres Freund wrote:
> >> > Hi,
> >> >
>
On Fri, Apr 7, 2017 at 9:56 PM, Andres Freund wrote:
> Hi,
>
>
> On 2017-04-07 19:43:39 -0300, Claudio Freire wrote:
>> On Fri, Apr 7, 2017 at 5:05 PM, Andres Freund wrote:
>> > Hi,
>> >
>> > I've *not* read the history of this thread. So I really might
Hi,
On 2017-04-07 19:43:39 -0300, Claudio Freire wrote:
> On Fri, Apr 7, 2017 at 5:05 PM, Andres Freund wrote:
> > Hi,
> >
> > I've *not* read the history of this thread. So I really might be
> > missing some context.
> >
> >
> >> From
On Fri, Apr 7, 2017 at 5:05 PM, Andres Freund wrote:
> Hi,
>
> I've *not* read the history of this thread. So I really might be
> missing some context.
>
>
>> From e37d29c26210a0f23cd2e9fe18a264312fecd383 Mon Sep 17 00:00:00 2001
>> From: Claudio Freire
On Fri, Apr 7, 2017 at 7:43 PM, Claudio Freire wrote:
>>> + * Lookup in that structure proceeds sequentially in the list of segments,
>>> + * and with a binary search within each segment. Since segment's size grows
>>> + * exponentially, this retains O(N log N) lookup
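The lookup strategy that comment describes (sequential over the list of segments, binary search within one) can be modeled in a few lines. This is a toy sketch with invented names, using plain sorted lists in place of the patch's TID segments:

```python
from bisect import bisect_left

def tid_is_dead(segments, tid):
    """segments: list of sorted TID lists, ascending across segments."""
    for seg in segments:                 # few segments, so a linear scan
        if tid <= seg[-1]:               # first segment whose upper bound covers tid
            i = bisect_left(seg, tid)    # binary search within the segment
            return i < len(seg) and seg[i] == tid
    return False
```

Because segment sizes grow exponentially, the number of segments stays logarithmic in the total number of dead tuples, which is what keeps the sequential outer scan cheap.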
Hi,
I've *not* read the history of this thread. So I really might be
missing some context.
> From e37d29c26210a0f23cd2e9fe18a264312fecd383 Mon Sep 17 00:00:00 2001
> From: Claudio Freire
> Date: Mon, 12 Sep 2016 23:36:42 -0300
> Subject: [PATCH] Vacuum: allow using
On Wed, Feb 1, 2017 at 7:55 PM, Claudio Freire wrote:
> On Wed, Feb 1, 2017 at 6:13 PM, Masahiko Sawada wrote:
>> On Wed, Feb 1, 2017 at 10:02 PM, Claudio Freire
>> wrote:
>>> On Wed, Feb 1, 2017 at 5:47 PM, Masahiko Sawada
On Wed, Feb 1, 2017 at 11:55 PM, Claudio Freire wrote:
> On Wed, Feb 1, 2017 at 6:13 PM, Masahiko Sawada wrote:
>> On Wed, Feb 1, 2017 at 10:02 PM, Claudio Freire
>> wrote:
>>> On Wed, Feb 1, 2017 at 5:47 PM, Masahiko
On Wed, Feb 1, 2017 at 6:13 PM, Masahiko Sawada wrote:
> On Wed, Feb 1, 2017 at 10:02 PM, Claudio Freire
> wrote:
>> On Wed, Feb 1, 2017 at 5:47 PM, Masahiko Sawada
>> wrote:
>>> Thank you for updating the patch.
>>>
>>>
On Wed, Feb 1, 2017 at 10:02 PM, Claudio Freire wrote:
> On Wed, Feb 1, 2017 at 5:47 PM, Masahiko Sawada wrote:
>> Thank you for updating the patch.
>>
>> Whole patch looks good to me except for the following one comment.
>> This is the final
On Wed, Feb 1, 2017 at 5:47 PM, Masahiko Sawada wrote:
> Thank you for updating the patch.
>
> Whole patch looks good to me except for the following one comment.
> This is the final comment from me.
>
> /*
> * lazy_tid_reaped() -- is a particular tid deletable?
> *
> *
On Tue, Jan 31, 2017 at 3:05 AM, Claudio Freire wrote:
> On Mon, Jan 30, 2017 at 5:51 AM, Masahiko Sawada
> wrote:
>>
>> * We are willing to use at most maintenance_work_mem (or perhaps
>> * autovacuum_work_mem) memory space to keep track of
On Tue, Jan 31, 2017 at 11:05 AM, Claudio Freire wrote:
> Updated and rebased v7 attached.
Moved to CF 2017-03.
--
Michael
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
On Mon, Jan 30, 2017 at 5:51 AM, Masahiko Sawada wrote:
>
> * We are willing to use at most maintenance_work_mem (or perhaps
> * autovacuum_work_mem) memory space to keep track of dead tuples. We
> * initially allocate an array of TIDs of that size, with an upper
On Thu, Jan 26, 2017 at 5:11 AM, Claudio Freire wrote:
> On Wed, Jan 25, 2017 at 1:54 PM, Masahiko Sawada
> wrote:
>> Thank you for updating the patch!
>>
>> +   /*
>> +    * Quickly rule out by lower bound (should happen a lot) Upper bound
On Wed, Jan 25, 2017 at 1:54 PM, Masahiko Sawada wrote:
> Thank you for updating the patch!
>
> +   /*
> +    * Quickly rule out by lower bound (should happen a lot) Upper bound was
> +    * already checked by segment search
> +    */
> +   if
On Tue, Jan 24, 2017 at 1:49 AM, Claudio Freire wrote:
> On Fri, Jan 20, 2017 at 6:24 AM, Masahiko Sawada
> wrote:
>> On Thu, Jan 19, 2017 at 8:31 PM, Claudio Freire
>> wrote:
>>> On Thu, Jan 19, 2017 at 6:33 AM, Anastasia
On Fri, Jan 20, 2017 at 6:24 AM, Masahiko Sawada wrote:
> On Thu, Jan 19, 2017 at 8:31 PM, Claudio Freire
> wrote:
>> On Thu, Jan 19, 2017 at 6:33 AM, Anastasia Lubennikova
>> wrote:
>>> 28.12.2016 23:43, Claudio
I think this patch no longer applies because of conflicts with the one I
just pushed. Please rebase.
Thanks
--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
I pushed this patch after rewriting it rather completely. I added
tracing notices to inspect the blocks it was prefetching and observed
that the original coding was failing to prefetch the final streak of
blocks in the table, which is an important oversight considering that it
may very well be
Alvaro Herrera wrote:
> There was no discussion whatsoever of the "prefetch" patch in this
> thread; and as far as I can see, nobody even mentioned such an idea in
> the thread. This prefetch patch appeared out of the blue and there was
> no discussion about it that I can see. Now I was about
You posted two patches with this preamble:
Claudio Freire wrote:
> Attached is the raw output of the test, the script used to create it,
> and just in case the patch set used. I believe it's the same as the
> last one I posted, just rebased.
There was no discussion whatsoever of the "prefetch"
On Thu, Jan 19, 2017 at 8:31 PM, Claudio Freire wrote:
> On Thu, Jan 19, 2017 at 6:33 AM, Anastasia Lubennikova
> wrote:
>> 28.12.2016 23:43, Claudio Freire:
>>
>> Attached v4 patches with the requested fixes.
>>
>>
>> Sorry for being late,
On Thu, Jan 19, 2017 at 6:33 AM, Anastasia Lubennikova
wrote:
> 28.12.2016 23:43, Claudio Freire:
>
> Attached v4 patches with the requested fixes.
>
>
> Sorry for being late, but the tests took a lot of time.
I know. Takes me several days to run my test scripts
28.12.2016 23:43, Claudio Freire:
Attached v4 patches with the requested fixes.
Sorry for being late, but the tests took a lot of time.
create table t1 as select i, md5(random()::text) from
generate_series(0,4) as i;
create index md5_idx ON t1(md5);
update t1 set md5 = md5((random()
On Wed, Dec 28, 2016 at 3:41 PM, Claudio Freire wrote:
>> Anyway, I found the problem that had caused segfault.
>>
>> for (segindex = 0; segindex <= vacrelstats->dead_tuples.last_seg; tupindex =
>> 0, segindex++)
>> {
>> DeadTuplesSegment *seg =
>>
On Wed, Dec 28, 2016 at 10:26 AM, Anastasia Lubennikova
wrote:
> 27.12.2016 20:14, Claudio Freire:
>
> On Tue, Dec 27, 2016 at 10:41 AM, Anastasia Lubennikova
> wrote:
>
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0
27.12.2016 20:14, Claudio Freire:
On Tue, Dec 27, 2016 at 10:41 AM, Anastasia Lubennikova
wrote:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x006941e7 in lazy_vacuum_heap (onerel=0x1ec2360,
vacrelstats=0x1ef6e00) at vacuumlazy.c:1417
27.12.2016 16:54, Alvaro Herrera:
Anastasia Lubennikova wrote:
I ran configure using following set of flags:
./configure --enable-tap-tests --enable-cassert --enable-debug
--enable-depend CFLAGS="-O0 -g3 -fno-omit-frame-pointer"
And then ran make check. Here is the stacktrace:
Program
On Tue, Dec 27, 2016 at 10:41 AM, Anastasia Lubennikova
wrote:
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0 0x006941e7 in lazy_vacuum_heap (onerel=0x1ec2360,
> vacrelstats=0x1ef6e00) at vacuumlazy.c:1417
> 1417            tblk =
>
On Tue, Dec 27, 2016 at 10:54 AM, Alvaro Herrera
wrote:
> Anastasia Lubennikova wrote:
>
>> I ran configure using following set of flags:
>> ./configure --enable-tap-tests --enable-cassert --enable-debug
>> --enable-depend CFLAGS="-O0 -g3 -fno-omit-frame-pointer"
>> And
Anastasia Lubennikova wrote:
> I ran configure using following set of flags:
> ./configure --enable-tap-tests --enable-cassert --enable-debug
> --enable-depend CFLAGS="-O0 -g3 -fno-omit-frame-pointer"
> And then ran make check. Here is the stacktrace:
>
> Program terminated with signal SIGSEGV,
23.12.2016 22:54, Claudio Freire:
On Fri, Dec 23, 2016 at 1:39 PM, Anastasia Lubennikova
wrote:
I found the reason. I configure postgres with CFLAGS="-O0" and it causes
Segfault on initdb.
It works fine and passes tests with default configure flags, but I'm
On Fri, Dec 23, 2016 at 1:39 PM, Anastasia Lubennikova
wrote:
>> On Thu, Dec 22, 2016 at 12:22 PM, Claudio Freire
>> wrote:
>>>
>>> On Thu, Dec 22, 2016 at 12:15 PM, Anastasia Lubennikova
>>> wrote:
The
22.12.2016 21:18, Claudio Freire:
On Thu, Dec 22, 2016 at 12:22 PM, Claudio Freire wrote:
On Thu, Dec 22, 2016 at 12:15 PM, Anastasia Lubennikova
wrote:
The following review has been posted through the commitfest application:
make
On Thu, Dec 22, 2016 at 12:22 PM, Claudio Freire wrote:
> On Thu, Dec 22, 2016 at 12:15 PM, Anastasia Lubennikova
> wrote:
>> The following review has been posted through the commitfest application:
>> make installcheck-world: tested, failed
>>
On Thu, Dec 22, 2016 at 12:15 PM, Anastasia Lubennikova
wrote:
> The following review has been posted through the commitfest application:
> make installcheck-world: tested, failed
> Implements feature: not tested
> Spec compliant: not tested
>
The following review has been posted through the commitfest application:
make installcheck-world: tested, failed
Implements feature: not tested
Spec compliant: not tested
Documentation: not tested
Hi,
I haven't read through the thread yet, just tried to apply the patch
On Tue, Nov 22, 2016 at 4:53 AM, Claudio Freire
wrote:
> On Mon, Nov 21, 2016 at 2:15 PM, Masahiko Sawada
> wrote:
> > On Fri, Nov 18, 2016 at 6:54 AM, Claudio Freire
> wrote:
> >> On Thu, Nov 17, 2016 at 6:34 PM, Robert
On Mon, Nov 21, 2016 at 2:15 PM, Masahiko Sawada wrote:
> On Fri, Nov 18, 2016 at 6:54 AM, Claudio Freire
> wrote:
>> On Thu, Nov 17, 2016 at 6:34 PM, Robert Haas wrote:
>>> On Thu, Nov 17, 2016 at 1:42 PM, Claudio Freire
On Fri, Nov 18, 2016 at 6:54 AM, Claudio Freire wrote:
> On Thu, Nov 17, 2016 at 6:34 PM, Robert Haas wrote:
>> On Thu, Nov 17, 2016 at 1:42 PM, Claudio Freire
>> wrote:
>>> Attached is patch 0002 with pgindent applied over
On Thu, Nov 17, 2016 at 6:34 PM, Robert Haas wrote:
> On Thu, Nov 17, 2016 at 1:42 PM, Claudio Freire
> wrote:
>> Attached is patch 0002 with pgindent applied over it
>>
>> I don't think there's any other formatting issue, but feel free to
>> point
On Thu, Nov 17, 2016 at 1:42 PM, Claudio Freire wrote:
> Attached is patch 0002 with pgindent applied over it
>
> I don't think there's any other formatting issue, but feel free to
> point a finger to it if I missed any
Hmm, I had imagined making all of the segments the
On Thu, Nov 17, 2016 at 2:51 PM, Claudio Freire wrote:
> On Thu, Nov 17, 2016 at 2:34 PM, Masahiko Sawada
> wrote:
>> I glanced at the patches but neither of the patches obeys the coding
>> style of PostgreSQL.
>> Please refer to [1].
>>
>> [1]
>>
On Thu, Nov 17, 2016 at 2:34 PM, Masahiko Sawada wrote:
> I glanced at the patches but neither of the patches obeys the coding
> style of PostgreSQL.
> Please refer to [1].
>
> [1]
>
On Thu, Oct 27, 2016 at 5:25 AM, Claudio Freire wrote:
> On Thu, Sep 15, 2016 at 1:16 PM, Claudio Freire
> wrote:
>> On Wed, Sep 14, 2016 at 12:24 PM, Claudio Freire
>> wrote:
>>> On Wed, Sep 14, 2016 at 12:17 PM, Robert
On Fri, Sep 16, 2016 at 9:47 AM, Pavan Deolasee
wrote:
> On Fri, Sep 16, 2016 at 7:03 PM, Robert Haas wrote:
>> On Thu, Sep 15, 2016 at 11:39 PM, Pavan Deolasee
>> wrote:
>> > But I actually wonder if we are over
On Fri, Sep 16, 2016 at 7:03 PM, Robert Haas wrote:
> On Thu, Sep 15, 2016 at 11:39 PM, Pavan Deolasee
> wrote:
> > But I actually wonder if we are over engineering things and
> overestimating
> > cost of memmove etc. How about this simpler
On Thu, Sep 15, 2016 at 11:39 PM, Pavan Deolasee
wrote:
> But I actually wonder if we are over engineering things and overestimating
> cost of memmove etc. How about this simpler approach:
Don't forget that you need to handle the case where
maintenance_work_mem is quite
On Fri, Sep 16, 2016 at 9:09 AM, Pavan Deolasee
wrote:
>
> I also realised that we can compact the TID array in step (b) above
> because we only need to store 17 bits for block numbers (we already know
> which 1GB segment they belong to). Given that usable offsets are
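The 17-bit figure in that message follows from the segment geometry; a quick check, assuming 8 kB pages and 1 GB relation segments as in the quoted discussion:

```python
# A 1 GB segment holds 2**30 / 2**13 = 2**17 pages of 8 kB, so a block
# number relative to its segment needs only 17 bits.
pages_per_gb_segment = (1 << 30) // 8192                   # 131072 == 2**17
bits_for_block = (pages_per_gb_segment - 1).bit_length()   # offsets 0..2**17-1
```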
On Fri, Sep 16, 2016 at 12:24 AM, Claudio Freire
wrote:
> On Thu, Sep 15, 2016 at 3:48 PM, Tomas Vondra
> wrote:
> > For example, we always allocate the TID array as large as we can fit into
> > m_w_m, but maybe we don't need to wait with
On Thu, Sep 15, 2016 at 3:48 PM, Tomas Vondra
wrote:
> For example, we always allocate the TID array as large as we can fit into
> m_w_m, but maybe we don't need to wait with switching to the bitmap until
> filling the whole array - we could wait as long as the
On 09/15/2016 06:40 PM, Robert Haas wrote:
On Thu, Sep 15, 2016 at 12:22 PM, Tom Lane wrote:
Tomas Vondra writes:
On 09/14/2016 07:57 PM, Tom Lane wrote:
People who are vacuuming because they are out of disk space will be very
very unhappy
On Thu, Sep 15, 2016 at 12:22 PM, Tom Lane wrote:
> Tomas Vondra writes:
>> On 09/14/2016 07:57 PM, Tom Lane wrote:
>>> People who are vacuuming because they are out of disk space will be very
>>> very unhappy with that solution.
>
>> The people
Tomas Vondra writes:
> On 09/14/2016 07:57 PM, Tom Lane wrote:
>> People who are vacuuming because they are out of disk space will be very
>> very unhappy with that solution.
> The people are usually running out of space for data, while these files
> would be
On Thu, Sep 15, 2016 at 12:50 PM, Tomas Vondra
wrote:
> On 09/14/2016 07:57 PM, Tom Lane wrote:
>>
>> Pavan Deolasee writes:
>>>
>>> On Wed, Sep 14, 2016 at 10:53 PM, Alvaro Herrera
>>>
>>> wrote:
One
On 09/14/2016 05:17 PM, Robert Haas wrote:
I am kind of doubtful about this whole line of investigation because
we're basically trying pretty hard to fix something that I'm not sure
is broken. I do agree that, all other things being equal, the TID
lookups will probably be faster with a
On 09/14/2016 07:57 PM, Tom Lane wrote:
Pavan Deolasee writes:
On Wed, Sep 14, 2016 at 10:53 PM, Alvaro Herrera
wrote:
One thing not quite clear to me is how do we create the bitmap
representation starting from the array representation in
On Thu, Sep 15, 2016 at 2:40 AM, Simon Riggs wrote:
> On 14 September 2016 at 11:19, Pavan Deolasee
> wrote:
>
>>> In
>>> theory we could even start with the list of TIDs and switch to the
>>> bitmap if the TID list becomes larger than the
On Wed, Sep 14, 2016 at 1:23 PM, Alvaro Herrera
wrote:
> Robert Haas wrote:
>> Actually, I think that probably *is* worthwhile, specifically because
>> it might let us avoid multiple index scans in cases where we currently
>> require them. Right now, our default
Pavan Deolasee writes:
> On Wed, Sep 14, 2016 at 10:53 PM, Alvaro Herrera
> wrote:
>> One thing not quite clear to me is how do we create the bitmap
>> representation starting from the array representation in midflight
>> without using twice as
On Wed, Sep 14, 2016 at 10:53 PM, Alvaro Herrera
wrote:
>
>
> One thing not quite clear to me is how do we create the bitmap
> representation starting from the array representation in midflight
> without using twice as much memory transiently. Are we going to write
>
On 14 September 2016 at 11:19, Pavan Deolasee wrote:
>> In
>> theory we could even start with the list of TIDs and switch to the
>> bitmap if the TID list becomes larger than the bitmap would have been,
>> but I don't know if it's worth the effort.
>>
>
> Yes, that
Robert Haas wrote:
> Actually, I think that probably *is* worthwhile, specifically because
> it might let us avoid multiple index scans in cases where we currently
> require them. Right now, our default maintenance_work_mem value is
> 64MB, which is enough to hold a little over ten million
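The "little over ten million" figure checks out, assuming the 6-byte dead-tuple TID representation discussed throughout the thread:

```python
# 64 MB of maintenance_work_mem at 6 bytes per dead-tuple pointer.
tids_in_64mb = 64 * (1 << 20) // 6    # ~11.2 million TIDs
```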
On Wed, Sep 14, 2016 at 12:17 PM, Robert Haas wrote:
> For instance, one idea to grow memory usage incrementally would be to
> store dead tuple information separately for each 1GB segment of the
> relation. So we have an array of dead-tuple-representation objects,
> one
On Wed, Sep 14, 2016 at 8:47 PM, Robert Haas wrote:
>
>
> I am kind of doubtful about this whole line of investigation because
> we're basically trying pretty hard to fix something that I'm not sure
> is broken. I do agree that, all other things being equal, the TID
>
On Sep 14, 2016 5:18 PM, "Robert Haas" wrote:
>
> On Wed, Sep 14, 2016 at 8:16 AM, Pavan Deolasee
> wrote:
> > Ah, thanks. So MaxHeapTuplesPerPage sets the upper boundary for the
> > per-page bitmap size. That's about 36 bytes for an 8K page. IOW if
On Wed, Sep 14, 2016 at 12:17 PM, Robert Haas wrote:
>
> I am kind of doubtful about this whole line of investigation because
> we're basically trying pretty hard to fix something that I'm not sure
> is broken. I do agree that, all other things being equal, the TID
>
On Wed, Sep 14, 2016 at 8:16 AM, Pavan Deolasee
wrote:
> Ah, thanks. So MaxHeapTuplesPerPage sets the upper boundary for the per-page
> bitmap size. That's about 36 bytes for an 8K page. IOW if on average there
> are 6 or more dead tuples per page, the bitmap will outperform
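Pavan's break-even estimate can be reproduced from the numbers he quotes (MaxHeapTuplesPerPage = 291 for an 8 kB page, 6 bytes per array TID); this sketch just redoes his arithmetic:

```python
MAX_HEAP_TUPLES_PER_PAGE = 291                     # 8 kB page, per the message
bitmap_bytes = -(-MAX_HEAP_TUPLES_PER_PAGE // 8)   # ceil(291 / 8) = 37 bytes
break_even = bitmap_bytes / 6                      # ~6 dead tuples per page
```

37 bytes is the exact ceiling behind the "about 36 bytes" in the message; dividing by 6 bytes per array entry gives the roughly six dead tuples per page at which the bitmap starts to win.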
On Wed, Sep 14, 2016 at 5:32 PM, Robert Haas wrote:
> On Wed, Sep 14, 2016 at 5:45 AM, Pavan Deolasee
> wrote:
> > Another interesting bit about these small tables is that the largest used
> > offset for these tables never went beyond 291 which
On Wed, Sep 14, 2016 at 5:45 AM, Pavan Deolasee
wrote:
> Another interesting bit about these small tables is that the largest used
> offset for these tables never went beyond 291 which is the value of
> MaxHeapTuplesPerPage. I don't know if there is something that
On Wed, Sep 14, 2016 at 8:47 AM, Pavan Deolasee
wrote:
>
>>
> Sawada-san offered to reimplement the patch based on what I proposed
> upthread. In the new scheme of things, we will allocate a fixed size bitmap
> of length 2D bits per page where D is average page density
On Wed, Sep 14, 2016 at 12:21 AM, Robert Haas wrote:
> On Fri, Sep 9, 2016 at 3:04 AM, Masahiko Sawada
> wrote:
> > Attached PoC patch changes the representation of dead tuple locations
> > to the hashmap having tuple bitmap.
> > The one hashmap
On Tue, Sep 13, 2016 at 4:06 PM, Robert Haas wrote:
> On Tue, Sep 13, 2016 at 2:59 PM, Claudio Freire
> wrote:
>> I've finished writing that patch, I'm in the process of testing its CPU
>> impact.
>>
>> First test seemed to hint at a 40% increase
On Tue, Sep 13, 2016 at 2:59 PM, Claudio Freire wrote:
> I've finished writing that patch, I'm in the process of testing its CPU
> impact.
>
> First test seemed to hint at a 40% increase in CPU usage, which seems
> rather steep compared to what I expected, so I'm trying
On Tue, Sep 13, 2016 at 3:51 PM, Robert Haas wrote:
> On Fri, Sep 9, 2016 at 3:04 AM, Masahiko Sawada wrote:
>> Attached PoC patch changes the representation of dead tuple locations
>> to the hashmap having tuple bitmap.
>> The one hashmap entry
On Tue, Sep 13, 2016 at 11:51 AM, Robert Haas wrote:
> I think it's probably wrong to worry that an array-of-arrays is going
> to be meaningfully slower than a single array here. It's basically
> costing you some small number of additional memory references per
> tuple,
On Fri, Sep 9, 2016 at 3:04 AM, Masahiko Sawada wrote:
> Attached PoC patch changes the representation of dead tuple locations
> to the hashmap having tuple bitmap.
> The one hashmap entry consists of the block number and the TID bitmap
> of corresponding block, and the
On Fri, Sep 9, 2016 at 12:33 PM, Pavan Deolasee
wrote:
>
>
> On Thu, Sep 8, 2016 at 11:40 PM, Masahiko Sawada
> wrote:
>>
>>
>>
>> Making the vacuum possible to choose between two data representations
>> sounds good.
>> I implemented the patch
On Thu, Sep 8, 2016 at 11:40 PM, Masahiko Sawada
wrote:
>
>
> Making the vacuum possible to choose between two data representations
> sounds good.
> I implemented the patch that changes dead tuple representation to bitmap
> before.
> I will measure the performance of
On Thu, Sep 8, 2016 at 11:54 PM, Pavan Deolasee
wrote:
>
>
> On Wed, Sep 7, 2016 at 10:18 PM, Claudio Freire
> wrote:
>>
>> On Wed, Sep 7, 2016 at 12:12 PM, Greg Stark wrote:
>> > On Wed, Sep 7, 2016 at 1:45 PM, Simon Riggs
On Thu, Sep 8, 2016 at 8:42 PM, Claudio Freire
wrote:
> On Thu, Sep 8, 2016 at 11:54 AM, Pavan Deolasee
> wrote:
> > For example, for a table with 60 bytes wide tuple (including 24 byte
> > header), each page can approximately have 8192/60 = 136
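The per-page density arithmetic that last message starts from is simply:

```python
# 8192-byte pages, 60-byte tuples (24-byte header included), as in the
# quoted example.
tuples_per_page = 8192 // 60    # 136
```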