On Fri, Sep 9, 2016 at 3:04 AM, Masahiko Sawada wrote:
> The attached PoC patch changes the representation of dead tuple locations
> to a hashmap holding tuple bitmaps.
> Each hashmap entry consists of the block number and the TID bitmap
> of the corresponding block, and the
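Here is a minimal standalone sketch of a per-block dead-tuple bitmap entry along the lines described above. The struct name, the 291-offsets-per-page constant, and the helper functions are assumptions for illustration only, not the patch's actual code:

/*
 * Standalone sketch of a per-block dead-tuple bitmap.  Names and constants
 * are illustrative; the real patch uses PostgreSQL's own types and hash
 * table implementation.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_OFFSETS_PER_PAGE 291   /* assumed max heap tuples per 8 kB page */

typedef struct BlockBitmapEntry
{
    uint32_t    blkno;                                  /* heap block number (hash key) */
    uint8_t     bits[(MAX_OFFSETS_PER_PAGE + 7) / 8];   /* one bit per line pointer */
} BlockBitmapEntry;

/* Mark offset 'off' (1-based, as TID offsets are) dead in this block's bitmap. */
static void
bitmap_set_dead(BlockBitmapEntry *e, uint16_t off)
{
    e->bits[(off - 1) / 8] |= (uint8_t) (1u << ((off - 1) % 8));
}

/* Check whether offset 'off' was recorded dead; replaces a binary search over a TID array. */
static bool
bitmap_is_dead(const BlockBitmapEntry *e, uint16_t off)
{
    return (e->bits[(off - 1) / 8] >> ((off - 1) % 8)) & 1;
}

int
main(void)
{
    BlockBitmapEntry e = {.blkno = 42};

    bitmap_set_dead(&e, 17);
    printf("block %u, offset 17 dead? %d; offset 18 dead? %d\n",
           (unsigned) e.blkno, bitmap_is_dead(&e, 17), bitmap_is_dead(&e, 18));
    return 0;
}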
On Fri, Sep 9, 2016 at 12:33 PM, Pavan Deolasee
wrote:
>
>
> On Thu, Sep 8, 2016 at 11:40 PM, Masahiko Sawada
> wrote:
>>
>>
>>
>> Making vacuum able to choose between two data representations
>> sounds good.
>> I implemented the patch
On Thu, Sep 8, 2016 at 11:40 PM, Masahiko Sawada
wrote:
>
>
> Making vacuum able to choose between two data representations
> sounds good.
> I implemented a patch that changes the dead tuple representation to a
> bitmap before.
> I will measure the performance of
On Thu, Sep 8, 2016 at 11:54 PM, Pavan Deolasee
wrote:
>
>
> On Wed, Sep 7, 2016 at 10:18 PM, Claudio Freire
> wrote:
>>
>> On Wed, Sep 7, 2016 at 12:12 PM, Greg Stark wrote:
>> > On Wed, Sep 7, 2016 at 1:45 PM, Simon Riggs
On Thu, Sep 8, 2016 at 8:42 PM, Claudio Freire
wrote:
> On Thu, Sep 8, 2016 at 11:54 AM, Pavan Deolasee
> wrote:
> > For example, for a table with 60-byte-wide tuples (including the 24-byte
> > header), each page can hold approximately 8192/60 = 136
On Thu, Sep 8, 2016 at 11:54 AM, Pavan Deolasee
wrote:
> For example, for a table with 60-byte-wide tuples (including the 24-byte
> header), each page can hold approximately 8192/60 = 136 tuples. Say we
> provision for 136*2 = 272 bits per page, i.e. 34 bytes per page for the
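To make that arithmetic easy to check, here is a small standalone sketch of the provisioning math; the 60-byte tuple width and the 2x provisioning factor are just the example figures quoted above:

/* Back-of-the-envelope sketch of the per-page bitmap provisioning described above. */
#include <stdio.h>

int
main(void)
{
    const int page_size   = 8192;          /* 8 kB heap page */
    const int tuple_width = 60;            /* example: 36-byte payload + 24-byte header */
    const int tuples_per_page = page_size / tuple_width;          /* = 136 */
    const int bits_per_page   = tuples_per_page * 2;              /* provision 2x = 272 bits */
    const int bytes_per_page  = (bits_per_page + 7) / 8;          /* = 34 bytes */

    printf("%d tuples/page -> %d bits -> %d bytes of bitmap per page\n",
           tuples_per_page, bits_per_page, bytes_per_page);
    return 0;
}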
On Wed, Sep 7, 2016 at 10:18 PM, Claudio Freire
wrote:
> On Wed, Sep 7, 2016 at 12:12 PM, Greg Stark wrote:
> > On Wed, Sep 7, 2016 at 1:45 PM, Simon Riggs
> wrote:
> >> On 6 September 2016 at 19:59, Tom Lane
On 9/8/16 3:48 AM, Masahiko Sawada wrote:
If we replaced dead_tuples with an array-of-arrays, wouldn't there be a
negative performance impact for lazy_tid_reap()?
As chunks are added, that performance would decrease.
Yes, it certainly would, as you'd have to do 2 binary searches. I'm not
sure how
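A minimal standalone sketch of what that two-level lookup could look like, assuming non-empty chunks sorted by their first TID; the types and function names are made up for illustration and are not PostgreSQL's:

/*
 * Standalone sketch of a two-level lookup over an "array of arrays" of dead
 * tuple IDs: binary-search the chunk directory first, then binary-search
 * inside the chosen chunk.  Types and layout are illustrative only.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct Tid { uint32_t blk; uint16_t off; } Tid;

typedef struct Chunk
{
    Tid    *tids;       /* sorted dead TIDs in this chunk */
    size_t  ntids;      /* assumed > 0 */
} Chunk;

static int
tid_cmp(Tid a, Tid b)
{
    if (a.blk != b.blk)
        return a.blk < b.blk ? -1 : 1;
    if (a.off != b.off)
        return a.off < b.off ? -1 : 1;
    return 0;
}

/* Return true if 'key' is among the recorded dead tuples. */
static bool
tid_is_dead(const Chunk *chunks, size_t nchunks, Tid key)
{
    /* First binary search: find the last chunk whose first TID is <= key. */
    size_t lo = 0, hi = nchunks;
    while (lo < hi)
    {
        size_t mid = (lo + hi) / 2;
        if (tid_cmp(chunks[mid].tids[0], key) <= 0)
            lo = mid + 1;
        else
            hi = mid;
    }
    if (lo == 0)
        return false;
    const Chunk *c = &chunks[lo - 1];

    /* Second binary search: look for the key inside that chunk. */
    size_t l = 0, h = c->ntids;
    while (l < h)
    {
        size_t mid = (l + h) / 2;
        int    cmp = tid_cmp(c->tids[mid], key);

        if (cmp == 0)
            return true;
        if (cmp < 0)
            l = mid + 1;
        else
            h = mid;
    }
    return false;
}

int
main(void)
{
    Tid   data[] = { {1, 3}, {1, 7}, {2, 1}, {5, 2}, {5, 9}, {8, 4} };
    Chunk chunks[] = { { data, 3 }, { data + 3, 3 } };   /* two chunks of 3 TIDs each */
    Tid   probe = { 5, 2 };

    return tid_is_dead(chunks, 2, probe) ? 0 : 1;
}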
On Wed, Sep 7, 2016 at 2:39 AM, Robert Haas wrote:
> On Tue, Sep 6, 2016 at 10:28 PM, Claudio Freire
> wrote:
>>> The problem with this is that we allocate the entire amount of
>>> maintenance_work_mem even when the number of actual dead tuples
On Wed, Sep 7, 2016 at 12:12 PM, Greg Stark wrote:
> On Wed, Sep 7, 2016 at 1:45 PM, Simon Riggs wrote:
>> On 6 September 2016 at 19:59, Tom Lane wrote:
>>
>>> The idea of looking to the stats to *guess* about how many tuples are
>>>
On Wed, Sep 7, 2016 at 1:45 PM, Simon Riggs wrote:
> On 6 September 2016 at 19:59, Tom Lane wrote:
>
>> The idea of looking to the stats to *guess* about how many tuples are
>> removable doesn't seem bad at all. But imagining that that's going to be
>>
On 6 September 2016 at 19:59, Tom Lane wrote:
> The idea of looking to the stats to *guess* about how many tuples are
> removable doesn't seem bad at all. But imagining that that's going to be
> exact is folly of the first magnitude.
Yes. Bear in mind I had already
Simon Riggs writes:
> On 6 September 2016 at 19:23, Robert Haas wrote:
>> On Tue, Sep 6, 2016 at 2:16 PM, Simon Riggs wrote:
>>> What occurs to me is that we can exactly predict how many tuples we
>>> are going to get when we
On Tue, Sep 6, 2016 at 2:51 PM, Simon Riggs wrote:
> On 6 September 2016 at 19:23, Robert Haas wrote:
>> On Tue, Sep 6, 2016 at 2:16 PM, Simon Riggs wrote:
>>> What occurs to me is that we can exactly predict how many tuples
On 6 September 2016 at 19:23, Robert Haas wrote:
> On Tue, Sep 6, 2016 at 2:16 PM, Simon Riggs wrote:
>> What occurs to me is that we can exactly predict how many tuples we
>> are going to get when we autovacuum, since we measure that and we know
>>
On Tue, Sep 6, 2016 at 3:45 AM, Simon Riggs wrote:
> On 5 September 2016 at 21:58, Claudio Freire wrote:
>
> How long does that part ever take? Is there any substantial gain from
> this?
>
>> Btw, without a further patch to prefetch pages
On Tue, Sep 6, 2016 at 2:16 PM, Simon Riggs wrote:
> What occurs to me is that we can exactly predict how many tuples we
> are going to get when we autovacuum, since we measure that and we know
> what the number is when we trigger it.
>
> So there doesn't need to be any
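As an illustration of that idea, here is a standalone sketch that sizes the dead-tuple array from the statistics that triggered autovacuum rather than from the full maintenance_work_mem. The trigger formula matches the documented autovacuum thresholds; the 10% slop and the example numbers are assumptions:

/*
 * Illustrative sketch only: size the dead-tuple array from the statistics
 * that triggered autovacuum.  The threshold formula is the documented one;
 * the slop factor and example figures are made-up choices here.
 */
#include <stdio.h>

int
main(void)
{
    double reltuples   = 10000000.0;   /* pg_class.reltuples for the example table */
    double n_dead_tup  = 2100000.0;    /* pg_stat_all_tables.n_dead_tup at trigger time */

    /* Autovacuum fires once n_dead_tup exceeds threshold + scale_factor * reltuples. */
    double threshold    = 50.0;        /* autovacuum_vacuum_threshold (default) */
    double scale_factor = 0.2;         /* autovacuum_vacuum_scale_factor (default) */
    double trigger_at   = threshold + scale_factor * reltuples;   /* = 2000050 */

    /* Provision for the measured dead tuples plus some slop, instead of the maximum. */
    double slots = n_dead_tup * 1.10;

    printf("trigger point: %.0f dead tuples; measured: %.0f; provision %.0f slots "
           "(~%.0f kB at 6 bytes per TID)\n",
           trigger_at, n_dead_tup, slots, slots * 6.0 / 1024.0);
    return 0;
}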
On 6 September 2016 at 19:09, Robert Haas wrote:
> On Tue, Sep 6, 2016 at 2:06 PM, Simon Riggs wrote:
>> On 6 September 2016 at 19:00, Tom Lane wrote:
>>> Robert Haas writes:
Yeah, but I've seen
On Tue, Sep 6, 2016 at 3:11 PM, Tom Lane wrote:
> We could get around (1) by something like Robert's idea of segmented
> allocation, but TBH I've seen nothing on this thread to make me think
> it's necessary or would even result in any performance improvement
> at all. The
Simon Riggs writes:
> Is there a reason we can't use repalloc here?
(1) repalloc will probably copy the data.
(2) that answer doesn't excuse you from choosing a limit.
We could get around (1) by something like Robert's idea of segmented
allocation, but TBH I've seen
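Here is a minimal standalone sketch of what segmented allocation could look like for the dead-tuple array: segments are appended as needed, so existing TIDs are never copied the way a growing repalloc'd array would be. The names, segment size, and plain malloc/realloc calls are illustrative assumptions, not PostgreSQL code:

/*
 * Standalone sketch of "segmented" growth for the dead-tuple array.  Instead
 * of reallocating one ever-larger array (which copies the data), append
 * fixed-size segments as needed.  Error checking omitted for brevity.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SEGMENT_TIDS (128 * 1024)      /* assumed TIDs per segment */

typedef struct Tid { uint32_t blk; uint16_t off; } Tid;

typedef struct TidStore
{
    Tid   **segments;       /* array of segment pointers */
    size_t  nsegments;
    size_t  ntids;          /* total TIDs stored */
} TidStore;

static void
tidstore_add(TidStore *store, Tid tid)
{
    size_t  seg = store->ntids / SEGMENT_TIDS;
    size_t  pos = store->ntids % SEGMENT_TIDS;

    if (seg == store->nsegments)
    {
        /* Need a new segment: only the small pointer array is ever reallocated. */
        store->segments = realloc(store->segments,
                                  (store->nsegments + 1) * sizeof(Tid *));
        store->segments[store->nsegments++] = malloc(SEGMENT_TIDS * sizeof(Tid));
    }
    store->segments[seg][pos] = tid;
    store->ntids++;
}

int
main(void)
{
    TidStore store = {0};

    for (uint32_t blk = 0; blk < 10; blk++)
        for (uint16_t off = 1; off <= 100; off++)
            tidstore_add(&store, (Tid){ blk, off });

    printf("%zu TIDs stored in %zu segment(s)\n", store.ntids, store.nsegments);
    return 0;
}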
On Tue, Sep 6, 2016 at 2:09 PM, Robert Haas wrote:
> There are two possible problems, either of which is necessarily fatal:
I meant to write "neither of which" not "either of which".
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL
On Tue, Sep 6, 2016 at 2:06 PM, Simon Riggs wrote:
> On 6 September 2016 at 19:00, Tom Lane wrote:
>> Robert Haas writes:
>>> Yeah, but I've seen actual breakage from exactly this issue on
>>> customer systems even with the 1GB
On Tue, Sep 6, 2016 at 2:00 PM, Tom Lane wrote:
> Robert Haas writes:
>> Yeah, but I've seen actual breakage from exactly this issue on
>> customer systems even with the 1GB limit, and when we start allowing
>> 100GB it's going to get a whole lot worse.
On 6 September 2016 at 19:00, Tom Lane wrote:
> Robert Haas writes:
>> Yeah, but I've seen actual breakage from exactly this issue on
>> customer systems even with the 1GB limit, and when we start allowing
>> 100GB it's going to get a whole lot worse.
>
Robert Haas writes:
> Yeah, but I've seen actual breakage from exactly this issue on
> customer systems even with the 1GB limit, and when we start allowing
> 100GB it's going to get a whole lot worse.
While it's not necessarily a bad idea to consider these things,
I think
On Tue, Sep 6, 2016 at 11:22 PM, Claudio Freire wrote:
> CREATE INDEX could also allocate 218GB, you just need to index enough
> columns and you'll get that.
>
> Aside from the fact that CREATE INDEX will only allocate what is going
> to be used and VACUUM will
On Tue, Sep 6, 2016 at 2:39 PM, Robert Haas wrote:
>> I could attempt that, but I don't see the difference between
>> vacuum and create index in this case. Both could allocate a huge chunk
>> of the virtual address space if maintenance_work_mem says so, both
>>
On Tue, Sep 6, 2016 at 10:28 PM, Claudio Freire wrote:
>> The problem with this is that we allocate the entire amount of
>> maintenance_work_mem even when the number of actual dead tuples turns
>> out to be very small. That's not so bad if the amount of memory we're
>>
On Sun, Sep 4, 2016 at 8:10 PM, Jim Nasby wrote:
> On 9/4/16 1:46 AM, Simon Riggs wrote:
>>>
>>> > The patch also makes vacuum free the dead_tuples before starting
>>> > truncation. It didn't seem necessary to hold onto it beyond that
>>> > point, and it might help give
On Sat, Sep 3, 2016 at 8:55 AM, Claudio Freire wrote:
> The attached patch allows setting maintenance_work_mem or
> autovacuum_work_mem higher than 1GB (and have it be effective), by turning
> the allocation of the dead_tuples into a huge allocation.
>
> This results in fewer
On 9/4/16 1:46 AM, Simon Riggs wrote:
> The patch also makes vacuum free the dead_tuples before starting
> truncation. It didn't seem necessary to hold onto it beyond that
> point, and it might help give the OS more cache, especially if work
> mem is configured very high to avoid multiple index
On 5 September 2016 at 21:58, Claudio Freire wrote:
How long does that part ever take? Is there any substantial gain from this?
> Btw, without a further patch to prefetch pages on the backward scan
> for truncate, however, my patience ran out before it finished
>
On Mon, Sep 5, 2016 at 5:36 PM, Simon Riggs wrote:
> On 5 September 2016 at 15:50, Claudio Freire wrote:
>> On Sun, Sep 4, 2016 at 3:46 AM, Simon Riggs wrote:
>>> On 3 September 2016 at 04:25, Claudio Freire
On 5 September 2016 at 15:50, Claudio Freire wrote:
> On Sun, Sep 4, 2016 at 3:46 AM, Simon Riggs wrote:
>> On 3 September 2016 at 04:25, Claudio Freire wrote:
>>> The patch also makes vacuum free the dead_tuples before
On Mon, Sep 5, 2016 at 11:50 AM, Claudio Freire wrote:
> On Sun, Sep 4, 2016 at 3:46 AM, Simon Riggs wrote:
>> On 3 September 2016 at 04:25, Claudio Freire wrote:
>>> The patch also makes vacuum free the dead_tuples before
On Sun, Sep 4, 2016 at 3:46 AM, Simon Riggs wrote:
> On 3 September 2016 at 04:25, Claudio Freire wrote:
>> The patch also makes vacuum free the dead_tuples before starting
>> truncation. It didn't seem necessary to hold onto it beyond that
>>
On 3 September 2016 at 04:25, Claudio Freire wrote:
> The attached patch allows setting maintenance_work_mem or
> autovacuum_work_mem higher than 1GB (and have it be effective), by turning
> the allocation of the dead_tuples into a huge allocation.
>
> This results in fewer index
The attached patch allows setting maintenance_work_mem or
autovacuum_work_mem higher than 1GB (and have it be effective), by turning
the allocation of the dead_tuples into a huge allocation.
This results in fewer index scans for heavily bloated tables, and
could be a lifesaver in many situations (in
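For context, here is a standalone sketch of the sizing arithmetic involved: the normal allocator caps a single allocation at roughly 1GB, which limits the dead-tuple array to about 178 million TIDs per index-scan pass; treating it as a huge allocation lets a larger maintenance_work_mem be used in full. The constants mirror PostgreSQL's, but the code below is illustrative only, not the patch itself:

/*
 * Illustrative sketch of the dead-tuple array sizing.  TID_SIZE matches
 * sizeof(ItemPointerData) and MAX_ALLOC_SIZE matches the ~1 GB normal
 * allocation limit; everything else is an example figure.
 */
#include <stdint.h>
#include <stdio.h>

#define TID_SIZE            6                          /* sizeof(ItemPointerData) */
#define MAX_ALLOC_SIZE      ((size_t) 0x3fffffff)      /* ~1 GB, normal palloc limit */

int
main(void)
{
    size_t m_w_m_kb = 4 * 1024 * 1024;                 /* maintenance_work_mem = 4 GB */
    size_t want     = m_w_m_kb * 1024 / TID_SIZE;      /* TID slots that memory allows */
    size_t capped   = MAX_ALLOC_SIZE / TID_SIZE;       /* what the 1 GB cap permits */

    printf("requested slots: %zu (~%zu M TIDs)\n", want, want / 1000000);
    printf("with 1 GB cap:   %zu (~%zu M TIDs)\n", capped, capped / 1000000);
    /* Each index scan pass can only remove as many dead tuples as the array
     * holds, so a larger array means fewer passes over all indexes. */
    return 0;
}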