On Fri, May 31, 2013 at 3:45 PM, Josh Berkus wrote:
> On 05/31/2013 12:22 PM, Stephen Frost wrote:
>> Where I'm going with this whole thing is simply that I do worry a bit
>> about using a bitmap for freeze, or similar, information and not being
>> able to reconstruct that bitmap from the heap. Perhaps that's overly paranoid.
On Fri, May 31, 2013 at 1:44 PM, Bruce Momjian wrote:
> On Fri, May 31, 2013 at 10:28:12AM -0700, Josh Berkus wrote:
>> >> Isn't the visibility map already required for proper return results, as
>> >> we use it for index-only scans? I think the optimization-only ship has
>> >> sailed.
>> >
>> > At the moment we can remove it without causing corruption.
* Josh Berkus (j...@agliodbs.com) wrote:
> On 05/31/2013 12:22 PM, Stephen Frost wrote:
> > Where I'm going with this whole thing is simply that I do worry a bit
> > about using a bitmap for freeze, or similar, information and not being
> > able to reconstruct that bitmap from the heap. Perhaps that's overly paranoid.
On 05/31/2013 12:22 PM, Stephen Frost wrote:
> Where I'm going with this whole thing is simply that I do worry a bit
> about using a bitmap for freeze, or similar, information and not being
> able to reconstruct that bitmap from the heap. Perhaps that's overly
> paranoid, but, well, we also write
* Bruce Momjian (br...@momjian.us) wrote:
> Right, and it is hard to see that losing and adding are somehow
> more/less likely, so it seems we already rely on the visibility map
> being correct.
Yes, but there's also a way to get *back* to a valid state if things go
south with the visibility map.
On Fri, May 31, 2013 at 11:00:19AM -0700, Josh Berkus wrote:
> Bruce,
>
> > Robert's statement was:
> >
> >> Loss or corruption of a single visibility map page means possible loss
> >> of half a gigabyte of data.
>
> I fail to be alarmed at this; currently losing a single page of the clog
> causes just as widespread corruption.
Bruce,
> Robert's statement was:
>
>> Loss or corruption of a single visibility map page means possible loss
>> of half a gigabyte of data.
I fail to be alarmed at this; currently losing a single page of the clog
causes just as widespread corruption (worse, actually, since it's not
confined to on
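Josh's clog comparison can be made concrete with a little arithmetic (mine, not from the thread): pg_clog packs two status bits per transaction, so a single lost 8 KB clog page wipes out the commit status of tens of thousands of transactions, whose rows may be scattered across every table in the cluster.

```python
# Back-of-the-envelope: how many transactions one clog page covers.
CLOG_PAGE_BYTES = 8192
BITS_PER_XACT = 2  # PostgreSQL stores 2 status bits per transaction

xacts_per_clog_page = CLOG_PAGE_BYTES * 8 // BITS_PER_XACT
print(xacts_per_clog_page)  # 32768 transactions per 8 KB clog page
```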
On Fri, May 31, 2013 at 10:28:12AM -0700, Josh Berkus wrote:
>
> >> Isn't the visibility map already required for proper return results, as
> >> we use it for index-only scans? I think the optimization-only ship has
> >> sailed.
> >
> > At the moment we can remove it without causing corruption. If we were to use it for freezing we couldn't anymore.
>> Isn't the visibility map already required for proper return results, as
>> we use it for index-only scans? I think the optimization-only ship has
>> sailed.
>
> At the moment we can remove it without causing corruption. If we were to
> use it for freezing we couldn't anymore. So there's a difference.
On 2013-05-31 13:14:13 -0400, Bruce Momjian wrote:
> On Thu, May 30, 2013 at 09:47:22AM -0400, Robert Haas wrote:
> > Well, as Heikki points out, I think that's unacceptably dangerous.
> > Loss or corruption of a single visibility map page means possible loss
> > of half a gigabyte of data.
> >
>
On Thu, May 30, 2013 at 09:47:22AM -0400, Robert Haas wrote:
> Well, as Heikki points out, I think that's unacceptably dangerous.
> Loss or corruption of a single visibility map page means possible loss
> of half a gigabyte of data.
>
> Also, if we go that route, looking at the visibility map is n
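Robert's "half a gigabyte" figure follows directly from the on-disk layout (arithmetic mine, not from the thread): the visibility map stores one bit per heap page, and both map pages and heap pages are 8 KB by default.

```python
# Check the "half a gigabyte per VM page" claim.
VM_PAGE_BYTES = 8192    # one visibility-map page
HEAP_PAGE_BYTES = 8192  # default block size

bits_per_vm_page = VM_PAGE_BYTES * 8             # 65536 heap pages tracked
heap_bytes_covered = bits_per_vm_page * HEAP_PAGE_BYTES
print(heap_bytes_covered // (1024 ** 2))         # 512 MB, i.e. half a gigabyte
```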
On Thu, May 30, 2013 at 8:12 AM, Andres Freund wrote:
> As far as I understand the trick basically is that we can rely on an FPI
> being logged when an action unsetting ALL_VISIBLE is performed. That
> all-visible would then make sure the hint-bits marking individual tuples
> as frozen would hit disk
On 2013-05-30 15:34:04 +0300, Heikki Linnakangas wrote:
> On 30.05.2013 15:12, Andres Freund wrote:
> >Now, I am far from being convinced it's a good idea to get rid of
> >PD_ALL_VISIBLE, but I don't think it does. Except that it currently is
> >legal for the page level ALL_VISIBLE being set while the corresponding visibilitymap one isn't
On 30.05.2013 15:12, Andres Freund wrote:
Now, I am far from being convinced it's a good idea to get rid of
PD_ALL_VISIBLE, but I don't think it does. Except that it currently is
legal for the page level ALL_VISIBLE being set while the corresponding
visibilitymap one isn't, there's not much prohibiting it
On 2013-05-30 07:54:38 -0400, Robert Haas wrote:
> On Thu, May 30, 2013 at 12:06 AM, Jeff Davis wrote:
> >> AFAICS, the main benefit of eliminating PD_ALL_VISIBLE is that we
> >> eliminate one write cycle; that is, we won't dirty the page once to
> >> hint it and then again to mark it all-visible.
On Thu, May 30, 2013 at 12:06 AM, Jeff Davis wrote:
>> AFAICS, the main benefit of eliminating PD_ALL_VISIBLE is that we
>> eliminate one write cycle; that is, we won't dirty the page once to
>> hint it and then again to mark it all-visible. But as of 9.3, that
>> should really only be a problem
On Wed, 2013-05-29 at 22:46 -0400, Robert Haas wrote:
> Again independently of Josh's proposal, we could eliminate
> PD_ALL_VISIBLE. This would require either surrendering the
> optimization whereby sequential scans can skip visibility checks on
> individual tuples within the page, or referring to
On Wed, May 29, 2013 at 1:11 PM, Jeff Davis wrote:
> On Tue, 2013-05-28 at 19:51 -0400, Robert Haas wrote:
>> > If we just wanted to reduce read cost, why not just take a simpler
>> > approach and give the visibility map an "isfrozen" bit? Then we'd know
>> > which pages didn't need rescanning wit
On Mon, 2013-01-21 at 12:00 +0200, Heikki Linnakangas wrote:
> On 21.01.2013 11:10, Jeff Davis wrote:
> > That confuses me. The testing was to show it didn't hurt other workloads
> > (like scans or inserts/updates/deletes); so the best possible result is
> > that they don't show signs either way.
>
On Mon, Jan 21, 2013 at 1:52 AM, Jeff Davis wrote:
>> Of course, there is an argument that this patch will
>> simplify the code, but I'm not sure if it's enough to justify the
>> additional contention which may or may not show up in the benchmarks
>> we are running, but we know it's there.
>
> What
On 21.01.2013 11:10, Jeff Davis wrote:
That confuses me. The testing was to show it didn't hurt other workloads
(like scans or inserts/updates/deletes); so the best possible result is
that they don't show signs either way.
I went back to look at the initial test results that demonstrated that
On Mon, 2013-01-21 at 12:49 +0530, Pavan Deolasee wrote:
> At the minimum your patch will need to have one additional buffer
> pinned for every K < 8192 * 8 heap pages.
I assume it's the same K I referred to when responding to Robert: the
max number of heap buffers we read before we unpin and repin
On Mon, Jan 21, 2013 at 12:22 PM, Jeff Davis wrote:
>
> On Mon, 2013-01-21 at 11:27 +0530, Pavan Deolasee wrote:
> > Of course, there is an argument that this patch will
> > simplify the code, but I'm not sure if it's enough to justify the
> > additional contention which may or may not show up in the benchmarks
On Mon, 2013-01-21 at 11:27 +0530, Pavan Deolasee wrote:
> I tend to agree. When I looked at the patch, I thought since it's
> removing a WAL record (and associated redo logic), it has some
> additional value. But that was kind of broken (sorry, I haven't looked
> at the latest patch if Jeff fixed it
On Sun, 2013-01-20 at 22:19 -0500, Tom Lane wrote:
> Robert Haas writes:
> > On Fri, Jan 18, 2013 at 3:31 AM, Jeff Davis wrote:
> >> So, I attached a new version of the patch that doesn't look at the VM
> >> for tables with fewer than 32 pages. That's the only change.
>
> > That certainly seems worthwhile, but I still don't want to get rid of this code.
On Mon, Jan 21, 2013 at 8:49 AM, Tom Lane wrote:
> Robert Haas writes:
>> On Fri, Jan 18, 2013 at 3:31 AM, Jeff Davis wrote:
>>> So, I attached a new version of the patch that doesn't look at the VM
>>> for tables with fewer than 32 pages. That's the only change.
>
>> That certainly seems worthwhile, but I still don't want to get rid of this code.
Robert Haas writes:
> On Fri, Jan 18, 2013 at 3:31 AM, Jeff Davis wrote:
>> So, I attached a new version of the patch that doesn't look at the VM
>> for tables with fewer than 32 pages. That's the only change.
> That certainly seems worthwhile, but I still don't want to get rid of
> this code.
On Fri, Jan 18, 2013 at 3:31 AM, Jeff Davis wrote:
> So, I attached a new version of the patch that doesn't look at the VM
> for tables with fewer than 32 pages. That's the only change.
That certainly seems worthwhile, but I still don't want to get rid of
this code. I'm just not seeing a reason
On Thu, Jan 17, 2013 at 5:50 PM, Jeff Davis wrote:
> If the "without interruption" part becomes a practical problem, it seems
> fairly easy to fix: drop the pin and pick it up again once every K
> pages. Unless I'm missing something, this is a minor concern.
I think probably so.
> Test plan:
>
>
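Jeff's pin-cycling suggestion (drop the pin and pick it up again once every K pages) can be sketched as a toy model; the value of K and the bookkeeping here are illustrative assumptions, not the patch's code:

```python
K = 1024  # hypothetical cycle length in heap pages, not from the patch

def scan_with_pin_cycling(nblocks):
    """Toy model: instead of holding the VM buffer pin for the whole
    scan, drop and re-acquire it once every K heap pages, so it is
    never held without interruption."""
    pin_cycles = 0
    for blk in range(nblocks):
        if blk % K == 0:
            # release the old pin (if any) and pin the VM buffer again
            pin_cycles += 1
        # ... visibility check for block `blk` under the held pin ...
    return pin_cycles
```

Scanning 10,000 heap pages this way re-pins the map only 10 times, versus once per page if the pin were dropped after every check.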
On Thu, 2013-01-17 at 14:53 -0800, Jeff Davis wrote:
> Test plan:
>
> 1. Take current patch (without "skip VM check for small tables"
> optimization mentioned above).
> 2. Create 500 tables each about 1MB.
> 3. VACUUM them all.
> 4. Start 500 connections (one for each table)
> 5. Time th
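The setup steps of this test plan could be generated with a small script (a sketch only; the table names, row counts, and padding are my stand-ins, not Jeff's actual harness):

```python
# Generate setup SQL for the test plan: 500 small tables, each roughly 1 MB,
# vacuumed so their visibility-map bits are set before the timed runs.
def setup_sql(ntables=500, rows_per_table=12000):
    stmts = []
    for i in range(ntables):
        stmts.append(f"CREATE TABLE t{i} (id int, pad text);")
        stmts.append(
            f"INSERT INTO t{i} SELECT g, repeat('x', 60) "
            f"FROM generate_series(1, {rows_per_table}) g;"
        )
        stmts.append(f"VACUUM t{i};")
    return stmts

# One connection per table would then time concurrent scans of t0..t499.
```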
On Thu, 2013-01-17 at 09:53 -0500, Robert Haas wrote:
> The main question in my mind is whether
> there are any negative consequences to holding a VM buffer pin for
> that long without interruption. The usual consideration - namely,
> blocking vacuum - doesn't apply here because vacuum does not ta
On Thu, 2013-01-17 at 19:58 +, Simon Riggs wrote:
> Presumably we remember the state of the VM so we can skip the re-visit
> after every write?
That was not a part of my patch, although I remember that you mentioned
that previously and I thought it could be a good way to mitigate a
problem if
On Thu, 2013-01-17 at 17:14 +0200, Heikki Linnakangas wrote:
> I don't remember if I ever actually tested that
> though. Maybe I was worrying about nothing and hitting the VM page on
> every update is ok.
I tried, but was unable to show really anything at all, even without
keeping the VM page pinned
On 17 January 2013 15:14, Heikki Linnakangas wrote:
> On 17.01.2013 16:53, Robert Haas wrote:
>>
>> On Thu, Jan 17, 2013 at 3:49 AM, Pavan Deolasee
>> wrote:
>>>
>>> Maybe you've already addressed that concern with the proven
>>> performance numbers, but I'm not sure.
>>
>>
>> It would be nice to hear what Heikki's reasons were for adding PD_ALL_VISIBLE in the first place.
On Thu, 2013-01-17 at 10:39 -0800, Jeff Davis wrote:
> On Thu, 2013-01-17 at 15:25 +0530, Pavan Deolasee wrote:
> > Now that I look at the patch, I wonder if there is another fundamental
> > issue with the patch. Since the patch removes WAL logging for the VM
> > set operation, this can happen:
> >
On Thu, 2013-01-17 at 15:25 +0530, Pavan Deolasee wrote:
> Now that I look at the patch, I wonder if there is another fundamental
> issue with the patch. Since the patch removes WAL logging for the VM
> set operation, this can happen:
>
Thank you. I think I was confused by this comment here:
"Whe
On 17.01.2013 16:53, Robert Haas wrote:
On Thu, Jan 17, 2013 at 3:49 AM, Pavan Deolasee
wrote:
Maybe you've already addressed that concern with the proven
performance numbers, but I'm not sure.
It would be nice to hear what Heikki's reasons were for adding
PD_ALL_VISIBLE in the first place.
On Thu, Jan 17, 2013 at 3:49 AM, Pavan Deolasee
wrote:
> Maybe you've already addressed that concern with the proven
> performance numbers, but I'm not sure.
It would be nice to hear what Heikki's reasons were for adding
PD_ALL_VISIBLE in the first place.
Jeff's approach of holding the VM pins
On Thu, Jan 17, 2013 at 2:57 PM, Abhijit Menon-Sen wrote:
>
>
> There was considerable discussion after this (accessible through the
> archives link above), which I won't attempt to summarise.
>
>
I thought Robert made those comments after considerable discussions on
Jeff's approach. So he probabl
At 2013-01-17 08:41:37 +, si...@2ndquadrant.com wrote:
>
> Jeff, can you summarise/collate why we're doing this, what concerns it
> raises and how you've dealt with them?
Since I was just looking at the original patch and discussion, and since
Pavan has posted an excerpt from one objection to
On Thu, Jan 17, 2013 at 2:11 PM, Simon Riggs wrote:
>
> On 17 January 2013 03:02, Jeff Davis wrote:
>
> > Rebased patch attached. No significant changes.
>
> Jeff, can you summarise/collate why we're doing this, what concerns it
> raises and how you've dealt with them? That will help decide whether to commit.
On 17 January 2013 03:02, Jeff Davis wrote:
> Rebased patch attached. No significant changes.
Jeff, can you summarise/collate why we're doing this, what concerns it
raises and how you've dealt with them? That will help decide whether
to commit.
Thanks
--
Simon Riggs http://
Rebased patch attached. No significant changes.
Regards,
Jeff Davis
rm-pd-all-visible-20130116.patch.gz
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
On Fri, 2012-11-30 at 13:16 -0800, Jeff Davis wrote:
> I tried for quite a while to show any kind of performance difference
> between checking the VM and checking PD_ALL_VISIBLE on a 12-core box (24
> if you count HT).
>
> Three patches in question:
> 1. Current unpatched master
> 2. patch tha
On Mon, 2012-11-26 at 16:55 -0600, Merlin Moncure wrote:
> > Based on previous measurements, I think there *is* contention pinning
> > the root of an index. Currently, I believe it's largely overwhelmed
> > by contention from other sources, such as the buffer manager lwlocks
> > and the very-evil
On Mon, Nov 26, 2012 at 3:03 PM, Robert Haas wrote:
> On Mon, Nov 26, 2012 at 3:29 PM, Jeff Davis wrote:
>> Your intuition here is better than mine, but I am still missing
>> something here. If we keep the buffer pinned, then there will be very
>> few pin/unpin cycles here, so I don't see where the contention would come from
On Mon, 2012-11-26 at 16:10 -0500, Tom Lane wrote:
> There's still the issue of whether the IOS code is safe in machines with
> weak memory ordering. I think that it probably is safe, but the
> argument for it in the current code comment is wrong; most likely, a
> correct argument has to depend on
Jeff Davis writes:
> On Sun, 2012-11-25 at 22:30 -0500, Tom Lane wrote:
>> Another possibly important point is that reducing the number of
>> pin/unpin cycles for a given VM page might actually hurt the chances of
>> it being found in shared buffers, because IIRC the usage_count is bumped
>> once
On Mon, Nov 26, 2012 at 3:29 PM, Jeff Davis wrote:
> Your intuition here is better than mine, but I am still missing
> something here. If we keep the buffer pinned, then there will be very
> few pin/unpin cycles here, so I don't see where the contention would
> come from (any more than there is co
On Sun, 2012-11-25 at 22:30 -0500, Tom Lane wrote:
> I'd be worried about the case of a lot of sessions touching a lot of
> unrelated tables. This change implies doubling the number of buffers
> that are held pinned by any given query, and the distributed overhead
> from that (eg, adding cycles to
Jeff Davis writes:
> Now it tries to keep the VM buffer pinned during scans, inserts,
> updates, and deletes. This should avoid increased contention pinning the
> VM pages, but performance tests are required.
> ...
> Then again, if a 5GB table is being randomly accessed, an extra pin is
> unlikely
On Wed, 2012-11-21 at 18:25 -0800, Jeff Davis wrote:
> Follow up to discussion:
> http://archives.postgresql.org/pgsql-hackers/2012-11/msg00817.php
>
> I worked out a patch that replaces PD_ALL_VISIBLE with calls to
> visibilitymap_test. It rips out a lot of complexity, with a net drop of
> about 300 lines (not a lot, but some of that code is pretty complex).
Follow up to discussion:
http://archives.postgresql.org/pgsql-hackers/2012-11/msg00817.php
I worked out a patch that replaces PD_ALL_VISIBLE with calls to
visibilitymap_test. It rips out a lot of complexity, with a net drop of
about 300 lines (not a lot, but some of that code is pretty complex).
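For readers following along, here is a toy model (my own simplification, not the patch's code) of what a visibilitymap_test-style check consults: a bitmap with one bit per heap page, so replacing PD_ALL_VISIBLE means reading that bit instead of a flag in the heap page's header.

```python
def vm_position(heap_blk):
    """Locate the visibility-map byte and bit for a heap page
    (simplified: one bit per heap page, as in the 9.x-era map)."""
    return heap_blk // 8, heap_blk % 8

def vm_test(vm_bytes, heap_blk):
    """Toy visibilitymap_test: is this heap page known all-visible?"""
    byte, bit = vm_position(heap_blk)
    return bool(vm_bytes[byte] & (1 << bit))
```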