On Fri, Aug 4, 2017 at 3:30 PM, Peter Geoghegan wrote:
> Yura Sokolov of Postgres Pro performed this benchmark at my request.
> He took the 9.5 commit immediately preceding 2ed5b87f9 as a baseline.
I attach a simple patch that comments out the release of the buffer
pin for logged
On Mon, Jul 31, 2017 at 10:54 AM, Peter Geoghegan wrote:
> Let's wait to see what difference it makes if Alik's zipfian
> distribution pgbench test case uses unlogged tables. That may give us a
> good sense of the problem for cases with contention/concurrency.
Yura Sokolov of
Robert Haas wrote:
On Mon, Jul 31, 2017 at 1:54 PM, Peter Geoghegan wrote:
That is hard to justify. I don't think that failing to set LP_DEAD hints
is the cost that must be paid to realize a benefit elsewhere, though. I
don't see much problem with having
On Mon, Jul 31, 2017 at 1:54 PM, Peter Geoghegan wrote:
> That is hard to justify. I don't think that failing to set LP_DEAD hints
> is the cost that must be paid to realize a benefit elsewhere, though. I
> don't see much problem with having both benefits consistently. It's
>
Robert Haas wrote:
On Thu, Jul 27, 2017 at 10:05 PM, Peter Geoghegan wrote:
I really don't know if that would be worthwhile. It would certainly fix
the regression shown in my test case, but that might not go far enough.
I strongly suspect that there are
On Thu, Jul 27, 2017 at 10:05 PM, Peter Geoghegan wrote:
> I really don't know if that would be worthwhile. It would certainly fix
> the regression shown in my test case, but that might not go far enough.
> I strongly suspect that there are more complicated workloads where
> LP_DEAD
Amit Kapila wrote:
Isn't it possible to confirm if the problem is due to commit
2ed5b87f9? Basically, if we have unlogged tables, then it won't
release the pin. So if the commit in question is the culprit, then
the same workload should not lead to bloat.
That's a
On Wed, Jul 26, 2017 at 3:32 AM, Peter Geoghegan wrote:
> On Fri, Jul 14, 2017 at 5:06 PM, Peter Geoghegan wrote:
>> I think that what this probably comes down to, more than anything
>> else, is that you have leftmost hot/bloated leaf pages like this:
>>
Peter Geoghegan wrote:
In Alik's workload, there are two queries: One UPDATE, one SELECT. Even
though the bloated index was a unique index, and so still gets
_bt_check_unique() item killing, the regression is still going to block
LP_DEAD cleanup by the SELECTs, which seems like
Robert Haas wrote:
We now see that no update ever kills items within _bt_killitems(),
because our own update to the index leaf page itself nullifies our
ability to kill anything, by changing the page LSN from the one
stashed in the index scan state variable. Fortunately,
On Tue, Jul 25, 2017 at 11:02 PM, Peter Geoghegan wrote:
> While the benchmark Alik came up with is non-trivial to reproduce, I
> can show a consistent regression for a simple case with only one
> active backend.
That's not good.
> We now see that no update ever kills items within
On Tue, Jul 25, 2017 at 8:02 PM, Peter Geoghegan wrote:
> Setup:
>
> Initialize pgbench (any scale factor).
> create index on pgbench_accounts (aid);
That "create index" was meant to be on "abalance", to make the UPDATE
queries always HOT-unsafe. (You'll want to *also* create this
On Tue, Jul 25, 2017 at 3:02 PM, Peter Geoghegan wrote:
> I've been thinking about this a lot, because this really does look
> like a pathological case to me. I think that this workload is very
> sensitive to how effective kill_prior_tuples/LP_DEAD hinting is. Or at
> least, I can
On Fri, Jul 14, 2017 at 5:06 PM, Peter Geoghegan wrote:
> I think that what this probably comes down to, more than anything
> else, is that you have leftmost hot/bloated leaf pages like this:
>
>
> idx | level | l_item | blkno | btpo_prev | btpo_next |