The more I think about it, the more I think gin is just an innocent
bystander, for which I just happen to have a particularly demanding
test. I think something about snapshots and wrap-around may be
broken.
After 10 hours of running I've got
1587 XX000 2016-04-28 05:57:09.964 MSK:ERROR: unexp
On Tue, Apr 26, 2016 at 08:22:03PM +0300, Teodor Sigaev wrote:
> >>Check my reasoning: In version 4 I added a remembering of the tail of the
> >>pending list into the blknoFinish variable. And when we read the page which
> >>was the tail at cleanup start, we set the cleanupFinish variable, and after
> >>cleaning that
> >
Check my reasoning: In version 4 I added a remembering of the tail of the
pending list into the blknoFinish variable. And when we read the page which
was the tail at cleanup start, we set the cleanupFinish variable, and after
cleaning that page we will stop further cleanup. Any insert caused during
cleanup will be
p
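The stopping rule described above can be simulated outside PostgreSQL. Below is a minimal Python sketch, not the actual GIN page layout or C code: names like `blkno_finish` mirror the patch's `blknoFinish`, and the list-of-pages model is an assumption for illustration.

```python
# Simulate the v4 stopping rule: remember the tail of the pending list
# when cleanup starts, and stop after cleaning that page, so pages
# appended by concurrent inserts are left for the next cleanup pass.

def cleanup(pending, blkno_finish):
    """Clean pages up to and including blkno_finish; return pages cleaned."""
    cleaned = []
    for blkno in list(pending):       # iterate over a snapshot of the list
        pending.remove(blkno)
        cleaned.append(blkno)
        if blkno == blkno_finish:     # cleanupFinish analogue: stop here
            break
    return cleaned

pending = [1, 2, 3]                   # pending-list pages at cleanup start
blkno_finish = pending[-1]            # remembered tail (blknoFinish)
pending.extend([4, 5])                # concurrent inserts append new pages
cleaned = cleanup(pending, blkno_finish)
assert cleaned == [1, 2, 3]           # cleaner stopped at the old tail
assert pending == [4, 5]              # new pages left for the next pass
```

The point of the rule is visible in the assertions: the cleaner never chases pages appended after it started, so it cannot be pinned down indefinitely by concurrent inserters.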
On Fri, Apr 22, 2016 at 02:03:01PM -0700, Jeff Janes wrote:
> On Thu, Apr 21, 2016 at 11:00 PM, Noah Misch wrote:
> > Could you describe the test case in sufficient detail for Teodor to
> > reproduce
> > your results?
>
> [detailed description and attachments]
Thanks.
> The more I think about
On Thu, Apr 21, 2016 at 11:00 PM, Noah Misch wrote:
> On Mon, Apr 18, 2016 at 05:48:17PM +0300, Teodor Sigaev wrote:
>> >>Added, see attached patch (based on v3.1)
>> >
>> >With this applied, I am getting a couple errors I have not seen before
>> >after extensive crash recovery testing:
>> >ERROR:
On Fri, Apr 22, 2016 at 2:20 PM, Jeff Janes wrote:
>> Check my reasoning: In version 4 I added a remembering of the tail of the
>> pending list into the blknoFinish variable. And when we read the page which
>> was the tail at cleanup start, we set the cleanupFinish variable, and after
>> cleaning that
>> page we will s
On Mon, Apr 18, 2016 at 7:48 AM, Teodor Sigaev wrote:
>>> Added, see attached patch (based on v3.1)
>>
>>
>> With this applied, I am getting a couple errors I have not seen before
>> after extensive crash recovery testing:
>> ERROR: attempted to delete invisible tuple
>> ERROR: unexpected chunk
On Thu, Nov 5, 2015 at 2:44 PM, Jeff Janes wrote:
> The bug theoretically exists in 9.5, but it wasn't until 9.6 (commit
> e95680832854cf300e64c) that free pages were recycled aggressively
> enough that it actually becomes likely to be hit.
In other words: The bug could be in 9.5, but that hasn't
On Mon, Apr 18, 2016 at 05:48:17PM +0300, Teodor Sigaev wrote:
> >>Added, see attached patch (based on v3.1)
> >
> >With this applied, I am getting a couple errors I have not seen before
> >after extensive crash recovery testing:
> >ERROR: attempted to delete invisible tuple
> >ERROR: unexpected
Added, see attached patch (based on v3.1)
With this applied, I am getting a couple errors I have not seen before
after extensive crash recovery testing:
ERROR: attempted to delete invisible tuple
ERROR: unexpected chunk number 1 (expected 2) for toast value
100338365 in pg_toast_16425
Huh, see
On Tue, Apr 12, 2016 at 9:53 AM, Teodor Sigaev wrote:
>
> With pending cleanup patch backend will try to get lock on metapage with
> ConditionalLockPage. Will it interrupt autovacum worker?
Correct, ConditionalLockPage should not interrupt the autovacuum worker.
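The try-lock behavior being relied on here can be illustrated with a standalone Python sketch. `threading.Lock` is only a stand-in for PostgreSQL's heavyweight page lock; the real `ConditionalLockPage` returns false instead of sleeping when the lock is unavailable, which is exactly why it cannot trigger autovacuum cancellation.

```python
# Illustrate why a conditional lock cannot interrupt autovacuum: the
# inserting backend never *waits* on the lock, it just gives up, so the
# cancel machinery that fires on conflicting lock waits never runs.
import threading

metapage_lock = threading.Lock()   # stand-in for the GIN metapage lock

def try_cleanup():
    """Return True if we got the lock and 'cleaned'; False if we skipped."""
    if not metapage_lock.acquire(blocking=False):  # ConditionalLockPage analogue
        return False               # someone else is cleaning: just move on
    try:
        return True                # ... do the pending-list cleanup here ...
    finally:
        metapage_lock.release()

assert try_cleanup() is True       # uncontended: we take the lock and clean
metapage_lock.acquire()            # pretend autovacuum holds the lock
assert try_cleanup() is False      # contended: we skip, nobody blocks
metapage_lock.release()
```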
>>
>> Alvaro's recommendation, t
Alvaro's recommendation, to let the cleaner off the hook once it
passes the page which was the tail page at the time it started, would
prevent any process from getting pinned down indefinitely, but would
Added, see attached patch (based on v3.1)
If there are no objections I will apply it on Monday.
There are only three fundamental options I see: the cleaner can wait,
"help", or move on.
"Helping" is what it does now and is dangerous.
Moving on gives the above-discussed unthrottling problem.
Waiting has two problems. The act of waiting will cause autovacuums
to be canceled, unless ugly hacks
This restricts the memory used by ordinary backends when doing the
cleanup to be work_mem. Shouldn't we let them use
maintenance_work_mem? Only one backend can be doing this cleanup of a
given index at any given time, so we don't need to worry about many
concurrent allocations of maintenance_work
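The memory-budget question above can be made concrete with a toy Python model: entries accumulate until the budget is exhausted, then a batch is flushed into the main index. The byte accounting and flush model here are illustrative assumptions, not GIN's real bookkeeping; the point is only that a larger budget means fewer, bigger flushes.

```python
# Sketch of a memory-bounded pending-list drain: accumulate entries
# until the budget (think work_mem vs. maintenance_work_mem, modeled
# here as plain bytes) is reached, then flush a batch.

def drain(entries, budget_bytes, entry_size=8):
    """Split entries into flush batches bounded by budget_bytes."""
    batches, batch, used = [], [], 0
    for e in entries:
        batch.append(e)
        used += entry_size
        if used >= budget_bytes:   # budget exhausted: flush this batch
            batches.append(batch)
            batch, used = [], 0
    if batch:
        batches.append(batch)      # flush the final partial batch
    return batches

# A larger budget means fewer, larger flush passes over the index.
small = drain(range(100), budget_bytes=64)    # work_mem-like budget
large = drain(range(100), budget_bytes=512)   # maintenance_work_mem-like
assert len(small) > len(large)
```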
On Thu, Apr 07, 2016 at 05:53:54PM -0700, Jeff Janes wrote:
> On Thu, Apr 7, 2016 at 4:33 PM, Tom Lane wrote:
> > Jeff Janes writes:
> >> To summarize the behavior change:
> >
> >> In the released code, an inserting backend that violates the pending
> >> list limit will try to clean the list, eve
On Thu, Apr 7, 2016 at 4:33 PM, Tom Lane wrote:
> Jeff Janes writes:
>> To summarize the behavior change:
>
>> In the released code, an inserting backend that violates the pending
>> list limit will try to clean the list, even if it is already being
>> cleaned. It won't accomplish anything usefu
Jeff Janes writes:
> To summarize the behavior change:
> In the released code, an inserting backend that violates the pending
> list limit will try to clean the list, even if it is already being
> cleaned. It won't accomplish anything useful, but will go through the
> motions until eventually it
Jeff Janes wrote:
> The proposed change removes that throttle, so that inserters will
> immediately see there is already a cleaner and just go back about
> their business. Due to that, unthrottled backends could add to the
> pending list faster than the cleaner can clean it, leading to
> unbounde
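The unthrottling concern quoted above is simple queueing arithmetic: once inserters stop helping (or waiting), a single cleaner drains at a fixed rate, and any excess insert rate accumulates. A toy Python model, with arbitrary illustrative rates:

```python
# Toy model of the unthrottling problem: if inserters append to the
# pending list faster than the lone cleaner drains it, the backlog
# grows without bound (linearly in the excess rate).

def backlog_after(ticks, insert_rate, clean_rate):
    backlog = 0
    for _ in range(ticks):
        backlog += insert_rate                   # unthrottled inserters keep adding
        backlog = max(0, backlog - clean_rate)   # one cleaner drains at a fixed rate
    return backlog

assert backlog_after(100, insert_rate=5, clean_rate=10) == 0    # cleaner keeps up
assert backlog_after(100, insert_rate=10, clean_rate=5) == 500  # backlog grows
```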
On Wed, Apr 6, 2016 at 9:52 AM, Teodor Sigaev wrote:
> I'm inclined to push v3.1 as one of the two winners by size/performance
> and, unlike the pending lock patch, it doesn't change the internal logic of
> the lock machinery.
This restricts the memory used by ordinary backends when doing the
cleanup to
I've tested v2, v3, and v3.1 of the patch, to see if there are any
differences. v2 no longer applies, so I tested it on ee943004. The following
table shows the total duration of the data load, and also the sizes of the
two GIN indexes.
duration (sec) subject body
Hi,
On 04/04/2016 02:25 PM, Tomas Vondra wrote:
On 04/04/2016 02:06 PM, Teodor Sigaev wrote:
The above-described topic is currently a PostgreSQL 9.6 open item. Teodor,
since you committed the patch believed to have created it, you own this open
item. If that responsibility lies elsewhere, plea
On 04/04/2016 02:06 PM, Teodor Sigaev wrote:
The above-described topic is currently a PostgreSQL 9.6 open item. Teodor,
since you committed the patch believed to have created it, you own this open
item. If that responsibility lies elsewhere, please let us know whose
responsibility it is to fix t
The above-described topic is currently a PostgreSQL 9.6 open item. Teodor,
since you committed the patch believed to have created it, you own this open
item. If that responsibility lies elsewhere, please let us know whose
responsibility it is to fix this. Since new open items may be discovered
On Thu, Feb 25, 2016 at 11:19:20AM -0800, Jeff Janes wrote:
> On Wed, Feb 24, 2016 at 8:51 AM, Teodor Sigaev wrote:
> > Thank you for remembering this problem, at least for me.
> >
> >>> Well, turns out there's a quite significant difference, actually. The
> >>> index sizes I get (quite stable aft
On Wed, Feb 24, 2016 at 8:51 AM, Teodor Sigaev wrote:
> Thank you for remembering this problem, at least for me.
>
>>> Well, turns out there's a quite significant difference, actually. The
>>> index sizes I get (quite stable after multiple runs):
>>>
>>> 9.5 : 2428 MB
>>> 9.6 + alone clean
Hi,
On 02/25/2016 05:32 PM, Teodor Sigaev wrote:
Well, turns out there's a quite significant difference, actually. The
index sizes I get (quite stable after multiple runs):
9.5 : 2428 MB
9.6 + alone cleanup : 730 MB
9.6 + pending lock : 488 MB
Attached is a modified alone_cleanup patc
Well, turns out there's a quite significant difference, actually. The
index sizes I get (quite stable after multiple runs):
9.5 : 2428 MB
9.6 + alone cleanup : 730 MB
9.6 + pending lock : 488 MB
Attached is a modified alone_cleanup patch which doesn't break the cleanup process as it
does p
Thank you for remembering this problem, at least for me.
Well, turns out there's a quite significant difference, actually. The
index sizes I get (quite stable after multiple runs):
9.5 : 2428 MB
9.6 + alone cleanup : 730 MB
9.6 + pending lock : 488 MB
Interesting, I don't see why al
On 02/24/2016 06:56 AM, Robert Haas wrote:
On Wed, Feb 24, 2016 at 9:17 AM, Tomas Vondra
wrote:
...
Are we going to do anything about this? While the bug is present in 9.5 (and
possibly other versions), fixing it before 9.6 gets out seems important
because reproducing it there is rather trivial (
On Wed, Feb 24, 2016 at 9:17 AM, Tomas Vondra
wrote:
>> Well, turns out there's a quite significant difference, actually. The
>> index sizes I get (quite stable after multiple runs):
>>
>> 9.5 : 2428 MB
>> 9.6 + alone cleanup : 730 MB
>> 9.6 + pending lock : 488 MB
>>
>> So that's quit
Hi,
On 01/05/2016 10:38 AM, Tomas Vondra wrote:
Hi,
...
There shouldn't be a difference between the two approaches (although I
guess there could be if one left a larger pending list than the other,
as the pending list is very space-inefficient), but since you included
9.5 in your test I thought
Hi,
On 12/23/2015 09:33 PM, Jeff Janes wrote:
On Mon, Dec 21, 2015 at 11:51 AM, Tomas Vondra
wrote:
On 12/21/2015 07:41 PM, Jeff Janes wrote:
On Sat, Dec 19, 2015 at 3:19 PM, Tomas Vondra
wrote:
...
So both patches seem to do the trick, but (2) is faster. Not sure
if this is expected
On Mon, Dec 21, 2015 at 11:51 AM, Tomas Vondra
wrote:
>
>
> On 12/21/2015 07:41 PM, Jeff Janes wrote:
>>
>> On Sat, Dec 19, 2015 at 3:19 PM, Tomas Vondra
>> wrote:
>
>
> ...
>
>>> So both patches seem to do the trick, but (2) is faster. Not sure
>>> if this is expected. (BTW all the results are w
On 12/21/2015 07:41 PM, Jeff Janes wrote:
On Sat, Dec 19, 2015 at 3:19 PM, Tomas Vondra
wrote:
...
So both patches seem to do the trick, but (2) is faster. Not sure
if this is expected. (BTW all the results are without asserts
enabled).
Do you know what the size of the pending list was a
On Sat, Dec 19, 2015 at 3:19 PM, Tomas Vondra
wrote:
> Hi,
>
> On 11/06/2015 02:09 AM, Tomas Vondra wrote:
>>
>> Hi,
>>
>> On 11/06/2015 01:05 AM, Jeff Janes wrote:
>>>
>>> On Thu, Nov 5, 2015 at 3:50 PM, Tomas Vondra
>>> wrote:
>>
>> ...
I can do that - I see there are three patch
Hi,
On 11/06/2015 02:09 AM, Tomas Vondra wrote:
Hi,
On 11/06/2015 01:05 AM, Jeff Janes wrote:
On Thu, Nov 5, 2015 at 3:50 PM, Tomas Vondra
wrote:
...
I can do that - I see there are three patches in the two threads:
1) gin_pending_lwlock.patch (Jeff Janes)
2) gin_pending_pagelock.pa
Hi,
On 11/06/2015 01:05 AM, Jeff Janes wrote:
On Thu, Nov 5, 2015 at 3:50 PM, Tomas Vondra
wrote:
...
I can do that - I see there are three patches in the two threads:
1) gin_pending_lwlock.patch (Jeff Janes)
2) gin_pending_pagelock.patch (Jeff Janes)
3) gin_alone_cleanup-2.patch (
On Thu, Nov 5, 2015 at 3:50 PM, Tomas Vondra
wrote:
>
>
> On 11/05/2015 11:44 PM, Jeff Janes wrote:
>>
>>
>> This looks like it is probably the same bug discussed here:
>>
>>
>> http://www.postgresql.org/message-id/CAMkU=1xalflhuuohfp5v33rzedlvb5aknnujceum9knbkrb...@mail.gmail.com
>>
>> And here:
On 11/05/2015 11:44 PM, Jeff Janes wrote:
>
This looks like it is probably the same bug discussed here:
http://www.postgresql.org/message-id/CAMkU=1xalflhuuohfp5v33rzedlvb5aknnujceum9knbkrb...@mail.gmail.com
And here:
http://www.postgresql.org/message-id/56041b26.2040...@sigaev.ru
The bug t
On Thu, Nov 5, 2015 at 2:18 PM, Tomas Vondra
wrote:
> Hi,
>
> while repeating some full-text benchmarks on master, I've discovered
> that there's a data corruption bug somewhere. What happens is that while
> loading data into a table with GIN indexes (using multiple parallel
> connections), I some
Hi,
while repeating some full-text benchmarks on master, I've discovered
that there's a data corruption bug somewhere. What happens is that while
loading data into a table with GIN indexes (using multiple parallel
connections), I sometimes get this:
TRAP: FailedAssertion("!(((PageHeader) (page))