On Thu, Dec 8, 2016 at 2:38 AM, Andreas Seltenreich <seltenre...@gmx.de> wrote:
> Andreas Seltenreich writes:
>> Amit Kapila writes:
>>> On Sat, Dec 3, 2016 at 3:44 PM, Andreas Seltenreich <seltenre...@gmx.de> 
>>> wrote:
>>>> Amit Kapila writes:
>>>>> [2. text/x-diff; fix_hash_bucketsplit_sqlsmith_v1.patch]
>>>> Ok, I'll do testing with the patch applied.
>> Good news: the assertion hasn't fired since the patch is in.
> Meh, it fired again today after being silent for 100e6 queries :-/
> I guess I need to add some confidence qualification on such statements.
> Maybe sigmas as they do at CERN…

This assertion can be reproduced with Jeff's test as well, and a fix
for it is posted at [1].

>> smith=# select * from state_report where sqlstate = 'XX001';
>> -[ RECORD 1 
>> ]------------------------------------------------------------------------------
>> count    | 10
>> sqlstate | XX001
>> sample   | ERROR:  could not read block 1173 in file "base/16384/17256": 
>> read only 0 of 8192 bytes
>> hosts    | {airbisquit,frell,gorgo,marbit,pillcrow,quakken}
>>> Hmm, I am not sure if this is related to previous problem, but it
>>> could be.  Is it possible to get the operation and or callstack for
>>> above failure?
>> Ok, will turn the elog into an assertion to get at the backtraces.
> Doing so on top of 4212cb7, I caught the backtrace below.  Query was:
> --8<---------------cut here---------------start------------->8---
> set max_parallel_workers_per_gather = 0;
> select  count(1) from
>        public.hash_name_heap as ref_2
>        join public.rtest_emplog as sample_1
>               on (ref_2.random = sample_1.who);
> --8<---------------cut here---------------end--------------->8---
> I've put the data directory where it can be reproduced here:
>     http://ansel.ydns.eu/~andreas/hash_index_short_read.tar.xz (12MB)

This can happen because the buffer was not marked dirty: the index
page from which we deleted the tuples is never flushed to disk, while
vacuum has already removed the corresponding heap tuples.  The next
access to that hash index page reads back the stale on-disk copy,
which still contains the tuples that were supposed to be deleted by
vacuum.  Following those stale index tuples points at heap tuples that
no longer exist, and trying to access the deleted heap tuples produces
the short-read errors seen above.

Can you please try with the patch posted on hash index thread [1] to
see if you can reproduce any of these problems?

[1] - 

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
