----- On 5 Apr, 2019, at 4:14 AM, Jeff Janes <jeff.ja...@gmail.com> wrote: 

> On Thu, Apr 4, 2019 at 3:42 AM Duncan Kinnear <duncan.kinn...@mccarthy.co.nz> wrote:

>> the EXPLAIN (ANALYSE, TIMING TRUE) of this query gives:

>> Update on next_id (cost=0.14..8.16 rows=1 width=36) (actual time=0.057..0.057 rows=0 loops=1)
>>   ->  Index Scan using next_id_pk on next_id (cost=0.14..8.16 rows=1 width=36) (actual time=0.039..0.040 rows=1 loops=1)
>>         Index Cond: ((id)::text = 'Session'::text)
>> Planning Time: 0.083 ms
>> Execution Time: 0.089 ms

>> which is significantly less than 50ms.

> The EXPLAIN ANALYZE doesn't include the time needed to fsync the transaction
> logs. It measures only the update itself, not the implicit commit at the end.
> DBeaver is seeing the fsync-inclusive time. 50ms is pretty long, but some file
> systems and OSes seem to be pretty inefficient at this and take several disk
> revolutions to get the data down.

>> Now, if I point DBeaver to a VM server on the same gigabit network switch,
>> running version:
>> 9.5.3 on i386-pc-solaris2.11, compiled by cc: Sun C 5.10 SunOS_i386 Patch
>> 142363-07 2010/12/09, 64-bit
>> then the same query executes in about 2-3ms

> That machine probably has hardware to do a fast fsync, has fsync turned off,
> or is lying about the safety of its data.
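
One quick check on that front: whether fsync or synchronous_commit is disabled is easy to inspect directly, since both are standard PostgreSQL settings:

SHOW fsync;               -- off would mean no fsync at all (unsafe)
SHOW synchronous_commit;  -- off means COMMIT does not wait for the WAL flush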

Just a quick update. I tried running an explicit BEGIN; UPDATE ...; COMMIT; 
sequence, and got the following log entries:

Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.639 NZST [29887] LOG:  duration: 0.025 ms  parse <unnamed>: begin
Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.639 NZST [29887] LOG:  duration: 0.014 ms  bind <unnamed>: begin
Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.639 NZST [29887] LOG:  duration: 0.003 ms  execute <unnamed>: begin
Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.639 NZST [29887] LOG:  duration: 0.045 ms  parse <unnamed>:  update NEXT_ID set next_value=next_value+1 where id='Session'
Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.640 NZST [29887] LOG:  duration: 0.055 ms  bind <unnamed>:  update NEXT_ID set next_value=next_value+1 where id='Session'
Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.640 NZST [29887] LOG:  duration: 0.059 ms  execute <unnamed>:  update NEXT_ID set next_value=next_value+1 where id='Session'
Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.640 NZST [29887] LOG:  duration: 0.004 ms  parse <unnamed>:  commit
Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.640 NZST [29887] LOG:  duration: 0.003 ms  bind <unnamed>:  commit
Apr 10 09:02:40 duncanpc postgres[7656]: 2019-04-10 09:02:40.690 NZST [29887] LOG:  duration: 50.237 ms  execute <unnamed>:  commit

So this confirms that the overhead is indeed in the COMMIT: the parse, bind and 
execute steps of the UPDATE each take well under 0.1 ms, while the execute of 
the commit takes 50.237 ms. But how do I get more detailed logging to see what 
it is spending that time on?
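
One experiment that should settle it (my own sketch, not something from Jeff's mail): synchronous_commit is a standard setting, and turning it off makes COMMIT return without waiting for the WAL flush, so if the 50 ms vanishes the fsync is confirmed as the cost. Not crash-safe, so for testing only:

SET synchronous_commit = off;  -- session-level; testing only, not crash-safe
BEGIN;
update NEXT_ID set next_value=next_value+1 where id='Session';
COMMIT;  -- should return in well under a millisecond if the WAL fsync was the wait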

Note: in a previous reply to Jeff (which I forgot to CC to the list) I 
explained that the slow machines are both using BTRFS as the filesystem, and a 
bit of googling has revealed that running PostgreSQL on BTRFS is (don't cross 
the streams) bad.
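
To put a number on just how bad, pg_test_fsync (a utility that ships with PostgreSQL) measures raw fsync rates against an arbitrary file; pointed at the BTRFS volume it should show whether ~50 ms per flush is simply what this filesystem delivers. The path below is only a placeholder:

pg_test_fsync -f /path/on/btrfs/testfile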

Jeff, I will try adding the wait event stuff to see if that is what it is doing.
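
For anyone following along, my understanding of that sampling is roughly the query below, run from a second session while the slow COMMIT is in flight. The wait_event columns were added to pg_stat_activity in 9.6, and I/O waits such as WALSync only show up from version 10; 29887 is the backend PID from the log lines above:

SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE pid = 29887;

If the commit really is stuck on the WAL flush, I would expect a wait_event_type of 'IO' with a WAL-related wait event such as WALSync.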


Cheers, Duncan

