Re: effective_io_concurrency and NVMe devices

2022-06-08 Thread Tomas Vondra
On 6/8/22 08:29, Jakub Wartak wrote:
>>>> The attached patch is a trivial version that waits until we're at least
>>>> 32 pages behind the target, and then prefetches all of them. Maybe give it a
>>>> try?
>>>> (This pretty much disables prefetching for e_i_c below 32, but for an
>>>> experimental patch that's enough.)
>>>
>>> I've tried it at e_i_c=10 initially on David's setup.sql, with mostly
>>> defaults: s_b=128MB, dbsize=8kb, but with parallel query forcibly disabled
>>> (for easier inspection with strace, just to be sure -- so please don't
>>> compare times).
>>>
>>> run:
>>> a) master (e_i_c=10)  181760ms, 185680ms, 185384ms @ ~340MB/s and 44k
>>> IOPS (~122k IOPS practical max here for libaio)
>>> b) patched(e_i_c=10)  237774ms, 236326ms, ... as you stated, it disabled
>>> prefetching, fadvise() not occurring
>>> c) patched(e_i_c=128) 90430ms, 88354ms, 85446ms, 78475ms, 74983ms,
>>> 81432ms (mean=83186ms +/- 5947ms) @ ~570MB/s and 75k IOPS (it even
>>> peaked for a second at ~122k)
>>> d) master (e_i_c=128) 116865ms, 101178ms, 89529ms, 95024ms, 89942ms,
>>> 99939ms (mean=98746ms +/- 10118ms) @ ~510MB/s and 65k IOPS (rare peaks
>>> to 90..100k IOPS)
>>>
>>> ~16% benefit sounds good (help me understand: L1i cache?). Maybe it is
>>> worth throwing that patch onto a more advanced / complete performance
>>> test farm too? (although it's only for bitmap heap scans)
> 
> I hope you have some future plans for this patch :)
> 

I think the big challenge is to make this adaptive, i.e. work well for
access patterns that are not known in advance. The existing prefetching
works fine for random access patterns (even on NVMe devices), not so much
for sequential ones (as demonstrated by David's example).

>> Yes, kernel certainly does its own read-ahead, which works pretty well for
>> sequential patterns. What does
>>
>>blockdev --getra /dev/...
>>
>> say?
> 
> It's default, 256 sectors (128kb) so it matches.
> 

Right. I think this is pretty much why (our) prefetching performs so
poorly on sequential access patterns - the kernel read-ahead works very
well in this case, and our prefetching can't help but can interfere.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




RE: effective_io_concurrency and NVMe devices

2022-06-08 Thread Jakub Wartak
> >> The attached patch is a trivial version that waits until we're at least
> >> 32 pages behind the target, and then prefetches all of them. Maybe give it a
> >> try?
> >> (This pretty much disables prefetching for e_i_c below 32, but for an
> >> experimental patch that's enough.)
> >
> > I've tried it at e_i_c=10 initially on David's setup.sql, with mostly
> > defaults: s_b=128MB, dbsize=8kb, but with parallel query forcibly disabled
> > (for easier inspection with strace, just to be sure -- so please don't
> > compare times).
> >
> > run:
> > a) master (e_i_c=10)  181760ms, 185680ms, 185384ms @ ~340MB/s and 44k
> > IOPS (~122k IOPS practical max here for libaio)
> > b) patched(e_i_c=10)  237774ms, 236326ms, ... as you stated, it disabled
> > prefetching, fadvise() not occurring
> > c) patched(e_i_c=128) 90430ms, 88354ms, 85446ms, 78475ms, 74983ms,
> > 81432ms (mean=83186ms +/- 5947ms) @ ~570MB/s and 75k IOPS (it even
> > peaked for a second at ~122k)
> > d) master (e_i_c=128) 116865ms, 101178ms, 89529ms, 95024ms, 89942ms,
> > 99939ms (mean=98746ms +/- 10118ms) @ ~510MB/s and 65k IOPS (rare peaks
> > to 90..100k IOPS)
> >
> > ~16% benefit sounds good (help me understand: L1i cache?). Maybe it is
> > worth throwing that patch onto a more advanced / complete performance
> > test farm too? (although it's only for bitmap heap scans)

I hope you have some future plans for this patch :)

> Yes, kernel certainly does its own read-ahead, which works pretty well for
> sequential patterns. What does
> 
>blockdev --getra /dev/...
> 
> say?

It's default, 256 sectors (128kb) so it matches.

-J.




Re: effective_io_concurrency and NVMe devices

2022-06-07 Thread Tomas Vondra
On 6/7/22 15:29, Jakub Wartak wrote:
> Hi Tomas,
> 
>>> I have a machine here with 1 x PCIe 3.0 NVMe SSD and also 1 x PCIe 4.0
>>> NVMe SSD. I ran a few tests to see how different values of
>>> effective_io_concurrency would affect performance. I tried to come up
>>> with a query that did little enough CPU processing to ensure that I/O
>>> was the clear bottleneck.
>>>
>>> The test was with a 128GB table on a machine with 64GB of RAM.  I
>>> padded the tuples out so there were 4 per page so that the aggregation
>>> didn't have much work to do.
>>>
>>> The query I ran was: explain (analyze, buffers, timing off) select
>>> count(p) from r where a = 1;
>  
>> The other idea I had while looking at batching a while back, is that we 
>> should
>> batch the prefetches. The current logic interleaves prefetches with other 
>> work -
>> prefetch one page, process one page, ... But once reading a page gets
>> sufficiently fast, this means the queues never get deep enough for
>> optimizations. So maybe we should think about batching the prefetches, in 
>> some
>> way. Unfortunately posix_fadvise does not allow batching of requests, but we
>> can at least stop interleaving the requests.
> 
> .. for now it doesn't, but IORING_OP_FADVISE is on the long-term horizon.  
> 

Interesting! Will take time to get into real systems, though.

>> The attached patch is a trivial version that waits until we're at least
>> 32 pages behind the target, and then prefetches all of them. Maybe give it a 
>> try?
>> (This pretty much disables prefetching for e_i_c below 32, but for an
>> experimental patch that's enough.)
> 
> I've tried it at e_i_c=10 initially on David's setup.sql, with mostly defaults: 
> s_b=128MB, dbsize=8kb, but with parallel query forcibly disabled (for easier 
> inspection with strace, just to be sure -- so please don't compare times). 
> 
> run:
> a) master (e_i_c=10)  181760ms, 185680ms, 185384ms @ ~340MB/s and 44k IOPS 
> (~122k IOPS practical max here for libaio)
> b) patched(e_i_c=10)  237774ms, 236326ms, ... as you stated, it disabled 
> prefetching, fadvise() not occurring
> c) patched(e_i_c=128) 90430ms, 88354ms, 85446ms, 78475ms, 74983ms, 81432ms 
> (mean=83186ms +/- 5947ms) @ ~570MB/s and 75k IOPS (it even peaked for a 
> second at ~122k)
> d) master (e_i_c=128) 116865ms, 101178ms, 89529ms, 95024ms, 89942ms, 99939ms 
> (mean=98746ms +/- 10118ms) @ ~510MB/s and 65k IOPS (rare peaks to 90..100k 
> IOPS)
> 
> ~16% benefit sounds good (help me understand: L1i cache?). Maybe it is worth 
> throwing that patch onto a more advanced / complete performance test farm too? 
> (although it's only for bitmap heap scans)
> 
> run a: looked interleaved as you said:
> fadvise64(160, 1064157184, 8192, POSIX_FADV_WILLNEED) = 0
> pread64(160, "@\0\0\0\200\303/_\0\0\4\0(\0\200\0\0 \4 \0\0\0\0 \230\300\17@\220\300\17"..., 8192, 1064009728) = 8192
> fadvise64(160, 1064173568, 8192, POSIX_FADV_WILLNEED) = 0
> pread64(160, "@\0\0\0\0\0040_\0\0\4\0(\0\200\0\0 \4 \0\0\0\0 \230\300\17@\220\300\17"..., 8192, 1064026112) = 8192
> [..]
> 
> BTW, an interesting note: for run b, the avgrq-sz in extended iostat keeps 
> flipping between 16 (*512 = 8kB) and ~256 (*512 = ~128kB!), as if the kernel 
> were switching its own prefetching heuristics on and off in cycles, while when 
> e_i_c/fadvise() is in action it seems to always issue 8kB requests. So with 
> fadvise() disabled, one IMHO might have problems deterministically 
> benchmarking short queries, as kernel voodoo might be happening (?)
> 

Yes, kernel certainly does its own read-ahead, which works pretty well
for sequential patterns. What does

   blockdev --getra /dev/...

say?

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




RE: effective_io_concurrency and NVMe devices

2022-06-07 Thread Jakub Wartak
Hi Tomas,

> > I have a machine here with 1 x PCIe 3.0 NVMe SSD and also 1 x PCIe 4.0
> > NVMe SSD. I ran a few tests to see how different values of
> > effective_io_concurrency would affect performance. I tried to come up
> > with a query that did little enough CPU processing to ensure that I/O
> > was the clear bottleneck.
> >
> > The test was with a 128GB table on a machine with 64GB of RAM.  I
> > padded the tuples out so there were 4 per page so that the aggregation
> > didn't have much work to do.
> >
> > The query I ran was: explain (analyze, buffers, timing off) select
> > count(p) from r where a = 1;
 
> The other idea I had while looking at batching a while back, is that we should
> batch the prefetches. The current logic interleaves prefetches with other 
> work -
> prefetch one page, process one page, ... But once reading a page gets
> sufficiently fast, this means the queues never get deep enough for
> optimizations. So maybe we should think about batching the prefetches, in some
> way. Unfortunately posix_fadvise does not allow batching of requests, but we
> can at least stop interleaving the requests.

.. for now it doesn't, but IORING_OP_FADVISE is on the long-term horizon.  
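
For illustration, a minimal liburing sketch of what batching the WILLNEED hints
through io_uring might look like (hypothetical fd and block list, nothing to do
with the actual backend code; IORING_OP_FADVISE needs kernel 5.6+):

/* Sketch: queue one fadvise per block and submit them all with a single
 * syscall. Assumes nblocks fits in one ring; illustration only. */
#include <liburing.h>
#include <fcntl.h>

#define BLCKSZ 8192

static int
prefetch_blocks(int fd, const long *blocks, int nblocks)
{
    struct io_uring     ring;
    struct io_uring_cqe *cqe;
    int                 i;

    if (io_uring_queue_init(nblocks, &ring, 0) < 0)
        return -1;

    /* queue all the hints, then submit the whole batch at once */
    for (i = 0; i < nblocks; i++)
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

        io_uring_prep_fadvise(sqe, fd, (__u64) blocks[i] * BLCKSZ,
                              BLCKSZ, POSIX_FADV_WILLNEED);
    }
    io_uring_submit(&ring);

    /* reap completions (the hints themselves complete almost immediately) */
    for (i = 0; i < nblocks; i++)
    {
        if (io_uring_wait_cqe(&ring, &cqe) == 0)
            io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    return 0;
}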

> The attached patch is a trivial version that waits until we're at least
> 32 pages behind the target, and then prefetches all of them. Maybe give it a 
> try?
> (This pretty much disables prefetching for e_i_c below 32, but for an
> experimental patch that's enough.)

I've tried it at e_i_c=10 initially on David's setup.sql, with mostly defaults: 
s_b=128MB, dbsize=8kb, but with parallel query forcibly disabled (for easier 
inspection with strace, just to be sure -- so please don't compare times). 

run:
a) master (e_i_c=10)  181760ms, 185680ms, 185384ms @ ~340MB/s and 44k IOPS 
(~122k IOPS practical max here for libaio)
b) patched(e_i_c=10)  237774ms, 236326ms, ... as you stated, it disabled 
prefetching, fadvise() not occurring
c) patched(e_i_c=128) 90430ms, 88354ms, 85446ms, 78475ms, 74983ms, 81432ms 
(mean=83186ms +/- 5947ms) @ ~570MB/s and 75k IOPS (it even peaked for a second 
at ~122k)
d) master (e_i_c=128) 116865ms, 101178ms, 89529ms, 95024ms, 89942ms, 99939ms 
(mean=98746ms +/- 10118ms) @ ~510MB/s and 65k IOPS (rare peaks to 90..100k IOPS)

~16% benefit sounds good (help me understand: L1i cache?). Maybe it is worth 
throwing that patch onto a more advanced / complete performance test farm too? 
(although it's only for bitmap heap scans)

run a: looked interleaved as you said:
fadvise64(160, 1064157184, 8192, POSIX_FADV_WILLNEED) = 0
pread64(160, "@\0\0\0\200\303/_\0\0\4\0(\0\200\0\0 \4 \0\0\0\0 \230\300\17@\220\300\17"..., 8192, 1064009728) = 8192
fadvise64(160, 1064173568, 8192, POSIX_FADV_WILLNEED) = 0
pread64(160, "@\0\0\0\0\0040_\0\0\4\0(\0\200\0\0 \4 \0\0\0\0 \230\300\17@\220\300\17"..., 8192, 1064026112) = 8192
[..]

BTW, an interesting note: for run b, the avgrq-sz in extended iostat keeps 
flipping between 16 (*512 = 8kB) and ~256 (*512 = ~128kB!), as if the kernel 
were switching its own prefetching heuristics on and off in cycles, while when 
e_i_c/fadvise() is in action it seems to always issue 8kB requests. So with 
fadvise() disabled, one IMHO might have problems deterministically benchmarking 
short queries, as kernel voodoo might be happening (?)

-J.




RE: effective_io_concurrency and NVMe devices

2022-06-02 Thread Jakub Wartak
Hi Nathan,

> > NVMe devices have a maximum queue length of 64k:
[..]
> > but our effective_io_concurrency maximum is 1,000:
[..]
> > Should we increase its maximum to 64k?  Backpatched?  (SATA has a
> > maximum queue length of 256.)
> 
> If there are demonstrable improvements with higher values, this seems
> reasonable to me.  I would even suggest removing the limit completely so
> this doesn't need to be revisited in the future.

Well, are there any? I remember playing with this (although for Stephen's 
ANALYZE case [1]) and got quite contrary results [2] -- see how going from 8 to 
16 actually degraded performance.
I somehow struggle to understand how 1000+ fadvise() syscalls would be a net 
benefit on storage with a latency of ~0.1..0.3ms, as each syscall is overhead 
on its own (quite the contrary, it should help on very slow storage?).
Pardon if I'm wrong (I don't have time to look up the code now), but maybe the 
Bitmap Scan/fadvise() logic would first need to aggregate the fadvise() 
offsets/lengths into bigger fadvise() syscalls, so that in the end the real, 
hardware-observable I/O concurrency would be bigger (assuming the fs/LVM/dm/mq 
layer would split that into more parallel IOs).
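
To illustrate the kind of aggregation I mean, here's a rough sketch (the fd and
block array are hypothetical, this is not backend code) that merges runs of
adjacent blocks into one larger WILLNEED hint:

/* Sketch: coalesce runs of adjacent 8kB blocks into single, larger
 * posix_fadvise(WILLNEED) calls. Purely illustrative; assumes 'blocks'
 * is sorted in ascending order. */
#include <fcntl.h>

#define BLCKSZ 8192

static void
prefetch_coalesced(int fd, const long *blocks, int nblocks)
{
    int     i = 0;

    while (i < nblocks)
    {
        long    start = blocks[i];
        long    count = 1;

        /* extend the run while block numbers are consecutive */
        while (i + count < nblocks && blocks[i + count] == start + count)
            count++;

        /* one hint for the whole run instead of 'count' separate ones */
        (void) posix_fadvise(fd, (off_t) start * BLCKSZ,
                             (off_t) count * BLCKSZ, POSIX_FADV_WILLNEED);

        i += count;
    }
}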

-J.

[1] - https://commitfest.postgresql.org/30/2799/
[2] - 
https://www.postgresql.org/message-id/flat/vi1pr0701mb69603a433348edcf783c6ecbf6...@vi1pr0701mb6960.eurprd07.prod.outlook.com


 





Re: effective_io_concurrency and NVMe devices

2022-04-21 Thread Tomas Vondra
On 4/21/22 10:14, David Rowley wrote:
> On Wed, 20 Apr 2022 at 14:56, Bruce Momjian  wrote:
>> NVMe devices have a maximum queue length of 64k:
> 
>> Should we increase its maximum to 64k?  Backpatched?  (SATA has a
>> maximum queue length of 256.)
> 
> I have a machine here with 1 x PCIe 3.0 NVMe SSD and also 1 x PCIe 4.0
> NVMe SSD. I ran a few tests to see how different values of
> effective_io_concurrency would affect performance. I tried to come up
> with a query that did little enough CPU processing to ensure that I/O
> was the clear bottleneck.
> 
> The test was with a 128GB table on a machine with 64GB of RAM.  I
> padded the tuples out so there were 4 per page so that the aggregation
> didn't have much work to do.
> 
> The query I ran was: explain (analyze, buffers, timing off) select
> count(p) from r where a = 1;
> 
> Here's what I saw:
> 
> NVME PCIe 3.0 (Samsung 970 Evo 1TB)
> e_i_c   query_time_ms
> 0       88627.221
> 1       652915.192
> 5       271536.054
> 10      141168.986
> 100     67340.026
> 1000    70686.596
> 10000   70027.938
> 100000  70106.661
> 
> Saw a max of 991 MB/sec in iotop
> 
> NVME PCIe 4.0 (Samsung 980 Pro 1TB)
> e_i_c   query_time_ms
> 0       59306.960
> 1       956170.704
> 5       237879.121
> 10      135004.111
> 100     55662.030
> 1000    51513.717
> 10000   59807.824
> 100000  53443.291
> 
> Saw a max of 1126 MB/sec in iotop
> 
> I'm not pretending that this is the best query and table size to show
> it, but at least this test shows that there's not much to gain by
> prefetching further.   I imagine going further than we need to is
> likely to have negative consequences due to populating the kernel page
> cache with buffers that won't be used for a while. I also imagine
> going too far out likely increases the risk that buffers we've
> prefetched are evicted before they're used.
> 

Not sure.

I don't think the risk of polluting the cache is very high, because 1k
buffers is 8MB and 64k buffers would be 512MB. That's significant, but likely
just a tiny fraction of the available memory in machines with NVMe storage.
Sure, there may be multiple sessions doing prefetch, but chances are the
sessions touch the same data, etc.

> This does also highlight that an effective_io_concurrency of 1 (the
> default) is pretty terrible in this test.  The bitmap contained every
> 2nd page. I imagine that would break normal page prefetching by the
> kernel. If that's true, then it does not explain why e_i_c = 0 was so
> fast.
> 

Yeah, this default is clearly pretty unfortunate. I think the problem is
that an async request is not free, i.e. prefetching means

  async request + read

and the prefetch trick lies in assuming that

  cost(async request) << cost(read)

and moving the read to a background thread. But NVMe devices make reads
cheaper, so the amount of work moved to the background thread gets
lower, while the cost of the async request remains roughly the same.
Which means the difference (the benefit) shrinks as storage gets faster.
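
To put rough, purely made-up numbers on it: if an fadvise() call costs a few
microseconds, spending that to hide a multi-millisecond read on spinning rust
is an obvious win, but spending the same few microseconds to hide a ~50-100us
NVMe read buys much less -- and the arithmetic only gets worse as devices get
faster.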

Also, recent NVMe devices (like Intel Optane) aim to require lower queue
depths, so although the NVMe spec supports 64k queues and 64k commands
per queue, that does not mean you need to use that many requests to get
good performance.

As for the strange behavior with e_i_c=0, I think this can be explained
by how NVMe devices work internally. A simplified model of an NVMe device is
"slow" flash with a DRAM cache, and AFAIK the data is not read from flash into
DRAM in 8kB pages but in larger chunks. So even if there's no explicit OS
readahead, the device may still cache larger chunks in the DRAM buffer.


> I've attached the test setup that I did. I'm open to modifying the
> test and running again if someone has an idea that might show benefits
> to larger values for effective_io_concurrency.
> 

I think it'd be interesting to test different / less regular patterns,
not just every 2nd page etc.

The other idea I had while looking at batching a while back, is that we
should batch the prefetches. The current logic interleaves prefetches
with other work - prefetch one page, process one page, ... But once
reading a page gets sufficiently fast, this means the queues never get
deep enough for optimizations. So maybe we should think about batching
the prefetches, in some way. Unfortunately posix_fadvise does not allow
batching of requests, but we can at least stop interleaving the requests.

The attached patch is a trivial version that waits until we're at least
32 pages behind the target, and then prefetches all of them. Maybe give
it a try? (This pretty much disables prefetching for e_i_c below 32, but
for an experimental patch that's enough.)
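
To illustrate the idea in isolation, here's a rough standalone sketch -- the
ScanStream/fd plumbing is a made-up stand-in for the bitmap heap scan
iterator, not the actual patch: wait until we're at least 32 pages behind the
target, then issue all the pending hints at once.

/* Illustrative only: batch prefetches once we fall PREFETCH_BATCH pages
 * behind the target, instead of interleaving one hint per page processed. */
#include <fcntl.h>

#define BLCKSZ          8192
#define PREFETCH_BATCH  32

typedef struct ScanStream
{
    int     fd;             /* data file */
    long    current;        /* block currently being processed */
    long    prefetched;     /* last block a WILLNEED hint was issued for */
    long    nblocks;        /* total number of blocks to scan */
} ScanStream;

static void
maybe_prefetch(ScanStream *s, long target_distance)
{
    long    target = s->current + target_distance;

    if (target > s->nblocks - 1)
        target = s->nblocks - 1;

    /* not far enough behind the target yet, do nothing */
    if (target - s->prefetched < PREFETCH_BATCH)
        return;

    /* catch up in one burst of hints */
    while (s->prefetched < target)
    {
        s->prefetched++;
        (void) posix_fadvise(s->fd, (off_t) s->prefetched * BLCKSZ,
                             BLCKSZ, POSIX_FADV_WILLNEED);
    }
}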


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index f6fe07ad703..550d0f387c1 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -472,6 +472,11 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
 
 		if 

Re: effective_io_concurrency and NVMe devices

2022-04-21 Thread David Rowley
On Wed, 20 Apr 2022 at 14:56, Bruce Momjian  wrote:
> NVMe devices have a maximum queue length of 64k:

> Should we increase its maximum to 64k?  Backpatched?  (SATA has a
> maximum queue length of 256.)

I have a machine here with 1 x PCIe 3.0 NVMe SSD and also 1 x PCIe 4.0
NVMe SSD. I ran a few tests to see how different values of
effective_io_concurrency would affect performance. I tried to come up
with a query that did little enough CPU processing to ensure that I/O
was the clear bottleneck.

The test was with a 128GB table on a machine with 64GB of RAM.  I
padded the tuples out so there were 4 per page so that the aggregation
didn't have much work to do.

The query I ran was: explain (analyze, buffers, timing off) select
count(p) from r where a = 1;

Here's what I saw:

NVME PCIe 3.0 (Samsung 970 Evo 1TB)
e_i_c   query_time_ms
0       88627.221
1       652915.192
5       271536.054
10      141168.986
100     67340.026
1000    70686.596
10000   70027.938
100000  70106.661

Saw a max of 991 MB/sec in iotop

NVME PCIe 4.0 (Samsung 980 Pro 1TB)
e_i_c   query_time_ms
0       59306.960
1       956170.704
5       237879.121
10      135004.111
100     55662.030
1000    51513.717
10000   59807.824
100000  53443.291

Saw a max of 1126 MB/sec in iotop

I'm not pretending that this is the best query and table size to show
it, but at least this test shows that there's not much to gain by
prefetching further.   I imagine going further than we need to is
likely to have negative consequences due to populating the kernel page
cache with buffers that won't be used for a while. I also imagine
going too far out likely increases the risk that buffers we've
prefetched are evicted before they're used.

This does also highlight that an effective_io_concurrency of 1 (the
default) is pretty terrible in this test.  The bitmap contained every
2nd page. I imagine that would break normal page prefetching by the
kernel. If that's true, then it does not explain why e_i_c = 0 was so
fast.

I've attached the test setup that I did. I'm open to modifying the
test and running again if someone has an idea that might show benefits
to larger values for effective_io_concurrency.

David


setup.sql
Description: Binary data


Re: effective_io_concurrency and NVMe devices

2022-04-20 Thread Nathan Bossart
On Tue, Apr 19, 2022 at 10:56:05PM -0400, Bruce Momjian wrote:
> NVMe devices have a maximum queue length of 64k:
> 
>   https://blog.westerndigital.com/nvme-queues-explained/
> 
> but our effective_io_concurrency maximum is 1,000:
> 
>   test=> set effective_io_concurrency = 1001;
>   ERROR:  1001 is outside the valid range for parameter "effective_io_concurrency" (0 .. 1000)
> 
> Should we increase its maximum to 64k?  Backpatched?  (SATA has a
> maximum queue length of 256.)

If there are demonstrable improvements with higher values, this seems
reasonable to me.  I would even suggest removing the limit completely so
this doesn't need to be revisited in the future.
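
For reference -- and purely from memory, so worth double-checking -- the cap
seems to come from MAX_IO_CONCURRENCY, roughly:

    /* src/include/storage/bufmgr.h (name/path from memory, please verify) */
    #define MAX_IO_CONCURRENCY 1000

which the guc.c entry for effective_io_concurrency uses as its upper bound, so
that is presumably where a higher (or removed) limit would start.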

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com