Hi Jakub,

> Can you please share how you have verified that it is giving you some perf. increase?
> ...
> where insert.sql was:
>     echo "CREATE TEMPORARY TABLE IF NOT EXISTS file_storage (data BYTEA STORAGE EXTERNAL) ON COMMIT PRESERVE ROWS;" > insert.sql
>     echo "INSERT INTO file_storage(data) VALUES ('" >> insert.sql
>     perl -e 'print "A"x(1024*1024);' >> insert.sql # 1MB
>     echo "');" >> insert.sql

This data might be too easily compressible. In production we upload
files (PDFs); for benchmarking we generated random data.
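
If you want the payload to stay incompressible end to end, a small
variant of your script could use decode() with hex input (just a
sketch; it assumes openssl is available):

    printf "INSERT INTO file_storage(data) VALUES (decode('" >> insert.sql
    openssl rand -hex $((1024*1024)) | tr -d '\n' >> insert.sql # 1MB of random bytes, hex-encoded
    printf "', 'hex'));\n" >> insert.sql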

This might make a big difference on the network side
if you're using TLS with compression, for example.
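
Whether a session actually negotiated TLS compression shows up in
pg_stat_ssl; if I remember correctly, the compression column only
exists on releases that still support it (removed in PostgreSQL 14):

    psql -c "SELECT pid, ssl, compression FROM pg_stat_ssl WHERE pid = pg_backend_pid();"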


> Some description of the env I had:
> - tuned TCP rmem/wmem and set congestion control to BBR (to eliminate
>   TCP as a bottleneck)

Well, if your tcp_rmem buffers are bigger than the data you upload,
that might also soften the impact of the patch.
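
Worth comparing against the payload size; on Linux the ceiling is the
third field (the values below are common defaults, not necessarily
yours):

    sysctl net.ipv4.tcp_rmem # min / default / max, in bytes
    # e.g.: net.ipv4.tcp_rmem = 4096 131072 6291456
    # a 6MB max already fits a 1MB upload entirely in the receive buffer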


> - low RTT (same AWS zone), max 25 Gbps total, max 9.6 Gbps single-stream TCP

That might also make a big difference; we had a few msec of RTT
between client and server.
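
For a rough sense of scale, the bandwidth-delay product is what the
window has to cover (numbers here are illustrative: 9.6 Gbit/s link,
2 ms RTT):

    # BDP = bandwidth in bytes/s * RTT in seconds
    echo $(( 9600000000 / 8 * 2 / 1000 )) # 2400000 bytes, ~2.4 MB in flight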


> - as TOAST pglz compression was taking most of the CPU in my case, I
>   changed it to lz4, which also didn't help much, so I changed it to
>   avoid *any* compression

We had no compression set on the table.
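
For anyone reproducing this, the two knobs in play look roughly like
the below (a sketch, reusing the table from your script; SET
COMPRESSION needs PostgreSQL 14+):

    # store the column out-of-line with no compression at all
    psql -c "ALTER TABLE file_storage ALTER COLUMN data SET STORAGE EXTERNAL;"
    # or keep compression but switch pglz to lz4
    psql -c "ALTER TABLE file_storage ALTER COLUMN data SET COMPRESSION lz4;"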


> - switched to a temporary table to avoid I/O as much as possible,
>   wal_level=minimal too

We didn't use a temporary table.
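
(For completeness, the wal_level side of your setup would be roughly
the below; note that wal_level=minimal also requires max_wal_senders=0
and a server restart:)

    psql -c "ALTER SYSTEM SET wal_level = 'minimal';"
    psql -c "ALTER SYSTEM SET max_wal_senders = 0;"
    pg_ctl restart -D $PGDATA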


If you think that's helpful,
I can try to dig out what we used to benchmark
the receive buffer size impact.
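
(At the raw TCP level the same variable can be exercised with e.g.
iperf3, whose -w flag sets the socket buffer size:)

    iperf3 -c <server> -w 128K -t 10 # small buffer
    iperf3 -c <server> -w 4M -t 10 # large buffer; compare single-stream throughput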


Regards,

Phil

