Hi all. I know this has been discussed before, but I'd like to know what the
current position on it is.

Comparing the performance of the simple vs. extended protocols with pgbench
yields some extreme results:

$ ./pgbench -T 10 -S -M simple -f /tmp/pgbench.sql pgbench
tps = 14739.803253 (excluding connections establishing)

$ ./pgbench -T 10 -S -M extended -f /tmp/pgbench.sql pgbench
tps = 11407.012679 (excluding connections establishing)

(pgbench.sql contains a minimal SELECT 1; I'm running against localhost.)

I was aware that there's some overhead associated with the extended
protocol, but not that it amounted to a 30% difference... My question is
whether there are good reasons why this should be so, or whether it simply
hasn't been optimized yet. If it's the latter, are there plans to do so?
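For context on where the per-query overhead comes from: per the frontend/backend protocol documentation, a simple query is a single Query message, while an unnamed extended-protocol execution sends Parse, Bind, Describe, Execute and Sync (each of which the server also answers separately). A rough Python sketch of just the client-side framing for SELECT 1 (illustrative only; not how a real driver would be written):

```python
import struct

def msg(tag: bytes, payload: bytes) -> bytes:
    # Every frontend message: 1-byte tag, Int32 length (which includes
    # the length field itself), then the payload.
    return tag + struct.pack("!I", 4 + len(payload)) + payload

query = b"SELECT 1;\x00"  # queries are null-terminated C strings

# Simple protocol: one Query ('Q') message carries the whole statement.
simple = msg(b"Q", query)

# Extended protocol, unnamed statement/portal, no parameters:
parse    = msg(b"P", b"\x00" + query + struct.pack("!H", 0))     # empty stmt name, query, 0 param types
bind     = msg(b"B", b"\x00\x00" + struct.pack("!HHH", 0, 0, 0)) # empty portal/stmt, no formats/params
describe = msg(b"D", b"P\x00")                                   # describe the unnamed portal
execute  = msg(b"E", b"\x00" + struct.pack("!I", 0))             # empty portal name, no row limit
sync     = msg(b"S", b"")                                        # close the implicit transaction
extended = parse + bind + describe + execute + sync

print(len(simple), len(extended))
```

The extra bytes on the wire are minor; the cost is more plausibly in the extra per-message parsing, the separate server responses to each message, and the per-query creation and teardown of the unnamed statement and portal.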

To give some context, I maintain Npgsql, the open-source .NET driver for
PostgreSQL. Since recent versions, Npgsql has used the extended protocol
almost exclusively, mainly because it transfers data in binary rather than
text. Even if that weren't the case, imposing such a performance penalty on
extended-only features (parameters, prepared statements) seems problematic.

I'm aware that testing against localhost exaggerates the problem - once the
latency of a remote server is taken into account, the simple/extended
difference becomes much less significant. But the issue still seems
relevant.
