22.05.2021 09:58, Mark Rotteveel wrote:
> In any case, I think adding 1 to a length to obtain the actual length is
> a bad idea even if it is not exposed to the outside. Doing that is a
> great way to introduce increased risk of off-by-one errors and buffer
> overflows; the risk of that increases exponentially when exposing this
> to the outside world.
Agreed.
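A minimal C sketch of the trap being agreed with here (the length-minus-one encoding is hypothetical, for illustration only, not Firebird's actual format): a field that stores `actual length - 1` works only as long as every consumer remembers the +1.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical encoding: a 16-bit field stores (actual length - 1)
 * so that lengths 1..65536 fit into the values 0..65535. */
static uint16_t encode_len(uint32_t actual) { return (uint16_t)(actual - 1); }
static uint32_t decode_len(uint16_t stored) { return (uint32_t)stored + 1; }

/* Correct consumer: applies the +1. */
static uint32_t copy_ok(char *dst, const char *src, uint16_t stored)
{
    uint32_t n = decode_len(stored);
    memcpy(dst, src, n);
    return n;
}

/* Buggy consumer: forgets the +1 and silently drops the last byte;
 * the inverse mistake (an extra +1 on write) would overflow instead. */
static uint32_t copy_buggy(char *dst, const char *src, uint16_t stored)
{
    memcpy(dst, src, stored);
    return stored;
}
```

The asymmetry is the point: the encoding saves one representable value but taxes every call site forever.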
> And what about the 64KB record limitation? I seem to recall being
> told in the past that this is also a hard limit somewhere in the legacy
> API, which most people still use.
I suppose you mean the API message size limit. While the record size is
also limited to 64KB, that limit has nothing to do with the outside world
and may be raised when needed.
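For illustration only (the field names and the derived maximum below are assumptions, not the real Firebird API): both limits under discussion follow from 16-bit counters, so a USHORT-counted buffer tops out just under 64KB regardless of what the storage layer could otherwise hold.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed for illustration: message and record sizes are both carried
 * in 16-bit (USHORT-style) fields, capping each below 64KB. */
#define COUNTER_MAX UINT16_MAX   /* 65535 */

/* Under that assumption, a counted VARCHAR needs its data bytes plus
 * a 2-byte length prefix, so the largest payload that fits is
 * COUNTER_MAX minus the prefix. */
static uint32_t max_counted_payload(void)
{
    return COUNTER_MAX - (uint32_t)sizeof(uint16_t);
}
```

This is why the message size limit is visible to clients while the record size limit is purely internal: only the former is baked into the wire format.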
However, I do believe we should implement a denser record compression
*before* we increase either VARCHAR or record length limits.
Also, I had plans to implement unlimited (well, OK, ULONG-counted)
strings in the next major ODS. This really requires more API changes
than some hacks with the sign bit.
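To sketch the contrast drawn here (names and types are hypothetical, not proposed API): reusing the sign bit of a 16-bit length merely doubles the range and forces every caller to decode a flag, whereas a ULONG-counted descriptor is clean but requires new API structures.

```c
#include <assert.h>
#include <stdint.h>

/* The "sign bit hack": a negative 16-bit length means "add 0x8000".
 * Range only doubles (to 64K) and every consumer must decode it. */
static uint32_t decode_hacked_len(int16_t raw)
{
    return raw >= 0 ? (uint32_t)raw
                    : (uint32_t)(raw & 0x7FFF) + 0x8000u;
}

/* The clean alternative: a ULONG-counted descriptor, with lengths up
 * to 4GB - 1, at the cost of a genuinely new structure in the API. */
typedef struct {
    uint32_t    length;
    const char *data;
} long_string;
```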
> If we're going to break things hard, I would sooner suggest something
> a bit more radical and ambitious, maybe something like SQL Server's
> VARCHAR(MAX), or PostgreSQL's VARCHAR/BYTEA (without a length specifier),
> and/or sending blobs inline in the wire protocol, and maybe storing
> blobs inline as well.
I'd prefer to follow the standard, with length treated as a constraint,
and storage dependent on that length (like we do for NUMERICs). But I
don't mind having VARCHAR (without length specifier) that's
unconditionally backed by the ULONG-counted implementation.
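A sketch of the NUMERIC-like approach suggested above (the function and thresholds are illustrative assumptions, not a design): the declared length acts purely as a constraint, and the engine picks the narrowest counted representation that can hold it, with the unspecified-length form always ULONG-counted.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: choose the width of the length counter from the
 * declared limit, the way NUMERIC precision selects an integer size.
 * declared == 0 stands in for VARCHAR with no length specifier. */
static size_t length_counter_bytes(uint32_t declared)
{
    if (declared == 0)                  /* no specifier: always      */
        return 4;                       /* ULONG-counted             */
    return declared <= UINT16_MAX ? 2   /* fits a USHORT counter     */
                                  : 4;  /* needs a ULONG counter     */
}
```

Under this scheme, existing short VARCHARs keep their compact storage while long declarations transparently get the wide counter.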
So I'd suggest starting with the design for the "really long" strings
and then deciding whether we should introduce it in v5 (for 64KB strings)
or later (for 4GB strings).
Dmitry
Firebird-Devel mailing list, web interface at
https://lists.sourceforge.net/lists/listinfo/firebird-devel