My database systems haven't had pages for 15 years, based instead on
distributed objects serialized to storage for persistence.
Netfrastructure/Falcon did all record operations in memory using
database pages only for backfill. OK, I guess, but it didn't work for
distributed databases. If
Putting metadata bytes in record data is INCREDIBLY efficient.
The heavily overloaded byte codes that I have used for my last two
database systems have, for example, byte codes for:
1. Integers from -10 to 32 (totally arbitrary)
2. Integers of significant length 1 to 8
3. UTF strings from length
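A minimal sketch of what such overloaded byte codes could look like. All opcode ranges, constant names, and function names below (`INT_LITERAL_BASE`, `encodeInt`, etc.) are invented for illustration; they are not the actual Netfrastructure/Falcon encoding, only an example of the idea that a single metadata byte can carry the value, or the value's length, or a short string's length:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical opcode layout (illustrative only):
//   0x00..0x2A : literal integers -10..32, value carried in the opcode itself
//   0x30..0x37 : integer follows in 1..8 bytes (little-endian, sign-extended
//                by the decoder)
//   0x40..0x7F : short UTF-8 string, length 0..63 carried in the opcode
constexpr uint8_t INT_LITERAL_BASE = 0x00;   // opcode for the value -10
constexpr uint8_t INT_SIZED_BASE   = 0x30;   // opcode for a 1-byte integer
constexpr uint8_t STR_SHORT_BASE   = 0x40;   // opcode for the empty string

void encodeInt(std::vector<uint8_t>& out, int64_t v)
{
    if (v >= -10 && v <= 32)
    {
        // One byte of metadata, zero bytes of data.
        out.push_back(INT_LITERAL_BASE + static_cast<uint8_t>(v + 10));
        return;
    }

    // Find the smallest byte count (1..8) that can hold v in two's complement.
    int len = 1;
    while (len < 8)
    {
        const int64_t lo = -(INT64_C(1) << (8 * len - 1));
        const int64_t hi =  (INT64_C(1) << (8 * len - 1)) - 1;
        if (v >= lo && v <= hi)
            break;
        ++len;
    }

    out.push_back(INT_SIZED_BASE + static_cast<uint8_t>(len - 1));
    for (int i = 0; i < len; ++i)
        out.push_back(static_cast<uint8_t>(v >> (8 * i)));
}

void encodeString(std::vector<uint8_t>& out, const std::string& s)
{
    assert(s.size() <= 63);  // longer strings would need another opcode range
    out.push_back(STR_SHORT_BASE + static_cast<uint8_t>(s.size()));
    out.insert(out.end(), s.begin(), s.end());
}
```

With a scheme like this, a small integer costs exactly one byte and a short string costs its length plus one, which is where the efficiency claim comes from.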
09.06.2022 16:18, Adriano dos Santos Fernandes wrote:
09.06.2022 15:16, Adriano dos Santos Fernandes wrote:
Yes, it should work. However, I'm not going to remove the limit until we
introduce a denser compression. Also, we have a number of places where
records are stored unpacked in memory
09.06.2022 15:58, Dmitry Yemanov wrote:
(2) skip padding bytes
A separate but interesting question is whether we need alignment (and
thus padding bytes) at all. Most of our installations are little-endian
and do not crash on unaligned data access. Moreover, modern CPUs
access
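A small sketch of the point about dropping padding: a `memcpy` from an arbitrary offset is well-defined C++ regardless of alignment, and on targets that tolerate unaligned loads, compilers typically reduce it to a single load instruction. The record layout and function name here are made up for illustration; the expected value assumes a little-endian host, which the message above says covers most installations:

```cpp
#include <cstdint>
#include <cstring>

// Portable unaligned read: memcpy imposes no alignment requirement on p,
// so a record can pack its fields with no padding bytes at all.
uint32_t loadUnaligned32(const unsigned char* p)
{
    uint32_t v;
    std::memcpy(&v, p, sizeof v);
    return v;
}
```

For example, a packed record could place a 1-byte tag immediately before a 4-byte integer, putting the integer at a misaligned offset, and `loadUnaligned32` would still read it correctly.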
On 09/06/2022 09:58, Dmitry Yemanov wrote:
> 09.06.2022 15:16, Adriano dos Santos Fernandes wrote:
>
> Yes, it should work. However, I'm not going to remove the limit until we
> introduce a denser compression. Also, we have a number of places where
> records are stored unpacked in memory (rpb's,
09.06.2022 15:16, Adriano dos Santos Fernandes wrote:
With some frequency people ask me why UTF-8 is slower than single byte
charsets.
The thing is, they have something using, for example, VARCHAR(30)
CHARACTER SET WIN1252 and convert to VARCHAR(30) CHARACTER SET UTF8,
test with the same data and have slower queries.
Adriano dos Santos Fernandes wrote 09.06.2022 14:36:
Self-descriptive record format which makes RDB$FORMATS obsolete and
solves problems with garbage collection etc. was suggested by me (for
replication block buffer in the first place but storage can use it as
well).
Putting metadata bytes in
On 09/06/2022 09:29, Dimitry Sibiryakov wrote:
> Adriano dos Santos Fernandes wrote 09.06.2022 14:16:
>> What do you think and are there any active work in this regard?
>
> Using record encoding instead of record compression was suggested
> years ago by Ann and Jim.
Yes, but suggesting
Adriano dos Santos Fernandes wrote 09.06.2022 14:16:
What do you think and are there any active work in this regard?
Using record encoding instead of record compression was suggested years
ago by Ann and Jim.
Self-descriptive record format which makes RDB$FORMATS obsolete and
solves problems with garbage collection etc. was suggested by me (for
replication block buffer in the first place but storage can use it as
well).
Hi!
With some frequency people ask me why UTF-8 is slower than single byte
charsets.
The thing is, they have something using, for example, VARCHAR(30)
CHARACTER SET WIN1252 and convert to VARCHAR(30) CHARACTER SET UTF8,
test with the same data and have slower queries.
Database size is also increased
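One way to see where the slowdown and size growth come from: Firebird sizes the unpacked in-memory buffer of a VARCHAR for the worst case, the declared character length times the character set's maximum bytes per character (1 for WIN1252, up to 4 for UTF8). A tiny illustrative calculation, with a hypothetical helper name:

```cpp
// Worst-case unpacked buffer size for a VARCHAR field:
// declared length in characters times the charset's max bytes per character.
constexpr unsigned bufferBytes(unsigned declaredChars, unsigned maxBytesPerChar)
{
    return declaredChars * maxBytesPerChar;
}
```

So the same VARCHAR(30) goes from a 30-byte buffer under WIN1252 to a 120-byte buffer under UTF8 even when the stored data is pure ASCII, which is why tests with identical data still get slower.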