Hi Jim,
I didn't call your scheme a hack, that is a misunderstanding.
I said that the current implementation of RLE in Firebird is a hack
(parsing the RLE control stream outside the compressor/decompressor, in reverse order).
If I replace the current RLE with anything else, I will have to do the same or worse hack(s).
And I don't want to g
First, I take personal offense at your characterization of my encoding
scheme as a hack. It is not. It is a carefully thought out scheme with
multiple implementations in three database systems. It has been
measured, compared, and extensively profiled. I would be the last to cram
it down som
Hi Jim,
I will try to explain.
First, for any encoding scheme, we need a good interface that is
respected by all other parts of the program.
Right now the core of RLE is in one file, but some other parts of Firebird
try to parse the RLE stream directly.
In this situation I need to clean up the code so that it uses the interface.
For
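(For illustration, a minimal sketch of the kind of interface meant here; the class and method names are hypothetical, not the actual Firebird code:)

#include <cstddef>

// Hypothetical interface, for illustration only.  The idea is that callers
// measure, pack and unpack records only through these calls and never
// parse the RLE control stream themselves.
class RecordCompressor
{
public:
    virtual ~RecordCompressor() {}

    // Length of the packed form of the given record image.
    virtual size_t packedLength(const unsigned char* data,
                                size_t length) const = 0;

    // Pack into at most 'space' output bytes; returns the number of input
    // bytes consumed, so the caller can continue with the next fragment
    // without walking the control stream itself.
    virtual size_t pack(const unsigned char* data, size_t length,
                        unsigned char* out, size_t space) const = 0;

    // Unpack one fragment; returns the number of output bytes produced.
    virtual size_t unpack(const unsigned char* in, size_t inLength,
                          unsigned char* out, size_t outSpace) const = 0;
};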
Perhaps a smarter approach would be to capture the run lengths on the first
scan to drive the encoding. I vaguely remember that the code once did
something like that.
Could you describe your scheme and explain why it's better? Run length
encoding doesn't seem to lend itself to a lot of optimizat
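(A rough sketch of the first-scan idea mentioned above, assuming a classic signed-control-byte RLE: the measuring pass records the run list, which the encoder can replay instead of scanning the data again. The code is hypothetical, not the current Firebird routine, and ignores the 127-byte control limit for brevity:)

#include <cstddef>
#include <vector>

// One entry per control byte: n > 0 means n literal bytes follow,
// n < 0 means a byte repeated -n times.
typedef std::vector<int> RunList;

// Measuring pass: walk the record once, remember the runs and return the
// packed length.  The encoder can later replay 'runs' instead of scanning
// the data a second time.
size_t measureRuns(const unsigned char* data, size_t length, RunList& runs)
{
    size_t packed = 0;
    size_t pos = 0;

    while (pos < length)
    {
        size_t end = pos + 1;
        while (end < length && data[end] == data[pos])
            ++end;

        const int runLen = (int) (end - pos);

        if (runLen >= 3)
        {
            runs.push_back(-runLen);
            packed += 2;                    // control byte + repeated byte
        }
        else if (!runs.empty() && runs.back() > 0)
        {
            runs.back() += runLen;          // extend the previous literal
            packed += runLen;
        }
        else
        {
            runs.push_back(runLen);         // start a new literal
            packed += 1 + runLen;
        }

        pos = end;
    }

    return packed;
}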
Hi Jim,
this is what happens in current Firebird if a record does not fit into the buffer:
1. Scan and calculate the compressed length.
2. If it does not fit, then scan the control buffer and calculate how many
bytes will fit + padding (sketched below).
3. Compress into the small area (scan again).
4. Find another free space on a data page and go to 1 with
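(Step 2 amounts to roughly the following walk over an already-packed stream; hypothetical code, shown only to illustrate the kind of control-stream parsing that currently happens outside the compressor:)

#include <cstddef>

// Walk an already-built RLE stream and return how many of its bytes fit
// into 'space'.  Control byte c >= 0 means c literal bytes follow,
// c < 0 means one repeated byte follows.
size_t packedBytesThatFit(const unsigned char* packed,
                          size_t packedLength, size_t space)
{
    size_t pos = 0;
    while (pos < packedLength)
    {
        const signed char c = (signed char) packed[pos];
        const size_t segment = 1 + (c >= 0 ? (size_t) c : 1);

        if (pos + segment > space)
            break;                  // the next whole segment would not fit
        pos += segment;
    }
    return pos;
}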
The answer to your questions is simple: It is much faster to encode from
the original record onto the data page(s), eliminating the need to
allocate, populate, copy, and release a temporary buffer.
And, frankly, the cost of a byte per full database page is not something to
lose sleep over.
The
Hi Vlad,
as I see it, in some situations (that really happen), packing into the small
area is padded with zeroes
(an uncompressed prefix with zero length),
and a new control char is added at the beginning of the next fragment (you will
lose 2 bytes).
The difference with the current compression is not so big, but with a better
one is
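(A worked illustration of that 2-byte loss, with made-up byte values; [n] stands for a literal control byte followed by n data bytes, [-n] for a repeat control byte followed by the repeated byte:)

    whole record packed in one piece:

        [ 5] A B C D E  [-4] X           -> 8 bytes

    the same record split so that only 5 bytes fit into the first area:

        fragment 1:  [ 3] A B C  [ 0]    -> 5 bytes ([ 0] is the zero-length padding)
        fragment 2:  [ 2] D E  [-4] X    -> 5 bytes

    10 bytes instead of 8: one byte for the padding control char and one for
    the extra control char that restarts the split literal run.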
What I meant is that the reaction to the event is at the client.
[]s
Carlos
http://www.firebirdnews.org
FireBase - http://www.FireBase.com.br
DS> 27.02.2015 15:08, Carlos H. Cantu wrote:
>> Firebird events need a listening client.
DS>No. It is not Oracle.
Title: Re: [Firebird-devel] Proposal of new feature: Event triggers
What I said before is: for that single example that I gave, I could achieve a similar result using the current events implementation, if it could be trusted 100%, but I would consider such a solution a workaround.
If a single cli
No, I don't see. What you want to do, and you agreed, could be done with
the existing event mechanism if you could figure out why sometimes it
appeared that events weren't being delivered. Instead, you're proposing a
whole new class of trigger that would require yet another interface, a
mechani
27.02.2015 15:08, Carlos H. Cantu wrote:
> Firebird events need a listening client.
No. It is not Oracle.
--
WBR, SD.
Jim, as I said before, I'm not proposing a substitute for the current Firebird events. It is a new feature. Of course, if the current events implementation is weak and/or buggy, it would be nice to have it improved or fixed, but t
27.02.2015 15:18, Slavomir Skopalik wrote:
> Hi,
> I was investigating more about record storage and I found this:
> If a record is going to be fragmented, then each part is compressed
> separately.
Not exactly so. The big record is prepared for compression as a whole, then
the tail of the record is packed and
Carlos, again, why not figure out why events aren't working at your site?
It is a mechanism designed from the beginning to do what you want,
e.g. notify other connections of a database state change.
What you are suggesting will introduce far more problems than it purports
to solve.
If there is is
Hi,
I was investigating more about record storage and I found this:
If a record is going to be fragmented, then each part is compressed
separately.
And when the record is materialized in RAM, all parts are read and decompressed
separately.
If the compressor cannot fit into the small space, then the rest of the space is padded
(ch
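(A sketch of what "each part is decompressed separately" amounts to, assuming a classic signed-control-byte RLE; Fragment, unpack and materialize are hypothetical stand-ins, not the real Firebird structures:)

#include <cstddef>
#include <vector>

// Hypothetical stand-in for a stored record fragment.
struct Fragment
{
    const unsigned char* packed;
    size_t packedLength;
};

// Decode one fragment: control byte c >= 0 means c literal bytes follow,
// c < 0 means the next byte is repeated -c times.
size_t unpack(const unsigned char* in, size_t inLength,
              unsigned char* out, size_t outSpace)
{
    size_t ip = 0, op = 0;
    while (ip < inLength && op < outSpace)
    {
        const signed char c = (signed char) in[ip++];
        if (c >= 0)
        {
            for (int i = 0; i < c && ip < inLength && op < outSpace; ++i)
                out[op++] = in[ip++];
        }
        else if (ip < inLength)
        {
            const unsigned char value = in[ip++];
            for (int i = 0; i < -c && op < outSpace; ++i)
                out[op++] = value;
        }
    }
    return op;
}

// Materialize the record: every fragment carries its own control stream,
// so each one is unpacked on its own and the pieces are concatenated.
void materialize(const std::vector<Fragment>& fragments,
                 unsigned char* record, size_t recordLength)
{
    size_t offset = 0;
    for (size_t i = 0; i < fragments.size() && offset < recordLength; ++i)
        offset += unpack(fragments[i].packed, fragments[i].packedLength,
                         record + offset, recordLength - offset);
}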
On 27/02/2015 10:10, Dimitry Sibiryakov wrote:
> 27.02.2015 14:03, Carlos H. Cantu wrote:
>> I have no idea about how context variables were implemented
>> internally, but do you think there is a chance to make them respect
>> the isolation of their transaction when retrieving the values?
>It is n
27.02.2015 14:03, Carlos H. Cantu wrote:
> I have no idea about how context variables were implemented
> internally, but do you think there is a chance to make them respect
> the isolation of their transaction when retrieving the values?
It is not that hard to do, but being out of transaction cont
AP> And this may play bad tricks from a stability POV - imagine having old
AP> values at the beginning of some complex request but new ones at the end.
Exactly. I have no idea about how context variables were implemented
internally, but do you think there is a chance to make them respect
the isolation
On 02/26/15 21:04, Dmitry Yemanov wrote:
>> But your point raised a doubt here: what is the "isolation" of a
>> USER_SESSION context variable regarding active snapshot transactions?
> Context variables are outside the transaction control, so the new values
> are available immediately.
>
And this