On 9/3/2013 8:02 AM, Alex Peshkoff wrote:
> On 09/03/13 13:21, Dmitry Yemanov wrote:
>> 03.09.2013 11:13, Alex Peshkoff wrote:
>>
>>> That's definitely a candidate for next ODS. But I see one problem -
>>> currently all (or at least most of all) record buffers are allocated at
>>> prepare time. With variable record length this strategy requires change,
>>> and that change does not look trivial at the first glance.
>> Record buffers are easily reallocated (extended) at runtime, this is how
>> we deal with different record formats. All the code is already in place
>> and the prepare-time allocation is just targeted at the "most commonly
>> used" scenario. The problem is that we never shrink record buffers. This
>> works fast, but the memory is used ineffectively. If we start to
>> reallocate them both ways at runtime, it will cost us extra CPU cycles.
>>
> Doesn't it seem that what you describe is the classic and hardest problem
> of dynamic memory allocation? :-)
> We can use memory fast or efficiently; combining the two is always a compromise.
>
> To my mind, what we have today and what is suggested are very different
> from a memory consumption POV.
>
> Currently a new record format may certainly be narrower than the old one,
> and in that case we do waste memory. But that means dropping fields from
> the table or shrinking string fields, which is not a very common case;
> typically we add fields or expand string fields. Moreover, changing the
> format is not a typical operation for production systems either.
>
> If we start to use unlimited strings, the case where a buffer once used
> for a very long record keeps occupying a lot of memory while later serving
> only small records becomes much more likely. Certainly, there may be
> solutions that find the best compromise here, but any of them will cost
> us CPU. I don't want to say the suggestion is bad, but it should be
> thought through well before doing it.

An idea I've toyed with is to automatically promote any large string or 
large varbinary to a blob, i.e. stored as a separate object.  The original 
motivation for the blob type was to be able to handle data objects 
larger than physical memory (I'm showing my age here).  Now the primary 
benefit is the ability to update a record without slopping unchanged 
blobs around (this, incidentally, is why blob handling in MySQL sucks so 
badly: blobs are simply tacked onto the end of the record).  This 
would probably mean the demise of the blob as a distinct type, but I can 
live with that.



Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel