Hello, Alex!

On 09/22/2014 08:19 AM, Alex Peshkoff wrote:
>> Out of curiosity, what is the call sequence that leads to the memory
>> manager load? AFAIU, this is not what should normally happen when
>> processing records. Maybe post_verb / verb_cleanup?

The original test I stumbled upon was an artificial unit test for a piece of code. It called VIO_record a lot, much like the savepoints code in Firebird 2.5. Since the new GC code is based on atomics, you need to put some pressure on certain execution paths to see if there is a problem.

Speaking of memory management, it was very easy to put a lot of pressure on the memory manager in Firebird 2.5 using savepoints, but since undo has migrated to the temporary tablespace, memory pressure has been reduced considerably.

Memory hot spots now are (1) [update->delete] savepoint code and (2) temporary blobs handling. There could be other spots; we have not tested Firebird 3 well enough yet.

The temporary blobs problem is even easier to reproduce (I used the default 4 KB pages for the test):
===
execute block as
declare variable B blob;
declare variable i integer = 0;
begin
   while(i <= 15000000) do
   begin
     b = i;
     i = i + 1;
   end
end;
===

This query takes 92 seconds with the memory manager of Firebird 2.5 (and scales almost linearly with the number of iterations) and 219 seconds with the new memory manager (and demonstrates O(N^2) performance).

If I understand correctly, this test stresses a different code path from the previous test (small-block vs. large-block allocation).

Sometimes you need to convert a large number of strings to BLOBs in ETL or schema update procedures, so this kind of performance is not acceptable for us.


>>> Dear Firebird engineers, why did you replace an algorithm which has 
>>> O(log(n)) performance with an
>>> algorithm that has O(n) performance in such a performance-critical part of 
>>> the engine?
>> Maybe because you're speaking just about one case while there are other
>> cases when the new memory manager was proven to be more efficient? I
>> hope Alex will jump in with more details.
> Current memory manager was ported from Vulcan a few years ago, at the
> most beginning of FB3 project. Certainly the first thing to do was
> performance comparison. I've tries on fbtcs, backup/restore of some real
> life databases, tpc/c and something else. Unfortunately I did not save
> results for all tests and I do not remember them now - too many time has
> gone, but speedup for all tests was between 5 and 10 percents, and
> certainly none of them showed worse performance.
>
> Certainly I knew about O(n) performance issues, but that time
> performance growth appeared enough for me, and I've decided to return to
> that issue later. Looks like time for it came.

Firebird's new memory manager is extremely fast in simple cases, but its worst-case performance and scalability are not very good. As a rule of thumb, once a memory pool uses more than 2 GB of RAM, the server spends most of its time in the memory manager.

The older memory manager was designed with reliability in mind, so as never to have pathological performance cases. The 'global best-fit' strategy is the most costly strategy for avoiding fragmentation, and it requires a lot of bookkeeping, but it is also the most reliable.

>>> Last statement uses 3.5 Gb of memory in small blocks.
>> Do you mean the undo log here? vct_records bitmap of 3.5GB?
No, the query uses the vct_undo tree in this case.

> And looking at this list I start to wonder - may be it's really better to 
> port tiny blocks 
> allocation into 2.5 memory manager? 

No, please don't do this. If you do, you would compromise the 'global best fit' fragmentation strategy. A hybrid model is a no-go: either you design for speed or for reliability. When you try to mix these models, you get neither.

The best way to speed up small-block allocation in the Firebird 2.5 allocation algorithm, IMO, is to replace the BTree with a custom array. The number of elements in the freelist BTree is limited, and a custom structure would speed up small-block allocation by about 50% without compromising reliability or scalability. I can implement this change, if you ask me, after I finish testing the GC changes.

I am not too fond of the third-party allocator idea, because memory allocation issues often become a bottleneck, and we resort to studying the production system with OProfile. When you measure a server's RAM in TB, interesting issues do pop up. :-)

Seeing the profiling results, sometimes you need to align your allocation code with the allocation code of the kernel (yes, we were burned by hugepages issues too, and some other issues as well). This would be quite hard if we did not fully understand how Firebird's allocator works.

>>> Good thing is that changing or replacing memory manager is very simple task 
>>> for existing code base.
>> I'd rather prefer the memory manager being replacable / pluggable at
>> runtime.
> Quite possible, very useful, requires low programming efforts.

I do not think this is necessary or useful. Firebird's idea is that you do not have to turn many knobs to get good performance. By using a non-scalable allocator by default, you would compromise this idea.

If you ask me, a version that is 5% slower in simple cases but scales well is always better, because small users don't care about the extra 5%, while large users will quickly face scalability problems.

But on the other hand, a non-scalable allocator in vanilla Firebird is probably good for Red Soft's business. :-)
So I am happy either way.

Thank you!

Best Regards,
Nikolay Samofatov


Firebird-Devel mailing list, web interface at https://lists.sourceforge.net/lists/listinfo/firebird-devel
