>> All metadata objects moved into Attachment. Metadata synchronization is
>> guarded by the attachment's mutex now. Database::SyncGuard and company are
>> replaced by corresponding Attachment::XXX classes.
>>
>> To make ASTs work we need to release the attachment mutex sometimes. This
>> is a very important change after v2.5: in v2.5 the attachment mutex is
>> locked for the whole duration of an API call, and no other API call (except
>> the asynchronous fb_cancel_operation) could work with a "busy" attachment.
>> In v3 this rule no longer applies. So now we can run more than one API call
>> on the same attachment (of course not truly simultaneously). I'm not sure
>> it is safe, but I have not disabled it so far.
> 
> Are there other reasons (besides AST delivery and possible 
> per-attachment parallelism) to have checkouts?

    Yes. Database-level objects with internal synchronization also need to
release the attachment mutex while waiting for their own synchronization
object. Look at GlobalRWLock, for example.
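
    Roughly, the pattern looks like this (a simplified sketch, not the real
GlobalRWLock code; the function and parameter names here are invented for
illustration):

    // Release the attachment mutex before blocking on a database-level
    // synchronization object, so ASTs can still enter the attachment.
    void waitOnDatabaseObject(thread_db* tdbb, Firebird::Mutex& dbObjectMutex)
    {
        Attachment::Checkout cout(tdbb->getAttachment()); // unlock att mutex
        dbObjectMutex.enter();      // may block for a while
        // ... work with the database-level object ...
        dbObjectMutex.leave();
    }   // ~Checkout() re-acquires the attachment mutex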

>> To make asynchronous detach safe I introduced the att_use_count counter,
>> which is incremented each time an API call is entered and decremented on
>> exit. detach now marks the attachment as shut down and waits for
>> att_use_count == 0 before proceeding.
> 
> How is it done? Sleeping/timeouts inside the spin loop?

    Currently - yes. See JAttachment::freeEngineData (former jrd8_detach).
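
    In outline it works like this (a simplified sketch, not the exact engine
code; the flag name and sleep call are approximate):

    // every API entry point brackets its work with the counter
    att->att_use_count++;
    // ... execute the call ...
    att->att_use_count--;

    // JAttachment::freeEngineData, roughly:
    att->att_flags |= ATT_shutdown;     // no new calls may start
    while (att->att_use_count > 0)
    {
        Attachment::Checkout cout(att); // don't hold the mutex while waiting
        THD_sleep(10);                  // short sleep inside the spin loop
    }
    // no API call is inside the attachment anymore - safe to detach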

>> Also, it seems this counter makes the att_in_use member obsolete, as the
>> detach call should wait for att_use_count == 0 and the drop call should
>> return "object is in use" if att_use_count != 0.
> 
> Agreed.
> 
>> All ASTs related to attachment-level objects should take the attachment
>> mutex before accessing attachment internals. This is implemented but not
>> tested!
> 
> Are there any database-level ASTs known to implicitly access the 
> attachments? When should they lock the appropriate mutex?

    Database-level ASTs usually have no need to access attachment internals.
Therefore a NULL attachment can be passed into both the Attachment::SyncGuard
and Attachment::Checkout classes. Not perfect, I know. Better ideas are
appreciated.
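
    The guard tolerates that like this (sketch; in the real code the class is
nested in Attachment and the mutex member name may differ):

    class SyncGuard
    {
    public:
        explicit SyncGuard(Attachment* att) : m_att(att)
        {
            if (m_att)                  // database-level AST passes NULL
                m_att->att_mutex.enter();
        }

        ~SyncGuard()
        {
            if (m_att)
                m_att->att_mutex.leave();
        }

    private:
        Attachment* const m_att;
    };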

>> The transaction inventory pages cache (TIP cache) was reworked and is now
>> shared by all attachments.
> 
> What other layers (besides TPC, CCH and supposedly PIO) are kept in the 
> shared Database class?

    SharedCounter, the PageSpace manager, the backup manager, shadows,
monitoring, blob filters, generator pages, loaded modules, the External
Engines manager, probably something else... Some of them require additional
protection, so this is work in progress.
 
>> To avoid contention on the common dbb_pool, its usage was replaced by
>> att_pool where possible. To make this task slightly easier, I introduced
>> jrd_rel::rel_pool, which currently points to the attachment's pool. All of
>> a relation's "sub-objects" (such as formats, fields, etc.) are allocated
>> using rel_pool (it was dbb_pool before). When we return metadata objects
>> back to the Database, it will be easy to redirect rel_pool to dbb_pool in
>> one place in the code instead of making tens of small changes again.
> 
> Good idea. What about going even further and renaming it to meta_pool 
> and then switching procedures/functions/triggers to this pool as well?

    Should think about it ;)
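
    For illustration, the redirection idea above boils down to this (sketch;
the Format allocation is only an example, details simplified):

    // today rel_pool simply aliases the attachment's pool
    relation->rel_pool = attachment->att_pool;

    // every relation "sub-object" allocates from rel_pool, e.g.
    Format* format = FB_NEW(*relation->rel_pool) Format(*relation->rel_pool);

    // moving metadata back to Database later is then a one-line change:
    // relation->rel_pool = dbb->dbb_pool;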

>> In the configuration file there are two new settings:
>> a) SharedCache - a boolean value which rules the page cache mode
>> b) SharedDatabase - a boolean value which rules the database file open
>> mode.
> 
> Do we really need the SharedCache option in production? 

    It was implicitly introduced by Alex and I just added it to the config
file. I see no problem if we remove it - fewer options means less headache
for us and for users :)

> I thought it would be the only mode available. See below.
> 
>> - SharedCache = true, SharedDatabase = false (default mode)
>>      this is the traditional SuperServer mode when all attachments share
>>      the page cache and the database file is opened in exclusive mode (only
>>      one server process can work with the database)
> 
> OK.
> 
>> - SharedCache = false, SharedDatabase = true
>>      this is the Classic mode when each attachment has its own page cache
>>      and many server processes can work with the same database.
> 
> The Classic mode could be handled by the listener itself. 

    I mean both CS and SC here, as explained below :)

> As long as it 
> keeps launching one process per connection, there's no difference 
> between SharedCache being true or false, as there will always be only 
> one Database/Attachment pair per process.

    Sure
 
>> To run SuperClassic you should use the -m switch on the command line of
>> firebird.exe (on Windows) or run fb_smp_server (on POSIX; here I'm not sure
>> and Alex will correct me). Otherwise ClassicServer will run.
> 
> This is the only case when we seem needing SharedCache = false. But what 
> are the benefits of SuperClassic in v3.0?

    For end users - I don't know. For us, to easily debug the Classic mode -
definitely.
 
>> - SharedCache = true, SharedDatabase = true
>>      this is a completely new mode in which the database file can be opened
>>      by many server processes and each process can handle many attachments
>>      which will share a page cache (i.e. a per-process page cache).
> 
> OK.
> 
>> - SharedCache = false, SharedDatabase = false
>>      Looks like a single process with a single attachment will be allowed
>>      to work with the database with such settings. Probably you can find an
>>      application for it ;)
> 
> I cannot ;-) SharedCache = true (and possibly single-mode shutdown) does 
> the same trick.
> 
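    To summarize the combinations, a firebird.conf fragment could look like
this (illustrative):

    # SuperServer (default): shared page cache, exclusive file open
    SharedCache = true
    SharedDatabase = false

    # Classic / SuperClassic: per-process cache, shared database file
    #SharedCache = false
    #SharedDatabase = true

    # new mode: shared database file, per-process shared cache
    #SharedCache = true
    #SharedDatabase = true
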
>> One more change in configuration is that the CpuAffinityMask setting
>> changed its default value, which is 0 now. It allows the new SS to use all
>> available CPUs/cores by default.
> 
> Does it make sense to keep this option? I see what it could be used for, 
> but I'm not really sure anyone intentionally uses affinity in production.

    I always said that affinity should work even for Classic. Think about a
multi-core box at an internet provider, for example. It could run many
instances of Firebird, each bound to the paid number of cores. There could be
other usages, of course.
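
    E.g. an instance paid for two cores could get in its firebird.conf
(illustrative; the value is a bit mask of allowed CPUs):

    CpuAffinityMask = 3     # binary 11 -> run only on cores 0 and 1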

Regards,
Vlad
