On 30/10/15 08:37, Howard Chu wrote:
> Kiran Ayyagari wrote:
>>
>>
>> On Thu, Oct 29, 2015 at 6:13 PM, Emmanuel Lécharny
>> <[email protected]> wrote:
>>
>>     Hi guys,
>>
>>     here are a few of my last night's insomnia thoughts (besides the
>> shower
>>     and the smallest room of my apartment, this is the only place
>> and time
>>     I have for such thoughts atm ;-) :
>>
>>     - transactions are now going to be explicit, i.e. you have to
>>     begin a transaction before any update and commit it afterwards,
>>     as in:
>>
>>              // Inject some data
>>              recordManager.beginTransaction();
>>              btree.insert( 1L, "1" );
>>              btree.insert( 4L, "4" );
>>              btree.insert( 2L, "2" );
>>              btree.insert( 3L, "3" );
>>              btree.insert( 5L, "5" );
>>              recordManager.commit();
>>
>> if a user forgets to call commit() and keeps inserting, we will run
>> out of
>> memory at some point,
>> so I am wondering what the best way is to prevent that from happening
>> without
>> maintaining a
>> log (a WAL or something similar)
>
> In the original LMDB implementation we simply had a max number of
> dirty pages in a txn. Once that was hit, further write requests simply
> failed with MDB_TXN_FULL.
>
> In current versions we still have the same max number of dirty pages
> at once, but when the limit is hit we spill a portion of pages
> immediately to disk so we can reuse their memory. As such, the current
> version allows txns of unlimited size, but still with a fixed RAM
> footprint.

This is OK if there is no relation between pages. The problem is that
when you use LMDB (or Mavibot) in an LDAP context, you have to update
many pages in a single (application) transaction. Letting the underlying
DB arbitrarily flush pages based on a size or time limit simply breaks
the application's transaction barriers.
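To make the trade-off concrete, here is a minimal sketch (all class and
method names are invented for illustration) of the fixed dirty-page cap
Howard describes: once the in-memory budget is exhausted, the write
fails rather than pages being flushed behind the application's back.

```java
// Hypothetical sketch of a fixed dirty-page budget per write
// transaction: exceeding it fails the write (MDB_TXN_FULL style)
// instead of spilling pages mid-transaction. Names are made up.
import java.util.HashMap;
import java.util.Map;

class TxnFullException extends RuntimeException {
    TxnFullException(String msg) { super(msg); }
}

class WriteTxn {
    private static final int MAX_DIRTY_PAGES = 4; // tiny, for illustration
    private final Map<Long, String> dirtyPages = new HashMap<>();

    void put(long pageId, String content) {
        if (!dirtyPages.containsKey(pageId)
                && dirtyPages.size() >= MAX_DIRTY_PAGES) {
            // Refuse the write rather than flushing pages the
            // application still considers part of its transaction.
            throw new TxnFullException("dirty-page limit reached");
        }
        dirtyPages.put(pageId, content);
    }

    int dirtyPageCount() { return dirtyPages.size(); }
}

public class DirtyPageCapDemo {
    public static void main(String[] args) {
        WriteTxn txn = new WriteTxn();
        for (long i = 1; i <= 4; i++) {
            txn.put(i, "page-" + i); // fits within the budget
        }
        boolean failed = false;
        try {
            txn.put(5L, "page-5"); // fifth distinct page exceeds the cap
        } catch (TxnFullException e) {
            failed = true;
        }
        System.out.println("dirty=" + txn.dirtyPageCount()
                + " failed=" + failed);
    }
}
```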

Sadly, there is no easy solution for that, at least in Java or in C.
C++ makes it easier, in that destructors are implicitly called, so you
can imagine the transaction being automatically committed or rolled
back when the destructor runs. Java finalizers might look like a
possible option, but they are not. Actually, never, ever use a
finalizer for that purpose.
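That said, Java's try-with-resources does give deterministic cleanup,
much like a C++ destructor. Here is a minimal sketch (the Txn class is
hypothetical, not Mavibot API) of a transaction that rolls back on
close() unless commit() was called first:

```java
// Hypothetical sketch: an AutoCloseable transaction whose close()
// rolls back any uncommitted work. try-with-resources guarantees
// close() runs when the block exits, unlike a finalizer.
class Txn implements AutoCloseable {
    private boolean committed = false;
    private String outcome = "open";

    void insert(long key, String value) {
        // ... would write into the B-tree here ...
    }

    void commit() {
        committed = true;
        outcome = "committed";
    }

    @Override
    public void close() {
        if (!committed) {
            outcome = "rolled back"; // automatic rollback on exit
        }
    }

    String outcome() { return outcome; }
}

public class TxnDemo {
    public static void main(String[] args) {
        Txn leaked = null;
        // Forgetting commit() no longer keeps the transaction open
        // forever: close() runs on block exit and rolls it back.
        try (Txn txn = new Txn()) {
            leaked = txn;
            txn.insert(1L, "1");
            // commit() deliberately forgotten
        }
        System.out.println(leaked.outcome());
    }
}
```

This only helps when the transaction's lifetime matches a lexical
scope, of course; it does not solve the general "user forgot to
commit" problem across threads or long-lived handles.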

Anyway, yeah, this is a big issue, and mitigating it by fixing a limit
is not necessarily a good idea in every context. We had this issue
with the WAL on top of JDBM: suddenly the server would freeze because
the WAL was full, and it was a nightmare to debug because we could not
find out where we had forgotten to commit or roll back (OTOH, that was
our fault; we should have been more careful in our implementation on
top of JDBM).

We can think about another solution, though: instead of having to
explicitly start a transaction and commit it, we can imagine a backend
to which we pass the full list of operations to apply, leaving it up to
the backend to play those operations and commit (or roll back) them
all. This reminds me of 'continuations', where we computed the code to
be executed before passing it to the part in charge of its execution
(that was back in 1988, when I was programming in LISP, where code is
data).
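A rough sketch of that idea, with invented names (this is not an
existing Mavibot interface): the caller builds a batch of deferred
operations, and the backend owns begin, replay, and commit/rollback,
so a forgotten commit() is impossible by construction.

```java
// Hypothetical sketch of the "pass the full list of operations"
// backend: operations are data (lambdas over the store), and the
// backend applies them atomically, rolling back to a snapshot on
// failure. All names are invented for illustration.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Consumer;

public class BatchBackendDemo {
    static class Backend {
        private final Map<Long, String> store = new TreeMap<>();

        // Either every operation in the batch is applied, or none is.
        boolean apply(List<Consumer<Map<Long, String>>> ops) {
            Map<Long, String> snapshot = new TreeMap<>(store); // "begin"
            try {
                for (Consumer<Map<Long, String>> op : ops) {
                    op.accept(store);
                }
                return true;                                    // "commit"
            } catch (RuntimeException e) {
                store.clear();
                store.putAll(snapshot);                         // "rollback"
                return false;
            }
        }

        String get(long key) { return store.get(key); }
        int size() { return store.size(); }
    }

    public static void main(String[] args) {
        Backend backend = new Backend();
        List<Consumer<Map<Long, String>>> batch = new ArrayList<>();
        batch.add(s -> s.put(1L, "1"));
        batch.add(s -> s.put(2L, "2"));
        boolean ok = backend.apply(batch); // backend commits for us
        System.out.println(ok + " " + backend.size());
    }
}
```

The snapshot-based rollback here is only a toy stand-in for a real
copy-on-write or MVCC mechanism, but it shows the shape of the API:
transaction boundaries become implicit in the batch.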

