I've done some testing now; the dbSync calls incur very little overhead
with regard to the telling/updating mechanism of objects in other
processes.

What I still don't understand, then, is the ability to lock only
certain files in the dbSync call, when the dbSync call does a global
block by itself. Doesn't that defeat the purpose of per-file locking?


On Sun, Jan 6, 2013 at 8:53 PM, Alexander Burger <a...@software-lab.de> wrote:

> On Sun, Jan 06, 2013 at 08:00:51PM +0700, Henrik Sarvell wrote:
> > OK so how would you deal with a situation where you have the need to
> > quickly increment people's balances like I mentioned previously but
> > at the same time you have another process that has to update a lot
> > of objects by fetching information from many others?
> >
> > This second process will take roughly one minute to complete from
> > start to finish and will not update +User in any way.
>
> I would do it the normal, "safe" way, i.e. with 'inc!>'. Note that the
> way you proposed doesn't have much less overhead, I think, because it
> still uses the 'upd' argument to 'commit', which triggers communication
> with other processes, and 'commit' itself, which does low-level
> locking of the DB.
>
>
> > If I have understood things correctly, simply doing dbSync -> work ->
> > commit in the second process won't work here, because it will block
> > the balance updates.
>
> Yes, but you can control this, depending on how many updates are done in
> the 'work' between dbSync and commit.
>
> We've discussed this in IRC, so for other readers, here is what I
> usually do in such cases:
>
>    (dbSync)
>    (while (..)
>       (... do one update step ...)
>       (at (0 . 1000) (commit 'upd) (dbSync)) )
>    (commit 'upd)
>
> The value of 1000 is an example; I would try something between 100
> and 10000.
>
> With that, after every 1000th update step other processes get a chance
> to grab the lock in the (dbSync) after the 'commit'.
>
>
> > Another option would be to do it in a loop and use put!>, which will
> > only initiate the sync at the time of each update, and which should
> > not block the balance updates for too long.
>
> Right. This would be optimal in terms of giving freedom to other
> processes, but it performs only a single change per 'put!>', and thus
> the whole update might take too long.
>
> The above sync at every 1000th step allows for a good compromise.
>
>
> > The question then is how much overhead this causes, in your
> > experience, when it comes to the balance updates. If significant, is
> > it possible to
>
> I would not call this "overhead". It is just that the "quick"
> operation of incrementing the balance may have to wait too long if the
> second process does too many changes in a single transaction.
>
> So the problem is not the 'inc!>'. It just sits and waits until it can
> do its job, and is then done quite quickly. It is the large update
> 'work' which may hold the DB for too long.
>
>
> > somehow solve the issue of these two processes creating collateral
> > damage to each other, so to speak?
>
> If you can isolate the balance (not like in your last example, where
> two processes incremented and decremented the balance at the same
> time), and make absolutely sure that only one process caches the
> object at a given time, you could take the risk and do the
> incrementing/decrementing without synchronization, with just (commit).
>
> One way might be to have a single process take care of that,
> communicating values to/from other processes with 'tell', so that no
> one else needs to access these objects. But that's more complicated
> than the straightforward way.
>
> ♪♫ Alex
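
To make the chunked loop above concrete, here is a fleshed-out sketch
(the '+Item' class, its 'nr' index and the 'cnt' property are made up
for illustration):

   (dbSync)                            # grab the global DB lock
   (for Obj (collect 'nr '+Item)       # iterate the objects to update
      (put> Obj 'cnt (inc (or (; Obj cnt) 0)))  # one update step
      (at (0 . 1000)                   # every 1000th step ...
         (commit 'upd)                 # ... commit, notify, release
         (dbSync) ) )                  # ... and re-acquire the lock
   (commit 'upd)                       # final commit for the rest

In the gap between the (commit 'upd) and the following (dbSync), a
waiting 'inc!>' in another process can grab the lock, so the balance
updates are delayed by at most one chunk of work.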
