On 11/12/13 12:20, Sven Van Caekenberghe wrote:
Hi Jan,
On 11 Dec 2013, at 12:49, Jan Vrany <[email protected]> wrote:
Hi,
By default, no concurrent access protection takes place, but optionally a
semaphore for mutual exclusion can be used. This slows down access.
cache useSemaphore.
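For context, the optional protection being described amounts to wrapping every cache access in one mutual-exclusion lock. A minimal sketch in plain Pharo, using an ordinary Dictionary as a stand-in for the cache's internals (this is illustrative, not the actual cache code):

| lock store |
lock := Semaphore forMutualExclusion.
store := Dictionary new.
"Every read and write goes through the same, non-reentrant lock."
lock critical: [ store at: #answer put: 42 ].
lock critical: [ store at: #answer ifAbsent: [ nil ] ].

Note that the semaphore is non-reentrant, which matters for the discussion below.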
There was enough "awesomes" already :-) so now some critics :-)
I like constructive feedback !
Wouldn't it be better to rename #useSemaphore to #beThreadSafe
or #beSynchronized?
Yes, specifying the goal/result is better than specifying the means. I think I
will go for #beThreadSafe (although threads in Pharo are called Processes ;-).
The last one reminds me of Java and we don’t have a ‘synchronised’ concept in
Pharo AFAIK.
Also, I would use a recursion lock (a monitor, if you like) rather than a plain mutex.
I’ve used both and they both work. But I must admit that in my mind the difference
is not very clear. A monitor can be re-entered while a semaphore cannot,
That's exactly the difference, subtle but important :-)
but I doubt this is necessary here.
Imagine you need Fibonacci numbers and want to cache them for speed.
How would one do that? An obvious solution would be:
fibCache := NeoCache ...
fibCache factory: [ :key |
    key <= 2 ifTrue: [ 1 ] ifFalse: [ (fibCache at: key - 1) + (fibCache at: key - 2) ] ].
If I understood the code correctly, `fibCache at: 10` would hang, right?
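The difference is easy to demonstrate outside the cache, in plain Pharo. A semaphore blocks even the process that already holds it, while a Monitor tracks its owning process and lets that process nest:

"Non-reentrant: the inner #critical: waits on a semaphore this process
 already holds, so it blocks forever (don't evaluate this in the UI process)."
| sem |
sem := Semaphore forMutualExclusion.
sem critical: [ sem critical: [ 'never reached' ] ].

"Re-entrant: a Monitor remembers which process owns it and allows
 nested #critical: calls from that same process."
| mon |
mon := Monitor new.
mon critical: [ mon critical: [ 'evaluates just fine' ] ].

If the cache guards #at: with a plain semaphore, the recursive factory above re-enters a lock it already holds, which is exactly the first snippet.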
Any other reasons to choose one over the other ? Speed ?
Speed-wise, it depends on the implementation. The cost of a recursion
lock can be reduced to a couple of machine instructions when
there's no contention (which is usually the case).
Actually, this would make an interesting use-case for NB...
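For what it's worth, the cheap re-entry comes down to an identity check on the owning process. A rough sketch of a recursion lock in plain Pharo, written as a #critical: method on a hypothetical class with 'owner' and 'semaphore' instance variables (not Pharo's actual Monitor code):

critical: aBlock
    "If the active process already owns the lock, run the block directly;
     otherwise acquire the underlying semaphore and record ownership
     for the duration of the block."
    | active |
    active := Processor activeProcess.
    owner == active ifTrue: [ ^ aBlock value ].
    ^ semaphore critical: [
        owner := active.
        [ aBlock value ] ensure: [ owner := nil ] ]

The re-entry path is just a pointer comparison; the remaining cost sits in the uncontended acquire inside the semaphore's #critical:.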
Sven
Best, Jan