Re: atoms, memoize, future-s and CAS

2014-12-20 Thread Andy L
Hi, Thanks for all the suggestions. They encouraged me to take a deep dive into the atom code, which turns out to be a simple wrapper over Java's java.util.concurrent.atomic.AtomicReference, with swap! essentially being a CAS spin loop. Knowing how it works under the hood makes it much easier to use ... Below piece (hopefully ...
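
A minimal sketch of the CAS retry loop described above, assuming a bare AtomicReference; my-swap! is an illustrative name, not the actual clojure.lang.Atom implementation (which additionally runs validators and notifies watches):

    (defn my-swap!
      "Sketch of swap!-style CAS: read, apply f, compareAndSet, retry on conflict."
      [^java.util.concurrent.atomic.AtomicReference aref f & args]
      (loop []
        (let [oldv (.get aref)
              newv (apply f oldv args)]
          (if (.compareAndSet aref oldv newv)
            newv
            (recur)))))

Because the loop may retry under contention, f can be invoked more than once, which is why functions passed to swap! should be free of side effects.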

Re: atoms, memoize, future-s and CAS

2014-12-08 Thread Michał Marczyk
On 8 December 2014 at 21:46, Fluid Dynamics wrote: > [...] Which means it's locking or bust. You just get to either do the locking yourself or delegate :) Sure, but isn't it nice when somebody else does your locking for you? :-) Incidentally, there is a trade-off here between lockless reads ...
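
A hypothetical illustration of that trade-off (cache, lookup-lockfree and lookup-locking are made-up names): with a plain atom, readers only deref and never block, but two threads that miss on the same key may both compute it; with locking, the computation runs at most once per key, but every lookup contends for the monitor.

    (def cache (atom {}))

    (defn lookup-lockfree [k f]
      (or (get @cache k)
          (let [v (f k)]              ; may run in several threads for the same k
            (swap! cache assoc k v)
            v)))

    (defn lookup-locking [k f]
      (locking cache
        (or (get @cache k)
            (let [v (f k)]            ; runs at most once per k
              (swap! cache assoc k v)
              v))))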

Re: atoms, memoize, future-s and CAS

2014-12-08 Thread Michał Marczyk
Oh, and as for how to use it here, you could, for example, say (.putIfAbsent concurrent-hash-map :foo (delay (foo))). Then the first thread to @(get concurrent-hash-map :foo (delay :not-found)) (or similar) would actually compute the value. With a map in an atom, you could swap! using a function ...
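
A self-contained sketch of the ConcurrentHashMap-plus-delay idiom suggested above (chm, expensive and get-or-compute are placeholder names, not part of the original message):

    (import 'java.util.concurrent.ConcurrentHashMap)

    (def ^ConcurrentHashMap chm (ConcurrentHashMap.))

    (defn get-or-compute [k expensive]
      ;; putIfAbsent is atomic: exactly one delay per key ends up in the map,
      ;; no matter how many threads race here.
      (.putIfAbsent chm k (delay (expensive k)))
      ;; deref forces the winning delay; a delay runs its body at most once.
      @(.get chm k))

Losing the putIfAbsent race only costs the creation of an unused delay; the expensive body is forced solely through the one delay that actually landed in the map.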

Re: atoms, memoize, future-s and CAS

2014-12-08 Thread Fluid Dynamics
On Monday, December 8, 2014 3:34:05 PM UTC-5, Michał Marczyk wrote: > On 8 December 2014 at 17:54, Andy L wrote: >>> But I'd personally just use a delay rather than "locking" for this purpose. >> It is not that I like locking at all. However I still fail to see how, in ...

Re: atoms, memoize, future-s and CAS

2014-12-08 Thread Michał Marczyk
On 8 December 2014 at 17:54, Andy L wrote: >> But I'd personally just use a delay rather than "locking" for this purpose. > It is not that I like locking at all. However I still fail to see how, in a multithreaded context, memoize/cache prevents executing a given function more than once ...
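
For reference, clojure.core/memoize is roughly the check-then-swap! shown below (memoize* is just a name to avoid shadowing); it shows why two threads that miss at the same time can both call f before either result is stored:

    (defn memoize* [f]                 ; essentially what clojure.core/memoize does
      (let [mem (atom {})]
        (fn [& args]
          (if-let [e (find @mem args)]
            (val e)
            (let [ret (apply f args)]  ; <- two racing threads can both reach this point
              (swap! mem assoc args ret)
              ret)))))

Nothing is lost or corrupted (the atom stays consistent), but the wrapped function itself can run more than once for the same arguments, which is exactly the behaviour under discussion here.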

Re: atoms, memoize, future-s and CAS

2014-12-08 Thread Andy L
> Most of the cache implementations in core.cache have no side-effects. They simply return a new cache rather than overwriting the old one. The memoize library places the cache in an atom, so it's guaranteed to change atomically. I tried to read the cache code (btw an excellent exercise ...
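
An illustrative sketch, not the actual core.memoize internals, of how a purely functional cache from clojure.core.cache can live in an atom (C, through and compute are made-up names): hit and miss return a new cache value, so swap! applies them atomically.

    (require '[clojure.core.cache :as cache])

    (def C (atom (cache/lru-cache-factory {} :threshold 32)))

    (defn through [k compute]
      (cache/lookup (swap! C (fn [c]
                               (if (cache/has? c k)
                                 (cache/hit c k)
                                 (cache/miss c k (compute k)))))
                    k))

Note that swap! may retry its function, so (compute k) can still run more than once under contention; that gap is the thread-safety concern raised elsewhere in this thread and the motivation for the delay-based variants.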

Re: atoms, memoize, future-s and CAS

2014-12-07 Thread James Reeves
On 7 December 2014 at 05:13, Andy L wrote: >> The SoftCache uses a ConcurrentHashMap, but that caching option isn't used in core.memoize. Are you building a custom memoizer? > WRT ConcurrentHashMap, it was an incorrect conclusion on my part. In any case, I fail to see "thread safety" ...

Re: atoms, memoize, future-s and CAS

2014-12-06 Thread Andy L
or even better (using the futures themselves as a "marker" in the atom):

    (defmacro map-future-swap! [a k f]
      `(locking ~a
         (when (not (contains? @~a ~k))
           (swap! ~a assoc ~k
                  (future (swap! ~a assoc ~k (~f ~k)))))))
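
A hypothetical usage of the macro above (results and slow-f are made-up names):

    (def results (atom {}))

    (defn slow-f [k]
      (Thread/sleep 1000)
      (str "value-for-" k))

    (map-future-swap! results :a slow-f)
    ;; Right after the call, (:a @results) is a pending future acting as the
    ;; "marker"; when slow-f finishes, the future's inner swap! replaces the
    ;; marker with the computed value. The locking around the contains? check
    ;; ensures at most one future is ever created per key.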

Re: atoms, memoize, future-s and CAS

2014-12-06 Thread Andy L
> The SoftCache uses a ConcurrentHashMap, but that caching option isn't used in core.memoize. Are you building a custom memoizer? WRT ConcurrentHashMap, it was an incorrect conclusion on my part. In any case, I fail to see "thread safety" in the cache implementation, but again I could be wrong ...

Re: atoms, memoize, future-s and CAS

2014-12-06 Thread James Reeves
On 7 December 2014 at 01:13, Andy L wrote: > Thanks for looking into that. This indeed would solve a "semantics" problem of memoize, as it returns a value now. However, it seems that clojure.core.memoize, or rather clojure.core.cache, which memoize is based on, is not thread safe. It uses C...

Re: atoms, memoize, future-s and CAS

2014-12-06 Thread Andy L
> (defn memoized [f] (comp deref (memoize (fn [& args] (delay (apply f args)))))) Thanks for looking into that. This indeed would solve a "semantics" problem of memoize, as it returns a value now. However, it seems that clojure.core.memoize, or rather clojure.core.cache, which memoize is based on, ...

Re: atoms, memoize, future-s and CAS

2014-12-06 Thread James Reeves
It sounds like you want a delay. Delays are guaranteed to execute their body only once, so we can combine a delay with an atom: (defn memoized [f] (comp deref (memoize (fn [& args] (delay (apply f args)))))) In theory that should produce a memoize that executes the function only once for each set of arguments.
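
A small sketch of how the delay-based memoized above behaves (slow-inc is a made-up example):

    (defn memoized [f]
      (comp deref (memoize (fn [& args] (delay (apply f args))))))

    (def slow-inc
      (memoized (fn [x]
                  (println "computing" x)
                  (Thread/sleep 500)
                  (inc x))))

    ;; In most runs "computing 1" prints once even under concurrent calls,
    ;; because all callers share one cached delay and a delay's body runs at
    ;; most once. As the rest of the thread points out, memoize's own
    ;; check-then-swap! is not strictly race-free, so two delays (and two
    ;; computations) are still possible if the very first calls collide.
    (dorun (pmap (fn [_] (slow-inc 1)) (range 8)))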

atoms, memoize, future-s and CAS

2014-12-06 Thread Andy L
Hi, Here is the situation. There is a function "f" retrieving some data from various sources (including reading files, a lot of I/O, e.g. map-reduce) that is expected by design to return the same result for a given input. Results of "f" invocations from futures running in parallel are stored in an atom-wrapped map ...
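
A minimal sketch of the setup described above, with made-up names (f*, results, fetch!): without extra coordination, two futures that miss on the same key may both run the expensive function, which is the problem the rest of the thread discusses.

    (defn f* [k]                       ; stand-in for the real I/O-heavy function
      (Thread/sleep 200)
      {:key k :data "..."})

    (def results (atom {}))

    (defn fetch! [k]
      (or (get @results k)
          (let [v (f* k)]              ; may run in more than one future for the same k
            (swap! results assoc k v)
            v)))

    (doseq [k [:a :b :a]]
      (future (fetch! k)))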