Hi,
Thanks for all the suggestions. They encouraged me to take a deep dive into
atom's code, which turns out to be a simple wrapper over Java's
java.util.concurrent.atomic.AtomicReference and is essentially a spinlock.
Knowing how it works under the hood makes it so much easier to use ...
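(As a sketch of the spinlock behaviour being described: swap! reads the
current value, applies the function, and retries if compareAndSet fails
because another thread got there first. Illustrative only, not the actual
clojure.lang.Atom source:)

(import 'java.util.concurrent.atomic.AtomicReference)

(defn swap-sketch! [^AtomicReference aref f]
  (loop []
    (let [oldv (.get aref)
          newv (f oldv)]
      (if (.compareAndSet aref oldv newv)
        newv          ;; we won the race; return the new value, like swap! does
        (recur)))))   ;; lost the race; spin and try again with the fresh value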
Below piece (hopefully [...]
On 8 December 2014 at 21:46, Fluid Dynamics wrote:
> [...]
> Which means it's locking or bust. You just get to either do the locking
> yourself or delegate :)
Sure, but isn't it nice when somebody else does your locking for you? :-)
Incidentally, there is a trade-off here between lockless reads [...]
Oh, and as for how to use it here, you could for example say

(.putIfAbsent concurrent-hash-map :foo (delay (foo)))

Then the first thread to @(get concurrent-hash-map :foo (delay :not-found))
(or similar) would actually compute the value.
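(Spelled out as a self-contained sketch, with compute-foo standing in as a
hypothetical name for the expensive call:)

(import 'java.util.concurrent.ConcurrentHashMap)

(def ^ConcurrentHashMap chm (ConcurrentHashMap.))

(defn compute-foo []              ;; hypothetical expensive function
  (Thread/sleep 1000)
  :expensive-result)

(defn cached-foo []
  ;; every caller may allocate a throwaway delay, but putIfAbsent admits
  ;; only the first one; its body runs at most once, on the first deref
  (.putIfAbsent chm :foo (delay (compute-foo)))
  @(.get chm :foo))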
With a map in an atom, you could swap! using a function [...]
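(A guess at how that sentence continues, as a sketch: swap! with a pure
update function that adds the entry only when the key is absent, so the
check and the update happen in one atomic step. Names are illustrative:)

(def cache (atom {}))

(defn ensure-delay! [k f]
  ;; the update function is pure: if the key exists, return the map
  ;; unchanged; swap! may retry it, which is safe
  (swap! cache (fn [m]
                 (if (contains? m k)
                   m
                   (assoc m k (delay (f k))))))
  @(get @cache k))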
On Monday, December 8, 2014 3:34:05 PM UTC-5, Michał Marczyk wrote:
> [...]
On 8 December 2014 at 17:54, Andy L wrote:
>> But I'd personally just use a delay rather than "locking" for this
>> purpose.
>
> It is not that I like locking at all. However, I still fail to see how, in
> a multithreaded context, memoize/cache prevents executing a given function
> more than once.
>
> Most of the cache implementations in core.cache have no side effects. They
> simply return a new cache rather than overwriting the old one. The memoize
> library places the cache in an atom, so it's guaranteed to change
> atomically.

I tried to read the cache code (btw an excellent exercise [...]
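(For what it's worth, a minimal sketch of the race in question; names are
illustrative, but clojure.core/memoize itself has essentially this shape:)

(def naive-cache (atom {}))

(defn naive-memo [f]
  (fn [& args]
    (if-let [e (find @naive-cache args)]
      (val e)
      (let [v (apply f args)]            ;; <- two threads can both get here
        (swap! naive-cache assoc args v) ;; each swap! is atomic ...
        v))))                            ;; ... but f has already run twice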
or even better (using futures themselves as a "marker" in the atom):

(defmacro map-future-swap! [a k f]
  `(locking ~a
     (when-not (contains? @~a ~k)
       (swap! ~a assoc ~k
              (future (swap! ~a assoc ~k (~f ~k)))))))
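(Usage sketch: the value under the key is briefly the future itself, then
gets replaced by the result once the future's body finishes:)

(def results (atom {}))

(map-future-swap! results :answer (fn [k] (Thread/sleep 100) 42))
;; immediately afterwards (get @results :answer) is a pending future;
;; ~100 ms later it is 42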
> The SoftCache uses a ConcurrentHashMap, but that caching option isn't used
> in core.memoize. Are you building a custom memoizer?

WRT ConcurrentHashMap, it was an incorrect conclusion on my part. In any
case, I fail to see "thread safety" in the cache implementation, but again
I could be wrong.
On 7 December 2014 at 01:13, Andy L wrote:
>
> Thanks for looking into that. This indeed would solve a "semantics"
> problem of memoize, as it returns a value now. However, it seems that
> clojure.core.memoize, or rather clojure.core.cache, which memoize is
> based on, is not thread safe.
>
> It uses C[...]
It sounds like you want a delay. Delays are guaranteed to execute their
body only once, so we can combine a delay with an atom:

(defn memoized [f]
  (comp deref (memoize (fn [& args] (delay (apply f args))))))

In theory that should produce a memoize that executes the function only
once for each set of arguments.
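(Usage sketch, with a hypothetical slow function:)

(def slow-square
  (memoized (fn [x] (Thread/sleep 1000) (* x x))))

(slow-square 3)  ;; pays the one-second cost while the delay is forced
(slow-square 3)  ;; => 9 immediately; the cached delay is already realized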
Hi,
Here is the situation. There is a function "f" retrieving some data from
various sources (including reading files, a lot of IO, e.g. map-reduce),
expected by design to return the same result for a given input. Results of
"f" invocations from futures running in parallel are stored in an atom
wrapped [...]