I personally think Thomas's is best if load may vary, as it is more
predictable and straightforward to understand. If we're talking about lines
of code, here's a shortened version that I don't feel sacrifices readability
(typed on a phone so please excuse typos...):
(let [exec (Executors
Yeah Dan, Thomas's code is an awesome example for anyone looking to solve a
similar problem with variable load. In my case this is a predictable batch
process and I know the load exactly :-) I ended up implementing the futures
approach for the reasons you listed, as it was our only use of
Sounds like a job for a future. Something like:
(->> job-list
(partition-in-sublists 4)
(map #(future (do-job-on-sublist %)))
(mapv deref))
This is untested and written on a phone, so might not even be syntactically
correct, but the future calls will create new threads to execute the do-job
pmap isn't an option, as the processes kicked off could affect other systems'
load if we can't control the level of parallelization. Futures seem like
they'd work quite well (the return value of the jobs is nil; it's a doseq).
I might rewrite it with futures at some point. Although it really just
core.async is more about coordination and communication than about doing
things in parallel; that's just a pleasant side effect.
In your case you don't need anything core.async provides. You could either
use reducers or java.util.concurrent.ExecutorService.
(let [exec
Uh, forgot to .shutdown the exec. Proper cleanup is important, otherwise the
threads might hang around a bit.
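The ExecutorService snippets above are cut off by the archive, so here is a
minimal sketch of what that approach might look like, with the .shutdown in a
finally block so the pool's threads don't keep the JVM alive. The names
run-jobs-in-pool, job-list, and do-job-on-sublist are hypothetical stand-ins,
not the original code:

```clojure
(import '(java.util.concurrent Executors ExecutorService))

(defn run-jobs-in-pool
  "Runs do-job-on-sublist over num-workers partitions of job-list on a
  fixed-size thread pool, blocking until every job has finished."
  [num-workers do-job-on-sublist job-list]
  (let [^ExecutorService exec (Executors/newFixedThreadPool num-workers)
        per-worker (long (Math/ceil (/ (count job-list) (double num-workers))))
        sublists   (partition-all per-worker job-list)
        ;; a Clojure fn implements Callable, so invokeAll accepts them directly
        tasks      (mapv (fn [sublist] #(do-job-on-sublist sublist)) sublists)]
    (try
      ;; invokeAll blocks until all tasks complete; deref each returned
      ;; Future to surface any exception thrown inside a worker
      (run! deref (.invokeAll exec tasks))
      (finally
        ;; always shut the pool down, or its non-daemon threads linger
        (.shutdown exec)))))
```

A fixed-size pool gives the same "at most N things at once" guarantee as
partitioning into N futures, but keeps all workers busy even when the
sublists take uneven amounts of time.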
On Wednesday, September 17, 2014 1:26:00 PM UTC+2, Thomas Heller wrote:
core.async is more about coordination and communication than about doing
things in parallel; that's just a
We don't have streams of data here; the long-running tasks have
side-effects. I would
prefer to avoid adding another whole framework just to run a few
long-running jobs in p//.
I guess I should show you some code, so you can see how simple this is.
I'll copy-and-paste some code that I
Thanks for that Larry but I think this is a bit of overkill for my
scenario. The code I pasted is almost verbatim what we have in our
production codebase, so the ability to queue new jobs etc is really not
needed. Cheers though.
On Thursday, September 18, 2014 9:38:47 AM UTC+10, larry google
Sounds like a job for a future.
If he knows the workload, and he knows how many threads will be spun up,
and he knows that the number will be reasonably small, then (future) is a
good bet. But if there is any risk of large numbers of threads being spun
up, then he should avoid calling
Thanks for that Larry but I think this is a bit of overkill for my
scenario.
If I'm counting correctly, your original example has 10 lines of code, and
my example has 11 lines of code (minus the try/catch and the closure and
the namespace declaration). So these 2 solutions are the same
Larry, your solution includes the cognitive overhead of another entire
library and process model. future is part of core, and as I realised when
Gary posted, the doalls were unnecessary anyway.
On Thursday, September 18, 2014 3:26:36 PM UTC+10, larry google groups
wrote:
Thanks for that
This does not look correct to me. Perhaps someone else has more insight
into this. I am suspicious about 2 things:
1.) your use of doall
2.) your use of (thread)
It looks to me like you are trying to hack together a kind of pipeline or
channel. Clojure has a wealth of libraries that can
We don't have streams of data here; the long-running tasks have
side-effects. I would prefer to avoid adding another whole framework just
to run a few long-running jobs in p//.
I have a list of jobs to do. I'm partitioning that list up into 4 sublists
to be worked through by 4 p// workers, I
Is the kinda-ugly constant (doall ...) usage a sign that I'm doing something
silly?
(let [num-workers 4
widgets-per-worker (inc (int (/ (count widgets) num-workers)))
bucketed-widgets (partition-all widgets-per-worker widgets)
workers (doall (map (fn [widgets]
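The snippet above is truncated by the archive. A completed sketch of the same
partition-then-future pattern, under the assumption that each worker runs a
side-effecting doseq over its bucket; process-widget is a hypothetical
stand-in for the real per-widget job:

```clojure
(defn process-in-parallel
  "Splits widgets across a fixed number of futures and blocks until all
  workers have finished. Returns nil; the work is side-effecting."
  [process-widget widgets]
  (let [num-workers        4
        widgets-per-worker (inc (int (/ (count widgets) num-workers)))
        bucketed-widgets   (partition-all widgets-per-worker widgets)
        ;; doall forces the lazy map so every future starts immediately,
        ;; rather than one at a time as the sequence is consumed
        workers            (doall (map (fn [widgets]
                                         (future
                                           (doseq [w widgets]
                                             (process-widget w))))
                                       bucketed-widgets))]
    ;; deref each future so the caller waits for every worker to finish
    (doseq [w workers] @w)))
```

Note that only the doall around the map is needed; derefing via doseq already
realises the sequence, which is presumably why the extra doalls in the
original were unnecessary.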