Hey Thomas,

Thanks for the great feedback! A few clarifications below.
 

> It should be noted that your async macro does in fact use the dispatcher 
> just like a normal go would, the only difference is that it will start 
> executing immediately in the current thread until the first "pause" instead 
> of dispatching. After the "pause" the dispatcher will run it though which 
> is not guaranteed to be the same thread.
>

I agree that the dispatcher will eventually be used, but the point is that 
when your code is deeply nested, dispatching only happens at the deepest 
level, where an actual go block is used. The dispatcher runs once there, 
instead of running for every single wrapping function call. The idea is for 
the codebase to look like this (spread across multiple functions):

(async ...
  (<! (async ...
        (<! (async ...
              (<! (async ...
                    (<! (async ...
                          (<! (go <long-running stuff, run in a different thread>)))))))))))

Here the dispatcher is only run once, instead of 6 times. All the "glue 
code" is run in the calling thread.
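For comparison, here is what the plain-go version of such nesting looks like (the names are illustrative, not from my actual code): every wrapper is its own go block, so each level goes through the dispatcher, whereas with async only the innermost go would.

```clojure
(require '[clojure.core.async :refer [go <! <!!]])

;; Every level is a go block, so every level is dispatched.
(defn level-1 [] (go :result))          ; the actual long-running work
(defn level-2 [] (go (<! (level-1))))   ; pure glue code, yet still dispatched
(defn level-3 [] (go (<! (level-2))))   ; more glue code, dispatched again

(<!! (level-3)) ;; => :result
```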
 

> Therefore I'm not actually sure a ThreadLocal will work. Did you test that 
> you get the actual correct result?
>

I do get the correct result, although your concern is valid. I think it 
works because the generated state machine calls ioc/return-chan in the 
current thread: if part of the computation has been dispatched to another 
thread, the result of that dispatched computation is brought back into the 
current thread through an actual channel before being fed to 
ioc/return-chan. I would love to know if there are situations where 
return-chan would be called directly from another thread, since I haven't 
observed that so far and I'm not familiar enough with the state machine 
internals.
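To make your concern concrete: a plain (non-inheritable) ThreadLocal is simply invisible from any other thread, so if return-chan ever did run on a dispatcher thread, the value would be lost. A quick REPL check:

```clojure
;; A ThreadLocal set on the current thread is not visible from another one.
(def tl (ThreadLocal.))
(.set tl :from-main)

(.get tl)           ;; => :from-main  (same thread)
@(future (.get tl)) ;; => nil         (different thread, no value)
```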
 

> Your benchmark is also not very useful, you are basically just measuring 
> the frameworks overhead which is tiny to what it provides. In actual 
> programs most of the time will not be spent in core.async internals, at 
> least not from what I have observed so far. go blocks are actually pretty 
> cheap.
>

Measuring the framework overhead was in fact my intention, but I agree it 
would be useful to know whether the core.async overhead can actually be 
significant in a real-world application: typically an IO-bound one that 
sends lots of requests to other servers and does only light transformation 
on the results.
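For reference, the kind of micro-benchmark I mean boils down to something like this: it deliberately measures pure framework overhead, since no real work happens inside the block.

```clojure
(require '[clojure.core.async :refer [go <!!]])

;; Spin up and drain many trivial go blocks; all the time spent here
;; is dispatcher/channel machinery, not application work.
(time
  (dotimes [_ 100000]
    (<!! (go 1))))
```

A more faithful benchmark would put a simulated IO wait inside the block, so the framework cost can be compared against realistic work.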
 

-- 
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
clojure+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en