Re: [ANN] Concurrently - A library for making concurrent process-pipeline backed by core.async

2021-09-13 Thread Christopher Small
Cool project; Thanks for working on and sharing this. Worth mentioning that Christian Weilbach built a thing called superv (based on the supervisor pattern in Erlang) which solves some similar problems using macros with some of the other core.async api, but I don't think implemented a version

Re: [ANN] Concurrently - A library for making concurrent process-pipeline backed by core.async

2021-09-12 Thread 'Tsutomu YANO' via Clojure
(Sorry, I sent a response before writing the text fully. This is a repost.) The largest difference is safe exception handling and easy per-request result handling. 'concurrently' is built on top of pipeline/pipeline-blocking of core.async. It has functions like `concurrent-process
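For readers unfamiliar with the machinery the library builds on, here is a minimal sketch of core.async's `pipeline` itself (this is not the 'concurrently' API; channel names are illustrative):

```clojure
(require '[clojure.core.async :as async :refer [chan pipeline >!! <!! close!]])

;; A pipeline with 4 parallel workers applying a transducer from `in` to `out`.
;; pipeline preserves input order, which per-request result handling relies on.
(def in  (chan 10))
(def out (chan 10))
(pipeline 4 out (map inc) in)

(doseq [i (range 5)] (>!! in i))
(close! in)

(def results (vec (repeatedly 5 #(<!! out)))) ;; => [1 2 3 4 5]
```

`pipeline-blocking` has the same shape but runs its work on dedicated threads, making it suitable for blocking operations.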

Re: [ANN] Concurrently - A library for making concurrent process-pipeline backed by core.async

2021-09-12 Thread 'Tsutomu YANO' via Clojure
The largest difference is safe exception handling and per-thread 'concurrently' is built on top of pipeline/pipeline-blocking of core.async. It has functions like `concurrent-process` and `concurrent-process-blocking` that depend on `pipeline` and `pipeline-blocking`, so you can pass input

Re: [ANN] Concurrently - A library for making concurrent process-pipeline backed by core.async

2021-09-12 Thread Rangel
s, > We publish our library for concurrent processing of data with core.async, named 'concurrently': https://github.com/uzabase/concurrently > With 'concurrently', programmers can create shared process-pipelines backed by core.async and can share the pipeline

[ANN] Concurrently - A library for making concurrent process-pipeline backed by core.async

2021-09-12 Thread 'Tsutomu YANO' via Clojure
Hi clojurians, We published our library for concurrent processing of data with core.async, named 'concurrently': https://github.com/uzabase/concurrently With 'concurrently', programmers can create shared process-pipelines backed by core.async an

cljctools/peernode: an example core.async program exposing js-ipfs daemon's pubsub over rsocket (written in clojurescript, runs on nodejs)

2020-11-21 Thread Sergei Udris
https://github.com/cljctools/peernode Sharing as an example/reference of using core.async and rsocket. Runs on nodejs. Details and links to IPFS and RSocket in readme. Reddit post: https://www.reddit.com/r/Clojure/comments/jy7jtl/cljctoolspeernode_an_example_coreasync_program/ -- You

Re: Integrating `core.async` with `httpcore5-h2`

2020-11-02 Thread bartuka
Cool! Thanks for sharing, I was looking for a simple web-server prototype in order to study its internals. -- You received this message because you are subscribed to the Google Groups "Clojure" group. To post to this group, send email to clojure@googlegroups.com Note that posts from new members

Integrating `core.async` with `httpcore5-h2`

2020-10-31 Thread Dimitrios Piliouras
, and in particular the async-server aspect of it, as I wanted to see whether it would integrate cleanly/nicely with core.async channels. The ring model of async handlers taking 3 arguments, albeit necessary, is hard to work with.  Not only does it feel kind of magic - it is actually non-trivial

Re: core.async buffers alternative backend

2020-09-10 Thread Alex Miller
I don't actually remember now, but it's possible that when core.async was created we were still trying to accommodate an older version of Java before some of those existed. That's not an issue now as we only need to support Java 1.8+. So, I don't know of any reason these wouldn't be an option

core.async buffers alternative backend

2020-09-10 Thread dimitris
Hi folks, `LinkedList` is used as the underlying data-structure for all core.async buffers. However, looking closer reveals that what is really needed is a `Deque` (for its .addFirst/.removeLast methods). So naturally then it begs the question - why not `ArrayDeque` [1]? It should offer
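The claim is easy to check from the REPL: both classes implement `java.util.Deque`, so the two operations core.async buffers actually need behave identically (a quick interop sketch, illustrative only):

```clojure
(import '(java.util ArrayDeque))

;; core.async buffers only need Deque's addFirst/removeLast (FIFO when paired),
;; which ArrayDeque supports without LinkedList's per-node allocation.
(def d (ArrayDeque.))
(.addFirst d 1)
(.addFirst d 2)
(def oldest (.removeLast d)) ;; => 1, the first value added
```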

Re: core.async: Unbound channels

2019-07-11 Thread Ernesto Garcia
Thanks Alex! Correct, the channel implementation takes care that "transduced" channels always pass elements through the transducer and the buffer. Also, a FixedBuffer allows running out of limit for those cases, see this example with a FixedBuffer of size 1 making space for 4 elements: (def c
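The truncated example above can be reproduced with a self-contained sketch (an expanding transducer temporarily over-filling a FixedBuffer of size 1):

```clojure
(require '[clojure.core.async :as async :refer [chan >!! <!!]])

;; Buffer size 1, but one put of a 4-element collection
;; expands into 4 buffered items via the mapcat transducer.
(def c (chan 1 (mapcat identity)))
(>!! c [1 2 3 4]) ;; succeeds: all transducer outputs land in the buffer
(def x (<!! c))   ;; => 1; the other three values remain buffered
```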

Re: core.async: Unbound channels

2019-07-08 Thread Alex Miller
Expanding transducers (like mapcat) can produce multiple output values per input value, and those have to have someplace to go.

Re: core.async: Unbound channels

2019-07-08 Thread Ernesto Garcia
I see. Bufferless channels are meant to be used within the core.async threading architecture, where there will be a limited number of blocked puts and takes. At the boundaries, channels with dropping or sliding windows can be used for limiting work. So, my original question actually turns
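The dropping and sliding windows mentioned above are built in; a short sketch of both boundary strategies:

```clojure
(require '[clojure.core.async :as async
           :refer [chan >!! <!! dropping-buffer sliding-buffer]])

;; dropping-buffer: puts beyond capacity are silently dropped; put never blocks
(def dc (chan (dropping-buffer 2)))
(doseq [i [1 2 3]] (>!! dc i))
(def dropped [(<!! dc) (<!! dc)]) ;; => [1 2]  (3 was dropped)

;; sliding-buffer: puts beyond capacity evict the oldest value
(def sc (chan (sliding-buffer 2)))
(doseq [i [1 2 3]] (>!! sc i))
(def slid [(<!! sc) (<!! sc)])    ;; => [2 3]  (1 slid out)
```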

Re: core.async: Unbound channels

2019-07-06 Thread Matching Socks
"Effective" is in the eye of the beholder. The 1024 limit helps surface bugs wherein more than a thousand threads are blocked for lack of a certain channel's buffer space. But the 1024 limit does not pertain if 1 thread would like to do thousands of puts for which there is no buffer space.

Re: core.async: Unbound channels

2019-07-05 Thread Ernesto Garcia
On Thursday, July 4, 2019 at 4:24:33 PM UTC+2, Matching Socks wrote: > Ernesto, you may be interested in the informative response to this enhancement request, https://clojure.atlassian.net/browse/ASYNC-23

Re: core.async: Unbound channels

2019-07-04 Thread Matching Socks
Ernesto, you may be interested in the informative response to this enhancement request, https://clojure.atlassian.net/browse/ASYNC-23, "Support channel buffers of unlimited size". Anyway, if you do not want to think very hard about buffer size, you can specify a size of 1. It does not limit

Re: core.async: Unbound channels

2019-07-04 Thread Ernesto Garcia
Thanks for your response, it is important to know. (Sorry for my lexical typo: *unbounded*. I didn't realize it derives from the verb *bound*, not *bind*!) My question on channel boundaries still holds though. Why the enforcement of boundaries *always*? On Wednesday, July 3, 2019 at 5:16:31

Re: core.async: Unbound channels

2019-07-03 Thread Ghadi Shayban
(chan) is not a channel with an unbounded buffer. It is a channel with *no* buffer and needs to rendezvous putters and takers 1-to-1. (Additionally it will throw an exception if more than 1024 takers or putters are enqueued waiting) On Wednesday, July 3, 2019 at 7:14:46 AM UTC-4, Ernesto
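Both behaviors described above are easy to observe; a minimal sketch of the rendezvous and the 1024 pending-put limit:

```clojure
(require '[clojure.core.async :as async :refer [chan put!]])

(def c (chan)) ;; unbuffered: every put must rendezvous with a take
(dotimes [_ 1024] (put! c :x)) ;; fills the pending-puts queue to its limit
(def overflow
  (try (put! c :x) :ok
       ;; the 1025th pending put trips the channel's assertion
       (catch AssertionError _ :rejected))) ;; => :rejected
```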

core.async: Unbound channels

2019-07-03 Thread Ernesto Garcia
You can create an unbound channel with (chan), but not if you use a transducer: (chan nil (filter odd?)) will raise an error that no buffer is provided. Why is this the case? Why the enforcement of all channels to be bound? In a program, there will be channels that propagate to other channels,

Re: [ANN] Clojure core.async 0.4.500 released

2019-06-12 Thread Bruce Durling
Thx! On Tue, Jun 11, 2019 at 5:26 PM Ghadi Shayban wrote: > Release 0.4.500 on 2019.06.11 > ASYNC-227 cljs alts! isn't non-deterministic > ASYNC-224 Fix bad putter unwrapping in channel abort > ASYNC-226 Fix bad cljs test code

[ANN] Clojure core.async 0.4.500 released

2019-06-11 Thread Ghadi Shayban
Release 0.4.500 on 2019.06.11 - ASYNC-227 cljs alts! isn't non-deterministic - ASYNC-224 Fix bad putter unwrapping in channel abort - ASYNC-226

Re: Request to review my CLJS using core.async

2018-11-14 Thread Didier
Do you really need core.async for this? Seems like you can just normally call the service to get the token once at the beginning and then just go in a normal loop of calling for the next message -> handling message -> repeat. I'm not sure there's any reason to use core.async in Clojure

Request to review my CLJS using core.async

2018-11-12 Thread Kashyap CK
Hi all, I am attempting to use core.async to poll a service - https://gist.github.com/ckkashyap/c8423dcfc3a3f28b67e18ae76cc13f53 Broadly, I need to hit the service endpoint with a secret to get a token and subsequently use the token to poll for messages. I'd appreciate any feedback

core.async pipeline bug?

2018-09-04 Thread Alex Miller
Is the scenario this one? https://dev.clojure.org/jira/browse/ASYNC-217

core.async pipeline bug?

2018-09-04 Thread Paul Rutledge
Hi all, I've been using the 'pipeline' function in core.async and the docstring says that the 'from' channel won't be consumed anymore after the 'to' channel is closed (since you probably don't care anymore). In reality, I'm finding that my 'from' channel is still being taken from

Re: Is core.async a stackless or stackful coroutine implementation?

2018-08-26 Thread Timothy Baldridge
> ...stackless and stackful coroutines. But I still don't really understand the difference. And I'm trying to know in which camp core.async would fall. Anyone knows?

Is core.async a stackless or stackful coroutine implementation?

2018-08-26 Thread Didier
I've read a bit about the difference between stackless and stackful coroutines. But I still don't really understand the difference. And I'm trying to know in which camp core.async would fall. Anyone knows?

Re: core.async buffered channel behavior

2018-06-27 Thread Timothy Baldridge
ver is being used here) 5) put! completes, so a different thread executes the callback and delivers the promise 6) If the thread hasn't been completely killed yet it's possible that it may get the value from the delivered promise and continue In short, core.async doesn't support any sort of cancellat

Re: core.async buffered channel behavior

2018-06-27 Thread Justin Smith
I should be more precise there, by "consumed" I meant buffered or consumed. On Wed, Jun 27, 2018 at 10:17 AM Justin Smith wrote: > I doubt core.async would ever make promises about the behavior of a > blocking put that gets forcibly cancelled. It promises that the blocking >

Re: core.async buffered channel behavior

2018-06-27 Thread Justin Smith
I doubt core.async would ever make promises about the behavior of a blocking put that gets forcibly cancelled. It promises that the blocking put doesn't return until the message is consumed, but that's not the same as promising that the message isn't consumed if the blocking put is forcibly

core.async buffered channel behavior

2018-06-27 Thread Didier
I think it's due to ctrl+c; not sure what it actually does, but maybe it didn't actually kill the blocked thread?

Re: core.async buffered channel behavior

2018-06-26 Thread craig worrall
I guess the interrupt doesn't really obliterate the fourth put attempt, and that put proceeds in background when you first take. On Wednesday, June 27, 2018 at 5:12:45 AM UTC+10, jonah wrote: > > Hi folks, > > It's been a while since I've used core.async. Documentation suggests that

core.async buffered channel behavior

2018-06-26 Thread Jonah Benton
Hi folks, It's been a while since I've used core.async. Documentation suggests that (chan n) where n is a number creates a fixed size channel buffer supporting n elements. The below clj repl session seems to indicate that I can put 4 items into a 3-sized buffer: user=> (def c (async/cha
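The mechanics behind this (sketched here with an async `put!` rather than an interrupted `>!!`, which is easier to reproduce deterministically): the fourth put is queued as pending and only lands in the buffer when a take frees a slot:

```clojure
(require '[clojure.core.async :as async :refer [chan >!! <!! put!]])

(def c (chan 3))
(dotimes [i 3] (>!! c i)) ;; buffer is now full
(put! c 3)                ;; doesn't block: queued as a pending put
(def first-take (<!! c))  ;; => 0; taking frees a slot, the pending put lands
(def rest-takes [(<!! c) (<!! c) (<!! c)]) ;; => [1 2 3]
```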

Re: Custom core.async go threadpools? Using go parking for heavy calculation parallelism throughput?

2018-05-02 Thread Alex Miller
at the clojure.parallel namespace in core - it's deprecated and not in the docs but is a wrapper for fork/join from an older time. On Wednesday, May 2, 2018 at 2:00:56 PM UTC-5, Leon Grapenthin wrote: > > I remember a Rich Hickey talk on core.async where he mentioned building > blocking t

Re: Custom core.async go threadpools? Using go parking for heavy calculation parallelism throughput?

2018-05-02 Thread Leon Grapenthin
I remember a Rich Hickey talk on core.async where he mentioned building blocking takes/puts into the compiler, as a possible future extension, making the go macro obsolete. Is that on any roadmap? Tesser I have to look at again, it seemed to go into a similar direction. Fork/Join /w reducers

Re: Custom core.async go threadpools? Using go parking for heavy calculation parallelism throughput?

2018-05-02 Thread Didier
This seems well suited for tesser: https://github.com/aphyr/tesser/blob/master/README.markdown Or you could just look at using fold: https://clojure.org/reference/reducers#_reduce_and_fold

Re: Custom core.async go threadpools? Using go parking for heavy calculation parallelism throughput?

2018-05-01 Thread Leon Grapenthin
Yeah, that goes in a direction I thought about after my post. I'm going to implement something like this and we will see how much code overhead is necessary. At this time it appears to me that being able to locally dedicate threads to the go mechanism would make this much easier (than a hand

Re: Custom core.async go threadpools? Using go parking for heavy calculation parallelism throughput?

2018-05-01 Thread Justin Smith
Just a couple of small points (and not yet a full answer): > A node can obviously not pmap over all the child nodes (would spawn exponential amount of threads) pmap is not that naive, it uses a pool sized with the assumption that its work is CPU bound > (2) Made me wonder why I couldn't use the

Custom core.async go threadpools? Using go parking for heavy calculation parallelism throughput?

2018-05-01 Thread Leon Grapenthin
Recently I worked on an algorithm where a distributed tree is (sort of) flattened in a way that each node runs a commutative aggregation over all of its child nodes calculations. 1 A node can obviously not pmap over all the child nodes (would spawn exponential amount of threads). 2 If I want

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Brian J. Rubinton
https://dev.clojure.org/jira/browse/ASYNC-210 > On Jan 6, 2018, at 12:11 PM, Brian J. Rubinton > wrote: > > Thanks! I will. Just signed the CA. > > > On Sat, Jan 6, 2018, 12:10 PM Alex Miller

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Brian J. Rubinton
Thanks! I will. Just signed the CA. On Sat, Jan 6, 2018, 12:10 PM Alex Miller wrote: > > > On Saturday, January 6, 2018 at 10:56:06 AM UTC-6, Brian J. Rubinton wrote: >> >> Alex - it makes sense to me that the buffer temporarily expands beyond >> its normal size with the

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Alex Miller
On Saturday, January 6, 2018 at 10:56:06 AM UTC-6, Brian J. Rubinton wrote: > > Alex - it makes sense to me that the buffer temporarily expands beyond its > normal size with the content of the expanding transducer. What does not > make sense to me is the buffer also accepts puts even though

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Brian J. Rubinton
Typo — I meant to say the channel executes puts during a take! even though the buffer is full before executing the puts. This is clearer in code (please see the gist). > On Jan 6, 2018, at 11:55 AM, Brian J. Rubinton > wrote: > > Alex - it makes sense to me that the

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Brian J. Rubinton
Alex - it makes sense to me that the buffer temporarily expands beyond its normal size with the content of the expanding transducer. What does not make sense to me is the buffer also accepts puts even though its buffer is full. Why would the take! process puts when the channel's buffer is full?

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Alex Miller
On Saturday, January 6, 2018 at 10:27:20 AM UTC-6, Rob Nikander wrote: > > > On Jan 5, 2018, at 8:01 PM, Gary Verhaegen wrote: > >> What about simply having the producer put items one by one on the channel? > > > I will do that. My current producer is doing too many other

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Rob Nikander
On Jan 5, 2018, at 8:01 PM, Gary Verhaegen wrote: > What about simply having the producer put items one by one on the channel? I will do that. My current producer is doing too many other things, but if I break it up into separate threads or go blocks for each work

Re: core.async consumer + producer working by chunk?

2018-01-06 Thread Brian J. Rubinton
Rob - I’d go with Gary's approach, which essentially moves the splitting up of the chunk of results from the core.async channel’s transducer to the producing function. You can do that using a channel with a fixed buffer of 50 and >!!. As long as the next db query is blocked until e
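A minimal sketch of that approach; `fetch-chunk` is a hypothetical stand-in for the next db query, and the fixed buffer of 50 provides the back-pressure:

```clojure
(require '[clojure.core.async :as async :refer [chan >!! <!! close! thread]])

;; Hypothetical: returns the next chunk of 50 results, or nil when exhausted.
(defn fetch-chunk [offset]
  (when (< offset 150) (range offset (+ offset 50))))

(def work-queue (chan 50))

(thread
  (loop [offset 0]
    (if-let [chunk (fetch-chunk offset)]
      (do (doseq [item chunk]
            (>!! work-queue item)) ;; blocks while the buffer is full,
          (recur (+ offset 50)))   ;; so the next query waits on consumers
      (close! work-queue))))

;; consumers just take until the channel closes (<!! returns nil):
(def total (count (take-while some? (repeatedly #(<!! work-queue)))))
```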

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Gary Verhaegen
On 5 January 2018 at 19:44, Rob Nikander <rob.nikan...@gmail.com> wrote: > Hi, > > I’m wondering if there is a core.async design idiom for this situation... > > - A buffered channel > - One producer feeding it > - A bunch of consumers pulling from it. > - Pr

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Brian J. Rubinton
eturns nil) yet buffer space is available after only 1 of the 3 values on the channel are taken (so >!! doesn't block). https://github.com/clojure/core.async/blob/master/src/main/clojure/clojure/core/async.clj#L138

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Rob Nikander
On Friday, January 5, 2018 at 4:00:25 PM UTC-5, Moritz Ulrich wrote: > > > You have a channel with a buffer-size of one. You clear the buffer by > taking one item from it, making room for another one. Therefore the put > succeeds. Try just `(async/chan nil xform)` to create a channel without

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Moritz Ulrich
Rob Nikander writes: > Thanks for the explanation! This is very close to what I want. I see some > confusing behavior though. See below. > > On Friday, January 5, 2018 at 2:40:14 PM UTC-5, Brian J. Rubinton wrote: >> >> >> The work-queue channel has a fixed buffer size

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Rob Nikander
Thanks for the explanation! This is very close to what I want. I see some confusing behavior though. See below. On Friday, January 5, 2018 at 2:40:14 PM UTC-5, Brian J. Rubinton wrote: > > > The work-queue channel has a fixed buffer size of 1. A collection (range > 50) is put on the channel.

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Brian J. Rubinton
The `mapcat` transducer takes a collection as its input and outputs each of its items individually. This example might be helpful:

user> (use '[clojure.core.async])
nil
user> (def work-queue (chan 1 (mapcat identity)))
#'user/work-queue
user> (offer! work-queue (range 50))
true

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Rob Nikander
On Friday, January 5, 2018 at 2:03:00 PM UTC-5, Brian J. Rubinton wrote: > > > What is the buffered channel’s buffer used for? If that’s set to 1 and the > channel’s transducer is `(mapcat identity)` then the producer should be > able to continuously put chunks of work onto the channel with

Re: core.async consumer + producer working by chunk?

2018-01-05 Thread Brian J. Rubinton
query’s result is consumed. Brian > On Jan 5, 2018, at 1:44 PM, Rob Nikander <rob.nikan...@gmail.com> wrote: > > Hi, > > I’m wondering if there is a core.async design idiom for this situation... > > - A buffered channel > - One producer feeding it > -

core.async consumer + producer working by chunk?

2018-01-05 Thread Rob Nikander
Hi, I’m wondering if there is a core.async design idiom for this situation... - A buffered channel - One producer feeding it - A bunch of consumers pulling from it. - Producer should wake up and fill the channel only when it’s empty. In other words, the producer should work in chunks. My

Re: functional implementation of core.async primitives

2017-12-13 Thread Timothy Baldridge
Great! So while this works, you'll still have a few problems, namely in places that are not in a tail call position. For example, core.async supports this sort of behavior: ;; inside an argument list (not a let) (go (println "GOT Value" (

Re: functional implementation of core.async primitives

2017-12-12 Thread Divyansh Prakash
I just added `goloop` and `goconsume` to the Clojure implementation, with a version of `recur` called `gorecur`. *Example:*

repl> (def ch (chan))
#'functional-core-async.core/ch
repl> (goconsume [v ch]
        (if (= :ok v)
          (gorecur)
          (println "done")))

Re: functional implementation of core.async primitives

2017-12-12 Thread Divyansh Prakash
Just remembered that I did give an example in the README, a port of braveclojure's hotdog machine:

(defn hot-dog-machine
  [in out hot-dogs-left]
  (when (> hot-dogs-left 0)
    (go (if (= 3 input)
          (go>! [out "hot dog"]
            (hot-dog-machine in out (dec

Re: functional implementation of core.async primitives

2017-12-12 Thread Divyansh Prakash
In fact, the JS implementation is far ahead of the Clojure version as of right now - with a better parking algorithm and `goconsume` for creating actors that park on read from a channel in an implicit loop.

Re: functional implementation of core.async primitives

2017-12-12 Thread Divyansh Prakash
Hi, @halgari! The JS port actually does have this, just haven't found the time to port it back. But basically we can define something like:

(defn goloop*
  [f initial-state]
  (letfn [(recur+ [state]
            (goloop* f state))]
    (go
      (f recur+ initial-state))))

Re: functional implementation of core.async primitives

2017-11-22 Thread Timothy Baldridge
I'm not exactly sure how the library works, honestly. It seems that we still don't have a parking take; instead we have callbacks again? I'd love to see an implementation of something like this with your library:

(go (println "Count"
      (loop [acc 0]
        (if-some [v (

Re: functional implementation of core.async primitives

2017-11-22 Thread Divyansh Prakash
Thanks for the encouragement, Jay! I ported the library over to JS, and now we have coroutines in vanilla JavaScript! Porting it to other systems should be fairly straightforward.  - Divyansh

Re: functional implementation of core.async primitives

2017-11-21 Thread Jay Porcasi
> ...powerful as *core.async*'s versions. I now understand what more *core.async* is doing, and where my implementation falls short. I do believe I have a working implementation of coroutines, though - which is awesome!

Re: functional implementation of core.async primitives

2017-11-21 Thread Divyansh Prakash
Just a follow up. I have implemented parking versions of *!*, but (because I'm not a sorcerer like *@halgari*) they are rather simple and not as powerful as *core.async*'s versions. I now understand what more *core.async* is doing, and where my implementation falls short. I do believe I have

Re: functional implementation of core.async primitives

2017-11-16 Thread Divyansh Prakash
Actually, returns in ~1700ms if I increase buffer width to 1000

Re: functional implementation of core.async primitives

2017-11-16 Thread Divyansh Prakash
The other example on that thread has stranger characteristics:

(defn bench []
  (time
    (let [c (chan 100)]
      (go
        (dotimes [i 10] ;; doesn't return for 1e5, takes ~170ms for 1e4
          (>! c i))
        (close! c))
      (loop [i nil]
        (if-let [x (

Re: functional implementation of core.async primitives

2017-11-16 Thread Divyansh Prakash
Here's a port of a core.async example that I was able to pull off the mailing list: https://gist.github.com/divs1210/625422be5a3e326c991e5ced60b01c1c Performance (in this particular case) seems to be the same. I'm trying out more examples as I find them.

Re: functional implementation of core.async primitives

2017-11-15 Thread Divyansh Prakash
(which also resolves this blocking go problem ... in a way)

Re: functional implementation of core.async primitives

2017-11-15 Thread Divyansh Prakash
Hi! I tried resolving that but pre-emption of the current task turned out to be a really, really tough problem, and I believe that's why we need the core.async macros. Anyhow, I have updated the scheduler to autopromote go blocks into actual JVM threads if they block for more than 10ms - poor

Re: functional implementation of core.async primitives

2017-11-15 Thread Timothy Baldridge
I don't think the go blocks play well with the back-pressure. The code I present here deadlocks the scheduler: https://github.com/divs1210/functional-core-async/issues/1 On Wed, Nov 15, 2017 at 2:17 PM, Jay Porcasi wrote: > interested to hear any feedback on these

Re: functional implementation of core.async primitives

2017-11-15 Thread Jay Porcasi
interested to hear any feedback on these features On Wednesday, November 15, 2017 at 3:52:24 PM UTC+7, Divyansh Prakash wrote: > > Hi! > > Thank you for your feedback! > > I've made the following changes to my implementation : > - bounded channels with backpressure > - proper thread

Re: functional implementation of core.async primitives

2017-11-15 Thread Divyansh Prakash
Hi! Thank you for your feedback! I've made the following changes to my implementation : - bounded channels with backpressure - proper thread synchronization - thread blocks that run on actual threads (unlike go blocks) TODO: - alts! - preserve thread local bindings in `go` blocks (`thread`

Re: functional implementation of core.async primitives

2017-11-12 Thread Timothy Baldridge
If you're looking for feedback, the input I gave on Reddit seems like a good place to start ( https://www.reddit.com/r/Clojure/comments/7c0p3c/functional_implementation_of_coreasync/dpmvjpp/). Like I said, it's not really comparable to core.async at all, since it doesn't properly support thread

Re: functional implementation of core.async primitives

2017-11-12 Thread Jay Porcasi
wow looks so neat! i would be interested as well to know what experienced async users have to say Jay On Friday, November 10, 2017 at 1:32:52 PM UTC+7, Divyansh Prakash wrote: > > > Hi! > > I was messing around with Clojure and somehow ended up implementing a > functional

Re: core.async got in a bad state?

2017-08-30 Thread Matching Socks
Special behavior built into only certain blocking operations (e.g., core.async's, but not java.io's) could instill a false sense of security. It could be complemented by a watchdog thread to poll the core.async pool threads and call a given fn if a thread was blocked when it shouldn't have

Re: core.async got in a bad state?

2017-08-29 Thread Didier
> > No code called by a go block should ever call the blocking variants of > core.async functions (!!, alts!!, etc.). So I'd start at the code > redacted in those lines and go from there. > Seems like a good use case for a static code analyser. Maybe a contribution to htt

Re: core.async got in a bad state?

2017-08-29 Thread Alex Miller
of blocking calls > (like IO) that I forgot that blocking variants of core.async calls > themselves were forbidden. > > Thank you for pointing this out! I will rewire things to not do this. > > Per Gary's suggestion, I also think it'd be useful if core.async blocking > ops che

Re: core.async got in a bad state?

2017-08-29 Thread Aaron Iba
Ahh that makes a lot of sense. Indeed, I'm guilty of doing a blocking >!! inside a go-block. I was so careful to avoid other kinds of blocking calls (like IO) that I forgot that blocking variants of core.async calls themselves were forbidden. Thank you for pointing this out! I will rew
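The fix in this situation is typically mechanical; a minimal before/after sketch (the channel and value here are illustrative, not the production code):

```clojure
(require '[clojure.core.async :as async :refer [chan go >! <!!]])

(def c (chan))

;; BAD: a blocking >!! inside go ties up one of the fixed pool of
;; dispatch threads until a taker arrives:
;; (go (>!! c :v))

;; GOOD: >! parks the go block instead, freeing the dispatch thread:
(go (>! c :v))
(def v (<!! c)) ;; => :v
```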

Re: core.async got in a bad state?

2017-08-29 Thread Gary Trakhman
Hm, I came across a similar ordering invariant (No code called by a go block should ever call the blocking variants of core.async functions) while wrapping an imperative API, and I thought it might be useful to use vars/binding to enforce it. Has this or other approaches been considered

Re: core.async got in a bad state?

2017-08-29 Thread Timothy Baldridge
ode called by a go block should ever call the blocking variants of core.async functions (!!, alts!!, etc.). So I'd start at the code redacted in those lines and go from there. On Tue, Aug 29, 2017 at 11:09 AM, Alex Miller <a...@puredanger.com> wrote: > go blocks are multiplexed over a

Re: core.async got in a bad state?

2017-08-29 Thread Alex Miller
>!! and convert that to a >! or do something else to avoid blocking there. On Tuesday, August 29, 2017 at 11:48:25 AM UTC-5, Aaron Iba wrote: > My company has a production system that uses core.async extensively. We've > been running it 24/7 for over a year with occasional restarts

core.async got in a bad state?

2017-08-29 Thread Aaron Iba
My company has a production system that uses core.async extensively. We've been running it 24/7 for over a year with occasional restarts to update things and add features, and so far core.async has been working great. The other day, during a particularly high workload, the whole system got

Re: core.async vs continuations

2017-07-10 Thread Timothy Baldridge
mping back into that block. Timothy On Mon, Jul 10, 2017 at 2:41 PM, Răzvan Rotaru <razvan.rot...@gmail.com> wrote: > Hi, > > Pardon the ignorance, I was just rewatching Rich Hickeys talk about > core.async, and realized that the await call is actually a continuation > (like ca

core.async vs continuations

2017-07-10 Thread Răzvan Rotaru
Hi, Pardon the ignorance, I was just rewatching Rich Hickey's talk about core.async, and realized that the await call is actually a continuation (like call/cc from Scheme). And if I am not mistaken, you can't implement call/cc with macros, so I suspect there is a difference, but I fail to see

Re: core.async/close! locks on chans with pending transforms

2017-07-04 Thread Vitalie Spinu
takes) -> transform and add to buf on the same thread 4) (nil or full buf) -> append to puts list. I find 2 troublesome. It means that this line: https://github.com/clojure/core.async/blob/master/src/main/clojure/clojure/core/async/impl/channels.clj#L135 can be reached in the c

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Timothy Baldridge
> in the go block, but the channel machinery is now > blocking before it yields control back to the go block. Putting blocking > operations in a transducer then operating on the channel from go blocks > turns `>!` and `<!` (which just park go blocks) into operations that do block a whole OS thread, of which the

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Kevin Downey
the go block, but the channel machinery is now blocking before it yields control back to the go block. Putting blocking operations in a transducer then operating on the channel from go blocks turns `!>` and '(just park go blocks), in to operations that do block a whole OS thread, of which the core.

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Vitalie Spinu
On Monday, 3 July 2017 22:48:40 UTC+2, red...@gmail.com wrote: > > > Discussion of locks aside, doing blocking operations like io or >!! or basically anything that looks like it blocks and isn't >! or <! is a very bad idea in a transducer on a channel. You will (eventually) > block the
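The safe alternative to blocking work inside a channel transducer is to move that work onto its own thread pool. A hedged sketch using pipeline-blocking (assuming a recent core.async where onto-chan! exists; slow-inc stands in for any blocking transform):

```clojure
(require '[clojure.core.async :as async
           :refer [chan pipeline-blocking onto-chan! <!!]])

(defn slow-inc
  "A stand-in for blocking work (io, etc.) that must NOT live in a
  channel's transducer."
  [x]
  (Thread/sleep 10)
  (inc x))

(let [in  (chan)
      out (chan)]
  ;; pipeline-blocking runs the transform on up to 4 dedicated threads,
  ;; keeping the channel machinery itself non-blocking:
  (pipeline-blocking 4 out (map slow-inc) in)
  (onto-chan! in [1 2 3])
  [(<!! out) (<!! out) (<!! out)])   ;=> [2 3 4]
```

pipeline-blocking preserves input ordering, so results arrive in the order the inputs were submitted even though transforms run concurrently.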

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Vitalie Spinu
> the side-effect of this means that no other operation (puts, takes or closes) Is there a deeper reason for this besides the ease of implementation? If the chan is buffered I still fail to see why close and take should block. -- You received this message because you are subscribed to the

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Kevin Downey
that go block which is bad. v)))] (go (>! s 1)) (Thread/sleep 100) (println "closing s") (async/close! s)) ;; => ;; this: 1 ;; closing s ;; .. [lock] This is caused by (.lock mutex) in close! method here: https://github.c

Re: core.async/close! locks on chans with pending transforms

2017-07-03 Thread Timothy Baldridge
"closing s") > (async/close! s)) > > ;; => > ;; this: 1 > ;; closing s > ;; .. [lock] > > This is caused by (.lock mutex) in close! method here: > >https://github.com/clojure/core.async/blob/master/src/ > main/clojure/clojure/core/async/i

core.async/close! locks on chans with pending transforms

2017-07-03 Thread Vitalie Spinu
(Thread/sleep 100) (println "closing s") (async/close! s)) ;; => ;; this: 1 ;; closing s ;; .. [lock] This is caused by (.lock mutex) in close! method here: https://github.com/clojure/core.async/blob/master/src/main/clojure/clojure/core/async/impl/channels.clj#L247 I
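The reproduction is truncated across these archive entries; a hedged reconstruction of the shape being discussed (a blocking transform inside a channel's transducer, then close! from another thread) looks roughly like this:

```clojure
(require '[clojure.core.async :as async :refer [chan go >!]])

(let [s (chan 1 (map (fn [v]
                       (println "this:" v)
                       (Thread/sleep 2000)   ; blocking work inside the transducer
                       v)))]
  (go (>! s 1))        ; the put triggers the transform on channel machinery
  (Thread/sleep 100)
  (println "closing s")
  (async/close! s))    ; acquires the channel mutex -> blocks until the
                       ; pending transform finishes
```

Because puts, takes, and close! all serialize on the channel's internal mutex, a slow transform holds the lock and everything else on that channel waits, which is the "[lock]" the original poster observed.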

[ANN] core.async 0.3.442

2017-03-14 Thread Alex Miller
core.async 0.3.442 is now available. Try it via: [org.clojure/core.async "0.3.442"] 0.3.442 includes the following changes: - Fixed bad use of :refer-clojure that was failing with the new core specs in 1.9.0-alpha15 -- You received this message because you are subscribed to

Re: Clojurescript 1.9.493+ breaks core.async with :advanced optimizations

2017-03-03 Thread Kenny Williams
Yup, it is a known issue. See http://dev.clojure.org/jira/browse/CLJS-1954. On Friday, March 3, 2017 at 2:44:52 PM UTC-8, Chad Harrington wrote: > > I believe that cljs versions 1.9.493 and above break core.async (and > possibly other libraries) under :advanced optimizations.

Clojurescript 1.9.493+ breaks core.async with :advanced optimizations

2017-03-03 Thread Chad Harrington
I believe that cljs versions 1.9.493 and above break core.async (and possibly other libraries) under :advanced optimizations. Here is a minimal reproduction: In src/ca_adv_bug/core.cljs: (ns ca-adv-bug.core (:require [cljs.core.async :as ca] [cljs.nodejs :as nodejs]) (:require-macros

[ANN] core.async 0.3.441

2017-02-23 Thread Alex Miller
core.async 0.3.441 is now available. Try it via: [org.clojure/core.async "0.3.441"] 0.3.441 includes the following changes: - ASYNC-187 <http://dev.clojure.org/jira/browse/ASYNC-187> - Tag metadata is lost in local closed over by a loop (also see ASYNC-188 <htt

Re: [ANN] core.async 0.3.426

2017-02-23 Thread Alex Miller
On Thu, Feb 23, 2017 at 9:33 AM, Petr <petrg...@gmail.com> wrote: > I have a slightly unrelated question. Why is core.async still not at a 1.x > version? > Because we don't really associate any emotional significance to the version number. > Is there a feeling that API is

Re: [ANN] core.async 0.3.426

2017-02-23 Thread Petr
I have a slightly unrelated question. Why is core.async still not at a 1.x version? Is there a feeling that the API is still experimental and not mature enough? Or are the authors of core.async not happy with the implementation? On Wednesday, February 22, 2017, 19:47:33 UTC+1, Alex Miller wrote
