Seems risky to depend on that. eduction, for example, creates an Iterable - it has no way of preventing somebody from creating the iterator on one thread and consuming it on another.
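To make the worry concrete, here's a rough sketch (mine, not anything from core) of an eduction whose iterator is created on one thread and drained on another. The only happens-before edge is the one supplied by the hand-off mechanism itself (here, a future); eduction does nothing to require or check that:

;; eduction wraps a stateful transducer (partition-all keeps an internal
;; buffer between steps)
(let [xs (eduction (partition-all 2) (range 10))
      ;; iterator created on the calling thread...
      it (.iterator ^Iterable xs)]
  ;; ...and consumed on another. Visibility of the buffer inside
  ;; partition-all depends on the future's own happens-before edges
  ;; (submit + deref), not on anything eduction provides.
  @(future
     (loop [acc []]
       (if (.hasNext it)
         (recur (conj acc (.next it)))
         acc))))
;; => [[0 1] [2 3] [4 5] [6 7] [8 9]]

Pass the iterator to an already-running thread through an unsynchronized field instead, and there is nothing left to guarantee visibility.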
On Tue, Apr 11, 2017 at 7:32 AM, Léo Noel <leo.noel...@gmail.com> wrote:
>> volatile! is what ensures that there's a memory barrier.
>
> No. The memory barrier is set by the transducing context as a consequence
> of implementing the "single thread at a time" rule. Be it lock, thread
> isolation, agent isolation, or anything that ensures that the end of a
> step happens-before the beginning of the next. All these techniques ensure
> visibility of unsynchronized variables between two successive steps, even
> when multiple threads are involved.
>
> On Tuesday, April 11, 2017 at 1:36:30 PM UTC+2, Seth Verrinder wrote:
>>
>> The single thread at a time rule is implemented by the transducing
>> context (transduce, into, core.async, etc). Inside of a transducer's
>> implementation you just have to make the assumption that it's being
>> used properly. volatile! is what ensures that there's a memory barrier.
>>
>> On Tue, Apr 11, 2017 at 2:46 AM, Léo Noel <leo.n...@gmail.com> wrote:
>> > Thank you, Alex, for these clarifications.
>> >
>> >> The JVM is pretty good at minimizing this stuff - so while you are
>> >> stating these barriers are redundant and are implying that's an issue,
>> >> it would not surprise me if the JVM is able to reduce or eliminate the
>> >> impacts of that. At the very least, it's too difficult to reason about
>> >> without a real perf test and numbers.
>> >
>> >> Fast wrong results are still wrong. I do not think it's at all obvious
>> >> how this affects performance without running some benchmarks.
>> >> Volatiles do not require flushing values to all cores or anything like
>> >> that. They just define constraints - the JVM is very good at
>> >> optimizing these kinds of things. It would not surprise me if an
>> >> uncontended thread-contained volatile could be very fast (for the
>> >> single-threaded transducer case) or that a volatile under a lock would
>> >> be no worse than the lock by itself.
>> >
>> > I agree that the perf argument is weak.
>> >
>> >> A transducer can assume it will be invoked by no more than one thread
>> >> at a time.
>> >
>> > Fine. Even simpler like this.
>> >
>> >> Transducers should ensure stateful changes guarantee visibility. That
>> >> is: you should not make assumptions about external memory barriers.
>> >
>> > How do you enforce no more than one thread at a time without setting a
>> > memory barrier? For the JMM, no more than one thread at a time means
>> > exactly that the return of step n will *happen-before* the call to step
>> > n+1. This implies that what was visible to the thread performing step n
>> > will be visible to the thread performing step n+1, including all memory
>> > writes performed during step n inside stateful transducers.
>> > https://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html
>> > Still no need for extra synchronization.
>> >
>> >> You're conflating the stateful values inside the transducer with the
>> >> state returned by and passed into a transducer. That's a linkage that
>> >> does not necessarily exist.
>> >
>> > What do you mean? How could a function return a value without having
>> > executed its body?
>> >
>> > On Monday, April 10, 2017 at 9:51:30 PM UTC+2, Alexander Gunnarson
>> > wrote:
>> >>
>> >> Thanks for clearing all of that up, Alex! Very helpful.
>> >>
>> >> On Monday, April 10, 2017 at 3:46:45 PM UTC-4, Alex Miller wrote:
>> >>>
>> >>> On Monday, April 10, 2017 at 2:25:48 PM UTC-5, Alexander Gunnarson
>> >>> wrote:
>> >>>>
>> >>>> I think you present a key question: what assumptions can a
>> >>>> transducer make? We know the standard ones, but what of memory
>> >>>> barriers?
>> >>>
>> >>> Transducers should ensure stateful changes guarantee visibility. That
>> >>> is: you should not make assumptions about external memory barriers.
>> >>>
>> >>>> Based on the current implementation, in terms of concurrency, it
>> >>>> seems to make (inconsistent — see also `partition-by`) guarantees
>> >>>> that sequential writes and reads will be consistent, no matter what
>> >>>> thread does the reads or writes. Concurrent writes are not
>> >>>> supported. But should sequential multi-threaded reads/writes be
>> >>>> supported?
>> >>>
>> >>> Yes. core.async channel transducers already do this.
>> >>>
>> >>>> This is a question best left to Alex but I think I already know the
>> >>>> answer based on his conversation with Rich: it's part of the
>> >>>> contract.
>> >>>>
>> >>>> I think another key question is, is the channel lock memory barrier
>> >>>> part of the contract of a core.async channel implementation?
>> >>>
>> >>> Yes, but other transducing processes may exist either in core in the
>> >>> future or in external libs.
>> >>>
>> >>>> If not, volatiles will be necessary in that context if the memory
>> >>>> barrier is ever taken away, and it would make sense that volatiles
>> >>>> are used in transducers "just in case" specifically for that use
>> >>>> case. But if the channel lock memory barrier is part of the contract
>> >>>> and not just an implementation detail, then I'm not certain that
>> >>>> it's very useful at all for transducers to provide a guarantee of
>> >>>> safe sequential multi-threaded reads/writes.
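Since the thread keeps circling the same point, here is roughly what the two positions look like side by side. This is my own sketch with a dedupe-like transducer; the names are illustrative and none of this is core code:

;; (a) What core currently does: per-step state lives in a volatile!, so
;;     the write made during step n is visible to whichever thread runs
;;     step n+1, without assuming any memory barrier from the transducing
;;     context (the position Alex describes above).
(defn dedupe-volatile []
  (fn [rf]
    (let [prev (volatile! ::none)]
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (let [p @prev]
           (vreset! prev input)
           (if (= p input)
             result
             (rf result input))))))))

;; (b) What Léo argues is sufficient: plain unsynchronized state, relying
;;     on the context's "one thread at a time" rule (a channel lock, an
;;     agent, thread confinement) to establish the happens-before edge
;;     between steps.
(defn dedupe-plain []
  (fn [rf]
    (let [prev (java.util.ArrayList. [::none])]
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (let [p (.get prev 0)]
           (.set prev 0 input)
           (if (= p input)
             result
             (rf result input))))))))

(into [] (dedupe-volatile) [1 1 2 2 3]) ;; => [1 2 3]
(into [] (dedupe-plain)    [1 1 2 2 3]) ;; => [1 2 3]

Both behave the same whenever the context really does serialize steps with a lock or a safe thread hand-off; the disagreement is over whether a transducer is allowed to rely on that, which is exactly the eduction case above.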
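And for completeness, the core.async case Alex refers to above looks like this in practice. Again a sketch of mine: the stateful partition-by transducer runs inside the channel, the producer and the consumer are on different threads, and the channel's internal lock is what serializes the transducer steps:

(require '[clojure.core.async :as a])

;; channel with a stateful transducer attached
(let [ch (a/chan 10 (partition-by odd?))]
  ;; producer on a separate thread
  (a/thread
    (doseq [x [1 1 2 2 3]]
      (a/>!! ch x))
    (a/close! ch))
  ;; consumer on the calling thread; the channel lock around each put/take
  ;; keeps the transducer's steps one-at-a-time across threads
  (a/<!! (a/into [] ch)))
;; => [[1 1] [2 2] [3]]

Whether that lock (and the memory barrier it implies) is part of the channel contract or just an implementation detail is the open question in Alexander's message above.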