On Sun, 17 Mar 2019 at 15:28, Louki Sumirniy
wrote:
> As my simple iterative example showed, given the same sequence of events,
> channels are deterministic, so this is an approach that is orthogonal but
> serves the same purpose: to prevent multiple concurrent agents from desynchronising
>
This was a good link to follow:
https://en.wikipedia.org/wiki/Bulk_synchronous_parallel
led me here:
https://en.wikipedia.org/wiki/Automatic_mutual_exclusion
and then to here:
https://en.wikipedia.org/wiki/Transactional_memory
I think this is the pattern for implementing this using channels.
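One hedged sketch of what a transactional-memory-like pattern over channels could look like (the names `txn` and `newStore` are mine, not from any of the linked pages): a single goroutine owns the state, and "transactions" are submitted as functions over a channel, so every mutation is serialized without locks.

```go
package main

import "fmt"

// txn is a "transaction": a function given exclusive access to the state.
type txn func(state *int)

// newStore starts an owner goroutine; all access is serialized through
// the returned channel, so no two transactions ever overlap.
func newStore() chan<- txn {
	ch := make(chan txn)
	go func() {
		state := 0
		for t := range ch {
			t(&state) // only this goroutine ever touches state
		}
	}()
	return ch
}

func main() {
	store := newStore()
	for i := 0; i < 100; i++ {
		store <- func(s *int) { *s++ }
	}
	done := make(chan int)
	store <- func(s *int) { done <- *s } // read back the final value
	fmt.Println(<-done)                  // prints 100
}
```

This is only a serialization sketch, not real transactional memory: there is no rollback or conflict detection, just a guarantee that transactions run one at a time.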
Ah yes, 'loop1' and 'loop2' would probably be more accurate names.
Yes, the number of each routine is in that 'i' variable; those other labels
just denote the position within the loops, before and after
sending, and the state of the truthstate variable that is only accessed
inside the
I like to think of a channel as a concurrent messaging queue. You can
do all sorts of things with such constructs, including implementing
mutual exclusion constructs, but that doesn't mean that one is the
other.
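To illustrate that point (my sketch, not the poster's code): a buffered channel of capacity one can implement a mutex, with a send acquiring the lock and a receive releasing it, even though a channel is not itself a mutex.

```go
package main

import (
	"fmt"
	"sync"
)

// countWithChanLock increments a counter from n goroutines, guarding the
// critical section with a capacity-1 channel used as a mutex.
func countWithChanLock(n int) int {
	lock := make(chan struct{}, 1)
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			lock <- struct{}{} // acquire: blocks while another goroutine holds it
			counter++          // critical section
			<-lock             // release
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(countWithChanLock(100)) // prints 100: no lost updates
}
```

The channel construct *implements* mutual exclusion here, which is exactly the distinction being made: you can build one from the other without them being the same thing.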
Your playground example is a bit weird and very prone to various kinds
of race
Then use a mutex or atomic spin lock (although that may have issues in the
current Go implementation)
> On Mar 17, 2019, at 3:56 PM, Louki Sumirniy
> wrote:
>
> I am pretty sure the main cause of deadlocks is not having senders and receivers
> in pairs in the execution path such that senders
I am pretty sure the main cause of deadlocks is not having senders and
receivers in pairs in the execution path, such that senders precede
receivers. Receivers wait to get something, and in another post here I
showed a playground that demonstrates that if there is one channel only one
thread is
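A minimal sketch of that pairing requirement: an unbuffered send with no receiver anywhere in the execution path deadlocks, while pairing the send with a receiver in another goroutine does not. The commented-out line marks where the lone send would hang.

```go
package main

import "fmt"

// pairedSend pairs an unbuffered send with a receiver in another goroutine.
func pairedSend() int {
	ch := make(chan int)
	// ch <- 1 // with no receiver yet, this lone send would deadlock here
	go func() { ch <- 1 }() // the sender now has a matching receiver below
	return <-ch
}

func main() {
	fmt.Println(pairedSend()) // prints 1
}
```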
Without reading too deeply, and judging only from your statements, it seems
you are confusing implementation with specification. The things you cite are
subject to change. You need to reason based on the specification, not the
observed behavior. Then use the observed behavior to argue that
https://play.golang.org/p/Kz9SsFeb1iK
This prints something at each interstice of the execution path and it is of
course deterministic.
I think the reason the range loop always chooses one per channel, last
one first, is that it uses a LIFO queue, so the last in line gets filled
first.
https://play.golang.org/p/13GNgAyEcYv
I think this demonstrates how it works quite well: it appears that threads
stick to channels; routine 0 always sends first and routine 1 always receives,
and this makes sense as this is the order of their invocation. I could make
more parallel threads but clearly
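One way to probe that observation with more goroutines (my sketch, not the linked playground): several senders share one channel and a single receiver drains it. Every send pairs with exactly one receive, but which sender's value arrives first is up to the scheduler, which is why the apparent "sticking" in a two-goroutine example should not be relied on.

```go
package main

import (
	"fmt"
	"sort"
)

// fanIn has n goroutines each send their id once on a shared channel,
// and a single receiver drain exactly n values.
func fanIn(n int) []int {
	ch := make(chan int)
	for i := 0; i < n; i++ {
		go func(id int) { ch <- id }(i)
	}
	got := make([]int, 0, n)
	for i := 0; i < n; i++ {
		got = append(got, <-ch) // each receive pairs with exactly one send
	}
	sort.Ints(got) // arrival order varies with the scheduler; sort to compare
	return got
}

func main() {
	fmt.Println(fanIn(4)) // prints [0 1 2 3]: each id arrives exactly once
}
```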
https://g.co/kgs/2Q3a5n
> On Mar 17, 2019, at 2:36 PM, Louki Sumirniy
> wrote:
>
> So I am incorrect that only one goroutine can access a channel at once? I
> don't understand, only one select or receive or send can happen at one moment
> per channel, so that means that if one has started
So I am incorrect that only one goroutine can access a channel at once? I
don't understand, only one select or receive or send can happen at one
moment per channel, so that means that if one has started others can't
start.
I was sure this was the case and this seems to confirm it:
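A hedged way to visualize "only one operation completes at a time per channel": circulate a token value through a capacity-1 channel, so whichever goroutine currently holds the token is the only one mutating it. Each receive completes for exactly one goroutine.

```go
package main

import "fmt"

// passToken circulates a value through a capacity-1 channel; only the
// current holder may touch it, so each worker increments it exactly once.
func passToken(workers int) int {
	token := make(chan int, 1) // buffered so the final hand-off never blocks
	done := make(chan bool)
	for i := 0; i < workers; i++ {
		go func() {
			v := <-token   // exactly one goroutine wins each receive
			token <- v + 1 // hand the updated value to the next holder
			done <- true
		}()
	}
	token <- 0 // inject the token to start the relay
	for i := 0; i < workers; i++ {
		<-done
	}
	return <-token
}

func main() {
	fmt.Println(passToken(5)) // prints 5: one increment per worker, no races
}
```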
On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy <
louki.sumirniy.stal...@gmail.com> wrote:
> My understanding of channels is that they basically create exclusion by
control of the path of execution, instead of using callbacks, or they
bottleneck via the CPU thread which is the reader and writer of this
I didn't mention actually excluding access by passing data through values
either; this was just using a flag to confine accessor code to one thread,
essentially, which has the same result as a mutex as far as its granularity
goes.
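A sketch of what I take "confining accessor code to one thread" to mean (my reading, and the `setReq`/`getReq`/`startStore` names are mine): one goroutine is the sole accessor of a map, and other goroutines reach it only through request channels, giving mutex-like exclusion without any lock.

```go
package main

import "fmt"

// setReq and getReq are requests for the confined map.
type setReq struct {
	key string
	val int
}
type getReq struct {
	key   string
	reply chan int
}

// startStore launches the single goroutine that ever touches the map.
func startStore() (chan<- setReq, chan<- getReq) {
	sets := make(chan setReq)
	gets := make(chan getReq)
	go func() {
		m := make(map[string]int) // confined: no other goroutine sees m
		for {
			select {
			case s := <-sets:
				m[s.key] = s.val
			case g := <-gets:
				g.reply <- m[g.key]
			}
		}
	}()
	return sets, gets
}

func main() {
	sets, gets := startStore()
	sets <- setReq{"answer", 42}
	reply := make(chan int)
	gets <- getReq{"answer", reply}
	fmt.Println(<-reply) // prints 42
}
```

As with a mutex, the exclusion granularity is one whole request; unlike a mutex, the data itself can never be reached from the wrong goroutine.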
On Sunday, 17 March 2019 13:04:26 UTC+1, Louki Sumirniy wrote:
My understanding of channels is that they basically create exclusion by
control of the path of execution, instead of using callbacks, or they
bottleneck via the CPU thread which is the reader and writer of this shared
data anyway.
I think the way they work is that there are queues for read and write
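A rough way to observe those queues from the outside (a sketch; the runtime's actual wait-queue structures are internal and unspecified): fill a buffered channel so further senders block and queue up, then drain it and check that every queued send eventually completes.

```go
package main

import (
	"fmt"
	"sync"
)

// drainQueued starts more senders than the buffer can hold, so the extras
// block in the channel's send queue, then receives until all complete.
func drainQueued(senders, buf int) int {
	ch := make(chan int, buf)
	var wg sync.WaitGroup
	for i := 0; i < senders; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			ch <- id // blocks once the buffer is full, joining the wait queue
		}(i)
	}
	received := 0
	for i := 0; i < senders; i++ {
		<-ch // each receive unblocks one queued sender
		received++
	}
	wg.Wait()
	return received
}

func main() {
	fmt.Println(drainQueued(5, 2)) // prints 5: all queued sends completed
}
```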