Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
ok, I'm banning myself from this forum for a while. Sorry about this. I'm 
not right at the moment.

On Sunday, 5 May 2019 21:55:57 UTC+2, Louki Sumirniy wrote:
>
> I think the key thing is that the Add function I have written is not
> concurrency safe. I didn't intend it to be, as I only had the use case of a
> single thread managing a worker pool; I am pretty sure it is fine for this,
> and for larger pools it has lower overhead in memory *and* processing.
>
> I have revised it so the 'we are started' clause also ensures the channel
> is open and operational, and the channel is closed if it is open, which
> will, yes, cause a panic if the Add function is called concurrently; this
> enforces the contract I specify.
>
> It does not cover all of the cases that sync.WaitGroup does, but it covers
> the biggest use case, with a lot less code (no imports at all).
>
> https://play.golang.org/p/FwdKAVnNMk-
>
> On Saturday, 4 May 2019 23:56:01 UTC+2, Robert Engels wrote:
>>
>> The reason your code is shorter is that it is broken. I tried to explain 
>> that to you. Try running the stdlib wait group tests against your code. 
>> They will fail. 
>>
>> On May 4, 2019, at 4:22 PM, Louki Sumirniy  
>> wrote:
>>
>> Those who follow some of my posts here might know that I was discussing
>> the subject of channels and waitgroups recently, and I wrote a very slim
>> and simple waitgroup that works purely with a channel.
>>
>> Note that it requires only one channel: at first I had separate ready and
>> done channels, but I found a way to use nil and close to replace the ready
>> and done signals for the main thread. Here is the link to it:
>>
>>
>> https://git.parallelcoin.io/dev/9/src/branch/dev/pkg/util/chanwg/waitgroup.go
>>
>> For comparison, here is the code in the sync library:
>>
>> https://golang.org/src/sync/waitgroup.go
>>
>> The first thing you will notice is that it is a LOT shorter. It does not
>> make use of the race library, though I can see how that would let callers
>> inspect the worker count, a function I tried to add but which raced no
>> matter which way the data fed out (even when copying it in the critical
>> section in the New function).
>>
>> It is not racy if it is used exactly the way the API presents itself.
>>
>> I haven't written a comparison benchmark to evaluate the difference in
>> overhead between the two yet, but it seems to me that my code is almost
>> certainly no heavier in size, and thus cache burden, and unless all the
>> extra machinery for handling unsafe pointers and the race library is a lot
>> more svelte than it looks, I'd guess that my waitgroup may even have lower
>> overhead. But of course such guesses are worthless if microseconds are at
>> stake, so I should really write a benchmark in the test.
>>
>> The one last thing is that I avoid the need for atomic by using a
>> replicated datatype design for the increment/decrement, which is not
>> order-sensitive: given the same set of inputs, it makes no difference in
>> what order they are received; at the end the total will be the same. Ah
>> yes, they are called Commutative Replicated Data Types.
>>
>> This isn't a distributed system, but the order sensitivity of concurrent
>> computations is the same problem no matter what pipes the messages pass
>> through. This datatype is just as applicable in distributed settings as in
>> concurrent ones, for this type of use case.
>>
>> I just wanted to present it here and any comments about it are most 
>> welcome.
>>
>>
>>
>>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
I think the key thing is that the Add function I have written is not
concurrency safe. I didn't intend it to be, as I only had the use case of a
single thread managing a worker pool; I am pretty sure it is fine for this,
and for larger pools it has lower overhead in memory *and* processing.

I have revised it so the 'we are started' clause also ensures the channel
is open and operational, and the channel is closed if it is open, which
will, yes, cause a panic if the Add function is called concurrently; this
enforces the contract I specify.

It does not cover all of the cases that sync.WaitGroup does, but it covers
the biggest use case, with a lot less code (no imports at all).

https://play.golang.org/p/FwdKAVnNMk-
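
For anyone who cannot load the playground link, here is a rough standalone
sketch of the contract described above. It is my own illustration rather
than the playground code, and it uses a second done channel so that Wait
never has to read the counter directly:

    package main

    import "fmt"

    // WaitGroup counts workers over a single ops channel. Add must only be
    // called from the one controller goroutine; Done may be called from
    // workers; Wait blocks until the count returns to zero. As with
    // sync.WaitGroup, the count must not touch zero before all workers are
    // accounted for, and there is no reuse after it does in this sketch.
    type WaitGroup struct {
        ops  chan int
        done chan struct{}
    }

    func (wg *WaitGroup) Add(delta int) {
        if wg.ops == nil { // first Add starts the counter goroutine
            wg.ops = make(chan int)
            wg.done = make(chan struct{})
            go func() {
                workers := 0
                for op := range wg.ops {
                    workers += op
                    if workers < 1 {
                        close(wg.done)
                        return
                    }
                }
            }()
        }
        wg.ops <- delta
    }

    func (wg *WaitGroup) Done() { wg.ops <- -1 }

    func (wg *WaitGroup) Wait() { <-wg.done }

    func main() {
        var wg WaitGroup
        wg.Add(4) // count every worker up front, before any can finish
        for i := 0; i < 4; i++ {
            go func(n int) {
                defer wg.Done()
                fmt.Println("worker", n)
            }(i)
        }
        wg.Wait()
    }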

On Saturday, 4 May 2019 23:56:01 UTC+2, Robert Engels wrote:
>
> The reason your code is shorter is that it is broken. I tried to explain 
> that to you. Try running the stdlib wait group tests against your code. 
> They will fail. 
>
> On May 4, 2019, at 4:22 PM, Louki Sumirniy  > wrote:
>
> Those who follow some of my posts here might know that I was discussing
> the subject of channels and waitgroups recently, and I wrote a very slim
> and simple waitgroup that works purely with a channel.
>
> Note that it requires only one channel: at first I had separate ready and
> done channels, but I found a way to use nil and close to replace the ready
> and done signals for the main thread. Here is the link to it:
>
>
> https://git.parallelcoin.io/dev/9/src/branch/dev/pkg/util/chanwg/waitgroup.go
>
> For comparison, here is the code in the sync library:
>
> https://golang.org/src/sync/waitgroup.go
>
> The first thing you will notice is that it is a LOT shorter. It does not
> make use of the race library, though I can see how that would let callers
> inspect the worker count, a function I tried to add but which raced no
> matter which way the data fed out (even when copying it in the critical
> section in the New function).
>
> It is not racy if it is used exactly the way the API presents itself.
>
> I haven't written a comparison benchmark to evaluate the difference in
> overhead between the two yet, but it seems to me that my code is almost
> certainly no heavier in size, and thus cache burden, and unless all the
> extra machinery for handling unsafe pointers and the race library is a lot
> more svelte than it looks, I'd guess that my waitgroup may even have lower
> overhead. But of course such guesses are worthless if microseconds are at
> stake, so I should really write a benchmark in the test.
>
> The one last thing is that I avoid the need for atomic by using a
> replicated datatype design for the increment/decrement, which is not
> order-sensitive: given the same set of inputs, it makes no difference in
> what order they are received; at the end the total will be the same. Ah
> yes, they are called Commutative Replicated Data Types.
>
> This isn't a distributed system, but the order sensitivity of concurrent
> computations is the same problem no matter what pipes the messages pass
> through. This datatype is just as applicable in distributed settings as in
> concurrent ones, for this type of use case.
>
> I just wanted to present it here and any comments about it are most 
> welcome.
>
>
>
>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
I didn't intend Add to be used concurrently, by the way. One pool, one
thread for the controller.

On Sunday, 5 May 2019 21:09:40 UTC+2, Louki Sumirniy wrote:
>
> If at line 13, in the else clause, I first test for a non-nil wg.ops and
> close it if it's open, I think that stops that channel leak.
>
> On Sunday, 5 May 2019 18:32:14 UTC+2, Marcin Romaszewicz wrote:
>>
>> I've done quite a bit of MP programming over 20+ years now, and skimming
>> your code, I see a number of issues. There are probably a lot more that I
>> don't see. In the latest Go playground link:
>>
>> Line 14 ("if wg.started {") has a race condition, both in accessing the
>> variable and logically, in that two goroutines can come into the function
>> and both go down the wg.started = false path. You can't do this without a
>> synchronization primitive.
>>
>> Line 17 (wg.ops = make(chan int)): because of line 14, you could create
>> more than one channel here, and different goroutines will read a different
>> wg.ops, so your WaitGroup won't work. You also have a data race in
>> assigning wg.ops, because wg.ops isn't an atomic type.
>>
>> Line 41 (op, ok := <-wg.ops): you could be waiting forever because of the
>> channel leaked at line 17.
>>
>> It seems like your code works for a simple test, but if you really hammer 
>> on this thing, it's going to fail.
>>
>> -- Marcin
>>
>>
>> On Sun, May 5, 2019 at 8:33 AM Louki Sumirniy  
>> wrote:
>>
>>> Just had to drop an update: I further improved it. Now when it stops it
>>> resets itself and you can use Add again. I removed the unnecessary
>>> positive/negative checks and condensed the Add and Done functions into an
>>> Add that can take a negative delta (I tried to think of a better word, but
>>> Modify and Change... just didn't quite fit).
>>>
>>> https://play.golang.org/p/cO3sV1w9-Re
>>>
>>> One could add back separate Add and Done functions, even conveniences
>>> that just increment or decrement by one, but I don't see the point in
>>> that. It simply allows you to make Wait block while the count is above
>>> zero, which is all you need to prevent goroutine leaks.
>>>
>>> On Sunday, 5 May 2019 17:18:52 UTC+2, Louki Sumirniy wrote:
>>>>
>>>> I figured out that the API of my design had a flaw in separating starting
>>>> the goroutine from adding a new item, so as you can see in this code, I
>>>> have merged them; notice that there are basically no extraneous 'helper'
>>>> functions either:
>>>>
>>>> https://play.golang.org/p/hR1sTOAwkOm
>>>>
>>>> The flaw I made relates to API abuse - the contract of the library is 
>>>> quite simply to concurrently keep track of the adjacent 'go' calls 
>>>> starting 
>>>> goroutines, which doesn't apply until there is an increment. So you should 
>>>> not be able to abuse this one the way you did the last.
>>>>
>>>> I'm pretty sure at somewhere between 50 and 100 worker routines the 
>>>> lack of complex extra code makes this a better implementation. Probably 
>>>> the 
>>>> original is fine for smaller worker pools.
>>>>
>>>> On Sunday, 5 May 2019 13:01:58 UTC+2, Jan Mercl wrote:
>>>>>
>>>>> On Sun, May 5, 2019 at 12:45 PM Louki Sumirniy 
>>>>>  wrote: 
>>>>> > 
>>>>> > https://play.golang.org/p/5KwJQcTsUPg 
>>>>> > 
>>>>> > I fixed it. 
>>>>>
>>>>> Not really. You've introduced a data race. 
>>>>>
>>>>> jnml@4670:~/src/tmp> cat main.go 
>>>>> package main 
>>>>>
>>>>> type WaitGroup struct {
>>>>>     workers int
>>>>>     ops     chan int
>>>>> }
>>>>>
>>>>> func New() *WaitGroup {
>>>>>     wg := new(WaitGroup)
>>>>>     return wg
>>>>> }
>>>>>
>>>>> func startWait(wg *WaitGroup) {
>>>>>     wg.ops = make(chan int)
>>>>>
>>>>>     go func() {
>>>>>         done := false
>>>>>         for !done {
>>>>>             select {
>>>>>             case op := <-wg.ops:
>>>>>  

Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
If at line 13, in the else clause, I first test for a non-nil wg.ops and
close it if it's open, I think that stops that channel leak.

On Sunday, 5 May 2019 18:32:14 UTC+2, Marcin Romaszewicz wrote:
>
> I've done quite a bit of MP programming over 20+ years now, and skimming
> your code, I see a number of issues. There are probably a lot more that I
> don't see. In the latest Go playground link:
>
> Line 14 ("if wg.started {") has a race condition, both in accessing the
> variable and logically, in that two goroutines can come into the function
> and both go down the wg.started = false path. You can't do this without a
> synchronization primitive.
>
> Line 17 (wg.ops = make(chan int)): because of line 14, you could create
> more than one channel here, and different goroutines will read a different
> wg.ops, so your WaitGroup won't work. You also have a data race in
> assigning wg.ops, because wg.ops isn't an atomic type.
>
> Line 41 (op, ok := <-wg.ops): you could be waiting forever because of the
> channel leaked at line 17.
>
> It seems like your code works for a simple test, but if you really hammer 
> on this thing, it's going to fail.
>
> -- Marcin
>
>
> On Sun, May 5, 2019 at 8:33 AM Louki Sumirniy  > wrote:
>
>> Just had to drop an update: I further improved it. Now when it stops it
>> resets itself and you can use Add again. I removed the unnecessary
>> positive/negative checks and condensed the Add and Done functions into an
>> Add that can take a negative delta (I tried to think of a better word, but
>> Modify and Change... just didn't quite fit).
>>
>> https://play.golang.org/p/cO3sV1w9-Re
>>
>> One could add back separate Add and Done functions, even conveniences
>> that just increment or decrement by one, but I don't see the point in
>> that. It simply allows you to make Wait block while the count is above
>> zero, which is all you need to prevent goroutine leaks.
>>
>> On Sunday, 5 May 2019 17:18:52 UTC+2, Louki Sumirniy wrote:
>>>
>>> I figured out that the API of my design had a flaw in separating starting
>>> the goroutine from adding a new item, so as you can see in this code, I
>>> have merged them; notice that there are basically no extraneous 'helper'
>>> functions either:
>>>
>>> https://play.golang.org/p/hR1sTOAwkOm
>>>
>>> The flaw I made relates to API abuse - the contract of the library is 
>>> quite simply to concurrently keep track of the adjacent 'go' calls starting 
>>> goroutines, which doesn't apply until there is an increment. So you should 
>>> not be able to abuse this one the way you did the last.
>>>
>>> I'm pretty sure at somewhere between 50 and 100 worker routines the lack 
>>> of complex extra code makes this a better implementation. Probably the 
>>> original is fine for smaller worker pools.
>>>
>>> On Sunday, 5 May 2019 13:01:58 UTC+2, Jan Mercl wrote:
>>>>
>>>> On Sun, May 5, 2019 at 12:45 PM Louki Sumirniy 
>>>>  wrote: 
>>>> > 
>>>> > https://play.golang.org/p/5KwJQcTsUPg 
>>>> > 
>>>> > I fixed it. 
>>>>
>>>> Not really. You've introduced a data race. 
>>>>
>>>> jnml@4670:~/src/tmp> cat main.go 
>>>> package main 
>>>>
>>>> type WaitGroup struct {
>>>>     workers int
>>>>     ops     chan int
>>>> }
>>>>
>>>> func New() *WaitGroup {
>>>>     wg := new(WaitGroup)
>>>>     return wg
>>>> }
>>>>
>>>> func startWait(wg *WaitGroup) {
>>>>     wg.ops = make(chan int)
>>>>
>>>>     go func() {
>>>>         done := false
>>>>         for !done {
>>>>             select {
>>>>             case op := <-wg.ops:
>>>>                 wg.workers += op
>>>>                 if wg.workers < 1 {
>>>>                     done = true
>>>>                     close(wg.ops)
>>>>                 }
>>>>             }
>>>>         }
>>>>     }()
>>>> }
>>>>
>>>> // Add adds a non-negative number
>>>> func (wg *WaitGroup) Add(delta int) {
>>>>     if delta < 0 {
>>>>         return
>

Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
Just had to drop an update: I further improved it. Now when it stops it
resets itself and you can use Add again. I removed the unnecessary
positive/negative checks and condensed the Add and Done functions into an
Add that can take a negative delta (I tried to think of a better word, but
Modify and Change... just didn't quite fit).

https://play.golang.org/p/cO3sV1w9-Re

One could add back separate Add and Done functions, even conveniences that
just increment or decrement by one, but I don't see the point in that. It
simply allows you to make Wait block while the count is above zero, which is
all you need to prevent goroutine leaks.

On Sunday, 5 May 2019 17:18:52 UTC+2, Louki Sumirniy wrote:
>
> I figured out that the API of my design had a flaw in separating starting
> the goroutine from adding a new item, so as you can see in this code, I
> have merged them; notice that there are basically no extraneous 'helper'
> functions either:
>
> https://play.golang.org/p/hR1sTOAwkOm
>
> The flaw I made relates to API abuse - the contract of the library is 
> quite simply to concurrently keep track of the adjacent 'go' calls starting 
> goroutines, which doesn't apply until there is an increment. So you should 
> not be able to abuse this one the way you did the last.
>
> I'm pretty sure at somewhere between 50 and 100 worker routines the lack 
> of complex extra code makes this a better implementation. Probably the 
> original is fine for smaller worker pools.
>
> On Sunday, 5 May 2019 13:01:58 UTC+2, Jan Mercl wrote:
>>
>> On Sun, May 5, 2019 at 12:45 PM Louki Sumirniy 
>>  wrote: 
>> > 
>> > https://play.golang.org/p/5KwJQcTsUPg 
>> > 
>> > I fixed it. 
>>
>> Not really. You've introduced a data race. 
>>
>> jnml@4670:~/src/tmp> cat main.go 
>> package main 
>>
>> type WaitGroup struct {
>>     workers int
>>     ops     chan int
>> }
>>
>> func New() *WaitGroup {
>>     wg := new(WaitGroup)
>>     return wg
>> }
>>
>> func startWait(wg *WaitGroup) {
>>     wg.ops = make(chan int)
>>
>>     go func() {
>>         done := false
>>         for !done {
>>             select {
>>             case op := <-wg.ops:
>>                 wg.workers += op
>>                 if wg.workers < 1 {
>>                     done = true
>>                     close(wg.ops)
>>                 }
>>             }
>>         }
>>     }()
>> }
>>
>> // Add adds a non-negative number
>> func (wg *WaitGroup) Add(delta int) {
>>     if delta < 0 {
>>         return
>>     }
>>     if wg.ops == nil {
>>         startWait(wg)
>>     }
>>     wg.ops <- delta
>> }
>>
>> // Done subtracts a non-negative value from the workers count
>> func (wg *WaitGroup) Done(delta int) {
>>     if delta < 0 {
>>         return
>>     }
>>     wg.ops <- -delta
>> }
>>
>> // Wait blocks until the waitgroup decrements to zero
>> func (wg *WaitGroup) Wait() {
>>     for {
>>         if wg.workers < 1 {
>>             break
>>         }
>>         op, ok := <-wg.ops
>>         if !ok {
>>             break
>>         } else {
>>             wg.ops <- op
>>         }
>>     }
>> }
>>
>> var wg = New()
>>
>> func main() {
>>     for i := 0; i < 2; i++ {
>>         wg.Add(1)
>>         go f()
>>     }
>>     wg.Wait()
>> }
>>
>> func f() {
>>     defer wg.Done(1)
>>
>>     for i := 0; i < 2; i++ {
>>         wg.Add(1)
>>         go g()
>>     }
>> }
>>
>> func g() { wg.Done(1) }
>> jnml@4670:~/src/tmp> go run -race main.go 
>> == 
>> WARNING: DATA RACE 
>> Read at 0x00c8a000 by main goroutine: 
>>   main.(*WaitGroup).Wait() 
>>   /home/jnml/src/tmp/main.go:53 +0x6f 
>>   main.main() 
>>   /home/jnml/src/tmp/main.go:72 +0x104 
>>
>> Previous write at 0x00c8a000 by goroutine 5: 
>>   main.startWait.func1() 
>>   /home/jnml/src/tmp/main.go:21 +0xbb 
>>
>> Goroutine 5 (running) created at: 
>>   main.startWait() 
>>   /home/jnml/src/tmp/main.go:16 +0x9e 
>>   main.main() 
>>   /home/jnml/src/tmp/main.go:37 +0xdf 
>> == 
>> Found 1 data race(s) 
>> exit status 66 
>> jnml@4670:~/src/tmp> 
>>
>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
I figured out that the API of my design had a flaw in separating starting
the goroutine from adding a new item, so as you can see in this code, I have
merged them; notice that there are basically no extraneous 'helper'
functions either:

https://play.golang.org/p/hR1sTOAwkOm

The flaw I made relates to API abuse - the contract of the library is quite 
simply to concurrently keep track of the adjacent 'go' calls starting 
goroutines, which doesn't apply until there is an increment. So you should 
not be able to abuse this one the way you did the last.

I'm pretty sure at somewhere between 50 and 100 worker routines the lack of 
complex extra code makes this a better implementation. Probably the 
original is fine for smaller worker pools.

On Sunday, 5 May 2019 13:01:58 UTC+2, Jan Mercl wrote:
>
> On Sun, May 5, 2019 at 12:45 PM Louki Sumirniy 
> > wrote: 
> > 
> > https://play.golang.org/p/5KwJQcTsUPg 
> > 
> > I fixed it. 
>
> Not really. You've introduced a data race. 
>
> jnml@4670:~/src/tmp> cat main.go 
> package main 
>
> type WaitGroup struct {
>     workers int
>     ops     chan int
> }
>
> func New() *WaitGroup {
>     wg := new(WaitGroup)
>     return wg
> }
>
> func startWait(wg *WaitGroup) {
>     wg.ops = make(chan int)
>
>     go func() {
>         done := false
>         for !done {
>             select {
>             case op := <-wg.ops:
>                 wg.workers += op
>                 if wg.workers < 1 {
>                     done = true
>                     close(wg.ops)
>                 }
>             }
>         }
>     }()
> }
>
> // Add adds a non-negative number
> func (wg *WaitGroup) Add(delta int) {
>     if delta < 0 {
>         return
>     }
>     if wg.ops == nil {
>         startWait(wg)
>     }
>     wg.ops <- delta
> }
>
> // Done subtracts a non-negative value from the workers count
> func (wg *WaitGroup) Done(delta int) {
>     if delta < 0 {
>         return
>     }
>     wg.ops <- -delta
> }
>
> // Wait blocks until the waitgroup decrements to zero
> func (wg *WaitGroup) Wait() {
>     for {
>         if wg.workers < 1 {
>             break
>         }
>         op, ok := <-wg.ops
>         if !ok {
>             break
>         } else {
>             wg.ops <- op
>         }
>     }
> }
>
> var wg = New()
>
> func main() {
>     for i := 0; i < 2; i++ {
>         wg.Add(1)
>         go f()
>     }
>     wg.Wait()
> }
>
> func f() {
>     defer wg.Done(1)
>
>     for i := 0; i < 2; i++ {
>         wg.Add(1)
>         go g()
>     }
> }
>
> func g() { wg.Done(1) }
> jnml@4670:~/src/tmp> go run -race main.go 
> == 
> WARNING: DATA RACE 
> Read at 0x00c8a000 by main goroutine: 
>   main.(*WaitGroup).Wait() 
>   /home/jnml/src/tmp/main.go:53 +0x6f 
>   main.main() 
>   /home/jnml/src/tmp/main.go:72 +0x104 
>
> Previous write at 0x00c8a000 by goroutine 5: 
>   main.startWait.func1() 
>   /home/jnml/src/tmp/main.go:21 +0xbb 
>
> Goroutine 5 (running) created at: 
>   main.startWait() 
>   /home/jnml/src/tmp/main.go:16 +0x9e 
>   main.main() 
>   /home/jnml/src/tmp/main.go:37 +0xdf 
> == 
> Found 1 data race(s) 
> exit status 66 
> jnml@4670:~/src/tmp> 
>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
Ahaha! you made a race, actually! I mean I made a race, but that code 
exposed it.

I'm gonna just go away for a while. My brain doesn't really seem to be keen 
on doing this kind of thinking right at this minute.

On Sunday, 5 May 2019 12:54:25 UTC+2, Louki Sumirniy wrote:
>
> hang on, sorry to be so chatty on this but I'm learning a lot about 
> handling edge cases from this, so I need to comment further
>
> ok, I got it working for that test also:
>
> https://play.golang.org/p/nd_EuCSOWto
>
> I can tell by the fact you used the 'sync' package that you didn't in fact
> test what you wrote, but the above completes the code and shows the
> condition is handled.
>
> On Sunday, 5 May 2019 12:27:10 UTC+2, Jan Mercl wrote:
>>
>> On Sun, May 5, 2019 at 12:06 PM Louki Sumirniy 
>>  wrote: 
>>
>> > Is there ANY other use case for waitgroup other than preventing a 
>> goroutine leak or ensuring that it empties the channels at the end of 
>> execution? 
>>
>> I don't think this question is related to the correctness of your 
>> shorter implementation of WaitGroup. Anyway, what about such code? 
>>
>> var wg sync.WaitGroup 
>>
>> func main() { 
>> defer wg.Wait() 
>>
>> if someCondition { 
>> for i := 0; i < 4; i++ { 
>> wg.Add(1) 
>> go worker(i) 
>> } 
>> } 
>> ... 
>> } 
>>
>> func worker(i int) { 
>> defer wg.Done() 
>>
>> ... 
>> } 
>>
>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
hang on, sorry to be so chatty on this but I'm learning a lot about 
handling edge cases from this, so I need to comment further

ok, I got it working for that test also:

https://play.golang.org/p/nd_EuCSOWto

I can tell by the fact you used the 'sync' package that you didn't in fact
test what you wrote, but the above completes the code and shows the
condition is handled.

On Sunday, 5 May 2019 12:27:10 UTC+2, Jan Mercl wrote:
>
> On Sun, May 5, 2019 at 12:06 PM Louki Sumirniy 
> > wrote: 
>
> > Is there ANY other use case for waitgroup other than preventing a 
> goroutine leak or ensuring that it empties the channels at the end of 
> execution? 
>
> I don't think this question is related to the correctness of your 
> shorter implementation of WaitGroup. Anyway, what about such code? 
>
> var wg sync.WaitGroup
>
> func main() {
>     defer wg.Wait()
>
>     if someCondition {
>         for i := 0; i < 4; i++ {
>             wg.Add(1)
>             go worker(i)
>         }
>     }
>     ...
> }
>
> func worker(i int) {
>     defer wg.Done()
>
>     ...
> }
>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
OK, you found another flaw :) Not adding is now accounted for, but waiting
twice isn't.

On Sunday, 5 May 2019 12:27:10 UTC+2, Jan Mercl wrote:
>
> On Sun, May 5, 2019 at 12:06 PM Louki Sumirniy 
> > wrote: 
>
> > Is there ANY other use case for waitgroup other than preventing a 
> goroutine leak or ensuring that it empties the channels at the end of 
> execution? 
>
> I don't think this question is related to the correctness of your 
> shorter implementation of WaitGroup. Anyway, what about such code? 
>
> var wg sync.WaitGroup
>
> func main() {
>     defer wg.Wait()
>
>     if someCondition {
>         for i := 0; i < 4; i++ {
>             wg.Add(1)
>             go worker(i)
>         }
>     }
>     ...
> }
>
> func worker(i int) {
>     defer wg.Done()
>
>     ...
> }
>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
https://play.golang.org/p/5KwJQcTsUPg

I fixed it.

The code you have written there has weird non-standard UTF-8 code points in
it. It won't make any difference whether you defer the Wait(); if no Add is
called, the goroutine does not start now.

On Sunday, 5 May 2019 12:27:10 UTC+2, Jan Mercl wrote:
>
> On Sun, May 5, 2019 at 12:06 PM Louki Sumirniy 
> > wrote: 
>
> > Is there ANY other use case for waitgroup other than preventing a 
> goroutine leak or ensuring that it empties the channels at the end of 
> execution? 
>
> I don't think this question is related to the correctness of your 
> shorter implementation of WaitGroup. Anyway, what about such code? 
>
> var wg sync.WaitGroup
>
> func main() {
>     defer wg.Wait()
>
>     if someCondition {
>         for i := 0; i < 4; i++ {
>             wg.Add(1)
>             go worker(i)
>         }
>     }
>     ...
> }
>
> func worker(i int) {
>     defer wg.Done()
>
>     ...
> }
>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
And mine is also incorrect. Now it handles your test but not the intended
operation. :)

On Sunday, 5 May 2019 12:25:29 UTC+2, Louki Sumirniy wrote:
>
> With your (imho incorrect) code, the following small modification defers 
> starting the channel until an add has been called and passes the test:
>
> https://play.golang.org/p/sEFcwcPMdHF
>
> On Sunday, 5 May 2019 11:54:40 UTC+2, Jan Mercl wrote:
>>
>> On Sun, May 5, 2019 at 11:46 AM Louki Sumirniy 
>>  wrote: 
>>
>> > That would mean your code, which breaks this code, also breaks the rule 
>> about never starting a goroutine without having a way to stop it. My code 
>> only fails when the caller is also failing. 
>>
>> My code does not even contain a go statement. How it can break a rule 
>> about starting goroutines? 
>>
>> By "my code" I mean lines 62 to 66 here 
>> https://play.golang.org/p/v3OSWxTpTQY, the rest is your code. 
>>
>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
With your (imho incorrect) code, the following small modification defers 
starting the channel until an add has been called and passes the test:

https://play.golang.org/p/sEFcwcPMdHF

On Sunday, 5 May 2019 11:54:40 UTC+2, Jan Mercl wrote:
>
> On Sun, May 5, 2019 at 11:46 AM Louki Sumirniy 
> > wrote: 
>
> > That would mean your code, which breaks this code, also breaks the rule 
> about never starting a goroutine without having a way to stop it. My code 
> only fails when the caller is also failing. 
>
> My code does not even contain a go statement. How it can break a rule 
> about starting goroutines? 
>
> By "my code" I mean lines 62 to 66 here 
> https://play.golang.org/p/v3OSWxTpTQY, the rest is your code. 
>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
Is there ANY other use case for waitgroup other than preventing a goroutine 
leak or ensuring that it empties the channels at the end of execution? 

On Sunday, 5 May 2019 11:54:40 UTC+2, Jan Mercl wrote:
>
> On Sun, May 5, 2019 at 11:46 AM Louki Sumirniy 
> > wrote: 
>
> > That would mean your code, which breaks this code, also breaks the rule 
> about never starting a goroutine without having a way to stop it. My code 
> only fails when the caller is also failing. 
>
> My code does not even contain a go statement. How it can break a rule 
> about starting goroutines? 
>
> By "my code" I mean lines 62 to 66 here 
> https://play.golang.org/p/v3OSWxTpTQY, the rest is your code. 
>



Re: [go-nuts] request for feedback on this channel based waitgroup

2019-05-05 Thread Louki Sumirniy
I didn't even think of the condition your code exposes, partly because if I
wrote code that starts goroutines and doesn't have a section to close them,
the deadlock this creates would also expose the fact that the worker pool is
not being stopped correctly either.

That would mean your code, which breaks this code, also breaks the rule
about never starting a goroutine without having a way to stop it. My code
only fails when the caller is also failing.

On Saturday, 4 May 2019 23:53:03 UTC+2, Jan Mercl wrote:
>
> On Sat, May 4, 2019 at 11:22 PM Louki Sumirniy 
> > wrote: 
>
> > The first thing you will notice is that it is a LOT shorter. 
>
> It fails a simple test: https://play.golang.org/p/v3OSWxTpTQY The 
> original is ok: https://play.golang.org/p/OhB8qZl2QLQ 
>
> Another problem is starting a new goroutine per WaitGroup. Not only does
> that consume more resources, but it is a way to leak the behind-the-scenes
> goroutine on any unexpected/incorrect usage pattern. The original is
> immune to this problem.
>
> Also, please note that a select statement with one case only and no 
> default case can be replaced by that single case operation itself. 
>
> And closing the channel on the receiving side is not something that 
> should usually be done. 
>
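
For illustration, here are Jan's last two points in a tiny standalone
example (my own sketch, not code from the thread): a select with a single
case and no default is just that case operation, and the channel is closed
on the sending side.

    package main

    import "fmt"

    func main() {
        ops := make(chan int)

        go func() {
            ops <- 1
            close(ops) // the sender, not the receiver, closes the channel
        }()

        // Instead of:
        //     select {
        //     case op := <-ops:
        //         fmt.Println(op)
        //     }
        // a single-case select with no default is equivalent to the
        // receive itself:
        op := <-ops
        fmt.Println(op)
    }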



[go-nuts] request for feedback on this channel based waitgroup

2019-05-04 Thread Louki Sumirniy
Those who follow some of my posts here might know that I was discussing the
subject of channels and waitgroups recently, and I wrote a very slim and
simple waitgroup that works purely with a channel.

Note that it requires only one channel: at first I had separate ready and
done channels, but I found a way to use nil and close to replace the ready
and done signals for the main thread. Here is the link to it:

https://git.parallelcoin.io/dev/9/src/branch/dev/pkg/util/chanwg/waitgroup.go

For comparison, here is the code in the sync library:

https://golang.org/src/sync/waitgroup.go

The first thing you will notice is that it is a LOT shorter. It does not
make use of the race library, though I can see how that would let callers
inspect the worker count, a function I tried to add but which raced no
matter which way the data fed out (even when copying it in the critical
section in the New function).

It is not racy if it is used exactly the way the API presents itself.

I haven't written a comparison benchmark to evaluate the difference in
overhead between the two yet, but it seems to me that my code is almost
certainly no heavier in size, and thus cache burden, and unless all the
extra machinery for handling unsafe pointers and the race library is a lot
more svelte than it looks, I'd guess that my waitgroup may even have lower
overhead. But of course such guesses are worthless if microseconds are at
stake, so I should really write a benchmark in the test.

The one last thing is that I avoid the need for atomic by using a
replicated datatype design for the increment/decrement, which is not
order-sensitive: given the same set of inputs, it makes no difference in
what order they are received; at the end the total will be the same. Ah
yes, they are called Commutative Replicated Data Types.

This isn't a distributed system, but the order sensitivity of concurrent
computations is the same problem no matter what pipes the messages pass
through. This datatype is just as applicable in distributed settings as in
concurrent ones, for this type of use case.
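
A minimal standalone illustration of that commutativity claim (my own
sketch, not code from the linked repository): a single goroutine owns the
running total, and the +1/-1 deltas can arrive over the channel in any
order.

    package main

    import "fmt"

    func main() {
        deltas := make(chan int)
        total := make(chan int)

        // One goroutine owns the counter, so no lock or atomic is needed.
        go func() {
            sum := 0
            for i := 0; i < 6; i++ {
                sum += <-deltas // arrival order does not matter
            }
            total <- sum
        }()

        // Six goroutines deliver a balanced set of increments and
        // decrements in whatever order the scheduler picks.
        for _, d := range []int{+1, +1, +1, -1, -1, -1} {
            go func(d int) { deltas <- d }(d)
        }

        fmt.Println(<-total) // always prints 0
    }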

I just wanted to present it here and any comments about it are most welcome.




[go-nuts] Re: Exporting and renaming

2019-05-04 Thread Louki Sumirniy
The rename tool is great, but be aware that it doesn't work if the compiler
encounters even one error anywhere in the project. If the code is already
complete and runs fine, rename works just fine, and is more selective than
a blanket search and replace. The rename tool parses the code into an AST,
so it fully respects identifiers with the same name that are not in the
scope of the identifier you are trying to rename. This is a good thing, but
the code has to be 100% compilable, and there seem to be a number of linter
conditions that block renaming even when you can compile and run the code.

I'd also suggest that you first want to draw a clear line around the code, 
separate it from the package it was in, to make the task easier before you 
start using rename.

gorename is very finicky, and I personally rarely find I can even use it; I
just reach for search and replace anyway.
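
For reference, an invocation looks roughly like this (the package path and
identifiers here are made-up examples, not from the thread):

    gorename -from '"github.com/you/project/mypkg".OldName' -to NewName

The -from argument is a fully qualified identifier (it can also name a
struct field, e.g. '"github.com/you/project/mypkg".MyType.OldField'), and
the tool refuses to run unless the whole program type-checks, which is the
finickiness described above.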

On Friday, 3 May 2019 21:58:34 UTC+2, Tamás Gulácsi wrote:
>
> Rename the struct fields first, then move it to its own package.



Re: [go-nuts] Re: the Dominance of English in Programming Languages

2019-05-04 Thread Louki Sumirniy
Oh, I don't mean 'funny' in a derogatory way. Some of them are beautiful,
and I find the languages that use them fascinating: the grammar, the
etymology, the differences between them. For me language is a general
category of much interest, and programming very specific and use-targeted,
but for sure, many computer languages are affected by the languages of
their designers, and some are even named to signify that, such as RPN,
which is closely related to Lisp's syntax.

The issue of capitalisation and equivalence of symbols would certainly make
the use of scripts without capitalisation difficult. It's great to see that
someone is actually caring enough about it to facilitate it. Go idiom
prescribes certain policies with naming - I'd guess that if those rules
were tightened up a bit more, and tools built to lint them, fully
translating into another script (even potentially RTL) would be a lot
easier than in the good old days when the compiler only recognised ASCII.

On Friday, 3 May 2019 19:30:33 UTC+2, Ian Lance Taylor wrote:
>
> On Fri, May 3, 2019 at 8:25 AM Louki Sumirniy 
> > wrote: 
> > 
> > https://en.wikipedia.org/wiki/Unicode#General_Category_property 
> > 
> > This section in the wp entry lists these categories. 
> > 
> > So, in Go, actually, all identifiers can be in practically any language. 
> Even many of those funny african scripts and west asian languages! 
>
> Yes.  Note that those scripts are not funny for the people who use 
> them every day, they are just normal writing. 
>
> It gets a little more complicated when discussing which identifiers 
> are visible in other packages.  See https://golang.org/issue/5763 and 
> https://golang.org/issue/22188.  Separately, but related to this 
> general topic, see also https://golang.org/issue/20706 and 
> https://golang.org/issue/27896. 
>
> Ian 
>



Re: [go-nuts] Re: What does "identifier...type" mean in a function definition?

2019-05-03 Thread Louki Sumirniy
Whenever I write appends and I'm splicing slices together, I often get an
error saying the second slice is the wrong type (it wants the slice element
type). So, doesn't that mean the trailing ellipsis is like an iterator
feeding out one element at a time? Is there some reason this is needed?
If the slice types match, the unrolling could be implicit; the programmer
obviously intends the two slices to be spliced into one new one...
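
A minimal standalone example of the error and the fix (my own illustration):

    package main

    import "fmt"

    func main() {
        a := []int{1, 2, 3}
        b := []int{4, 5, 6}

        // append(a, b) does not compile: append's variadic parameter wants
        // individual int elements, not a []int. The trailing ellipsis
        // passes b's elements as that variadic argument list:
        c := append(a, b...)

        fmt.Println(c) // [1 2 3 4 5 6]
    }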

On Friday, 3 May 2019 19:13:02 UTC+2, Ian Lance Taylor wrote:
>
> On Fri, May 3, 2019 at 7:57 AM Louki Sumirniy 
> > wrote: 
> > 
> > Ellipsis makes the parameter type into a slice, but in append it makes 
> the append repeat for each element, or do I misunderstand this? 
> > 
> There is a syntactic distinction between them too. For parameters it is a
> prefix to the type; for append it is a suffix to the name. It neatly
> alludes to the direction in which the affected variable is operated on -
> inside the function, name ...type means name []type, and for append we are
> splitting the slice into a tuple (internally), at least as I understand
> it; the parameter is the opposite, tuple to slice.
> > 
> > I sometimes lament the lack of a tuple type in Go (I previously worked a 
> lot with Python and PHP), but []interface{} isn't that much more difficult 
> and the ellipsis syntax is quite handy for these cases - usually loading or 
> otherwise modifying essentially a super simple container array. 
>
> For any function F and some type T declared as 
>
> func F(x ...T) {} 
>
> within F x will have type []T.  You can call F with a slice s of type []T 
> as 
>
> F(s...) 
>
> That will pass the slice s to F as the final parameter.  This works 
> for any variadic function F. 
>
> The append function is implicitly declared as 
>
> func append(to []byte, add ...byte) 
>
> You can call it as 
>
> append(to, add...) 
>
> Here F is append and T is byte. 
>
> There is a special case for append with an argument of type string, 
> but other than that append is just like any other variadic function. 
>
> Ian 
>
>
>
> > On Friday, 3 May 2019 16:44:47 UTC+2, Ian Lance Taylor wrote: 
> >> 
> >> On Fri, May 3, 2019 at 7:34 AM Louki Sumirniy 
> >>  wrote: 
> >> > 
> >> > The ellipsis has two uses in Go, one is in variadic parameters, the 
> other is in the slice append operator. It is essentially an iterator that 
> takes a list and turns it into a slice (parameters) or takes a slice and 
> turns it into a recursive iteration (append). Parameters with the ellipsis 
> are addressed inside the function as a slice of the type after the 
> ellipsis. 
> >> 
> >> Note that there is nothing special about append here, it's just like 
> >> passing a slice to any other variadic parameter.  See 
> >> https://golang.org/ref/spec#Passing_arguments_to_..._parameters . 
> >> 
> >> Ian 
> > 
>



Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-03 Thread Louki Sumirniy
You'll probably be amused to know that I started studying for Network+ but 
didn't hold the job long enough to finish it. Hence my somewhat muddled 
chatter about it. I'm one of those people who usually starts with a brief 
overview and only digs into details when the task directly requires it, 
hence the imprecision of my recollection of these kinds of details. Maybe I 
should read more type less :) Makes me look kinda stupid :)

My current work is putting me through a lot of study of several low level 
elements of Go, networking and cryptography, so as I go I get better at it.

On Friday, 3 May 2019 12:10:32 UTC+2, ohir wrote:
>
> On Thu, 2 May 2019 17:22:15 -0700 (PDT) 
> Louki Sumirniy > wrote: 
>
> > I'm quite aware of that it's part of the ARP, and allows the router to 
>
>https://tools.ietf.org/html/rfc826 [read updates too] 
>
> Main source of the knowledge of Internet internals is publicly available 
> at 
> https://tools.ietf.org/rfc/index. Make it (the index) your bedtime 
> reading for 
> a while. It helps a lot with further searches to have such an aerial view. 
>
> Then I'd suggest to fact check against suitable RFC(s) the details in a 
> text 
> you're about to send. It certainly would be fruitful both for your 
> learning and 
> for your future readers. 
>
> P.S. It is noble that you are willing to learn and to share your thoughts 
> about your current understanding of the matter but often presented to us 
> understanding is a bit off.  And sometimes, like in this thread, it 
> appears 
> straight out of some parallel universe. 
>
>
> Happy learning :) 
>
> -- 
> Wojciech S. Czarnecki 
>  << ^oo^ >> OHIR-RIPE 
>



[go-nuts] BCE and stdlib

2019-05-03 Thread Louki Sumirniy
https://i.postimg.cc/BbQDrTy1/Screenshot-from-2019-05-03-18-06-02-2x.png

I installed a few nicknacks today after reading stuff about BCE and escape 
analysis, and made some change to my config and now vscode is showing me 
all the bounds checks. 

There are a LOTTA bounds checks in the standard library. The list shown in
the picture totals around 200 or so items, mostly just from "os".

I am sure many of them have a trivial effect on performance, being part of
IO-bound code as far as I can see there; probably it is not as bad as it
looks.

I think one of the users on this forum created the tool that gives me 
access to this information. 

I don't want to spend *too* much time on optimisation, especially until I
get through debugging, but these tools are fantastic.
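
As an aside, the usual trick for eliminating those checks is to test the
highest index once up front; here is a small standalone sketch (my own
example, not from the stdlib), and if I recall correctly the remaining
checks can be listed with go build -gcflags="-d=ssa/check_bce/debug=1":

    package main

    func sum4(s []int) int {
        // Checking the highest index first lets the compiler prove that
        // the four accesses below are in range, so it can drop their
        // individual bounds checks.
        _ = s[3]
        return s[0] + s[1] + s[2] + s[3]
    }

    func main() {
        println(sum4([]int{1, 2, 3, 4}))
    }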



Re: [go-nuts] Re: the Dominance of English in Programming Languages

2019-05-03 Thread Louki Sumirniy
https://en.wikipedia.org/wiki/Unicode#General_Category_property 

This section of the Wikipedia entry lists these categories.

So, in Go, actually, all identifiers can be in practically any language.
Even many of those funny African scripts and West Asian languages!
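
A quick standalone check that this really compiles (identifiers invented
for the example):

    package main

    import "fmt"

    func main() {
        // All of these are legal identifiers: the spec only requires
        // Unicode letters, with digits allowed after the first character.
        π := 3.14159
        名前 := "gopher"
        größe := 42
        fmt.Println(π, 名前, größe)
    }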

On Friday, 3 May 2019 17:17:56 UTC+2, Jan Mercl wrote:
>
> On Fri, May 3, 2019 at 5:14 PM Louki Sumirniy 
> > wrote: 
>
> > If the 'letter' classification is the same as used in .NET's unicode 
> implementation, this info lists the categories of symbols that unicode 
> classifies as letters: 
>
> https://golang.org/ref/spec#Characters 
>
> """" 
> In The Unicode Standard 8.0, Section 4.5 "General Category" defines a 
> set of character categories. 
> Go treats all characters in any of the Letter categories Lu, Ll, Lt, 
> Lm, or Lo as Unicode letters, and 
> those in the Number category Nd as Unicode digits. 
> """" 
>



Re: [go-nuts] Re: the Dominance of English in Programming Languages

2019-05-03 Thread Louki Sumirniy
If the 'letter' classification is the same as used in .NET's unicode 
implementation, this info lists the categories of symbols that unicode 
classifies as letters:

https://docs.microsoft.com/en-us/dotnet/api/system.char.isletter?view=netframework-4.8

On Friday, 3 May 2019 17:11:55 UTC+2, Louki Sumirniy wrote:
>
> Oh, I *can* use UTF-8 in identifiers?? nooo:
>
> Identifiers name program entities such as variables and types. An 
> identifier is a sequence of one or more letters and digits. The first 
> character in an identifier must be a letter.
>
> identifier = letter { letter | unicode_digit } .
>
>  
>
> ...
>
>  
>
> Letters and digits
> The underscore character _ (U+005F) is considered a letter.
>
> letter= unicode_letter | "_" .
> decimal_digit = "0" … "9" .
> octal_digit   = "0" … "7" .
> hex_digit = "0" … "9" | "A" … "F" | "a" … "f" .
>
>
> but `unicode_letter` - what is that? Does that include such as æ ? If so 
> then I guess it would also allow ⻄ too.
>
> I have seen source code from Chinese authors that has comments in
> Traditional Chinese. So does this mean that, in theory, I can use any valid
> Unicode letter from alphabetic (or even pictographic) scripts??
>
> On Friday, 3 May 2019 16:43:09 UTC+2, Ian Lance Taylor wrote:
>>
>> On Fri, May 3, 2019 at 7:28 AM Louki Sumirniy 
>>  wrote: 
>> > 
>> > It would be incredibly computationally costly to add a natural language 
>> translator to the compilation process. I'm not sure, but I think also 
>> identifiers in Go can only be plain ASCII, ie pure latin script (and 
>> initial character must be a letter) 
>>
>> That turns out not to be the case.  The rules for identifiers are at 
>> https://golang.org/ref/spec#Identifiers, where the definition of 
>> "letter" is at https://golang.org/ref/spec#Characters . 
>>
>> Ian 
>>
>



Re: [go-nuts] Re: the Dominance of English in Programming Languages

2019-05-03 Thread Louki Sumirniy
Oh, I *can* use UTF-8 in identifiers?? nooo:

Identifiers name program entities such as variables and types. An 
identifier is a sequence of one or more letters and digits. The first 
character in an identifier must be a letter.

identifier = letter { letter | unicode_digit } .

 

...

 

Letters and digits
The underscore character _ (U+005F) is considered a letter.

letter= unicode_letter | "_" .
decimal_digit = "0" … "9" .
octal_digit   = "0" … "7" .
hex_digit = "0" … "9" | "A" … "F" | "a" … "f" .


but `unicode_letter` - what is that? Does that include such as æ ? If so 
then I guess it would also allow ⻄ too.

I have seen source code from Chinese authors that has comments in
Traditional Chinese. So does this mean that, in theory, I can use any valid
Unicode letter from alphabetic (or even pictographic) scripts??

On Friday, 3 May 2019 16:43:09 UTC+2, Ian Lance Taylor wrote:
>
> On Fri, May 3, 2019 at 7:28 AM Louki Sumirniy 
> > wrote: 
> > 
> > It would be incredibly computationally costly to add a natural language 
> translator to the compilation process. I'm not sure, but I think also 
> identifiers in Go can only be plain ASCII, ie pure latin script (and 
> initial character must be a letter) 
>
> That turns out not to be the case.  The rules for identifiers are at 
> https://golang.org/ref/spec#Identifiers, where the definition of 
> "letter" is at https://golang.org/ref/spec#Characters . 
>
> Ian 
>



Re: [go-nuts] Re: Does fmt.Fprint use WriteString ?

2019-05-03 Thread Louki Sumirniy
You are totally correct about this - the only real performance booster for 
IO bound operations is buffering, which delays the writes to be less 
frequent or follow a clock.

I wrote a logging library that used channels, and it was pointed out to me 
that this doesn't have a big effect. But I think it *should* allow a 
multi-core system to keep one or a few cores dedicated to the CPU bound 
processing part of the work, and the IO, with all its waiting, to other 
threads/cores. 

So using WriteString would make sense, if the Writer it addresses is
buffered and performance needs to be squeezed just a little more. But the
buffer matters far more.
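
A minimal standalone sketch of the buffering point (my own example):

    package main

    import (
        "bufio"
        "os"
    )

    func main() {
        // Wrap stdout in a buffered writer so the many small writes below
        // are flushed as a few large writes instead of one syscall each.
        w := bufio.NewWriter(os.Stdout)
        defer w.Flush()

        for i := 0; i < 1000; i++ {
            w.WriteString("hello world\n")
        }
    }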

On Friday, 3 May 2019 16:54:37 UTC+2, Robert Engels wrote:
>
> I suggest that it might benefit you to understand cost of IO. In most 
> systems the IO cost dwarfs the CPU cost of optimizations like these. I am 
> not saying it never matters - I have significant HFT experience and sone 
> HPC - but in MOST cases it holds true. 
>
> So micro optimizing the CPU usually has little effect on total runtime. 
>
> Broken algs, ON^2, are another story. 
>
> On May 3, 2019, at 9:38 AM, Louki Sumirniy  > wrote:
>
> There is a big difference between the parameters of these two functions.
> One takes a slice of interface values, the other only a single string
> parameter. The fmt print functions all have nasty, messy interface
> switching and reflection internally, hence the significant overhead.
>
> A lot of people clearly don't know this, also - there are builtin print()
> and println() functions in Go. If the output is stdout, these are probably
> the most efficient ways to throw strings at it. Clearly the same goes for
> io.WriteString, but with the option of using another Writer instead of
> stdout.
>
> On Monday, 22 April 2019 03:13:22 UTC+2, codi...@gmail.com wrote:
>>
>> Hi gophers! Just wondering if in a Handler I should (w is the 
>> http.ResponseWriter):
>>
>> fmt.Fprint(w, "Hello world")
>>
>> or is it better to 
>>
>> io.WriteString(w, "Hello world")
>>
>> or is it the same if fmt.Fprint already uses WriteString internally?
>>
>
>



Re: [go-nuts] Re: What does "identifier...type" mean in a function definition?

2019-05-03 Thread Louki Sumirniy
Ellipsis makes the parameter type into a slice, but in append it makes the 
append repeat for each element, or do I misunderstand this?

There is a syntactic distinction between them too. For parameters it is a
prefix to the type; for append it is a suffix to the name. It neatly alludes
to the direction in which the affected variable is operated on - inside the
function, name ...type means name []type, and for append we are splitting
the slice into a tuple (internally), at least as I understand it; the
parameter is the opposite, tuple to slice.

I sometimes lament the lack of a tuple type in Go (I previously worked a 
lot with Python and PHP), but []interface{} isn't that much more difficult 
and the ellipsis syntax is quite handy for these cases - usually loading or 
otherwise modifying essentially a super simple container array.

On Friday, 3 May 2019 16:44:47 UTC+2, Ian Lance Taylor wrote:
>
> On Fri, May 3, 2019 at 7:34 AM Louki Sumirniy 
> > wrote: 
> > 
> > The ellipsis has two uses in Go, one is in variadic parameters, the 
> other is in the slice append operator. It is essentially an iterator that 
> takes a list and turns it into a slice (parameters) or takes a slice and 
> turns it into a recursive iteration (append). Parameters with the ellipsis 
> are addressed inside the function as a slice of the type after the 
> ellipsis. 
>
> Note that there is nothing special about append here, it's just like 
> passing a slice to any other variadic parameter.  See 
> https://golang.org/ref/spec#Passing_arguments_to_..._parameters . 
>
> Ian 
>



[go-nuts] Re: Go Profiling helper extension for VSCode

2019-05-03 Thread Louki Sumirniy
Ah, just want to give a big thanks for making this tool, I will be needing 
to do a lot of optimisation in a few weeks once I finish all the 
implementation and initial debugging. This should help a lot.

On Monday, 22 April 2019 16:20:04 UTC+2, mediam...@gmail.com wrote:
>
> Hello,
>
>
> we have published our first version of VSCode extension to help in 
> profiling of your benchmarks.
> This is a first test version. Please post your feedback here.
>
>
> marketplace link:
> https://marketplace.visualstudio.com/items?itemName=MaxMedia.go-prof
>



[go-nuts] Re: Does fmt.Fprint use WriteString ?

2019-05-03 Thread Louki Sumirniy
There is a big difference between the parameters of these two functions. 
One is variadic over interface{}, the other takes only a single string 
parameter. The fmt print functions all do interface switching and 
reflection internally, hence the extra overhead.

A lot of people don't seem to know this, but there are also builtin print() 
and println() functions in Go. Note that they write to standard error, not 
stdout, and the spec does not guarantee they will stay in the language, so 
they are really only suited to quick debugging. io.WriteString avoids the 
fmt machinery while letting you choose any Writer as the destination.
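
For anyone curious about the actual cost, a rough comparison is a pair of 
benchmarks against a throwaway writer - a sketch only, saved as something 
like fprint_test.go (the file name is my choice), with numbers that will 
vary by machine and Go version:

package fprintbench

import (
	"fmt"
	"io"
	"io/ioutil"
	"testing"
)

// BenchmarkFprint goes through fmt's reflection-based formatting even
// for a single string argument.
func BenchmarkFprint(b *testing.B) {
	for i := 0; i < b.N; i++ {
		fmt.Fprint(ioutil.Discard, "Hello world")
	}
}

// BenchmarkWriteString calls the writer's WriteString method if it has
// one, otherwise a plain Write.
func BenchmarkWriteString(b *testing.B) {
	for i := 0; i < b.N; i++ {
		io.WriteString(ioutil.Discard, "Hello world")
	}
}

Run it with go test -bench . to compare on your own machine.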

On Monday, 22 April 2019 03:13:22 UTC+2, codi...@gmail.com wrote:
>
> Hi gophers! Just wondering if in a Handler I should (w is the 
> http.ResponseWriter):
>
> fmt.Fprint(w, "Hello world")
>
> or is it better to 
>
> io.WriteString(w, "Hello world")
>
> or is it the same if fmt.Fprint already uses WriteString internally?
>



[go-nuts] Re: the Dominance of English in Programming Languages

2019-05-03 Thread Louki Sumirniy
It would be incredibly computationally costly to add a natural-language 
translator to the compilation process. The keywords themselves are fixed 
English words, although as far as I know Go identifiers can actually use 
Unicode letters, not just plain ASCII (the initial character must be a 
letter or underscore).

These days, most countries that use non-Latin scripts have some (usually 
fairly standardised) romanisation rules.

The thing is, Go, speaking in terms of idiom, has the opinion that names 
should be chosen carefully and follow rules about stutter and so forth. I 
find myself reaching for a thesaurus a lot when writing code. If I came 
across code with words like 'benutzer' or 'nachalnik' I'd know what I was 
looking at, but only because I know a fair amount of vocabulary from the 
Latin and Germanic continental European languages.

It's unfortunate, but I don't think it's really a problem. Language 
learning is quite peculiar - most polyglots, who speak 4 or more 
significantly different languages, will tell you the more languages you 
learn the easier each next one gets. Computer programming is about language 
also. I can express a simple algorithm in about 5 major computer languages, 
as can most programmers who are much over the age of 40.

The thing is, with the possible exception of German and Russian, almost 
every paper written on any computer science subject is in English. You can 
hardly understand the principles without English - and good luck getting 
into distributed systems or language processing without it. Plus, English 
conveniently has such a mix of syntax and semantics that it resembles the 
variation found in programming languages: left-branching, right-branching, 
prefix, suffix, infix, compound words and modifiers, and so on. By 
contrast, Georgian is almost entirely left-branching. Go's function syntax 
is an example of right-branching syntax (C's is a jumble of both).

On Monday, 29 April 2019 07:36:37 UTC+2, Chris Burkert wrote:
>
> I recently read an article (German) about the dominance of English in 
> programming languages [1]. It is about the fact that keywords in a language 
> typically are English words. Thus it would be hard for non English speakers 
> to learn programming - argue the authors.
>
> I wonder if there is really demand for that but of course it is weird to 
> ask that on an English list.
>
> I also wonder if it would be possible on a tooling level to support 
> keywords in other languages e.g. via build tags: // +language german
>
> Besides keywords we have a lot of names for functions, methods, structs, 
> interfaces and so on. So there is definitely more to it.
>
> While such a feature may be beneficial for new programmers, to me it comes 
> with many downsides like: readability, ambiguous naming / clashes, global 
> teams ...
>
> I also believe the authors totally miss the point that learning Go is 
> about to learn a language as it is because it is the language of the 
> compiler.
>
> However I find the topic interesting and want to hear about your opinions.
>
> thanks - Chris
>
> 1: 
>
> https://www.derstandard.de/story/2000101285309/programmieren-ist-fuer-jeden-aber-nur-wenn-man-englisch-spricht
>



[go-nuts] Re: What does "identifier...type" mean in a function definition?

2019-05-03 Thread Louki Sumirniy
The ellipsis has two uses in Go: one in variadic parameter declarations, 
the other when passing a slice to a variadic call such as append. In the 
first it collects the arguments into a slice; in the second it expands a 
slice into individual arguments. Parameters declared with the ellipsis are 
addressed inside the function as a slice of the type that follows it.

The two uses point in opposite directions, but they are two sides of the 
same mechanism: one bundles arguments into a slice, the other unbundles a 
slice into arguments.
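
As a minimal check (standard library only) that the whitespace placement 
asked about in the quoted message makes no difference to the compiler - the 
two declarations below are token-for-token identical, and gofmt simply 
normalises the first into the second:

package main

import "fmt"

// Whitespace around "..." is not significant; gofmt rewrites the first
// form into the second.
func a(args... interface{}) int { return len(args) }
func b(args ...interface{}) int { return len(args) }

func main() {
	fmt.Println(a(1, "x", true), b(1, "x", true)) // 3 3
}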

On Thursday, 25 April 2019 21:35:16 UTC+2, Andrew Price wrote:
>
> Hey folks,
>
> A colleague wrote this:
>
> func (l *Logger) log2StdFormatted(level string, msgOrFormatOrArg 
>> interface{}, args... interface{}) (formatted string) {
>
>
> Note the position of the space *between* the ... and interface{}, not 
> before the ...
>
> [btw does "..." have an easy-to-search-for name?]
>
> It compiles, I think, but what what does it mean? My braincell hurts.
>
> I 'corrected' this and now my colleague is complaining :(
>
> Andy
>



[go-nuts] Re: the Dominance of English in Programming Languages

2019-05-03 Thread Louki Sumirniy
I'd also go further and point out that Go takes the somewhat unusual 
position that code reuse is not a holy grail. If I really needed a library 
that was written in Portuguese, it would not be hard to rename everything 
to make it easier for me to read.

On Tuesday, 30 April 2019 21:46:08 UTC+2, jucie@zanthus.com.br wrote:
>
> Here in Brazil we usually code in Brazil's native language: Portuguese. 
> Yes, there are some companies that mandate the use of English, albeit the 
> additional costs of doing so, but that is very exceptional. The vast 
> majority of brazilian software houses use Portuguese everywhere.
>
> The only English words are the programming language keywords and library 
> function calls, for obvious reasons. This scheme has the advantage that it 
> differentiates code created in house from foreign code.
>
> We pick words from the problem domain. So, if we are coding retail 
> software for a chain store, we don't even think about using the word 
> "INVOICE" ( are you kidding? ) Our clients don't say "invoice", they say 
> "nota fiscal", so we code using the name notaFiscal.
>
> That is not nationalism, it's a practical matter and, generally speaking, 
> it works great.
>



Re: [go-nuts] multiple array declaration.. Is there a easier way ?

2019-05-03 Thread Louki Sumirniy
It looks to me like it could simply be a single two-dimensional array T, 
then:

var T [5][256]uint32
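
For completeness, a short sketch comparing the two spellings (the names are 
only illustrative):

package main

import "fmt"

// Grouped declaration of separate tables, as in the quoted suggestion.
var T0, T1, T2, T3, T5 [256]uint32

// A single two-dimensional array, if the tables really form one collection.
var T [5][256]uint32

func main() {
	T0[0] = 1
	T[4][255] = 2
	fmt.Println(T0[0], T[4][255]) // 1 2
}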

On Thursday, 2 May 2019 03:45:07 UTC+2, kortschak wrote:
>
> var T0, T1, T2, T3, T5 [256]uint32 
>
> https://play.golang.org/p/6Cm4p_NyD8m 
>
> On Wed, 2019-05-01 at 18:40 -0700, lgo...@gmail.com  wrote: 
> > The following statement seems very awkward, is there a cleaner way to 
> > write  
> > it ? 
> > 
> > var T0= [256]uint32;  var T1= [256]uint32; var T2= [256]uint32; var 
> > T3=  
> > [256]uint32;  var T5= [256]uint32  
> > 
> > 
>



Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-03 Thread Louki Sumirniy
Well, talking about faults in the stdlib opens a whole can of worms. There 
are more than a few packages in there with code that is quite 
non-idiomatic, and some of it needs serious optimisation. And while there 
is a clear aim not to unnecessarily complicate the compiler, I think a lot 
can be said for bringing more of these APIs up to date with the best 
practices found in current use of the language.

I have been watching the discussion about bounds check elimination. It's a 
nice trick, but there needs to be a better way to remove bounds checks 
automatically - perhaps an extension to gofmt? A lot of Go code has to be 
implemented with slice iteration, so bounds checks are an unnecessary 
overhead in hot paths all over the place.
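
One pattern that already helps today, as a sketch - whether the checks are 
actually dropped depends on the compiler version; recent compilers will 
report the remaining checks with something like 
go build -gcflags='-d=ssa/check_bce/debug=1':

package bce

// sum64 reads eight bytes. The single early bounds check on b[7] lets
// the compiler prove that the constant indices below are in range, so
// it can drop their individual checks.
func sum64(b []byte) uint64 {
	_ = b[7] // bounds check hint
	return uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 |
		uint64(b[3])<<24 | uint64(b[4])<<32 | uint64(b[5])<<40 |
		uint64(b[6])<<48 | uint64(b[7])<<56
}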

I'm writing a bitcoin-based server at the moment, and its replay 
performance, script engine and so on all need to be dramatically optimised 
(the code itself is quite stable), as it is more than 10x slower than 
bitcoin core. I'd say bounds checks are part of that.

Anyway, back on topic: it would be good to see a project start up with the 
intention of tightening up the Go standard library. It's a phenomenal piece 
of work, I'm not saying it's bad, but Go's main downsides stem from several 
of its safety features, which need to be relaxable sometimes and are 
probably harbouring wasteful overheads here and there.

On Thursday, 2 May 2019 23:44:25 UTC+2, John Dreystadt wrote:
>
>
>
>
>
> On Thursday, 2 May 2019 14:09:09 UTC+2, Louki Sumirniy wrote:
>>
>> The function has a very specific purpose that I have encountered in 
>> several applications, that being to automatically set the netmask based on 
>> the IP being one of the several defined ones, 192, 10, and i forget which 
>> others. 
>>
>> Incorrect netmask can result in not recognising a LAN address that is 
>> incorrect. A 192.168 network has 255 available addresses. You can't just 
>> presume to make a new 192.168.X... address with a /16, as no other 
>> correctly configured node in the LAN will be able to route to it due to it 
>> being a /16. 
>>
>> If you consider the example of an elastic cloud type network environment, 
>> it is important that all nodes agree on netmask or they will become 
>> (partially) disconnected from each other. An app can be spun up for a few 
>> seconds and grab a new address from the range, this could be done with a 
>> broker (eg dhcp), but especially with cloud, one could use a /8 address 
>> range and randomly select out of the 16 million possible, a big enough 
>> space that random generally won't cause a collision - which is a cheaper 
>> allocation procedure than a list managing broker, and would be more suited 
>> to the dynamic cloud environment.
>>
>> This function allows this type of client-side decisionmaking that a 
>> broker bottlenecks into a service, creating an extra startup latency cost. 
>> A randomly generated IP address takes far less time than sending a request 
>> to a centralised broker and receiving it.
>>
>> That's just one example I can think of where a pre-made list of netmasks 
>> is useful, I'm sure more experienced network programmers can rattle off a 
>> laundry list.
>>
>>
> While I kind of see your point, it still seems odd that you want a 
> function for this that is in the main net package. I would expect most if 
> not all applications doing dynamic assignment to pick one address range and 
> then use a fixed netmask. I just think that very few programmers will need 
> such a function so I don’t think Go, with its emphasis on simplicity, so 
> have it. 
>



Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
Ah - yes and no. The code in the play link below has three channels: ops, 
done and ready. I just realised I made ready redundant by putting the close 
in the clause that processes incoming ops, so it is now unused. I have 
since trimmed it down to just the one ops channel; the done signal is now a 
comma-ok check on the receive, which breaks the loop when it returns false, 
and otherwise pushes the op back onto the channel in case it was meant for 
the main loop: https://play.golang.org/p/zuNAJvwRlf-
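
Not the playground code itself, just a runnable sketch of the comma-ok 
shape it relies on, where closing the single ops channel doubles as the 
done signal:

package main

import "fmt"

func main() {
	ops := make(chan func())
	counter := 0

	go func() {
		for i := 0; i < 3; i++ {
			ops <- func() { counter++ }
		}
		close(ops)
	}()

	// comma-ok receive: ok becomes false once ops is closed and drained,
	// so no second "done" channel is needed.
	for {
		op, ok := <-ops
		if !ok {
			break
		}
		op()
	}
	fmt.Println("counter:", counter) // 3
}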

On Friday, 3 May 2019 02:50:42 UTC+2, Louki Sumirniy wrote:
>
> oh, I did forget one thing. The race detector does not flag a race in this 
> code: https://play.golang.org/p/M1uGq1g4vjo (play refuses to run it 
> though)
>
> As I understand it, that's because the add/subtract operations are 
> happening serially within the main handler goroutine. I suppose if I were 
> to change my 'worker count' query to just print the value right there in 
> the select statement, the race would disappear, but that's not *that* 
> convenient. Though it covers the case of debugging, really. But how is it 
> not still a race since the data is being copied to send to a TTY?
>
> It would be quite handy, though, if one could constrain the race detector 
> by telling the compiler somehow that 'this goroutine owns that variable' 
> and any reads are ignored. It isn't really exactly a race condition to 
> sample the state at any given moment to give potentially useful information 
> to the caller. 
>
> On Friday, 3 May 2019 02:39:09 UTC+2, Louki Sumirniy wrote:
>>
>> I more or less eventually figured that out since it is impossible to 
>> query the number of workers without a race anyway, and then I started 
>> toying with atomic.Value and made that one race as well (obviously the 
>> value was copied by fmt.Println). I guess keeping track of the number of 
>> workers is on the caller side not on the waitgroup side, the whole thing is 
>> a black box because of the ease with which race conditions can arise when 
>> you let things inside the box. 
>>
>> The thing that I find odd though, is it is impossible to not trip the 
>> race detector, period, when copying that value out, it sees where it goes. 
>> The thing is that in the rest of the library, no operation on the worker 
>> counter triggers the race, I figure that's because it's one goroutine and 
>> the other functions are separate. As soon as the internal value crosses 
>> outwards as caller adds and subtracts workers concurrently, it is racy, but 
>> I don't see how reading a maybe racy value itself is racy if I am not going 
>> to do anything other than tell the user how many workers are running at a 
>> given moment. It wouldn't be to make any concrete, immediate decision to 
>> act based on this. Debugging is a prime example of when you want to read 
>> racy data but have no need to write back where it is being rendered to the 
>> user.
>>
>> Ah well, newbie questions. I think that part of the reason why for many 
>> people goroutines and channels are so fascinating is about concurrency, but 
>> not just concurrency, distributed processing more so. Distributed systems 
>> need concurrent and asynchronous responses to network activity, and 
>> channels are a perfect fit for eliminating context switch overhead from 
>> operations that span many machines.
>>
>> On Friday, 3 May 2019 01:33:53 UTC+2, Robert Engels wrote:
>>>
>>> Channels use sync primitives under the hood so you are not saving 
>>> anything by using multiple channels instead of a single wait group. 
>>>
>>> On May 2, 2019, at 5:57 PM, Louki Sumirniy  
>>> wrote:
>>>
>>> As I mentioned earlier, I wanted to see if I could implement a waitgroup 
>>> with channels instead of the stdlib's sync.Atomic counters, and using a 
>>> special type of concurrent datatype called a PN Converged Replicated 
>>> Datatype. Well, I'm not sure if this implementation precisely implements 
>>> this type of CRDT, but it does work, and I wanted to share it. Note that 
>>> play doesn't like these long running (?) examples, so here it is verbatim 
>>> as I just finished writing it:
>>>
>>> package chanwg
>>>
>>> import "fmt"
>>>
>>> type WaitGroup struct {
>>> workers uint
>>> ops chan func()
>>> ready chan struct{}
>>> done chan struct{}
>>> }
>>>
>>> func New() *WaitGroup {
>>> wg := {
>>> ops: make(chan func()),
>>> done: make(chan struct{}),
>>> ready: make(chan 

Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
oh, I did forget one thing. The race detector does not flag a race in this 
code: https://play.golang.org/p/M1uGq1g4vjo (play refuses to run it though)

As I understand it, that's because the add/subtract operations are 
happening serially within the main handler goroutine. I suppose if I were 
to change my 'worker count' query to just print the value right there in 
the select statement, the race would disappear, but that's not *that* 
convenient. Though it covers the case of debugging, really. But how is it 
not still a race since the data is being copied to send to a TTY?

It would be quite handy, though, if one could constrain the race detector 
by telling the compiler somehow that 'this goroutine owns that variable' 
and any reads are ignored. It isn't really exactly a race condition to 
sample the state at any given moment to give potentially useful information 
to the caller. 

On Friday, 3 May 2019 02:39:09 UTC+2, Louki Sumirniy wrote:
>
> I more or less eventually figured that out since it is impossible to query 
> the number of workers without a race anyway, and then I started toying with 
> atomic.Value and made that one race as well (obviously the value was copied 
> by fmt.Println). I guess keeping track of the number of workers is on the 
> caller side not on the waitgroup side, the whole thing is a black box 
> because of the ease with which race conditions can arise when you let 
> things inside the box. 
>
> The thing that I find odd though, is it is impossible to not trip the race 
> detector, period, when copying that value out, it sees where it goes. The 
> thing is that in the rest of the library, no operation on the worker 
> counter triggers the race, I figure that's because it's one goroutine and 
> the other functions are separate. As soon as the internal value crosses 
> outwards as caller adds and subtracts workers concurrently, it is racy, but 
> I don't see how reading a maybe racy value itself is racy if I am not going 
> to do anything other than tell the user how many workers are running at a 
> given moment. It wouldn't be to make any concrete, immediate decision to 
> act based on this. Debugging is a prime example of when you want to read 
> racy data but have no need to write back where it is being rendered to the 
> user.
>
> Ah well, newbie questions. I think that part of the reason why for many 
> people goroutines and channels are so fascinating is about concurrency, but 
> not just concurrency, distributed processing more so. Distributed systems 
> need concurrent and asynchronous responses to network activity, and 
> channels are a perfect fit for eliminating context switch overhead from 
> operations that span many machines.
>
> On Friday, 3 May 2019 01:33:53 UTC+2, Robert Engels wrote:
>>
>> Channels use sync primitives under the hood so you are not saving 
>> anything by using multiple channels instead of a single wait group. 
>>
>> On May 2, 2019, at 5:57 PM, Louki Sumirniy  
>> wrote:
>>
>> As I mentioned earlier, I wanted to see if I could implement a waitgroup 
>> with channels instead of the stdlib's sync.Atomic counters, and using a 
>> special type of concurrent datatype called a PN Converged Replicated 
>> Datatype. Well, I'm not sure if this implementation precisely implements 
>> this type of CRDT, but it does work, and I wanted to share it. Note that 
>> play doesn't like these long running (?) examples, so here it is verbatim 
>> as I just finished writing it:
>>
>> package chanwg
>>
>> import "fmt"
>>
>> type WaitGroup struct {
>> workers uint
>> ops chan func()
>> ready chan struct{}
>> done chan struct{}
>> }
>>
>> func New() *WaitGroup {
>> wg := {
>> ops: make(chan func()),
>> done: make(chan struct{}),
>> ready: make(chan struct{}),
>> }
>> go func() {
>> // wait loop doesn't start until something is put into thte
>> done := false
>> for !done {
>> select {
>> case fn := <-wg.ops:
>> println("received op")
>> fn()
>> fmt.Println("num workers:", wg.WorkerCount())
>> // if !(wg.workers < 1) {
>> //  println("wait counter at zero")
>> //  done = true
>> //  close(wg.done)
>> // }
>> default:
>> }
>> }
>>
>> }()
>> return wg
>> }
>>
>> // Add adds a non-negative number
>> func (wg *WaitGroup) Add(de

Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
I more or less eventually figured that out since it is impossible to query 
the number of workers without a race anyway, and then I started toying with 
atomic.Value and made that one race as well (obviously the value was copied 
by fmt.Println). I guess keeping track of the number of workers is on the 
caller side not on the waitgroup side, the whole thing is a black box 
because of the ease with which race conditions can arise when you let 
things inside the box. 

The thing that I find odd though, is it is impossible to not trip the race 
detector, period, when copying that value out, it sees where it goes. The 
thing is that in the rest of the library, no operation on the worker 
counter triggers the race, I figure that's because it's one goroutine and 
the other functions are separate. As soon as the internal value crosses 
outwards as caller adds and subtracts workers concurrently, it is racy, but 
I don't see how reading a maybe racy value itself is racy if I am not going 
to do anything other than tell the user how many workers are running at a 
given moment. It wouldn't be to make any concrete, immediate decision to 
act based on this. Debugging is a prime example of when you want to read 
racy data but have no need to write back where it is being rendered to the 
user.

Ah well, newbie questions. I think that part of the reason why for many 
people goroutines and channels are so fascinating is about concurrency, but 
not just concurrency, distributed processing more so. Distributed systems 
need concurrent and asynchronous responses to network activity, and 
channels are a perfect fit for eliminating context switch overhead from 
operations that span many machines.
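
For what it's worth, the usual way to sample a shared counter for debugging 
without tripping the detector is to make the debug read atomic as well; a 
small sketch using sync/atomic directly (not the channel-based version):

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var workers int64
	var wg sync.WaitGroup

	for i := 0; i < 10; i++ {
		wg.Add(1)
		atomic.AddInt64(&workers, 1)
		go func() {
			defer wg.Done()
			defer atomic.AddInt64(&workers, -1)
			// ... do some work ...
		}()
	}

	// A sampled read for debugging: atomic.LoadInt64 pairs with the
	// atomic adds, so the race detector stays quiet even though the
	// value may already be stale by the time it is printed.
	fmt.Println("workers right now:", atomic.LoadInt64(&workers))
	wg.Wait()
}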

On Friday, 3 May 2019 01:33:53 UTC+2, Robert Engels wrote:
>
> Channels use sync primitives under the hood so you are not saving anything 
> by using multiple channels instead of a single wait group. 
>
> On May 2, 2019, at 5:57 PM, Louki Sumirniy  > wrote:
>
> As I mentioned earlier, I wanted to see if I could implement a waitgroup 
> with channels instead of the stdlib's sync.Atomic counters, and using a 
> special type of concurrent datatype called a PN Converged Replicated 
> Datatype. Well, I'm not sure if this implementation precisely implements 
> this type of CRDT, but it does work, and I wanted to share it. Note that 
> play doesn't like these long running (?) examples, so here it is verbatim 
> as I just finished writing it:
>
> package chanwg
>
> import "fmt"
>
> type WaitGroup struct {
> workers uint
> ops chan func()
> ready chan struct{}
> done chan struct{}
> }
>
> func New() *WaitGroup {
> wg := {
> ops: make(chan func()),
> done: make(chan struct{}),
> ready: make(chan struct{}),
> }
> go func() {
> // wait loop doesn't start until something is put into thte
> done := false
> for !done {
> select {
> case fn := <-wg.ops:
> println("received op")
> fn()
> fmt.Println("num workers:", wg.WorkerCount())
> // if !(wg.workers < 1) {
> //  println("wait counter at zero")
> //  done = true
> //  close(wg.done)
> // }
> default:
> }
> }
>
> }()
> return wg
> }
>
> // Add adds a non-negative number
> func (wg *WaitGroup) Add(delta int) {
> if delta < 0 {
> return
> }
> fmt.Println("adding", delta, "workers")
> wg.ops <- func() {
> wg.workers += uint(delta)
> }
> }
>
> // Done subtracts a non-negative value from the workers count
> func (wg *WaitGroup) Done(delta int) {
> println("worker finished")
> if delta < 0 {
> return
> }
> println("pushing op to channel")
> wg.ops <- func() {
> println("finishing")
> wg.workers -= uint(delta)
> }
> // println("op should have cleared by now")
> }
>
> // Wait blocks until the waitgroup decrements to zero
> func (wg *WaitGroup) Wait() {
> println("a worker is waiting")
> <-wg.done
> println("job done")
> }
>
> func (wg *WaitGroup) WorkerCount() int {
> return int(wg.workers)
> }
>
>
> There could be some bug lurking in there, I'm not sure, but it runs 
> exactly as I want it to, and all the debug prints show you how it works.
>
> Possibly one does not need to use channels containing functions that 
> mutate the counter, but rather maybe they can be just directly 
> increment/decremented within a select

Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-02 Thread Louki Sumirniy
I'm quite aware of that; it's part of address resolution and lets the 
router quickly determine which port to send to. If you give, say, 
192.168.1.1 to a router configured with DHCP for 192.168.0.x/24, it checks 
the mask by ANDing the address against each interface's network to find the 
port that matches (the mask strips off the arbitrary host part); if nothing 
matches, it returns a no-route error packet to the machine that sent it.

Responding to John Dreystadt: since netmasks are about address resolution 
and routing rather than DHCP or BIND, they most definitely do belong in the 
"net" library. Firstly, yes, you could replace the function with a set of 
constants, but why? Secondly, if you aren't going to bake the default 
netmasks of the non-routable address ranges into the network library, then 
where?

The use case is easy for me to see: dynamic cloud service providers need to 
generate virtual IP addresses for VLANs in a cluster. Sure, you could force 
the dev/admin to supply the netmask with every request when generating new 
addresses for virtual interfaces, but why? I might even come at it from the 
opposite direction: what about a function that gives you an address 
range/netmask based on the number of addresses you want? Isn't that 
essentially the same thing, and just as necessary for dynamic VLAN 
management?

On Thursday, 2 May 2019 14:38:04 UTC+2, Robert Engels wrote:
>
> The net mask is not part of the ip packet. It is a local config in the 
> router.
>
> On May 2, 2019, at 7:20 AM, Louki Sumirniy  > wrote:
>
> Upon review one thing occurred to me also - Netmasks are specifically a 
> fast way to decide at the router which direction a packet should go. The 
> interface netmask is part of the IP part of the header and allows the 
> router to quickly determine whether a packet should go to the external 
> rather than internal interface.
>
> When you use the expression 'should x exist in todays internet', an 
> unspoken aspect of this has to do with IPv6, which does not have a formal 
> NAT specification, and 'local address' range that is as big as the whole 
> IPv4 is now. This serves a similar purpose for routing as a netmask in 
> IPv4, but IPv6 specifically aims to solve the problem of allowing inbound 
> routing to any node. The address shortage that was resolved by CIDR and NAT 
> is not relevant to IPv6, and I believe, in general, applications are 
> written to generate valid addresses proactively and only change it in the 
> rare case it randomly selects an address already in use. This is an 
> optimistic algorithm that can save a lot of latency for a highly dynamic 
> server application running on many worker node machines.
>
> Yes, it's long past due that we abandon IPv4 and NAT, peer to peer 
> applications and dynamic cloud applications are becoming the dominant form 
> for applications and the complexity of arranging peer to peer connections 
> in this environment is quite high compared to IPv6. IPv6 does not need 
> masks as they are built into the 128 bit address coding system.
>
> On Thursday, 2 May 2019 14:09:09 UTC+2, Louki Sumirniy wrote:
>>
>> The function has a very specific purpose that I have encountered in 
>> several applications, that being to automatically set the netmask based on 
>> the IP being one of the several defined ones, 192, 10, and i forget which 
>> others. 
>>
>> Incorrect netmask can result in not recognising a LAN address that is 
>> incorrect. A 192.168 network has 255 available addresses. You can't just 
>> presume to make a new 192.168.X... address with a /16, as no other 
>> correctly configured node in the LAN will be able to route to it due to it 
>> being a /16. 
>>
>> If you consider the example of an elastic cloud type network environment, 
>> it is important that all nodes agree on netmask or they will become 
>> (partially) disconnected from each other. An app can be spun up for a few 
>> seconds and grab a new address from the range, this could be done with a 
>> broker (eg dhcp), but especially with cloud, one could use a /8 address 
>> range and randomly select out of the 16 million possible, a big enough 
>> space that random generally won't cause a collision - which is a cheaper 
>> allocation procedure than a list managing broker, and would be more suited 
>> to the dynamic cloud environment.
>>
>> This function allows this type of client-side decisionmaking that a 
>> broker bottlenecks into a service, creating an extra startup latency cost. 
>> A randomly generated IP address takes far less time than sending a request 
>> to a centralised broker and receiving it.
>>
>> That's just one example I can think of where a pre-made list of netmasks 
>> is useful, I'm sur

[go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
As I mentioned earlier, I wanted to see if I could implement a waitgroup 
with channels instead of the stdlib's sync/atomic counters, using a special 
kind of concurrent data type, a PN counter, which is a convergent 
replicated data type (CRDT). Well, I'm not sure whether this implementation 
precisely qualifies as that kind of CRDT, but it does work, and I wanted to 
share it. Note that play doesn't like these long-running examples, so here 
it is verbatim, as I just finished writing it:

package chanwg

import "fmt"

type WaitGroup struct {
	workers uint
	ops     chan func()
	ready   chan struct{}
	done    chan struct{}
}

func New() *WaitGroup {
	wg := &WaitGroup{
		ops:   make(chan func()),
		done:  make(chan struct{}),
		ready: make(chan struct{}),
	}
	go func() {
		// wait loop doesn't start until something is put into the ops channel
		done := false
		for !done {
			select {
			case fn := <-wg.ops:
				println("received op")
				fn()
				fmt.Println("num workers:", wg.WorkerCount())
				// if !(wg.workers < 1) {
				//	println("wait counter at zero")
				//	done = true
				//	close(wg.done)
				// }
			default:
			}
		}
	}()
	return wg
}

// Add adds a non-negative number of workers to the count
func (wg *WaitGroup) Add(delta int) {
	if delta < 0 {
		return
	}
	fmt.Println("adding", delta, "workers")
	wg.ops <- func() {
		wg.workers += uint(delta)
	}
}

// Done subtracts a non-negative value from the workers count
func (wg *WaitGroup) Done(delta int) {
	println("worker finished")
	if delta < 0 {
		return
	}
	println("pushing op to channel")
	wg.ops <- func() {
		println("finishing")
		wg.workers -= uint(delta)
	}
	// println("op should have cleared by now")
}

// Wait blocks until the waitgroup decrements to zero
func (wg *WaitGroup) Wait() {
	println("a worker is waiting")
	<-wg.done
	println("job done")
}

func (wg *WaitGroup) WorkerCount() int {
	return int(wg.workers)
}


There could be some bug lurking in there, I'm not sure, but it runs exactly 
as I want it to, and all the debug prints show you how it works.

Possibly one does not need channels carrying functions that mutate the 
counter; the deltas could instead be sent directly and applied within the 
select statement. I've become very used to generator functions - they are 
easy to use and they simplify and modularise my code so much that I am now 
tackling far more complex designs (if cyclomatic complexity is a measure: 
over 130 paths in a menu system I wrote that uses generators to parse a 
declaration of data types, itself built from generators).

I suppose it wouldn't be hard to extend the types of operations that you 
push onto the ops channel, though off the top of my head I can't think of a 
reasonable use case for any other operation. One thing that does come to 
mind is that a more complex, conditional increment operation could be 
written that executes based on other channel signals or the state of some 
other data, but I can't see any real use for that either.

I should create a benchmark that compares the performance of this against 
sync/atomic add/subtract operations. I also think that, as I mentioned, 
changing the ops channel to just carry deltas on the group size might be a 
little faster than the conditional jumps a closure requires to enter and 
exit.

So the jury is still out on whether this is in any way superior to 
sync.WaitGroup, but because that library does not use channels, it almost 
certainly has a little higher overhead due to the function call context 
switches hidden inside the atomic increment/decrement operations.

Because all of those ops occur within the one supervisor waitgroup 
goroutine only, they are serialised automatically by the channel buffer (or 
the wait sync as sender and receiver both become ready), and no 
atomic/locking operations are required to prevent a race.

I enabled the race detector on a test of this code just now; the 
WorkerCount() function is racy. I think I need to change it so the 
retrieval goes through a channel: the caller sends an (empty) query on a 
query channel and listens on an answer channel (perhaps both 
one-directional) to get the value without an explicit race.

Yes, and this is probably also why sync.WaitGroup has no way to inspect the 
current wait count. I will see if I can make that function not racy.
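
One way that query/answer idea could look, as a rough sketch - the field 
names are my guesses and this is a standalone package, not a patch to the 
code above:

package sketch

// WaitGroup keeps all state inside one goroutine; a query carries its
// own reply channel so the read also happens in that goroutine.
type WaitGroup struct {
	ops     chan func()
	queries chan chan int
	workers int
}

func (wg *WaitGroup) run() {
	for {
		select {
		case fn := <-wg.ops:
			fn() // apply an increment/decrement closure
		case reply := <-wg.queries:
			reply <- wg.workers // no race: read in the owning goroutine
		}
	}
}

// WorkerCount asks the supervisor goroutine for the current count.
func (wg *WaitGroup) WorkerCount() int {
	reply := make(chan int)
	wg.queries <- reply
	return <-reply
}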

On Thursday, 2 May 2019 23:29:35 UTC+2, Øyvind Teig wrote:
>
> Thanks for the reference to Dave Cheney's blog note! And for this thread, 
> quite interesting to read. I am not used to explicitly closing channels at 
> all (occam (in the ninetees) and XC (now)), but I have sat through several 
> presentations on conferences seen the theme being discussed, like with 

Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
Ah, so this is what they are for - the same thing implemented with channels 
would be a nasty big slice of empty-struct quit channels just to tell main 
that the workers are done. wg.Done() and wg.Wait() eliminate the complexity 
that a pure channel implementation would require.

With that code I also toyed with changing the size of the 'queue' buffer. 
At 1:1 per data item it terminates in about 40ns, but obviously it just 
abandons all of the work, since with one channel the goroutines are taking 
over 32ms apiece, and this produces the correct output except maybe the 
last one.

So I do need to be using waitgroups if I want to orchestrate 
parallelisation (where available) with goroutines, as otherwise the work is 
abandoned immediately when the signal is sent.

For some workloads, dropping everything is correct - IO-bound work with a 
short time to live, for example. If the load (like my code) is 
compute-intensive, one needs the final result, so the goroutines must not 
stop until each one finishes its job.

IO-bound jobs like the transport I am writing run continuously for the 
whole lifetime of the application. They only need this orchestration to 
shut down cleanly, but since it adds no overhead during the processing loop 
there is no sane reason not to put the wait in there if finishing jobs is 
mandatory.
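
A sketch of that combination, pairing the closed-channel broadcast with a 
sync.WaitGroup so the cleanup actually finishes before main returns 
(standard library only):

package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 4
	quit := make(chan struct{})
	var wg sync.WaitGroup

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			<-quit // broadcast: closing quit unblocks every worker at once
			// the cleanup below is the part that close() alone would abandon
			fmt.Println("worker", id, "cleaning up")
		}(i)
	}

	close(quit) // signal shutdown to all workers
	wg.Wait()   // block until every worker has finished its cleanup
	fmt.Println("all workers done")
}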

On Thursday, 2 May 2019 22:22:33 UTC+2, Steven Hartland wrote:
>
> You can see it doesn't wait by adding a counter as seen here:
> https://play.golang.org/p/-eqKggUEjhQ
>
> On 02/05/2019 21:09, Louki Sumirniy wrote:
>
> I have been spending my time today getting my knowledge of this subject 
> adequate enough to use channels for a UDP transport with FEC creating 
> sharded pieces of the packets, and I just found this and played with some 
> of the code on it and I just wanted to mention these things: 
>
> https://dave.cheney.net/2013/04/30/curious-channels
>
> In the code, specifically first section of this article, I found that the 
> sync.WaitGroup stuff can be completely left out. The quit channel 
> immediately unblocks the select when it is closed and 100 of the goroutines 
> immediately stop. Obviously in a real situation you would put cleanup code 
> in the finish clauses of the goroutines, but yeah, point is the waitgroup 
> is literally redundant in this code:
>
> package main
>
> import (
> "fmt"
> "time"
> )
>
> func main() {
> const n = 100
> finish := make(chan bool)
> for i := 0; i < n; i++ {
> go func() {
> select {
> case <-time.After(1 * time.Hour):
> case <-finish:
> }
> }()
> }
> t0 := time.Now()
> close(finish) // closing finish makes it ready to receive
> fmt.Printf("Waited %v for %d goroutines to stop\n", time.Since(t0), n)
> }
>
> The original version uses waitgroups but you can remove them as above and 
> it functions exactly the same. Presumably it has lower overhead from the 
> mutex not being made and propagating to each thread when it finishes a 
> cycle. 
>
> It really seems to me like for this specific case, the use of the property 
> of a closed channel to yield zero completely renders a waitgroup irrelevant.
>
> What I'm curious about is, what reasons would I have for not wanting to 
> use this feature of closed channels as a stop signal versus using a 
> waitgroup?
>
> On Thursday, 2 May 2019 16:20:26 UTC+2, Louki Sumirniy wrote: 
>>
>> It's not precisely the general functionality that I will implement for my 
>> transport, but here is a simple example of a classifier type processing 
>> queue:
>>
>> https://play.golang.org/p/ytdrXgCdbQH
>>
>> This processes a series of sequential integers and pops them into an 
>> array to find the highest factor of a given range of numbers. The code I 
>> will write soon is slightly different, as, obviously, that above there is 
>> not technically a queue. This code shows how to make a non-deadlocking 
>> processing queue, however. 
>>
>> Adding an actual queue like for my intended purpose of bundling packets 
>> with a common uuid is not much further, instead of just dropping the 
>> integers into their position in the slice, it iterates them as each item is 
>> received to find a match, if it doesn't find enough, then it puts the item 
>> back at the end of the search on the queue and waits for the next new item 
>> to arrive. I'll be writing that shortly.
>>
>> For that, I think the simple example would use an RNG to generate numbers 
>> within the specified range, and then for the example, it will continue to 
>> accumulate numbers in the buffer until a recurrance occurs, then the 
>> numbers are appended to  the array and this index is ignored when an

[go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
I have been spending my time today getting my knowledge of this subject 
adequate enough to use channels for a UDP transport with FEC creating 
sharded pieces of the packets, and I just found this and played with some 
of the code on it and I just wanted to mention these things:

https://dave.cheney.net/2013/04/30/curious-channels

In the code, specifically the first section of that article, I found that 
the sync.WaitGroup stuff can be left out completely. The quit channel 
immediately unblocks the select when it is closed and all 100 goroutines 
stop straight away. Obviously in a real situation you would put cleanup 
code in the finish clauses of the goroutines, but the point is that the 
waitgroup is literally redundant in this code:

package main

import (
	"fmt"
	"time"
)

func main() {
	const n = 100
	finish := make(chan bool)
	for i := 0; i < n; i++ {
		go func() {
			select {
			case <-time.After(1 * time.Hour):
			case <-finish:
			}
		}()
	}
	t0 := time.Now()
	close(finish) // closing finish makes it ready to receive
	fmt.Printf("Waited %v for %d goroutines to stop\n", time.Since(t0), n)
}

The original version uses waitgroups but you can remove them as above and 
it functions exactly the same. Presumably it has slightly lower overhead, 
since no waitgroup counter has to be created and updated by each goroutine 
as it finishes a cycle.

It really seems to me that, for this specific case, the property of a 
closed channel yielding the zero value immediately renders a waitgroup 
irrelevant.

What I'm curious about is, what reasons would I have for not wanting to use 
this feature of closed channels as a stop signal versus using a waitgroup?

On Thursday, 2 May 2019 16:20:26 UTC+2, Louki Sumirniy wrote:
>
> It's not precisely the general functionality that I will implement for my 
> transport, but here is a simple example of a classifier type processing 
> queue:
>
> https://play.golang.org/p/ytdrXgCdbQH
>
> This processes a series of sequential integers and pops them into an array 
> to find the highest factor of a given range of numbers. The code I will 
> write soon is slightly different, as, obviously, that above there is not 
> technically a queue. This code shows how to make a non-deadlocking 
> processing queue, however.
>
> Adding an actual queue like for my intended purpose of bundling packets 
> with a common uuid is not much further, instead of just dropping the 
> integers into their position in the slice, it iterates them as each item is 
> received to find a match, if it doesn't find enough, then it puts the item 
> back at the end of the search on the queue and waits for the next new item 
> to arrive. I'll be writing that shortly.
>
> For that, I think the simple example would use an RNG to generate numbers 
> within the specified range, and then for the example, it will continue to 
> accumulate numbers in the buffer until a recurrance occurs, then the 
> numbers are appended to  the array and this index is ignored when another 
> one comes in later. That most closely models what I am building.
>
> On Thursday, 2 May 2019 13:26:47 UTC+2, Louki Sumirniy wrote:
>>
>> Yeah, I was able to think a bit more about it as I was falling asleep 
>> later and I realised how I meant it to run. I had to verify that indeed 
>> channels are FIFO queues, as that was the basis of this way of using them.
>>
>> The receiver channel is unbuffered, and lives in one goroutine. When it 
>> receives something it bounces it into the queue and for/range loops through 
>> the content of a fairly big-buffered working channel where items can 
>> collect while they are fresh, and upon arrival of a new item the new item 
>> is checked for a match against the contents of the queue, as well as 
>> kicking out stale data (and recording the uuid of the stale set so it can 
>> be immediately dumped if any further packets got hung up and come after way 
>> too long.
>>
>> This differs a lot from the loopy design I made in the OP. In this design 
>> there are only two threads instead of three. I think the geometry of a 
>> channel pattern is important - specifically, everything needs to be done in 
>> pairs with channels, although sometimes you may want it to receive but 
>> not need it to send anywhere, just store/drop, as the algorithm requires.
>>
>> I still need to think through the design a bit more. Like, perhaps the 
>> queue channel *should* be a pair of one-direction channels so one is the 
>> main fifo and the other side each item is taken off the queue, processed, 
>> and then put back into the stream. Ordering is not important, except that 
>> it is very handy that it is a FIFO because this means if I have a buffer 
>> with some number, and get a new item, put it into the buffer queue, 

[go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
It's not precisely the general functionality that I will implement for my 
transport, but here is a simple example of a classifier type processing 
queue:

https://play.golang.org/p/ytdrXgCdbQH

This processes a series of sequential integers and pops them into an array 
to find the highest factor of a given range of numbers. The code I will 
write soon is slightly different, as, obviously, that above there is not 
technically a queue. This code shows how to make a non-deadlocking 
processing queue, however.

Adding an actual queue, for my intended purpose of bundling packets with a 
common UUID, is not much further: instead of just dropping the integers 
into their position in the slice, it iterates over the queue as each item 
is received to look for a match, and if it doesn't yet find enough, it puts 
the item back at the end of the queue and waits for the next new item to 
arrive. I'll be writing that shortly.

For that, I think the simple example would use an RNG to generate numbers 
within a specified range and keep accumulating them in the buffer until a 
recurrence occurs; the matching numbers are then appended to the array and 
that index is ignored when another one comes in later. That most closely 
models what I am building.
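
A toy sketch of that bundling idea, using a buffered channel as the FIFO 
and re-queueing shards that do not yet complete a bundle; the shard type, 
the counts map and the bundle size of 3 are all assumptions for 
illustration, not the transport code:

package main

import "fmt"

// shard stands in for one received packet; real code would carry UDP
// payload and FEC data instead of these toy fields.
type shard struct {
	uuid int
	data string
}

func main() {
	in := make(chan shard)
	queue := make(chan shard, 64) // buffered channel used as the FIFO working set
	const need = 3                // shards required to complete a bundle

	go func() {
		for _, s := range []shard{
			{1, "a"}, {2, "x"}, {1, "b"}, {2, "y"}, {1, "c"}, {2, "z"},
		} {
			in <- s
		}
		close(in)
	}()

	counts := map[int]int{}
	for s := range in {
		queue <- s
		counts[s.uuid]++
		if counts[s.uuid] == need {
			// drain one pass of the queue, keeping other uuids in order
			var bundle []shard
			for i := len(queue); i > 0; i-- {
				q := <-queue
				if q.uuid == s.uuid {
					bundle = append(bundle, q)
				} else {
					queue <- q // not part of this bundle: back to the end
				}
			}
			fmt.Println("bundle complete for uuid", s.uuid, bundle)
			delete(counts, s.uuid)
		}
	}
}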

On Thursday, 2 May 2019 13:26:47 UTC+2, Louki Sumirniy wrote:
>
> Yeah, I was able to think a bit more about it as I was falling asleep 
> later and I realised how I meant it to run. I had to verify that indeed 
> channels are FIFO queues, as that was the basis of this way of using them.
>
> The receiver channel is unbuffered, and lives in one goroutine. When it 
> receives something it bounces it into the queue and for/range loops through 
> the content of a fairly big-buffered working channel where items can 
> collect while they are fresh, and upon arrival of a new item the new item 
> is checked for a match against the contents of the queue, as well as 
> kicking out stale data (and recording the uuid of the stale set so it can 
> be immediately dumped if any further packets got hung up and come after way 
> too long.
>
> This differs a lot from the loopy design I made in the OP. In this design 
> there are only two threads instead of three. I think the geometry of a 
> channel pattern is important - specifically, everything needs to be done in 
> pairs with channels, although sometimes you may want it to receive but 
> not need it to send anywhere, just store/drop, as the algorithm requires.
>
> I still need to think through the design a bit more. Like, perhaps the 
> queue channel *should* be a pair of one-direction channels so one is the 
> main fifo and the other side each item is taken off the queue, processed, 
> and then put back into the stream. Ordering is not important, except that 
> it is very handy that it is a FIFO because this means if I have a buffer 
> with some number, and get a new item, put it into the buffer queue, and 
> then the queue unpacks the newest item last. I think I could make it like 
> this, actually:
>
> one channel inbound receiver, it passes into a buffered queue channel, and 
> triggers the passing out of buffered items from the head of the queue to 
> watcher 1, 2, 3, each watcher process being a separate process that may 
> swallow or redirect the contents. For each new UUID item that comes in, a 
> single thread could be started that keeps reading, checking and (re) 
> directing the input as it passes out of the buffer and through the 
> watchers. Something like this:
>
> input -> buffer[64] -> (watcher 1) -> (watcher 2) -> buffer [64] 
>
> With this pattern I could have a new goroutine spawn for each new UUID 
> that marks out a batch, that springs a deadline tick and when the deadline 
> hits the watcher's buffer is cleared and the goroutine ends, implementing 
> expiry, and the UUID is attached to a simple buffered channel that keeps 
> the last 100 or so UUIDs and uses it to immediately identify stale junk 
> (presumably the main data type in the other channels is significantly 
> bigger data than the UUID integer - my intent is that the data type should 
> be a UDP packet so that means it is size restricted and contains something 
> arbitrary that watchers detect, decode and respond to.
>
> It's a work in progress, but I know from previous times writing code 
> dealing with simple batch/queue problems like this, that the Reader/Writer 
> pattern is most often used and requires a lot of slice fiddling implemented 
> using arrays/slices, but a buffered channel, being a FIFO, is a queue 
> buffer, so it can be used to store (relatively) ordered items that age as 
> they get to the head of the queue, and allow a check-pass on each item. 
>
> These checkers can 'return' to the next in line so the checker-queu

Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-02 Thread Louki Sumirniy
Upon review one thing occurred to me also - Netmasks are specifically a 
fast way to decide at the router which direction a packet should go. The 
interface netmask is part of the IP part of the header and allows the 
router to quickly determine whether a packet should go to the external 
rather than internal interface.

When you use the expression 'should x exist in today's internet', an 
unspoken aspect of this has to do with IPv6, which does not have a formal 
NAT specification, and whose 'local address' range is as big as the whole 
of IPv4 is now. This serves a similar purpose for routing as a netmask does 
in IPv4, but IPv6 specifically aims to solve the problem of allowing 
inbound routing to any node. The address shortage that was resolved by CIDR 
and NAT is not relevant to IPv6, and I believe applications are generally 
written to generate valid addresses proactively and only change them in the 
rare case one randomly selects an address already in use. This is an 
optimistic algorithm that can save a lot of latency for a highly dynamic 
server application running on many worker machines.

Yes, it's long past due that we abandon IPv4 and NAT, peer to peer 
applications and dynamic cloud applications are becoming the dominant form 
for applications and the complexity of arranging peer to peer connections 
in this environment is quite high compared to IPv6. IPv6 does not need 
masks as they are built into the 128 bit address coding system.

On Thursday, 2 May 2019 14:09:09 UTC+2, Louki Sumirniy wrote:
>
> The function has a very specific purpose that I have encountered in 
> several applications, that being to automatically set the netmask based on 
> the IP being one of the several defined ones, 192, 10, and i forget which 
> others. 
>
> Incorrect netmask can result in not recognising a LAN address that is 
> incorrect. A 192.168 network has 255 available addresses. You can't just 
> presume to make a new 192.168.X... address with a /16, as no other 
> correctly configured node in the LAN will be able to route to it due to it 
> being a /16. 
>
> If you consider the example of an elastic cloud type network environment, 
> it is important that all nodes agree on netmask or they will become 
> (partially) disconnected from each other. An app can be spun up for a few 
> seconds and grab a new address from the range, this could be done with a 
> broker (eg dhcp), but especially with cloud, one could use a /8 address 
> range and randomly select out of the 16 million possible, a big enough 
> space that random generally won't cause a collision - which is a cheaper 
> allocation procedure than a list managing broker, and would be more suited 
> to the dynamic cloud environment.
>
> This function allows this type of client-side decisionmaking that a broker 
> bottlenecks into a service, creating an extra startup latency cost. A 
> randomly generated IP address takes far less time than sending a request to 
> a centralised broker and receiving it.
>
> That's just one example I can think of where a pre-made list of netmasks 
> is useful, I'm sure more experienced network programmers can rattle off a 
> laundry list.
>
> On Monday, 11 March 2019 20:45:32 UTC+1, John Dreystadt wrote:
>>
>> Yes, I was mistaken on this point. I got confused over someone's 
>> discussion of RFC 1918 with what the standard actually said. I should have 
>> checked closer before I posted that point. But I still don't see the reason 
>> for this function. In today's networking, the actual value you should use 
>> for a mask on an interface on the public Internet is decided by a 
>> combination of the address range you have and how it is divided by your 
>> local networking people. On the private networks, it is entirely up to the 
>> local networking people. The value returned by this function is only a 
>> guess, and I think it is more likely to mislead than to inform.
>>
>> On Friday, March 8, 2019 at 12:51:41 PM UTC-5, Tristan Colgate wrote:
>>>
>>> Just on a point of clarity. DefaultMask is returning the mask associates 
>>> with the network class. RFC1918 specifies a bunch of class A,B and C 
>>> networks for private use. E.g. 192.168/16 is a set of 256 class C networks. 
>>> The correct netmask for one of those class Cs is 255.255.255.0 (/24). So 
>>> the function returns the correct thing by the RFC.
>>>   
>>>
>>>
>>>>
>>>>



Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-02 Thread Louki Sumirniy
The function has a very specific purpose that I have encountered in several 
applications: automatically setting the netmask based on the IP being in 
one of the several defined ranges - 192.168, 10, and I forget which others.

An incorrect netmask can result in a LAN address not being recognised. A 
192.168.x.0/24 network has 254 usable addresses. You can't just presume to 
make a new 192.168.X... address with a /16, as no other correctly 
configured node in the LAN will be able to route to it due to it being a 
/16.

If you consider the example of an elastic cloud type network environment, 
it is important that all nodes agree on netmask or they will become 
(partially) disconnected from each other. An app can be spun up for a few 
seconds and grab a new address from the range, this could be done with a 
broker (eg dhcp), but especially with cloud, one could use a /8 address 
range and randomly select out of the 16 million possible, a big enough 
space that random generally won't cause a collision - which is a cheaper 
allocation procedure than a list managing broker, and would be more suited 
to the dynamic cloud environment.

This function allows this type of client-side decisionmaking that a broker 
bottlenecks into a service, creating an extra startup latency cost. A 
randomly generated IP address takes far less time than sending a request to 
a centralised broker and receiving it.

That's just one example I can think of where a pre-made list of netmasks is 
useful, I'm sure more experienced network programmers can rattle off a 
laundry list.
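
As a concrete illustration of that client-side allocation, here is a minimal 
sketch that picks a random host inside 10.0.0.0/8 and derives its classful 
mask with net.IP.DefaultMask(); the choice of the 10/8 range and the absence 
of collision handling are assumptions for illustration only:

package main

import (
    "fmt"
    "math/rand"
    "net"
)

// randomTenAddress picks a host at random inside 10.0.0.0/8, keeping each
// octet in 1..254 so the result is never a network or broadcast address of a
// smaller subnet.
func randomTenAddress() net.IP {
    return net.IPv4(10,
        byte(1+rand.Intn(254)),
        byte(1+rand.Intn(254)),
        byte(1+rand.Intn(254)))
}

func main() {
    ip := randomTenAddress()
    mask := ip.DefaultMask() // classful default: 255.0.0.0 (/8) for a 10.x address
    fmt.Println(ip, net.IP(mask))
}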

On Monday, 11 March 2019 20:45:32 UTC+1, John Dreystadt wrote:
>
> Yes, I was mistaken on this point. I got confused over someone's 
> discussion of RFC 1918 with what the standard actually said. I should have 
> checked closer before I posted that point. But I still don't see the reason 
> for this function. In today's networking, the actual value you should use 
> for a mask on an interface on the public Internet is decided by a 
> combination of the address range you have and how it is divided by your 
> local networking people. On the private networks, it is entirely up to the 
> local networking people. The value returned by this function is only a 
> guess, and I think it is more likely to mislead than to inform.
>
> On Friday, March 8, 2019 at 12:51:41 PM UTC-5, Tristan Colgate wrote:
>>
>> Just on a point of clarity. DefaultMask is returning the mask associated 
>> with the network class. RFC1918 specifies a bunch of class A,B and C 
>> networks for private use. E.g. 192.168/16 is a set of 256 class C networks. 
>> The correct netmask for one of those class Cs is 255.255.255.0 (/24). So 
>> the function returns the correct thing by the RFC.
>>   
>>
>>
>>>
>>>



[go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
stale thread, which can then terminate 
once it is no longer in the filter queue.

On Thursday, 2 May 2019 10:30:28 UTC+2, Øyvind Teig wrote:
>
> Hi, Louki Sumirniy 
>
> This is not really a response to your problem in particular, so it may 
> totally miss your target. It's been a while since I did anything in this 
> group. However, it's a response to the use of buffered channels. It's a 
> coincidence that I react to your posting (and not the probably hundreds of 
> others over the years where this comment may have been relevant). But I 
> decided this morning to actually look into one of the group update mails, 
> and there you were! 
>
> In a transcript from [1] Rob Pike says that 
>
> “Now for those experts in the room who know about buffered channels in Go 
> – which exist – you can create a channel with a buffer. And buffered 
> channels have the property that they don’t synchronise when you send, 
> because you can just drop a value in the buffer and keep going. So they 
> have different properties. And they’re kind of subtle. They’re very useful 
> for certain problems, but you don’t need them. And we’re not going to use 
> them at all in our examples today, because I don’t want to complicate life 
> by explaining them.” 
>
> I don't know if that statement is still valid, and I would not know 
> whether your example is indeed one of the "certain problems" where you have 
> got the correct usage. In that case, my comments below would be of less 
> value in this concrete situation. Also, whether there is a more generic 
> library in Go now that may help getting rid of buffered channels. Maybe 
> even an output into a zero-buffered channel in a select with a timeout will 
> do. 
>
> If you fill up a channel with data you would often need some state to know 
> about what is in the channel. If it's a safety critical implementation you 
> may not want to just drop the data into the channel and forget. If you need 
> to restart the comms in some way you would need to flush the channel, 
> without easily knowing what you are flushing. The message "fire in 1 second 
> if not cancelled" comes through but you would not know that the "cancel!" 
> message was what you had to flush in the channel. In any case, a full 
> channel would be blocking anyhow - so you would have to take care of that. 
> Or alternatively _know_ that the consumer always stays ahead of the 
> buffered channel, which may be hard to know. 
>
> I guess there are several (more complex?) concurrency patterns available 
> that may be used instead of the (simple?) buffered channel: 
>
> All of the patterns below would use synchronised rendezvous with 
> zero-buffered channels that would let a server goroutine (task, process) 
> never have to block to get rid of its data. After all, that's why one would 
> use a buffered channel; so that one would not need to block. All of the 
> below patterns move data, but I guess there may be patterns for moving 
> access as well (like mobile channels). All would also be deadlock free. 
>
> The Overflow Buffer pattern uses a composite goroutine consisting of two 
> inner goroutines. One Input that always accepts data and one Output that 
> blocks to output, and in between there is a channel with Data one direction 
> that never blocks and a channel with Data-sent back. If the Input has not 
> got the Data-sent back then there is an overflow that may be handled by 
> user code. See [2], figure 3. 
>
> Then there are the Knock-Come pattern [3] and a pattern like the XCHAN 
> [4]. In the latter's appendix a Go solution is discussed. 
>
> - - - 
>
> [1] Rob Pike: "Go concurrency patterns": 
> https://www.youtube.com/watch?v=f6kdp27TYZs=em at Google I/O 2012. 
> Discussed in [5] 
>
> Disclaimer: there are no ads, no gifts, no incoming anything with my blog 
> notes, just fun and expenses: 
>
> [2] http://www.teigfam.net/oyvind/pub/pub_details.html#NoBlocking - See 
> Figure 3 
>
> [3] 
> https://oyvteig.blogspot.com/2009/03/009-knock-come-deadlock-free-pattern.html
>  
> Knock-come 
>
> [4] http://www.teigfam.net/oyvind/pub/pub_details.html#XCHAN - XCHANs: 
> Notes on a New Channel Type 
>
> [5] 
> http://www.teigfam.net/oyvind/home/technology/072-pike-sutter-concurrency-vs-concurrency/
>  
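
A minimal sketch of the "output into a zero-buffered channel in a select with 
a timeout" idea mentioned in the quote above; the one-second timeout and the 
drop-on-timeout policy are assumptions, not part of the original suggestion:

package main

import (
    "fmt"
    "time"
)

// send offers v on an unbuffered channel but gives up after a timeout, so the
// producer never blocks indefinitely and can decide what to do with the value.
func send(out chan<- int, v int) bool {
    select {
    case out <- v: // consumer was ready: synchronised handover
        return true
    case <-time.After(time.Second): // consumer not ready in time
        return false
    }
}

func main() {
    out := make(chan int) // zero-buffered
    go func() {
        time.Sleep(500 * time.Millisecond)
        fmt.Println("got", <-out)
    }()
    if !send(out, 42) {
        fmt.Println("dropped 42")
    }
    time.Sleep(time.Second) // let the consumer print before exiting
}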



[go-nuts] using channels to gather bundles from inputs for batch processing

2019-05-01 Thread Louki Sumirniy
I am working on a reliable UDP transport currently, and I am writing it 
using exclusively buffered channels.

The way it works is that there are three goroutines: one accepts new input 
and forwards it to an intermediate 'incoming' worker, which checks whether the 
new data can be bundled (it is for Reed-Solomon encoded shards, of which 3 
valid pieces are required), then either bundles it or pushes the new data 
back through a 'return' channel, which simply loads it back into the 
incoming channel.

I actually drafted this design some time ago but only just now tried to 
make it work; after some hours with several dead loops appearing in the 
workers, I realised I needed to first fully understand every aspect of 
how one implements this thing.

So I wrote this simple version of a batch processor which just pushes 
incrementing integers into the channels sequentially; each time it has 
collected three, it announces it has a batch, and voila, it works.

I had a few small details to properly work out and in the code below I show 
the basic algorithm, but in brief:

1. firstly, you always have to start up the goroutines that respond to 
channels being loaded, and then after all the goroutines are started, you 
run the main receiving loop that gets the batch items and puts them into 
the channel

2. In the 'pusher' goroutine, notice the 'work complete' check: I 
observed that one has to set it one past the intended bound - here 7, 
when I want to see 6 items completely batched. If batches were atomic and a 
shutdown request happens during an operation, you would need to have 
something that notifies the other end the last batch won't be processed. I 
am noting this mainly to highlight the fact that the pusher goroutine is 
first in the processing scheme, and the returner is one step further down 
the track, so if the receiving thread is at 7, the returner is pushing item 
6 back to incoming, where it will be bundled before the pusher shuts down.

3. Last general observation - when working with goroutines, you really need 
to keep a very close track on the number of selectors, pushers and pullers 
for each channel. Pushers and pullers have to be paired or one can be 
starved and deadlock, and selectors, for example for implementing a 
shutdown cleanup response for the goroutines, need a nonblocking selector 
at the start to check the quit notification and the second select has to 
also be nonbreaking (ie default clause). When there is only one channel 
being selected, as in the returner, you don't want a default to unblock, as 
this will fall through either to the enclosing for loop or terminate the 
goroutine. So the empty default clause is something you would generally 
want to omit until you are sure you need it there, otherwise you can get 
boring old infinite loop problems. The select clause will also terminate if 
the channel is nilled, which you can see in my example is done as part of 
the shutdown. This is not a clean shutdown, of course, and I might be 
throwing out more options to make sure I hit it, but I think that all that 
matters is the channel is nil, and the goroutine will then stop.
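
A minimal sketch of the three-item bundling flow described above - the 
playground code referenced in this message is not part of this excerpt, so 
the channel sizes, the quit handling and the batch size of three are 
assumptions taken from the description:

package main

import "fmt"

func main() {
    const batchSize = 3
    incoming := make(chan int, batchSize)
    batches := make(chan [batchSize]int)
    quit := make(chan struct{})

    // Bundler: collects three items at a time and announces each batch.
    go func() {
        var batch [batchSize]int
        n := 0
        for {
            select {
            case v := <-incoming:
                batch[n] = v
                n++
                if n == batchSize {
                    batches <- batch
                    n = 0
                }
            case <-quit:
                return
            }
        }
    }()

    // Pusher: sequentially loads incrementing integers into the incoming channel.
    go func() {
        for i := 1; i <= 6; i++ {
            incoming <- i
        }
    }()

    // Main receiving loop: collect the completed batches, in order.
    for i := 0; i < 6/batchSize; i++ {
        fmt.Println("batch:", <-batches)
    }
    close(quit)
}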

Just one tiny question about the close/nil point in (3) above - just getting 
it clear in my mind - sends and receives on a nil channel block forever, and a 
send on a closed channel panics (as does closing a nil channel). So this means 
one should close a channel first and then nil it. If you do it the other way 
around, you get a close-of-nil panic, or goroutines sleeping forever on the 
nil channel - one or the other.

Also, I might be confused about how this would work for a packet 
receiving/routing system. I have worked out from this that the channel size 
does not need to be greater than the batch size for the 'pusher' (in the 
transport I am writing, this is the socket listener), and the buffer sizes on 
the incoming/return channels will be subject to some tuning, to balance the 
latency caused by several of the channels filling up and blocking the sender 
that loads them. I figure that if a computer were doing nothing except 
receiving, batching and relaying packets, you would not need any more 
buffering than this (as here, bundling sequential values). But if the input is 
more unpredictable, then some extra headroom - how much will depend partly on 
the performance characteristics of the computers and networks - is needed to 
allow several of the channels to fill up without stopping the whole show.

However, keeping that in mind, one characteristic of this algorithm below 
is that it preserves ordering. You can see if you run it on play that each 
batch comes out in the order they were sent (thus, for my transport, 
received), but of course since the transport will be receiving many 
batches of up to 9 packets each from many peers, I think that for this 
case, I would need to add code that creates a set of buffers on the basis 
of the number of peers, plus, which will be learned in 

[go-nuts] Closure Scope Bugs, one small trick/tip

2019-04-18 Thread Louki Sumirniy
I just ran into an issue with a closure I originally created elsewhere, 
inside a function that used the same names as the place I transplanted the 
closure definition into, and it took me quite some time to root out all the 
references to the outer scope.

Scope bleed and shadowing with closures is something that should be 
emphasised as an issue when dealing with closures, especially when it 
involves context variables.

I found a way to exploit the scoping rules to rapidly highlight all the 
incorrectly unchanged references to the outer-scope context: declare, in the 
inner scope, a variable with the same name as the overlapping outer one but of 
a different type - in this case an int. Instantly, every reference inside the 
closure that still refers to that outer name resolves to the int and becomes a 
type/member error. 

Unfortunately I didn't figure it out until I had just finally nabbed the 
last one, but hopefully I will remember it for future and that maybe it 
helps someone else who uses closures a lot in Go.

The code is inside a configuration menu system I am writing and it uses 
closures to attach handlers to objects. 

In the process I have learned two important scoping rules that go beyond just 
closures:

1. It is possible to declare a name inside a block that is the same as one 
outside the block, with no limitations (thus using the outer name to zap 
incorrect lingering references inside a closure).
2. For-loop variables are not multiply declared - inside a closure, references 
to a variable declared with := in a for statement all point at a single 
variable, which usually means the closure only sees the last value of the 
iteration. You have to declare a new variable inside the loop body to pin the 
value of that specific iteration (see the sketch below).
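
A minimal sketch of rule 2, with hypothetical names:

package main

import "fmt"

func main() {
    fns := make([]func(), 0, 3)
    for i := 0; i < 3; i++ {
        i := i // redeclare inside the loop body to pin this iteration's value
        fns = append(fns, func() { fmt.Println(i) })
    }
    for _, f := range fns {
        f() // prints 0, 1, 2; without the inner "i := i" every closure shares one i and prints 3
    }
}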

I also observed a while ago - and it can be annoying, though sometimes there 
really is no better way to express something than an if/else if/else 
construction - that the chain behaves as a connected scope for its init 
statements: a variable declared in the optional init statement of the if (or 
of an else if) is visible in that branch and in every branch after it, but not 
in the ones before it, while variables declared inside one branch's block stay 
local to that block.



[go-nuts] What happens when you call another method within a method set at the end of a method

2019-04-06 Thread Louki Sumirniy
I have become quite interested in tail-call optimisation and more distantly 
in code that assembles trees of closures all tied to the same scope through 
parameters to reduce stack management for complex but not always 
predictable input.

As I have learned from some reading, tail call optimisation is a fairly 
frequently requested feature that is not planned to go into Go 2, so I am 
just wondering what actually happens when you end functions with a call, 
especially that shares the same receiver as the calling function.

I figure it changes the procedure a little compared to calling before, 
since the compiler knows no more variables in the caller are going to be 
changed except the return value. I also figure that it reduces the rate at 
which the stack expands since the first function does not need to save its 
state to resume.

Does or can it have a performance impact? What I'm thinking is that it could 
work like a trampoline, in that the stack doesn't grow - so even if you change 
nothing else in a recursive algorithm, does moving it to tail calls reduce the 
stack growth rate, and if so, how can this be exploited to reduce overhead?
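
Go itself does not perform this optimisation, but here is a minimal sketch of 
the trampoline idea for comparison, with hypothetical names: the function 
returns its tail call as a next step instead of making it, and a small loop 
drives the steps, so the stack stays flat.

package main

import "fmt"

type step func() step

func countdown(n int) step {
    fmt.Println(n)
    if n == 0 {
        return nil
    }
    // Defer the tail call: hand it back as the next step instead of calling it.
    return func() step { return countdown(n - 1) }
}

func main() {
    // The trampoline: keep invoking the next step until there is none.
    for s := countdown(3); s != nil; s = s() {
    }
}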



[go-nuts] Using Closures to Generate Code On the Fly

2019-04-03 Thread Louki Sumirniy
I have been doing a lot of work with essentially declarative (nominally) 
complex data types involving several layers of encapsulation.

I had seen here and there examples of this 'Fluent' method-based pipeline 
chaining, with methods passing through receivers to invoke multiple 
distinct functions in the type. Here is a gist about it with an example:

https://git.parallelcoin.io/loki/gists/src/commit/d6cabfd0933d0cda731217c371e0295db331ebb1/tailrecursion-generic.md
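
For readers who don't follow the link, here is a minimal sketch of the fluent, 
receiver-chaining style being described; the Config type and the check names 
are hypothetical, not taken from the gist:

package main

import "fmt"

type Config struct {
    errs []string
}

// Check records a failed validation and returns the receiver so calls chain.
func (c *Config) Check(ok bool, msg string) *Config {
    if !ok {
        c.errs = append(c.errs, msg)
    }
    return c
}

func (c *Config) Errs() []string { return c.errs }

func main() {
    c := new(Config).
        Check(true, "name set").
        Check(false, "port in range")
    fmt.Println(c.Errs()) // [port in range]
}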

It occurred to me that if one used this to construct complex graphs of 
processing, that the CPU's branch predictor would probably be on fire for 
that time, since even though these binary blobs are being possibly 
arbitrarily constructed based on random inputs, they will have a 
substantial amount of scope in common, so...

It might then be possible to further amplify this effect by allowing the 
runtime to lay the code ahead of the execution a bit like Magneto pulling 
those metal blocks up as he walks forwards.

I don't know how verbose it is, just at first blush, I am generally not 
fond of closure syntax in Go, but it seems to me like this dynamic 
construction pattern would be very good for speeding up complex processing 
with significant variance in sequence.

For example, playing back a journal into a database - a scouting thread 
could pre-process some of the key but simple and salient data about the 
segments of the journal, and construct ahead of time cache locality 
optimised code and data segmentation that will run with 100% confidence 
based on the structure and composition of the data.

At the moment I am just using it to chain validators together, but, for 
example, generating a graph from a blockchain ledger, in order to perform 
validation, can have a front-running pass that first generates the 
join/split paths of tokens intersecting with accounts. This graph forms the 
map of how to process the data, and for parallelisation, such a graph would 
allow the replay processing to be split automatically to make optimal use 
of cores, caches and the memory bus. It could even farm the work out across 
the network and all of the cluster nodes process their mostly isolated 
segment, then share their database tables directly and voila.

Such processing is naturally easier to construct using recursion, and with 
composition of closures in this way, it should also be quite efficient. 
Although with the current Go 1.10+ syntax it is a little clumsy, each part 
is small and this helps a lot.

When I am making big changes to code, I have this sensation like I am 
walking on unstable ground, because sometimes I can get a way into 
something and discover I passed the correct route some way back and I 
didn't commit before it and now I have to start all over again.

Small pieces less than a screenful at a time are very manageable. Just 
gotta get a handle on that vertigo :)



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
This was a good link to follow:

https://en.wikipedia.org/wiki/Bulk_synchronous_parallel

led me here:

https://en.wikipedia.org/wiki/Automatic_mutual_exclusion

and then to here:

https://en.wikipedia.org/wiki/Transactional_memory

I think this is the pattern for implementing this using channels as an 
optimistic resource state lock during access:

// in the outside scope

mx := make(chan bool) // make creates the channel; new(chan bool) would only give a *chan

// routine that needs exclusive read or write on the variable:

go func() {
    for {
        somestate := doSomething()
        <-mx // wait for the token before comparing
        if currentState() == somestate {
            break // nothing else touched the state: carry on
        }
        // otherwise retry with a fresh read of the state
    }
}() // the trailing () actually starts the goroutine

mx <- true // hand the token to the waiting goroutine

This is not a strict locking mechanism but a way to catch access contention. 
somestate might be a nanosecond timestamp, or a counter that every accessor 
reads and increments, signifying the number of accesses; the synchronisation 
state before the channel is emptied can then be compared to the state after, 
and if no other access incremented that value, the goroutine knows it can 
continue with the state correctly shared. I am deeply fascinated by 
distributed systems programming, and this type of scheduling suits potentially 
large and complex state (like a database) better, by taking note of access 
sequence. If we didn't have the possibility of one central counter that only 
increments, each event could be tagged with a value derived from the event 
that called it and the result of the next event.

As my simple iterative example showed, given the same sequence of events 
channels are deterministic, so this is an approach that is orthogonal but 
serves the same purpose - to prevent multiple concurrent agents from 
desynchronising shared state, without blocking everything before access. It's 
not a journal, but the idea is to have each goroutine act on the final state 
of the value at the time it is invoked to operate on it. So you let everyone 
at it, but everyone stops at this barrier, checks whether anyone else changed 
it, and if so tries again, to get conflict-free access.
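
A minimal runnable sketch of that compare-and-retry idea, using a one-slot 
channel as the token and a version counter as the 'somestate'; the variable 
names and the four workers are assumptions for illustration:

package main

import "fmt"

func main() {
    token := make(chan struct{}, 1)
    token <- struct{}{} // the token starts out available
    version := 0        // bumped on every successful commit
    shared := 0

    update := func() {
        for {
            <-token // take the token to read a consistent snapshot
            v, snapshot := version, shared
            token <- struct{}{} // give the token back while we "work"

            newValue := snapshot + 1 // optimistic work on the snapshot

            <-token
            if version == v { // nobody committed in the meantime
                shared, version = newValue, version+1
                token <- struct{}{}
                return
            }
            token <- struct{}{} // conflict: someone else committed, so retry
        }
    }

    done := make(chan struct{})
    for i := 0; i < 4; i++ {
        go func() { update(); done <- struct{}{} }()
    }
    for i := 0; i < 4; i++ {
        <-done
    }
    fmt.Println(shared, version) // 4 4
}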

On Sunday, 17 March 2019 20:52:12 UTC+1, Robert Engels wrote:
>
> https://g.co/kgs/2Q3a5n
>
> On Mar 17, 2019, at 2:36 PM, Louki Sumirniy  > wrote:
>
> So I am incorrect that only one goroutine can access a channel at once? I 
> don't understand, only one select or receive or send can happen at one 
> moment per channel, so that means that if one has started others can't 
> start. 
>
> I was sure this was the case and this seems to confirm it:
>
> https://stackoverflow.com/a/19818448
>
> https://play.golang.org/p/NQGO5-jCVz
>
> In this it is using 5 competing receivers but every time the last one in 
> line gets it, so there is scheduling and priority between two possible 
> receivers when a channel is filled. 
>
> This is with the range statement, of course, but I think the principle I 
> am seeing here is that in all cases it's either one to one between send and 
> receive, or one to many, or many from one, one side only receives the other 
> only sends. If you consider each language element in the construction, and 
> the 'go' to be something like a unix fork(), this means that the first 
> statement inside the goroutine and the very next one in the block where the 
> goroutine is started potentially can happen at the same time, but only one 
> can happen at a time.
>
> So the sequence puts the receive at the end of the goroutine, which 
> presumably is cleared to run, whereas the parent where it is started is 
> waiting to have a receiver on the other end.
>
> If there is another thread in line to access that channel, at any given 
> time only one can be on the other side of send or receive. That code shows 
> that there is a deterministic order to this, so if I have several 
> goroutines running each one using this same channel lock to cover a small 
> number of mutable shared objects, only one can use the channel at once. 
> Since the chances are equal whether one or the other gets the channel at 
> any given time, but it is impossible that two can be running the accessor 
> code at the same time.
>
> Thus, this construction is a mutex because it prevents more than one 
> thread accessing at a time. It makes sense to me since it takes several 
> instructions to read and write variables copying to register or memory. If 
> you put two slots in the buffer, it can run in parallel, that's the point, 
> a single element in the channel means only one access at a time and thus it 
> is a bottleneck that protects from simultaneous read and write by parallel 
> threads.
>
> On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote:
>>
>> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  
>> wrote:
>>
>> > My understanding of channels is they basically create exclusion by 
>> control of the path of execution, instead 

Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
Ah yes, probably 'loop1' 'loop2' would be more accurate names.

Yes the number of each routine is in that 'i' variable, those other labels 
are just to denote the position within the loops and before and after 
sending and the state of the truthstate variable that is only accessed 
inside the goroutine.

Yes, this is a very artificial example because the triggering of send 
operations is what puts all the goroutines into action. The serial nature 
of the outer part in main means that the sequentially sent and received 
messages will mostly come out in the same order as they went in (I'd say 
pretty much always, except maybe if there were a lot of heavy competition 
for goroutines compared to the supply of CPU threads).

If in an event driven structure with multiple workers that need to share 
and notify state through state variables, one goroutine might send and then 
another runs and receives. So, maybe this means I need to have a little 
more code in the goroutine after it empties the channel that verifies by 
reading that state hasn't changed and starts again if it has.

So, ok, I guess the topic is kinda wrongly labeled. I am just looking for a 
way to queue read and write access to a couple of variables; order doesn't 
matter, just that only one read or write is happening at any given moment. To 
be a complete example I would need randomised and deliberately congested 
spawning of threads competing to push to the channel, and inside the loop a 
boolean or maybe a 'last modified' stamp that it checks after emptying the 
channel, restarting if it has changed.

But yes, that could still get into a deadlock if somehow two routines get 
into a perfect rhythm with each other. 

I will have to think more about this. As for the code I am trying to fix, I 
suppose it would help if I understood its logic flow before I just try to 
prevent contention by changing the flow, if that is possible.

On Sunday, 17 March 2019 21:59:30 UTC+1, Devon H. O'Dell wrote:
>
> I like to think of a channel as a concurrent messaging queue. You can 
> do all sorts of things with such constructs, including implementing 
> mutual exclusion constructs, but that doesn't mean that one is the 
> other. 
>
> Your playground example is a bit weird and very prone to various kinds 
> of race conditions that indicate that it may not be doing what you 
> expect. At a high level, your program loops 10 times. Each loop 
> iteration spawns a new concurrent process that attempts to read a 
> value off of a channel three times. Each iteration of the main loop 
> writes exactly one value into the channel. 
>
> As a concurrent queue, writes to the channel can be thought of as 
> appending an element, reads can be thought of as removing an element 
> from the front. 
>
> Which goroutine will read any individual value is not deterministic. 
> Since you're only sending 11 values over the channel, but spawn 10 
> goroutines that each want to read 3 values, you have at best 6 
> goroutines still waiting for data to be sent (and at worse, all 10) at 
> the time the program exits. 
>
> I would also point out that this is not evidence of mutual exclusion. 
> Consider a case where the work performed after the channel read 
> exceeds the time it takes for the outer loop to write a new value to 
> the channel. In that case, another goroutine waiting on the channel 
> would begin executing. This is not mutual exclusion. In this regard, 
> the example you've posted is more like a condition variable or monitor 
> than it is like a mutex. 
>
> Also note that in your second playground post, you're spawning 12 
> goroutines, so I'm not sure what "goroutine1" and "goroutine2" are 
> supposed to mean. 
>
> Kind regards, 
>
> --dho 
>
> Op zo 17 mrt. 2019 om 13:07 schreef Louki Sumirniy 
> >: 
> > 
> > https://play.golang.org/p/13GNgAyEcYv 
> > 
> > I think this demonstrates how it works quite well, it appears that 
> threads stick to channels, routine 0 always sends first and 1 always 
> receives, and this makes sense as this is the order of their invocation. I 
> could make more parallel threads but clearly this works as a mutex and only 
> one thread gets access to the channel per send/receive (one per side). 
> > 
> > On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote: 
> >> 
> >> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy <
> louki.sumir...@gmail.com> wrote: 
> >> 
> >> > My understanding of channels is they basically create exclusion by 
> control of the path of execution, instead of using callbacks, or they 
> bottleneck via the cpu thread which is the reader and writer of this shared 
> data anyway. 
> >> 
> >> The language specification ne

Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
I am pretty sure the main cause of deadlocks is not having senders and 
receivers in pairs in the execution path, such that senders precede 
receivers. Receivers wait to get something, and in another post here I 
showed a playground that demonstrates that with one channel only one thread 
is ever accessing the protected variables (because the code only accesses 
them in there). In a nondeterministic input situation where a listener might 
trigger a send (and run this protected code), it is still going to be one or 
the other at the front of the queue; in this case we are not concerned with 
sequence, only with excluding simultaneous read/write operations.

I would not use a buffered channel to implement a mutex, as this implicitly 
means two or more threads can read and write variables inside the goroutine.

That was my main question, as I want to use the lightest possible mechanism 
to ensure that only one reader or writer is working at any moment on the two 
variables that the race detector is flagging in my code.

On Sunday, 17 March 2019 20:51:33 UTC+1, Jan Mercl wrote:
>
> On Sun, Mar 17, 2019 at 8:36 PM Louki Sumirniy  > wrote:
>
> > So I am incorrect that only one goroutine can access a channel at once? 
> I don't understand, only one select or receive or send can happen at one 
> moment per channel, so that means that if one has started others can't 
> start. 
>
> All channel operations can be safely used by multiple, concurrently 
> executing goroutines. The black box inside the channel implementation can 
> do whatever it likes as long as it follows the specs and memory model. But 
> from the outside, any goroutine can safely send to, read from, close or 
> query length of a channel at any time without any explicit synchronization 
> whatsoever. By safely I mean "without creating a data race just by 
> executing the channel operation". The black box takes care of that.
>
> However, the preceding _does not_ mean any combination of channel 
> operations performed by multiple goroutines is always sane and that it 
> will, for example, never deadlock. But that's a different story.
>
> -- 
>
> -j
>



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
https://play.golang.org/p/Kz9SsFeb1iK

This prints something at each interstice of the execution path and it is of 
course deterministic.

I think the reason the range loop always chooses one per channel - the last 
one in order - is that it uses a LIFO queue, so the last in line gets filled 
first.

The example in there shows with 1, 2 and 3 slots in the buffer, the 
exclusion only occurs properly in the single slot, first goroutine always 
sends and second always receives. This is because of the order of 
execution. If the sends were externally determined and random it still only 
gets written within one location. If you only read and write these 
variables inside this loop they can never be clobbered while you read them, 
which is the kind of contention a mutex exists to resolve by determining the 
sequence of execution.

The main cost I see is that, though the overhead is lower, there is still 
overhead, so there is some reasonable ratio to respect between how much code 
you execute while holding the channel and how long the other competing 
parallel threads, with external, non-deterministic inputs, will have the value 
locked - you don't want to add response latency. The scheduling overhead also 
escalates with the number of threads, and costs memory as well.

But my main point is that it functions correctly as a mutex mechanism and 
code inside the goroutine can count on nobody else accessing the variables 
that are only read and written inside it.

On Sunday, 17 March 2019 20:36:40 UTC+1, Louki Sumirniy wrote:
>
> So I am incorrect that only one goroutine can access a channel at once? I 
> don't understand, only one select or receive or send can happen at one 
> moment per channel, so that means that if one has started others can't 
> start. 
>
> I was sure this was the case and this seems to confirm it:
>
> https://stackoverflow.com/a/19818448
>
> https://play.golang.org/p/NQGO5-jCVz
>
> In this it is using 5 competing receivers but every time the last one in 
> line gets it, so there is scheduling and priority between two possible 
> receivers when a channel is filled. 
>
> This is with the range statement, of course, but I think the principle I 
> am seeing here is that in all cases it's either one to one between send and 
> receive, or one to many, or many from one, one side only receives the other 
> only sends. If you consider each language element in the construction, and 
> the 'go' to be something like a unix fork(), this means that the first 
> statement inside the goroutine and the very next one in the block where the 
> goroutine is started potentially can happen at the same time, but only one 
> can happen at a time.
>
> So the sequence puts the receive at the end of the goroutine, which 
> presumably is cleared to run, whereas the parent where it is started is 
> waiting to have a receiver on the other end.
>
> If there is another thread in line to access that channel, at any given 
> time only one can be on the other side of send or receive. That code shows 
> that there is a deterministic order to this, so if I have several 
> goroutines running each one using this same channel lock to cover a small 
> number of mutable shared objects, only one can use the channel at once. 
> Since the chances are equal whether one or the other gets the channel at 
> any given time, but it is impossible that two can be running the accessor 
> code at the same time.
>
> Thus, this construction is a mutex because it prevents more than one 
> thread accessing at a time. It makes sense to me since it takes several 
> instructions to read and write variables copying to register or memory. If 
> you put two slots in the buffer, it can run in parallel, that's the point, 
> a single element in the channel means only one access at a time and thus it 
> is a bottleneck that protects from simultaneous read and write by parallel 
> threads.
>
> On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote:
>>
>> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  
>> wrote:
>>
>> > My understanding of channels is they basically create exclusion by 
>> control of the path of execution, instead of using callbacks, or they 
>> bottleneck via the cpu thread which is the reader and writer of this shared 
>> data anyway.
>>
>> The language specification never mentions CPU threads. Reasoning about 
>> the language semantics in terms of CPU threads is not applicable.
>>
>> Threads are mentioned twice in the Memory Model document. In both cases I 
>> think it's a mistake and we should s/threads/goroutines/ without loss of 
>> correctness.
>>
>> Channel communication establish happen-before relations (see Memory 
>> Model). I see nothing equivalent directly to a critical section in that 
>> behavio

Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
https://play.golang.org/p/13GNgAyEcYv

I think this demonstrates how it works quite well, it appears that threads 
stick to channels, routine 0 always sends first and 1 always receives, and 
this makes sense as this is the order of their invocation. I could make 
more parallel threads but clearly this works as a mutex and only one thread 
gets access to the channel per send/receive (one per side).

On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote:
>
> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  > wrote:
>
> > My understanding of channels is they basically create exclusion by 
> control of the path of execution, instead of using callbacks, or they 
> bottleneck via the cpu thread which is the reader and writer of this shared 
> data anyway.
>
> The language specification never mentions CPU threads. Reasoning about the 
> language semantics in terms of CPU threads is not applicable.
>
> Threads are mentioned twice in the Memory Model document. In both cases I 
> think it's a mistake and we should s/threads/goroutines/ without loss of 
> correctness.
>
> Channel communication establish happen-before relations (see Memory 
> Model). I see nothing equivalent directly to a critical section in that 
> behavior, at least as far as when observed from outside. It was mentioned 
> before that it's possible to _construct a mutex_ using a channel. I dont 
> think that implies channel _is a mutex_ from the perspective of a program 
> performing channel communication. The particular channel usage pattern just 
> has the same semantics as a mutex.
>
> -- 
>
> -j
>



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
So I am incorrect that only one goroutine can access a channel at once? I 
don't understand, only one select or receive or send can happen at one 
moment per channel, so that means that if one has started others can't 
start. 

I was sure this was the case and this seems to confirm it:

https://stackoverflow.com/a/19818448

https://play.golang.org/p/NQGO5-jCVz

In this it is using 5 competing receivers but every time the last one in 
line gets it, so there is scheduling and priority between two possible 
receivers when a channel is filled. 

This is with the range statement, of course, but I think the principle I am 
seeing here is that in all cases it's either one to one between send and 
receive, or one to many, or many from one, one side only receives the other 
only sends. If you consider each language element in the construction, and 
the 'go' to be something like a unix fork(), this means that the first 
statement inside the goroutine and the very next one in the block where the 
goroutine is started potentially can happen at the same time, but only one 
can happen at a time.

So the sequence puts the receive at the end of the goroutine, which 
presumably is cleared to run, whereas the parent where it is started is 
waiting to have a receiver on the other end.

If there is another thread in line to access that channel, at any given 
time only one can be on the other side of send or receive. That code shows 
that there is a deterministic order to this, so if I have several 
goroutines running each one using this same channel lock to cover a small 
number of mutable shared objects, only one can use the channel at once. 
Since the chances are equal whether one or the other gets the channel at 
any given time, but it is impossible that two can be running the accessor 
code at the same time.

Thus, this construction is a mutex because it prevents more than one thread 
accessing at a time. It makes sense to me since it takes several 
instructions to read and write variables copying to register or memory. If 
you put two slots in the buffer, it can run in parallel, that's the point, 
a single element in the channel means only one access at a time and thus it 
is a bottleneck that protects from simultaneous read and write by parallel 
threads.

On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote:
>
> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  > wrote:
>
> > My understanding of channels is they basically create exclusion by 
> control of the path of execution, instead of using callbacks, or they 
> bottleneck via the cpu thread which is the reader and writer of this shared 
> data anyway.
>
> The language specification never mentions CPU threads. Reasoning about the 
> language semantics in terms of CPU threads is not applicable.
>
> Threads are mentioned twice in the Memory Model document. In both cases I 
> think it's a mistake and we should s/threads/goroutines/ without loss of 
> correctness.
>
> Channel communication establish happen-before relations (see Memory 
> Model). I see nothing equivalent directly to a critical section in that 
> behavior, at least as far as when observed from outside. It was mentioned 
> before that it's possible to _construct a mutex_ using a channel. I dont 
> think that implies channel _is a mutex_ from the perspective of a program 
> performing channel communication. The particular channel usage pattern just 
> has the same semantics as a mutex.
>
> -- 
>
> -j
>



[go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-17 Thread Louki Sumirniy
I didn't even think of the idea of using buffered channels; I was trying not 
to lean too far towards that side of things, but it is good you mention it. It 
would be simple to just pre-allocate a buffer and trigger the print call only 
when that buffer fills up (say half a screenful, maybe 4kb, to allow for 
stupidly heavy logging output).

As you point out, there is some threads and buffering going on with the 
writer already.

I do think, though, that while on one hand you are correct that the load is 
only shifted rather than reduced, on the other hand the scheduler can more 
cleanly keep the threads separated. In my use case there is only one main 
thread, processing a lot of crypto-heavy virtual machine stuff, and the most 
important thing is that it needs low-overhead, in-thread diversions, not that 
the total load is reduced.

As I mentioned, I wrote a logger that does this deferral of processing until 
two hops downstream to the root. I am going to look at buffering it, and I 
think it would actually be good to pace it by time instead of by line-printing 
speed; when heavy debug printing is in use, performance is not a concern at 
all. But being single-threaded, the less time that thread spends talking to 
other threads the better - the loops are very short, under 100ms most of the 
time, and making actual calls to log. or fmt.Print functions adds more 
overhead to the main thread than loading a channel and dispatching it.

On my main workstation there are another 11 cores, and they can be busy doing 
things without slowing the central process, so long as they aren't needing to 
synchronise with it or put the loop on hold longer than necessary. Using a 
buffer and a fast ticker to fill and empty bulk sends to the output sounds 
like a sensible idea to me; since it is slicing strings, it is easy to design 
it to flow as a stream, and they only need one bottleneck as they are streamed 
into the buffer.
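
A minimal sketch of that combination - flush when the buffer reaches a size 
threshold or on a ticker, whichever comes first. The 4kb figure comes from the 
text above; the names and the channel size are assumptions:

package main

import (
    "fmt"
    "time"
)

// logger accumulates lines and prints them in bulk, either when the buffer
// reaches the threshold or when the ticker fires, whichever happens first.
func logger(in <-chan string) {
    const threshold = 4096
    buf := make([]byte, 0, threshold)
    tick := time.NewTicker(time.Second)
    defer tick.Stop()
    flush := func() {
        if len(buf) > 0 {
            fmt.Print(string(buf))
            buf = buf[:0]
        }
    }
    for {
        select {
        case m, ok := <-in:
            if !ok { // channel closed: final flush and stop
                flush()
                return
            }
            buf = append(buf, m...)
            buf = append(buf, '\n')
            if len(buf) >= threshold {
                flush()
            }
        case <-tick.C:
            flush()
        }
    }
}

func main() {
    ch := make(chan string, 64)
    done := make(chan struct{})
    go func() { logger(ch); close(done) }()
    for i := 0; i < 3; i++ {
        ch <- fmt.Sprintf("line %d", i)
    }
    close(ch)
    <-done
}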

On Sunday, 17 March 2019 10:15:29 UTC+1, Christophe Meessen wrote:
>
> What you did is decouple production from consumption. You can speed up the 
> production go routine if the rate is irregular. But if, on average, 
> consumption is too slow, the list will grow until out of memory. 
>
> If you want to speed up consumption, you may group the strings in one big 
> string and print once. This will reduce the rate of system calls to print 
> each string individually.
>
> Something like this (warning: raw untested code)
>
>
> buf := make([]byte, 0, 1024)
> ticker := time.NewTicker(1 * time.Second)
> for {
>     select {
>     case <-ticker.C: // flush whatever has accumulated once per tick
>         if len(buf) > 0 {
>             fmt.Print(string(buf))
>             buf = buf[:0] // reset the buffer without reallocating
>         }
>     case m := <-buffChan:
>         buf = append(buf, m...)
>         buf = append(buf, '\n')
>     }
> }
>
>
> Adjust the ticker period time and initial buffer size to what matches your 
> need. 
>
>



[go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-17 Thread Louki Sumirniy
I just wrote this a bit over a month ago:

https://git.parallelcoin.io/dev/pod/src/branch/master/pkg/util/cl

It was brutally simple before (only one 600-line source file, lots of type 
switches), but it now also has a registration system for setting up 
arbitrary subsystem log levels.

By the way, I could be wrong in my thinking about this, but it was central 
to the design to use interface{} channels with meaningful type names; the 
variables and inputs for, say, a printf-type call are only bundled into a 
struct literal - no formatting function is called in the hot thread, the 
execution just forks and the log messages pass straight through.

I even have a type for closures to pass straight through and they also 
don't execute until the last step of composing the output, so I have 6 core 
goroutines and 6 for each subsystem I set up, and most of the work is done 
at the end of two passes through channels (subsystems drop them if so 
configured). 
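
A minimal sketch of that deferral idea - the hot thread only builds a struct 
literal and sends it, while the Sprintf work and any closures run in the 
output goroutine. The type and channel names here are hypothetical, not the 
ones in the linked package:

package main

import "fmt"

// Fmtf bundles a format string and its arguments without formatting them yet.
type Fmtf struct {
    Fmt  string
    Args []interface{}
}

// Closure defers even the construction of the message until output time.
type Closure func() string

func main() {
    logChan := make(chan interface{}, 128)
    done := make(chan struct{})

    // Output goroutine: the only place where formatting work actually happens.
    go func() {
        for m := range logChan {
            switch v := m.(type) {
            case Fmtf:
                fmt.Printf(v.Fmt+"\n", v.Args...)
            case Closure:
                fmt.Println(v())
            }
        }
        close(done)
    }()

    // Hot path: no Sprintf, no logging call, just a send of a struct literal.
    logChan <- Fmtf{"block %d replayed in %dms", []interface{}{42, 7}}
    logChan <- Closure(func() string { return "expensive debug dump" })
    close(logChan)
    <-done
}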

I think it also has a somewhat crude shutdown handling system in it too. 

It could definitely do with more work - the actual performance overhead in 
throughput and latency should be profiled and optimised - but being simple I 
think it hasn't got much room to improve in the general design.

The messages pass through two 'funnels', one per subsystem, gated by the 
subsystem's level setting, and the actual function calls that splice the 
strings and output the result are deferred until the final six-channel stage 
on the main thread. I have mainly used them on a package level, but you can 
declare several in a package; they just have to have different string labels.

Anyway, I'm kinda proud of it, because my specific need is to avoid as much 
in-thread overhead as possible: during replay of the database log only one 
thread runs, due to the dependencies between data elements, and I think this 
does that well. I can't say I have any measurements either way, except that I 
do know permanently allocated channels only have a startup cost, their 
scheduling is very low cost, and they also make it explicit that the 
processes are concurrent and ideally should not be overlapping each other.

The print functions in fmt and log all have a per-call overhead cost, whereas 
channels have an initialisation cost and a lower context-switch cost - yes, 
more memory use, but mainly because it funnels messages from a couple of 
hundred other threads. So that's my theory and why this post caught my eye. My 
logger is unlicensed, so do whatever you like with it except pretend it's not 
prior art.

On Saturday, 16 March 2019 17:02:06 UTC+1, Thomas S wrote:
>
> Hello,
>
> I have a software needing a lot of logs when processing.
>
> I need to :
> 1- Improve the processing time by doing non-blocking logs
> 2- Be safe with goroutines parallel logs
>
> fmt package doesn't match with (1) & (2)
> log package doesn't match with (1)
>
> *Here is my proposal. What do you think ?*
>
> *Design :*
>
> [Log functions] -channel>[Buffering function (goroutine)] 
> channel> [Printing function (goroutine)]
>
>
> package glog
>
> import (
>     "container/list"
>     "fmt"
> )
>
> /*
> ** Public
>  */
>
> func Println(data ...interface{}) {
> bufferChan <- fmt.Sprintln(data...)
> }
>
> func Print(data ...interface{}) {
> bufferChan <- fmt.Sprint(data...)
> }
>
> func Printf(s string, data ...interface{}) {
> go func() {
> r := fmt.Sprintf(s, data...)
> bufferChan <- r
> }()
> }
>
> /*
> ** Private
>  */
>
> var bufferChan chan string
> var outChan chan string
>
> func init() {
> bufferChan = make(chan string)
> outChan = make(chan string)
> go centrale()
> go buffout()
> }
>
> func centrale() {
> var buff *list.List
> buff = list.New()
> for {
> if buff.Len() > 0 {
> select {
> case outChan <- buff.Front().Value.(string):
> buff.Remove(buff.Front())
> case tmp := <-bufferChan:
> buff.PushBack(tmp)
> }
>
> } else {
> tmp := <-bufferChan
> buff.PushBack(tmp)
> }
> }
> }
>
> func buffout() {
> for {
> data := <-outChan
> fmt.Print(data)
> }
> }
>
>
>
> It works well for now, I want to be sure to not miss anything as it's a 
> very important part of the code.
>
> Thank you for your review.
>
>



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
I didn't mention actually excluding access by passing data through values 
either, this was just using a flag to confine accessor code to one thread, 
essentially, which has the same result as a mutex as far as its granularity 
goes.

On Sunday, 17 March 2019 13:04:26 UTC+1, Louki Sumirniy wrote:
>
> My understanding of channels is they basically create exclusion by control 
> of the path of execution, instead of using callbacks, or they bottleneck 
> via the cpu thread which is the reader and writer of this shared data 
> anyway.
>
> I think the way they work is that there are queues for read and write 
> access based on recency, so when a channel is loaded, the most 
> proximate (if possible the same) thread executes the other side of the channel; 
> and then if another thread of execution bumps into a patch involving 
> accessing the channel, it is blocked if the channel is full and it wants to 
> fill, or if it wants to unload and the channel is empty, but the main 
> goroutine scheduler basically is the gatekeeper and assigns execution 
> priority based on sequence and first availability. 
>
> So, if that is correct, then the version with the load after the goroutine 
> and unload at the end of the goroutine functions to grab the thread of the 
> channel, and when it ends, gives it back, and if another is ready to use 
> it, it is already lined up and the transfer is made. So any code I wrap 
> every place inside the goroutine/unload-load pattern (including inside 
> itself) can only be run by one thread at once. If you ask me, that's better 
> and more logical than callbacks.
>
> On Sunday, 17 March 2019 11:05:35 UTC+1, Jan Mercl wrote:
>>
>>
>> On Sun, Mar 17, 2019 at 10:49 AM Louki Sumirniy  
>> wrote:
>>
>> > I just ran into my first race condition-related error and it made me 
>> wonder about how one takes advantage of the mutex properties of channels.
>>
>> I'd not say there are any such properties. However, it's easy to 
>> implement a semaphore with a channel. And certain semaphores can act as 
>> mutexes.
>>
>> > If I understand correctly, this is a simple example:
>>
>> That example illustrates IMO more of a condition/signal than a typical 
>> mutex usage pattern.
>>
>> -- 
>>
>> -j
>>
>



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
My understanding of channels is they basically create exclusion by control 
of the path of execution, instead of using callbacks, or they bottleneck 
via the cpu thread which is the reader and writer of this shared data 
anyway.

I think the way they work is that there are queues for read and write access 
based on recency, so when a channel is loaded, the most proximate (if 
possible the same) thread executes the other side of the channel; and then if 
another thread of execution bumps into a patch involving accessing the 
channel, it is blocked if the channel is full and it wants to fill, or if it 
wants to unload and the channel is empty, but the main goroutine 
scheduler basically is the gatekeeper and assigns execution priority based 
on sequence and first availability.

So, if that is correct, then the version with the load after the goroutine 
and unload at the end of the goroutine functions to grab the thread of the 
channel, and when it ends, gives it back, and if another is ready to use 
it, it is already lined up and the transfer is made. So any code I wrap 
every place inside the goroutine/unload-load pattern (including inside 
itself) can only be run by one thread at once. If you ask me, that's better 
and more logical than callbacks.

On Sunday, 17 March 2019 11:05:35 UTC+1, Jan Mercl wrote:
>
>
> On Sun, Mar 17, 2019 at 10:49 AM Louki Sumirniy  > wrote:
>
> > I just ran into my first race condition-related error and it made me 
> wonder about how one takes advantage of the mutex properties of channels.
>
> I'd not say there are any such properties. However, it's easy to implement 
> a semaphore with a channel. And certain semaphores can act as mutexes.
>
> > If I understand correctly, this is a simple example:
>
> That example illustrates IMO more of a condition/signal than a typical 
> mutex usage pattern.
>
> -- 
>
> -j
>



Re: [go-nuts] Re: Persistence of value, or Safely close what you expected

2019-03-17 Thread Louki Sumirniy
I am currently dealing with a race-related bug - at least, I found a bug, and 
it coincided with a race when I enabled the race detector. The bug is a 
deadlock, and the shared shut-down button can clearly be pressed on and off in 
the wrong order.

So my first strategy in fixing the bug is putting channel mutexes around the 
read/write operations. However, I'm not sure it's the most efficient solution. 
Basically it's two variables with related functions (a quit flag and a 
cursor), so I think I can cover them with one lock. The quit lock will be 
infrequently accessed, so I think it doesn't affect performance when the 
system is actually meant to be working.

As regards closing channels, this is always about doing it in only one 
place. Unless closing the channel is itself the signal, there should only 
be one open and one close function, usually in the same scope, possibly 
even with a defer so that it is safely closed on a panic and so the system it 
is part of can be reinitialised and restarted.

As I understand it, closing a channel is mostly used as a quit signal, so 
there is usually only one closer and many receivers, and you can't get a 
race inside a single serial goroutine. This doubles the utility of a 
channel that is already being used to pass data between goroutines: you can 
treat the close as quit, reload, pause or whatever toggle/pushbutton-style 
switch you want, provided the responding side swaps in a fresh channel 
afterwards (a closed channel cannot be reopened) to act like a 'my turn' 
signal. More usually, though, that job is done by having several workers 
listening for work and only one sender, one broker, or some other consensus.
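
The one-closer/many-listeners shape is roughly this (a minimal sketch):

package main

import (
    "fmt"
    "sync"
)

func main() {
    quit := make(chan struct{}) // closed exactly once, by exactly one owner
    var wg sync.WaitGroup

    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            <-quit // a receive on a closed channel returns immediately
            fmt.Println("worker", id, "stopping")
        }(i)
    }

    close(quit) // the broadcast: every listener unblocks
    wg.Wait()
}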

There are more than a few layers to the interlock between this and other Go 
maxims; having the same goroutine act as both sender and receiver on a 
channel is, in general, a bad idea. You are either funnelling concurrent 
processes into a serial one, or fanning a serial stream out into concurrent 
processes. A close is cognate to nil in that it is neither a zero nor a 
normal value, so in some situations you can use it as a message in itself.

On Thursday, 14 March 2019 13:51:29 UTC+1, rog wrote:
>
> On Wed, 13 Mar 2019 at 23:01, Andrey Tcherepanov  > wrote:
>
>> There were couple of things I was implementing for our little in-house 
>> server app. 
>> One was (in)famous fan-out pattern for broadcasting messages to clients, 
>> and one for job queue(s) that could run fast (seconds) or long (hours, 
>> days) where items kind of coming it from multiple (same) clients.
>>
>
> I'd be interested to hear a little more about these tasks. There are many 
> possible design choices there. For example, if one client is slow to read 
> its messages, what strategy would you choose? You could discard messages, 
> amalgamate messages, slow messages to other clients, etc. The choice you 
> make can have significant impact on the design of your system.
>
> FWIW the usual convention is to avoid sharing the memory that contains the 
> channel. The same channel can be stored in two places - both refer to the 
> same underlying channel. So when the sender goes away, it can nil its own 
> channel without interfering with the receiver's channel.
>
>   cheers,
> rog.
>



[go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
I just ran into my first race condition-related error and it made me wonder 
about how one takes advantage of the mutex properties of channels.

If I understand correctly, this is a simple example:

mx := make(chan bool)

// in separate scope, presumably...

go func() {
    <-mx
    doSomething()
}()

mx <- true


So what happens here is that the body of the goroutine waits for something 
to come out of the channel. Whether it runs in parallel or is interleaved, 
the body of the goroutine doesn't start until the channel is loaded, and 
the send and receive complete together, so it's impossible for two 
goroutines competing over the same lock to hold it at the same time.

With a single OS thread, I think the goroutine is scheduled first, blocks 
on the receive, and then main resumes and performs the send; if another 
goroutine also tries to wait on that channel, it is second (or later) in 
line waiting for something to come out.

I can see there is also another strategy where the shared memory itself is 
the value passing through the channel (so it would probably be a chan *type, 
to distinguish it), and the choice between the two depends on how many locks 
you want to tie to how many sensitive items. If it's a bundle of things, 
like a map or array, it might be better to pass the object around by 
pointer, using channels as the entry points. But more often it's one or two 
variables out of a larger set, so it makes more sense to lock them 
separately, and with channels each lock is just one extra (small) field.

Or maybe I am saying that backwards: if the state is big, use a few small 
locks to cover each part of the ephemeral shared state; if the state is 
small, pass it around directly through channels.
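
For the 'pass it around directly' case, the picture I have is the state 
riding inside the channel (a sketch with made-up names):

package main

import "fmt"

func main() {
    // The small shared state itself travels through a one-slot channel:
    // whoever has received it may touch it; everyone else waits for it.
    state := make(chan map[string]int, 1)
    state <- map[string]int{}

    done := make(chan struct{})
    go func() {
        m := <-state // take ownership
        m["hits"]++
        state <- m // hand it back
        close(done)
    }()
    <-done

    m := <-state
    fmt.Println(m["hits"]) // 1
    state <- m
}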

I'm really a beginner at concurrent programming, so I apologise for my 
noobishness... and thanks to anyone who helps me (and anyone else) 
understand it better :)



Re: [go-nuts] Visual Studio Code oddity with Go

2019-03-03 Thread Louki Sumirniy
That is a strange setting for GOBIN... Isn't that the folder `go install` 
puts binaries in? I always set it to ~/bin and put that in my path as well. 
I like to use `go install` instead of `go build` because then the binary 
doesn't land in the source tree, so I don't have to remember to keep it out 
of a commit.

On Sunday, 3 March 2019 15:29:28 UTC+1, Rich wrote:
>
> Thank you!! This worked for me. $GOPATH is set to ~/go, but when it was 
> installing gocode it installed to /usr/local/go/bin instead of ~/go/bin -- 
> this is because the $GOBIN variable is set to /usr/local/go/bin. 
>
> On Saturday, February 23, 2019 at 3:05:58 PM UTC-5, Joseph Pratt wrote:
>>
>> Rich, you should check where your GOPATH is pointing. My guess is that 
>> VSCode is successfully installing the tool, but it's installing it in the 
>> "wrong" place. I have my $GOPATH is set to "\go" and my project 
>> folder structure is "$GOPATH\src\myDomain.com\myProject\main.go" and when I 
>> run VSCode I open the top-level $GOPATH folder. That way, I see all the 
>> tools that the VSCode Go Extension recommends to installation go in the 
>> "$GOPATH\go\src\github.com\.." and the "$GOPATH\go\golang.org.." in the 
>> folder explorer side-bar (screenshot attached). Hope that helps!
>>
>> On Friday, February 22, 2019 at 12:03:59 AM UTC-5, Rich wrote:
>>>
>>> Yeah. When I install the tool, it always gives me a success.  When I 
>>> selected all of them it also gave me a success.
>>>
>>> Thanks!
>>>
>>> On Thursday, February 21, 2019 at 10:42:33 PM UTC-5, andrey mirtchovski 
>>> wrote:

 > I tried the solution posted by Andrey (Thank you!) and it still does 
 the popup thing.  Oh well, it's a minor distraction, click update and it 
 goes away. 

 If you go to the OUTPUT tab does it give you an error message? or does 
 it say "things successfully installed"? 

>>>



[go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-03-02 Thread Louki Sumirniy
The function is really just looking up the default prefix implied by the 
address's range. It has nothing to do with CIDR; it is for generating a 
sane default mask when the user has not specified one.

It most definitely should not be deprecated, as those address ranges are 
definitely not deprecated, and CIDR is an extension of, not a replacement 
for, IPv4 subnet specification, there to give administrators more 
flexibility when configuring multiple address ranges in a fairly large 
intranet.
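
For reference, a quick sketch of what the function gives back when no mask 
has been specified:

package main

import (
    "fmt"
    "net"
)

func main() {
    for _, s := range []string{"10.1.2.3", "172.16.5.9", "192.168.1.1"} {
        ip := net.ParseIP(s)
        ones, bits := ip.DefaultMask().Size() // class A, B, C => /8, /16, /24
        fmt.Printf("%-12s default mask /%d of %d\n", s, ones, bits)
    }
}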

On Saturday, 2 March 2019 22:32:28 UTC+1, John Dreystadt wrote:
>
> I am new to Go so feel free to point out if I am breaking protocol but I 
> ran into the function DefaultMask() in the net package and did some 
> research. This function returns the IPMask by assuming that you are using 
> IP class A, B, and C addresses. But this concept is from the early days of 
> the Internet, and was superseded by CIDR in 1993. See 
> https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing for the 
> history here. I looked around both this group and on Stack Overflow to see 
> what people had posted about this function. The only reference in 
> golang-nuts was to someone using this call to decide if an address was IPv4 
> or not. As the last posting on that thread pointed out, you can use To4() 
> for the same purpose (since DefaultMask actually calls To4). The only 
> reference on Stack Overflow was someone using it to get the "Next IP 
> address". Sorry but I don't understand what he was doing. If you want the 
> IPMask for 127.0.0.1, you can just get that interface and get the mask that 
> way. I even tried Google for "golang DefaultMask" and only found hits about 
> non network things. So I don't believe that this function is useful today 
> and should be deprecated, maybe with a message about using To4() if you 
> just want to see if an address is IPv4.
>



Re: [go-nuts] finding a way to convert "map[string]map[string]Somthing" into "map[string]interface{}"

2019-03-02 Thread Louki Sumirniy
It sounds like you are really asking about a built-in construct for 
performing this conversion, not 'can this be done'. Some are talking about 
using reflection, but I shy away from that and avoid reflection whenever 
possible, with the exception of its back-end use in the formatted print 
functions and json/etc parsers, which one should in any case keep out of 
high-repetition loops.

No, there is no such method. As with everything in Go, many things that are 
automatic in other languages' syntax must be declared explicitly: 
constructors, destructors, type conversions. Note that in this latter case, 
which applies to your question, regardless of the syntax any language shows 
you, at the implementation level it comes down to the same loop I showed in 
my snippet above.

I'd say you will find a solution to your problem by changing the way you 
approach the construction of these compound types. Two-plus-dimensional 
maps are, to me, a red flag that slices should be used instead, and that is 
confirmed if the order in which the elements are read out affects the 
result in any way. Slices also require for loops like the one I 
demonstrated, and they are still faster than maps for sets of up to at 
least 100 members or so.

Maps have two important implementation features to keep in mind for 
performance: 

- they are hash tables, so every lookup of a member value has to hash the 
key and probe the table before the runtime can hand back the stored value;

- the iteration order is not the declaration order (in Go it is 
deliberately left unspecified and randomised), and it is also worth 
pointing out that string and other long keys cost more to hash

Using iota-based or manually created enumerators with arrays is a better 
way of implementing some kinds of sets, though maps are easier and can be 
faster if you are only checking one or two values for membership. If you 
are iterating, the array will be faster and lets you define the iteration 
order (either by declaration order or by a sort in the initialiser).
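
A sketch of the enumerator-plus-array shape I mean (all names invented):

package main

import "fmt"

// An iota-based enumerator with an array as the set: no hashing at all,
// and the iteration order is exactly the declaration order.
type Colour int

const (
    Red Colour = iota
    Green
    Blue
    numColours
)

func main() {
    var inSet [numColours]bool
    inSet[Green] = true

    for c := Red; c < numColours; c++ {
        fmt.Println(c, inSet[c])
    }
}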

If I haven't covered the cases relevant to your specific purpose, or not 
adequately, you should be able to find more information elsewhere. Common 
types of algorithm are covered in some of the beginner tour material for 
Go, some are described on StackExchange or here, and for still others it's 
worth checking rosettacode.com, though more of the algorithms there are 
about maths than data types.

On Saturday, 2 March 2019 14:10:06 UTC+1, 김용빈 wrote:
>
> yes I can do this in a for loop. but that is not I want.
>
> what I really want to do is "create a function that returns sorted 
> []string keys from any map[string]... in type safe way".
>
> when I said 'there is no easy way...' , I mean I cannot create that 
> function easily.
>
> I did not very clarify. sorry for the confusion. 
>
> On Saturday, 2 March 2019 at 20:46:50 UTC+9, Louki Sumirniy wrote:
>>
>> Only an assigment to a pre-declared map[string]interface{} in a loop 
>> walking the first layer of that complex map type is required to put the 
>> values into that type, it's not complicated and doesn't have to touch the 
>> template. Something like this:
>>
>> MapStringInterface := make(map[string]interface{}, len(MapStringMapStringThing))
>> for i, x := range MapStringMapStringThing {
>> MapStringInterface[i] = x
>> }
>>
>> On Saturday, 2 March 2019 11:01:10 UTC+1, 김용빈 wrote:
>>>
>>> Thank you, Mercl.
>>>
>>> So there isn't an easy way, you mean, right?
>>>
>>> On Saturday, 2 March 2019 at 18:45:08 UTC+9, Jan Mercl wrote:
>>>>
>>>> On Sat, Mar 2, 2019 at 10:32 AM 김용빈  wrote:
>>>>
>>>> > but it seems the argument is not automatically converted.
>>>>
>>>> Things are automatically converted to a different type in Go only when 
>>>> they are assigned to, or passed as arguments of, interface types.
>>>>
>>>> > manual type cast `map[string]interface{}(myMap)` also not working.
>>>> > is there a way of doing this? 
>>>>
>>>> Go does not have casts. Conversion rules[0] do not allow a conversion 
>>>> of different map types because the memory layouts are not compatible.
>>>>
>>>>   [0]: https://golang.org/ref/spec#Conversions
>>>>
>>>> -- 
>>>>
>>>> -j
>>>>
>>>



[go-nuts] Re: Will a pointer point to C.xxx be garbage collected by Go runtime?

2019-03-02 Thread Louki Sumirniy
It won't flag any error whatsoever, actually; but if you don't free that 
allocation explicitly, it will not be freed until the process terminates.

On Friday, 1 March 2019 06:36:50 UTC+1, Cholerae Hu wrote:
>
> Consider the following code:
> ```
> package main
>
> /*
> struct B {
>   int i;
> };
>
> struct A {
>   int j;
>   struct B b;
> };
> */
> import "C"
>
> func NewA() *C.struct_A {
>   return &C.struct_A{
> j: 1,
> b: C.struct_B{
>   i: 2,
> },
>   }
> }
>
> func main() {
>   a := NewA()
> }
> ```
> Will 'a' be scanned by go runtime? It is allocate by go runtime.
>
> However, if a pointer point to C.xxx will be scanned, consider the 
> following code:
> ```
> package main
>
> // #include <stdlib.h>
> import "C"
> import (
>   "runtime"
>   "unsafe"
> )
>
> func main() {
>   p := (*C.int)(C.malloc(8))
>   C.free(unsafe.Pointer(p))
>   runtime.GC()
> }
> ```
> If p is scanned, go runtime should throw a 'bad pointer' error, because 
> the memory pointed by p is allocated by C.malloc, not go runtime.
>



Re: [go-nuts] Is Go a single pass compiler?

2019-03-02 Thread Louki Sumirniy
It makes sense to me, since very often many of the parts being processed 
are stand-alone and compile (almost) correctly by themselves, needing maybe 
only a package clause to be a complete source file. These smaller pieces 
each need one pass to gather their trees and symbols before being joined to 
the other parts, and Go's compiler dodges a lot of this extra work by 
caching intermediate build objects.

On Saturday, 2 March 2019 13:17:34 UTC+1, Jesper Louis Andersen wrote:
>
> On Thu, Feb 28, 2019 at 12:46 AM > 
> wrote:
>
>> Thanks, Ian.
>>
>> I remember reading in some compiler book that languages should be 
>> designed for a single pass to reduce compilation speed.
>>
>>
> As a guess: this was true in the past, but in a modern setting it fails to 
> hold.
>
> Andy Keep's phd dissertation[0] implements a "nanopass compiler" which is 
> taking the pass count to the extreme. Rather than having a single pass, the 
> compiler does 50 passes or so over the code, each pass doing a little 
> simplification. The compelling reason to do so is that you can do cut, 
> paste, and copy (snarf) each pass and tinker much more with the compilation 
> pipeline than you would normally be able to do. Also, rerunning certain 
> simplify passes along the way tend to help the final emitted machine code. 
> You might wonder how much this affects compilation speed. Quote:
>
> "The new compiler meets the goals set out in the research plan. When 
> compared to the original compiler on a set of benchmarks, the benchmarks, 
> for the new compiler run, on average, between 15.0% and 26.6% faster, 
> depending on the architecture and optimization level. The compile times for 
> the new compiler are also well within the goal, with a range of 1.64 to 
> 1.75 times slower. "
>
> [Note: the goal was a factor 2.0 slowdown at most]
>
> The compiler it is beating here is Chez Scheme, a highly optimizing Scheme 
> compiler.
>
> Some of the reasons are that intermediate representations can be kept in 
> memory nowadays, where it is going to be much faster to process. And that 
> memory is still getting faster, even though at a slower pace than the CPUs 
> are. The nanopass framework is also unique because it has macro tooling for 
> creating intermediate languages out of existing ones. So you have many IR 
> formats in the compiler as well.
>
> In conclusion: if a massive pass blowup can be implemented within a 2x 
> factor slowdown, then a couple of additional passes is not likely to make 
> the compiler run any slower.
>
> [0] http://andykeep.com/pubs/dissertation.pdf
>
>



Re: [go-nuts] does assembly pay the cgo transition cost / does runtime.LockOSThread() make CGO calls faster?

2019-03-02 Thread Louki Sumirniy
The stack requirements are quite important. I think you either have to take 
care of freeing by adding an assembler function that does that, or, if you 
can, allocate the buffers as Go variable declarations so the GC takes care 
of it. That is very likely possible, I think, since assembler deals mainly 
with what are essentially arrays of bytes or (u)int16/32/64 values, which 
is the flat range of memory the assembler expects; but going that way you 
have the opposite problem of making sure the GC doesn't discard the buffer 
before the assembler is done working with it.

On Saturday, 2 March 2019 00:00:16 UTC+1, Ian Lance Taylor wrote:
>
> On Fri, Mar 1, 2019 at 2:37 PM Jason E. Aten  > wrote: 
> > 
> > On Friday, March 1, 2019 at 4:13:49 PM UTC-6, Ian Lance Taylor wrote: 
> >> 
> >> Go assembly code is compiled by the Go assembler, cmd/asm.  The author 
> >> of the assembly code is required to specify how much stack space the 
> >> assembly function requires, in the TEXT pseudo-op that introduces the 
> >> function.For more about this, see https://golang.org/doc/asm.  The 
> >> cmd/asm program will use that user declaration to insert a function 
> >> prologue that ensures that enough stack space is available, copying 
> >> the stack if necessary.  The cmd/asm program will also produce a stack 
> >> map that the garbage collector will use when tracing back the stack; 
> >> in practice it's quite difficult for the assembler code to define this 
> >> stack map as anything other than "this stack contains no pointers", 
> >> but see runtime/funcdata.h. 
> >> 
> >> In any case, your Fortran 90 code will have none of that information. 
> >> Of course, if you know the exact stack usage of your Fortran code, and 
> >> if the code never stores pointers on the stack, then you could with 
> >> some effort write Go assembly code that defines the appropriate stack 
> >> information and then calls the Fortran code.  I think that would, but 
> >> it would require a lot of manual hand-holding. 
> > 
> > 
> >  I was thinking I could write a prelude and then launch into the .f90 
> compiled routines... that I manually inline 
> > but I see that the assembler is a little higher level than actual amd64 
> machine code. 
> > 
> > But there 
> > does appear to be a BYTE escape hatch in the assembler. So if I compile 
> my .f90 to machine code and then run 
> > through each byte and generate a BYTE instruction to the Go assembler, 
> would that (at least in theory), let 
> > me call the .f90 code?  (Ignoring for the moment that I need to get the 
> parameters passed in from Go to .f90 in 
> > a way that the .f90 code expects; I think I will have to hand-craft some 
> assembly glue to make that work of course.) 
>
> In theory, sure, as long as you accommodate the difference in calling 
> convention and accurately record the stack requirements. 
>
> Ian 
>



[go-nuts] Re: Performance comparison of Go, C++, and Java for biological sequencing tool

2019-02-28 Thread Louki Sumirniy
It shouldn't really be surprising. Go and Java share the use of interfaces, 
but Go's concurrency is far lighter weight, and on top of that Java carries 
the extra burden of a virtual machine before anything hits the CPU as 
native code. I suspect the Go version could also handle a much greater 
level of concurrency, at which point the advantage of ahead-of-time 
compilation would be even more visible.

On Thursday, 28 February 2019 18:05:55 UTC+1, Isaac Gouy wrote:
>
> "We reimplemented elPrep in all three languages and benchmarked their 
> runtime performance and memory use. Results: *The Go implementation 
> performs best*, yielding the best balance between runtime performance and 
> memory use. While the Java benchmarks report a somewhat faster runtime than 
> the Go benchmarks, the memory use of the Java runs is significantly higher."
>
> proggit discussion 
> 
>
> article 
>
>
>



Re: [go-nuts] distribution of go executables

2019-02-27 Thread Louki Sumirniy
This would only be true if *derivatives* were specified. Go links 
everything statically by default, so in *very* broad terms the binaries are 
derivative of the stdlib shipped in the Go compiler package. I think the 
proper way to look at it is that this exact subject is simply not 
mentioned, only distantly implied under one specific interpretation, and it 
would not be open-and-shut in a law court. 

Since the compiler produces static binaries by default, I think it should 
be explicated in the licence that embedding the unmodified binary objects 
does not qualify as 'derivation' and licence terms do not apply. It's no 
problem right now, but it seems to me the licence does give wiggle room to 
Google to play silly buggers on the margins of this.

And on the other side, what about when I build the stdlib binaries using 
gcc or some other mostly complete non-Google implementation? It seems to me 
the licence would still cover that, and thus require distribution of the 
licence.

So I'm just going to quietly suggest that the licence needs to be revised.

On Wednesday, 27 February 2019 15:46:23 UTC+1, Manlio Perillo wrote:
>
>
> On Wednesday, February 27, 2019 at 2:58:40 AM UTC+1, Space A. wrote:
>>
>> Mentioned license doesn't cover binaries produced by compiler, "binary 
>> form" there means go tools themselves, and stdlib only when redistributed 
>> separately as a whole in binary form. When stdlib is used to compile 
>> regular binary, it's not "redistributed", and there are no restrictions or 
>> special requirements at all.
>>
>> Correct answer: if you are using only stdlib and Go compiler to compile a 
>> binary - there are no requirements. If you are using 3rd parties libs / 
>> binaries / sources - read their licenses.
>>
>>
> No, **there is**  a requirement:
>
> Redistributions in binary form must reproduce the above copyright notice, 
> this list of conditions and the following disclaimer in the documentation 
> and/or other materials provided with the distribution. 
>
> This requirement does not only apply when you redistribute a (possibly) 
> modified version of the Go compiler, but also to the standard library.
> So you have to link the Go License in your documentation, when you 
> redistribuite a Go program, since **every** Go program implicitly imports 
> the runtime package.
>
> But I'm not really sure.
>
>
> Manlio Perillo
>



Re: [go-nuts] distribution of go executables

2019-02-27 Thread Louki Sumirniy
There is one place where 'derivative' is irrelevant: where a patent 
attaches to the algorithm. That patently idiotic situation is not 
universally applicable; some jurisdictions never added this kind of lunacy 
to their intellectual property law (unfortunately, not all).

As I understand it, the licence on the official Google Go compiler places 
absolutely no restrictions on *your* source code, nor on the binaries the 
compiler creates from it, regardless of the fact that it imports the 
standard library. Specifically, and in practice, the build links your code 
against the *compiled* binary objects created from the stdlib source rather 
than the source itself, which would be quite inefficient anyway.

I think it should be obvious that a programmer's studio should not feel 
like a courtroom, or one has quite misplaced priorities on the whole 
business of facilitating development of new software.

I could say more about my opinions on copyright but I'll let the licences I 
put on my stuff speak for me.

On Wednesday, 27 February 2019 15:20:36 UTC+1, Space A. wrote:
>
> You have very poor understanding of the subject, messing everything up.
> There is no "derivatives" in Go's license terms *at all*. There is only 
> redistribution in binary and source form and it covers only what's in the 
> repo (https://github.com/golang/go/blob/master/LICENSE). 
>
> Compilation is not redistribution. 
>
> PS: Don't want to spend to much time on this, but just to point out -  
> derivative is NOT a kind of sophistic mess when something is just based on 
> something. You can fork stdlib, add some extra changes and distribute it as 
> "stdlib v.2 improved" - in this case this would become derivative. If you 
> just use stdlib for your work, your work is not derivative from stdlib. And 
> if you want to talk in copyright laws terms, lets start from the point that 
> programming languages can't be protected by copyright at all (like "idea", 
> "concept", etc - same).
>
> The short answer to this question is that a
>> lawyer should be consulted. 
>>
>
> This is 100% clear case and you can distribute your compiled binaries 
> free, without any additional requirements, restrictions, giving or not 
> credits, or binding yourself to some specific license, what so ever. 
> C'mon guys.
>
>
>
>
> ср, 27 февр. 2019 г. в 07:24, Dan Kortschak  >:
>
>> In-line
>>
>> On Wed, 2019-02-27 at 06:31 +0300, Space A. wrote:
>> > Executable is not derivative work to stdlib or anything.
>>
>> I think you'll find this is not the case in most jurisdictions. It is
>> certainly not true here, and probably also not in the US.
>>
>> From https://www.copyright.gov/circs/circ14.pdf
>>
>> "A derivative work is a work based on or derived from one or more
>> already existing works."
>>
>> > Go's repo license covers only repo.
>>
>> No.
>>
>> Point 2:
>>
>> "Redistributions in binary form must reproduce the above
>> copyright notice, this list of conditions and the following disclaimer
>> in the documentation and/or other materials provided with the
>> distribution."
>>
>> Note that redistribution is based on the notion of derivative works
>> above. The binary is a derivative of the source code, which is, in this
>> case the standard library.
>>
>> > Stdlib is not redistributed when you compile binary.
>>
>> Yes it is, in a derivative form.
>>
>> > It has nothing to do with GPL.
>>
>> The licenses are different. In this sense you are absolutely correct,
>> this has nothing to do with the GPL. However, in another, far more
>> correct sense, it is indeed related. Both the GPL and the BSD3 are
>> based on the notions that make copyright work. The licensing of the
>> work is based on that fact that the copyright owner has a sole right to
>> distribute the work. This is licensed to the recipient under a set of 
>> conditions based on well established definitions of "derivative" and 
>> "redistribute". Those two terms are shared by the GPL and BSD3.
>>
>> Note that the LGPL goes to lengths to distinguish between the binary of
>> the licensed work and items that are derivative, but dynamically
>> linked, purely because of the connection between the original source
>> and the binary that is the resulting executable (i.e. not the binary
>> representation of the library).
>>
>> > Go's license is simple and clear.
>>
>> And yet, here we are. The short answer to this question is that a
>> lawyer should be consulted.
>>
>>
>> > 
>> > ср, 27 февр. 2019 г., 6:00 Dan Kortschak > >:
>> > 
>> > > 
>> > > Probably not. The executable is a derivative work under most
>> > > understandings (this is the basis for the GPL to require that
>> > > source
>> > > code be provided if the executable is distributed to an end user).
>> > > 
>> > > Any work writen in Go, using the stdlib (which includes runtime, so
>> > > all
>> > > Go programs) is derivative of the stdlib. This means that the Go
>> > > license pertains.
>> > > 
>> > > On Tue, 2019-02-26 at 18:35 -0800, Space A. wrote:
>> > > > 
>> > > > You are wrong.

[go-nuts] Re: Why Go? What is the most important feature that led to you becoming a Go programmer?

2019-02-27 Thread Louki Sumirniy
These two points really nail it:

On Wednesday, 27 February 2019 11:02:23 UTC+1, Chris Hopkins wrote:
>
>
> What made me stay is the clarity and simplicity. So many languages seem to 
> be an exercise in showing off how clever you are, by using x clever 
> pattern. Go doesn't seem to suffer this.
>

 The C++ code you find in cryptocurrency servers particularly demonstrates 
this problem. OK, so partly I just don't understand the generic syntax, the 
templates, and I find that syntax absolutely repulsive (and completely 
unintuitive). But it's not just that: the code is cryptic and incredibly 
disorganised, and I guess this is where Go's novel build system really 
shows its superiority, since C preprocessor syntax is also very cryptic, 
and then there is more than one include root... We have modules now, and 
right off the bat they eliminate so much of the manual handling that makes 
code like this so irritating to adopt and work with.
 

> If I could just use it for the embedded stuff i do...
>

Go would require a separate runtime for embedded work, given the usually 
tiny resources. The MIT-PDOS Biscuit research OS is an example of a 
modified runtime designed for launching on bare metal, and that might be a 
direction worth developing further. Embedded software tends to need very 
fussy, hand-written, careful handling of resources; mainly, I think, for 
these cases one simply has to expose more of the GC's controls and possibly 
write different resource managers (GC modes, perhaps) better suited to such 
environments.



[go-nuts] Why Go? What is the most important feature that led to you becoming a Go programmer?

2019-02-26 Thread Louki Sumirniy
I just wanted to jot down and share my personal most important reason, and 
make this thread a short sample of the most important aspect of Go that 
drove you to learn and use it.

For me, it was this: I have been tinkering with programming on and off 
since I was 8 years old, when a TRS-80 CoCo arrived in my house, and over 
the years I have used many languages: BASIC, assembler, Amiga E (the first 
that really came close to this reason for learning Go), C, Python and Vala. 
But in all of those, until Go, I was unable to do the most important thing, 
since I have very good visual thinking skills but poor attention: complete 
even a relatively simple application. 

My usual problem was that I would get bogged down in some detail, lose the 
bigger picture, hit some big blocker in that detail, and then basically 
turn off the computer and go ride my skateboard. With Go I have now written 
several useful libraries, and massively extended and rewritten (now around 
80% done) a bitcoin-based cryptocurrency wallet/node server suite.

Without Go's immediacy and its simple, logical syntax and build system, I 
am lost. Go may be unforgiving in its syntax and semantics, but that is 
good because it means fewer decisions to make, and with Go it really is 
possible to start writing code immediately: figuring out how to slice up 
the pieces and add new parts is far easier than in many other languages, 
starting from a very simple, vague base and sketching out the details bit 
by bit. No other language I have encountered has had this property. I often 
remark that the language's name has something in common with the short 
attention span and high intelligence of many of its adopters.

I think part of it has to do with how you must be explicit about many 
things while, in other places, you can skip the explication because it is 
implicit; that lets you focus on what's important rather than being 
distracted by superficial details.

Many other languages force you to really separate coding and architecting, 
Go lets you do it all on-the-fly.



[go-nuts] Re: Best way to do this in golang

2019-02-26 Thread Louki Sumirniy
I never went that far into learning SQL, but I assume that 'LIKE' is a 
substring match. As I see it, the two ways to improve its speed are to 
prioritise the conditions that are cheapest to detect and most frequently 
true, and to use a shortcut whenever one exists. That might mean twice as 
many lines of code, or more, to implement the shortcuts. The OR of the 
patterns can indeed be parallelised in Go, though you are limited by the 
number of CPU cores; again, prioritising the more important results means 
you can find and respond to those even while the rest are still being 
tested.
On Tuesday, 26 February 2019 07:08:00 UTC+1, RZ wrote:
>
> Thanks Tamas,
>
> Query itself is slow if i include all url strings. it takes about 10 mins. 
> But when i hit one at a time, i see better response overall. Yes so was 
> planning on running them in parallel.
>
> We only have read permissions and hence i am not allowed to add new index, 
> but good idea, will try reaching out to Admin and see if they can add this.
>
> i have not tried that way of similar to, will try that as well.
>
>
> On Tuesday, 26 February 2019 00:36:26 UTC+5:30, Tamás Gulácsi wrote:
>>
>> How fast is the query? You can make it parallel, but if it is sliw, the 
>> you have to target that first.
>>
>> How big is the set of one user's all urls? How fast is to get this? Maybe 
>> adding some indexes may help
>>
>>
>> How fast is the query with one pattern only? Maybe combining them into a 
>> "similar to '%(abc|def|ghi)%'" would be faster?
>>
>>



Re: [go-nuts] efficient random float32 number generator?

2019-02-26 Thread Louki Sumirniy
Assuming there are bytes in the system's entropy pool, you can also skip 
the scrambling step, though I don't know what overhead consuming those 
bytes entails compared to a standard PRNG. Then the biggest part of the job 
is turning the raw bytes into a float. I'm not sure: could you take 4 
random bytes, grab the unsafe pointer and cast them back to a float32?
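
Answering my own question a bit: reinterpreting 4 raw bytes as a float32 
scatters the values over the whole float range, including NaNs and 
infinities, so it doesn't give a usable uniform number on its own. Rob's 
scale-an-int suggestion looks roughly like this (a sketch, not benchmarked):

package main

import (
    "fmt"
    "math/rand"
)

// fastFloat32 scales 24 random bits into [0,1); 24 bits is all the
// precision a float32 mantissa holds anyway, so nothing useful is lost.
func fastFloat32(r *rand.Rand) float32 {
    return float32(r.Uint32()>>8) / (1 << 24)
}

func main() {
    r := rand.New(rand.NewSource(1))
    fmt.Println(fastFloat32(r), fastFloat32(r), fastFloat32(r))
}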

On Tuesday, 26 February 2019 00:16:20 UTC+1, Rob 'Commander' Pike wrote:
>
> If you don't need precision, just generate ints and scale them.
>
> -rob
>
>
> On Tue, Feb 26, 2019 at 9:39 AM DrGo > 
> wrote:
>
>> Thanks Ian,
>> The std lib float32 is slow for my purpose. In my benchmarking it is 
>> actually slower than the float64. But I don’t even need float16 precision.
>> I am working on implementing othe alias method for sampling from a fixed 
>> freq dist with possibly thousands of arbitrary values. So the rng doesn’t 
>> need to be precise or high quality because of rounding eg a rn of .67895 
>> might end up selecting the same arbitrary value as .67091 or even .65!!
>> https://en.m.wikipedia.org/wiki/Alias_method
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "golang-nuts" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to golang-nuts...@googlegroups.com .
>> For more options, visit https://groups.google.com/d/optout.
>>
>



Re: [go-nuts] Generics: the less obvious *constraints*, and the relationship to exception/error handling

2019-02-25 Thread Louki Sumirniy
Well, since I am unlikely to benefit from the not-yet-settled specification 
during this project or several possible future ones, I was mainly posting 
these thoughts not to address the topic of future changes but to highlight 
the considerations anyone wanting to implement generic data structures in 
Go 1 should keep in mind.

It didn't occur to me until I was thinking about the issues involved in 
constructing a bundle of compound literals to define the parameters for 
multi-command CLI applications that such declarations really have to be 
sanitised just like any untrusted input. In some cases the errors may be 
entirely trivial and unable to cause unintended effects later on, but 
anything less than exhaustive validation could conceal a serious security 
hole, potentially for a very long time. And so, should such mechanisms be 
built into the language, they would have to be just as stringently checked.

It's not that you can't do generics in Go; that's not true, because we have 
interface{}. It's just that in most cases you have to repeat a lot of code 
to maintain type safety. Exceptions and generics are really indispensable 
mechanisms in programming, but many languages threw them in without 
exhausting all their facets, creating issues in security and stability as 
well as the no less important issue of diminished readability and 
maintainability (it's so bad in some languages that holy water and 
crucifixes are maybe not a bad backup countermeasure!)

People tend to focus on the convenience of novel syntactic constructs and 
forget all the monsters that can spring out from the absence of care in the 
design. The one thing that I want to see, in the end stages of the design 
process, is that what is added doesn't take something away, especially 
something that isn't obvious and explicit. The most dangerous bugs are the 
least visible.

On Tuesday, 26 February 2019 00:37:50 UTC+1, Burak Serdar wrote:
>
> On Mon, Feb 25, 2019 at 3:06 PM Louki Sumirniy 
> > wrote: 
> > 
> > I am in the middle of implementing a generic tree type as part of a CLI 
> flags/configuration package, and a key goal in my design is to make the 
> declarations it parses as easy to read as possible, and as far as possible, 
> avoiding too much boilerplate. 
> > 
> > Yes, nothing new there, but in the process, I have noticed something 
> that isn't at the top of people's minds when they think of generic data 
> types: validation 
> > 
> > Actually, default values is another aspect of generics as well, because 
> a generic type also has to have some kind of valid new/empty state, to stay 
> consistent with the way that the builtins are always zeroed before use, but 
> I'll address the former first. 
> > 
> > Well, ok, let's just say that implementation of generic structures, 
> (which implicitly aren't very generic in implementation), works with the 
> interface{} container, which is a very limited Set structure, with one 
> implicit function that when satisfied makes the variable it encapsulates 
> recognised as a complete implementation and permits interchange with other 
> implementations' type. 
> > 
> > So, in Go, this means (and same in all languages though they sugar coat 
> it in various automatic stuff) that before you can call methods on a 
> generic type, the contents of the variable have to be checked. 
> > 
> > From where I am at right now, this is of course my job as the programmer 
> to write these. However, my reason for all the rambling up to this point is 
> that I think that since validation is mandatory for dynamic types, that the 
> proposed generics syntaxes that are under consideration should include 
> constraint/contract type validation. For this, then there must be a 
> constructor and a bounds/validation function that ensures that its methods 
> will find everything that must be there, where it must be, preferably 
> before the main() is invoked. 
>
> Have you seen the contracts draft? This addresses the validation issue 
> you raised, in compile time: 
>
> https://go.googlesource.com/proposal/+/master/design/go2draft-contracts.md 
>
> With lots of feedback and alternative suggestions: 
>
> https://github.com/golang/go/wiki/Go2GenericsFeedback 
>
>
> > 
> > On the latter subject, well, I think I already covered that - just 
> simply that when you declare a new type, all zeroed may not actually be a 
> valid new and empty variable. The specifics of a type may mean that certain 
> things must be in certain places, and the things that are in the container 
> are valid members of the set of possible contents and that these members 
> contents are not outside of the bounds that would lead to bugs.

[go-nuts] Re: Generics: the less obvious *constraints*, and the relationship to exception/error handling

2019-02-25 Thread Louki Sumirniy
Oh, just one last point after a re-read: a great deal of what is required 
to implement generics in Go does not need changes to the language's syntax, 
except where it concerns that CCR (since replacing the builtin numeric 
types especially would lead to massive dereferencing overhead). Simply 
embed error in everything, and expand error so it can store multiple state 
values: zero, just initialised, modified, contains nil pointers, and so on. 
Again, at the centre of the implementation there is going to be some form 
of Set.



[go-nuts] Generics: the less obvious *constraints*, and the relationship to exception/error handling

2019-02-25 Thread Louki Sumirniy
I am in the middle of implementing a generic tree type as part of a CLI 
flags/configuration package, and a key goal in my design is to make the 
declarations it parses as easy to read as possible, and as far as possible, 
avoiding too much boilerplate. 

Yes, nothing new there, but in the process, I have noticed something that 
isn't at the top of people's minds when they think of generic data types: 
validation

Actually, default values are another aspect of generics as well, because a 
generic type also has to have some kind of valid new/empty state, to stay 
consistent with the way the builtins are always zeroed before use, but I'll 
address the former first.

Well, OK, let's just say that implementing generic structures (which 
implicitly aren't very generic in their implementation) works through the 
interface{} container, which is a very limited set-like structure with one 
implicit contract: once satisfied, the variable it encapsulates is 
recognised as a complete implementation and can be interchanged with other 
implementations of the type.

So, in Go, this means (and it is the same in all languages, though they 
sugar-coat it with various automatic machinery) that before you can call 
methods on a generic type, the contents of the variable have to be checked. 
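
The kind of gate I mean looks roughly like this (a sketch; the validation 
rules themselves are invented for the example):

package main

import (
    "errors"
    "fmt"
)

// validate is the sort of check a Go 1 "generic" container has to put in
// front of an interface{} value before trusting it.
func validate(v interface{}) error {
    switch x := v.(type) {
    case nil:
        return errors.New("nil value")
    case int:
        if x < 0 {
            return errors.New("int out of range")
        }
    case string:
        if x == "" {
            return errors.New("empty string")
        }
    default:
        return fmt.Errorf("unsupported type %T", x)
    }
    return nil
}

func main() {
    fmt.Println(validate(42), validate(""), validate(3.14))
}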

From where I stand right now, it is of course my job as the programmer to 
write these checks. However, my reason for all the rambling up to this 
point is that, since validation is mandatory for dynamic types, I think the 
proposed generics syntaxes under consideration should include 
constraint/contract-style validation. For that there must be a constructor 
and a bounds/validation function that ensures its methods will find 
everything that must be there, where it must be, preferably before main() 
is invoked.

On the latter subject, well, I think I already covered that: simply that 
when you declare a new type, all-zeroed may not actually be a valid new, 
empty value. The specifics of a type may mean that certain things must be 
in certain places, that the things in the container are valid members of 
the set of possible contents, and that those members are not outside the 
bounds beyond which bugs appear.

It does lead me to a thought about a go-idiomatic style of generic type, 
however - and it all centers around the type declaration syntax. Not all 
generics are going to need something other than the standard zeroing, so 
the initialiser function need not be mandatory. This is starting to blend 
in an unseemly way into some parts of OOP with the concept of a constructor 
(go dodges the need for destructors mostly, but I would suppose in 
implementation a destructor might be a good idea especially for resources 
that need freeing manually).

Actually, there might be another issue that is important for generics, but 
also more generally: if there were some way to tag a type such that, unless 
you specify otherwise, its closer runs implicitly when the function scope 
it lives in ends, then some means of indicating that relationship between 
the two methods would also make sense.

So, taking stock, I am saying that a consistent Go-styled generic would 
need a builtin for initialisation to a valid empty state, one for freeing 
resources locked up during its lifecycle, and some way of specifying a 
valid range of values for the type.

When you really look at it, implementing generics basically means code 
generation involving a lot of reflection, or, the likely and better option, 
that compilation creates tagged sets for every type defined in a package, 
so that these attributes are available without the expensive parsing of the 
metadata tied up in the symbol table. If my surmise is correct, it means 
that creating a complete generics syntax for Go also means creating 
constructors, destructors, validation/sanitising, and adding the necessary 
metadata to be efficiently accessible at runtime instead of parsed out of 
linker metadata.

It also brings up the more than subtle connection between generic types and 
exception/error handling. With regard to validation, the construct required 
is structurally similar to error handling: I have already described 
initialisation, deallocation, and validation, and all of these are states. 
In implementation it means one should consider carefully whether there are 
other things that seem peripheral at first blush but should be considered 
indispensable...

Namely, the error value. 

I thought a lot about this over the last few weeks, and I had already 
experimented in practice with creating pipeline-pattern types using 
interfaces, in which the error lived inside the struct containing the 
variable data (a byte slice in this case), and the interface included 
several methods for examining, setting and resetting the error value. I 
didn't quite get to fully 

[go-nuts] cleaner - a simple tool to make large source files a bit less messy

2019-02-22 Thread Louki Sumirniy
I may be one of the most obsessive people about the subject of source code 
layout, partly because of the impact that especially large and disorganised 
source files have (at the level of individual files, though packages can 
get really nasty too).

It took me some time to find the fork of the ast library that properly 
retains comment positions when the AST is rewritten (in this case, by 
changing the order of the declaration blocks). It doesn't fully satisfy all 
my requirements yet: it splits var and type blocks into one declaration 
each but keeps them all inside parentheses, doesn't remove the parentheses 
completely, and doesn't touch consts because iota makes them 
order-sensitive. Still, it works OK and saves me a lot of manual 
reformatting work.

The other thing is that probably not many Go programmers know that brace 
and bracket blocks (except for type assertions and the type switch) can all 
be split across multiple lines if the final element has a comma after it. I 
think function parameters are more readable when they are lined up 
vertically, so this little tool does part of that: for now only receivers, 
plus putting a newline after the opening bracket of the parameter list. I 
would like it to do the whole set so no manual work remains, but my regexp 
skills are not great at this point and I'm really pushing to finish a 
project that has big nasty source files of over 2000 lines in places.
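
As an illustration of the trailing-comma layout the tool is aiming for (the 
function itself is made up):

package main

import "fmt"

// report shows the vertical layout: each parameter and result on its own
// line, every one followed by a comma, which the grammar allows.
func report(
    path string,
    lines int,
    verbose bool,
) (
    summary string,
    err error,
) {
    summary = fmt.Sprintf("%s: %d lines", path, lines)
    if verbose {
        fmt.Println(summary)
    }
    return summary, nil
}

func main() {
    s, _ := report("main.go", 2048, false)
    fmt.Println(s)
}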

https://github.com/parallelcointeam/pod/tree/master/cmd/tools/cleaner

This is just in case anyone else is as crazy as me and finds complex source 
code doubly difficult to navigate without any kind of sorting applied to 
the parts of the file.

I'm sure it's written terribly. The newline parts are just bulk 
search-and-replace, as I found writing an FSM to step through the source 
too confusing. 

I've long dreamed of writing a full-blown language parser, but for now, 
given my priorities, this is a little step towards that, and maybe someone 
else will find it useful.



[go-nuts] Re: Visual Studio Code oddity with Go

2019-02-21 Thread Louki Sumirniy
I believe that very notification has a button that triggers the automatic 
install. Otherwise, just do exactly what it says and it will stop coming 
up, and you'll get more useful information from VSCode about your Go code.

On Wednesday, 20 February 2019 15:14:21 UTC+1, Rich wrote:
>
> I tried googling this but I not been able to find a solution, hopefully I 
> can ask this here and someone else knows how to fix this.  I use Visual 
> Studio Code -- because it's free. The issue I am having is that every time 
> I use Visual Studio Code I get the popup that says: 
>
> The Go extension is better with the latest version of "gocode".  Use "go 
>> get -u -v github.com/mdempsky/gocode" to update
>
>
> Anyone else have this or know how to fix this? It shouldn't ask EVERY time 
> I use Visual Studio Code? 
>



[go-nuts] Re: ctrl-c and Golang

2019-02-17 Thread Louki Sumirniy
I know what it is. The closure is implicitly receiving the variable by 
value, so its state when the goroutine spawns is fixed into its local scope.

If you moved that variable outside of the function you would not have this 
problem, as another alternative solution.

Incidentally, the issue of stale pass-by-value copies (and even stale 
pointers, if the pointer itself is changed) is a low-visibility but serious 
security problem. Using a pointer would also solve the problem here.

I have become very leery of the := operator recently, as I have noticed it 
is one of the most frequent sources of logic bugs in my code. For that 
reason I want to write a linter/pretty-printer that removes them and places 
the declarations at the top of the scope, so the programmer will see the 
name collisions. These bugs can also be avoided by pre-declaring a set of 
temporary variables that you assume need to be zeroed before they are used 
in an algorithm; a simple function with a type switch could cover all the 
builtins easily.
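
The kind of trap I mean looks like this (a deliberately broken sketch, all 
names invented):

package main

import (
    "errors"
    "fmt"
)

// lookup stands in for any call that returns a value and an error.
func lookup(ok bool) (string, error) {
    if !ok {
        return "", errors.New("not found")
    }
    return "value", nil
}

func main() {
    var result string
    var err error
    if true {
        // The := silently declares NEW result and err variables scoped to
        // this block, shadowing the outer ones declared above.
        result, err := lookup(true)
        _, _ = result, err
    }
    fmt.Println("outer still zero:", result == "", err == nil) // true true
}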

Another thought also occurs to me, since I am still quite confused about 
the value-versus-reference mechanics behind passing array variables: in 
theory that code should not have this issue if the array were transparently 
passed around as a pointer. So it could be a quirk in how arrays and slices 
differ; I have seen the very same thing (stale values on slices) come up a 
lot, and I am completely confused about the situations in which it happens.

On Sunday, 17 February 2019 14:59:34 UTC+1, Manlio Perillo wrote:
>
> Your program has a data race, since you are accessing the array from two 
> different goroutines.
>
> Manlio
>
> On Saturday, February 16, 2019 at 3:29:45 PM UTC+1, Hemant Singh wrote:
>>
>> I have the following program.  The program is processing network packets 
>> at 1 Gbps rate.  I don't want to print the rate for every packet.  Instead, 
>> I'd like to save the rate in an array and dump the array on Ctrl-c of the 
>> program.  When I dump the array on Ctrl-c, the data is all zeroes.  Please 
>> see "<===" in the program below.  Anything else one could do?
>>
>> Thanks.  
>>
>> -Hemant
>>
>> func main() {
>>
>>   const SZ = 65536
>>   var time_array [SZ]float64
>>
>>   c := make(chan os.Signal)
>>   signal.Notify(c, os.Interrupt, syscall.SIGTERM)
>>
>>   go func() {
>>  <-c
>>  fmt.Printf("You pressed ctrl + C. User interrupted infinite loop.")
>>  fmt.Printf("%v ", time_array);  <=== all zeroes printed. 
>>  os.Exit(0)
>>   }()
>>
>>
>>   i := 0
>>   for {
>> ...
>> time_array[i] = rate
>> fmt.Printf("%f bps ", rate) <=== value printed is correct. 
>> i++
>>   }
>> }
>>
>>



[go-nuts] The simplest solution for error recovery

2019-02-17 Thread Louki Sumirniy
In my opinion, the many options suggested for the new error handling scheme 
are bloated and unnecessary.

The defer statement is nice for placing closers next to openers, but I 
think that if a partition of the function block were simply set aside to 
contain the handler code, probably nobody would be complaining about it 
anymore.

The nicest and neatest way would be to use a keyword label. Something like 
'recover:' or so. Labels don't get much use and usually are unnecessary if 
you change the structure of the loops and conditionals, but I don't think 
label users would complain about one special label.

The recover block could also automatically provide the variable returned by 
the recovery function as well, removing more boilerplate.
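
For contrast, this is the Go 1 idiom that such a label would, as I imagine 
it, be sugar for (a sketch):

package main

import "fmt"

// doWork keeps the handler in a deferred closure at the top of the
// function, which is the part the 'recover:' label would replace.
func doWork() (err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("recovered: %v", r)
        }
    }()
    panic("a condition that should not happen")
}

func main() {
    fmt.Println(doWork())
}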

I have said it before: I think a conditional return would also reduce the 
pain of error handling, combining the condition and the setting of return 
values into one line, since they are usually short and so fit on a single 
line.

My last thought about the error system is to do with multiple returns. I think 
they could easily be bundled automatically into a struct when the receiver is a 
function, and then the error value could be accessed as variablename.error or 
something like that.

On that topic, there could also be some use in defining a simple category set 
for error types, such as severity. Loggers could then use this value to 
determine automatically what log level to print the error at.

I am pretty sure there won't be much response to this, and I doubt I am going to 
be an early adopter of Go 2, so I will be writing a preprocessor, configurable 
per project, for which of these things it implements.

All of the things mentioned above can easily be expressed in the current syntax 
by any competent Go programmer; it's just tedious, and easy to procrastinate 
about. I don't use panic except for conditions that should not happen, but it 
would be nice to compartmentalise all the error handling into a section 
alongside the code that can cause it.
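
For illustration, a minimal sketch of that compartmentalisation in today's 
syntax, roughly what a 'recover:' section would desugar to; doWork and its error 
values are hypothetical:

package main

import (
    "errors"
    "fmt"
)

func doWork(fail bool) (result string, err error) {
    // The "handler section": recovery and error shaping live here, next to
    // the code that can cause them, instead of being scattered through the body.
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("recovered: %v", r)
        }
    }()

    if fail {
        return "", errors.New("ordinary error path")
    }
    panic("condition that should not happen")
}

func main() {
    _, err := doWork(true)
    fmt.Println(err)

    _, err = doWork(false)
    fmt.Println(err)
}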

Incidentally, as I am currently working on a code sorting tool, the matter of 
godoc comments in particular is a major annoyance. The syntax is an aberration 
compared to the language, and I think it's nuts that golint doesn't just insert 
them for you, preferably written the way many server default passwords are: as 
funny little snipes at the admin.



[go-nuts] Re: tidy - a super simple source code sorting tool

2019-02-15 Thread Louki Sumirniy
yeah it is really rough... it occasionally splits blocks such that a block lacks 
its final bracket, which ends up somewhere else.

but it doesn't happen often enough to be hard to correct manually at this point. 
Having seen the greatly improved visual appeal of my code when all the comments 
were preceded by a blank line too, I am now thinking the tool should do that as 
well; I did intend to add it, but the work dragged on so long ...

On Friday, 15 February 2019 23:08:11 UTC+1, Louki Sumirniy wrote:
>
> ok never mind the combining files thing... doesn't work yet :) and I'm 
> sure the manual cleanup will be super messy
>
> On Friday, 15 February 2019 22:34:19 UTC+1, Louki Sumirniy wrote:
>>
>> oh, I really didn't introduce that so well...
>>
>> I fixed the issue about multiple init/main functions by setting it to 
>> count how many it finds and just leaving them where they are if there is 
>> more than one (they will just end up in arbitrary order next to each other 
>> within the func block for this case). 
>>
>> I also fixed the stdin for stdoutput, now you can go 'cat file.go|tidy 
>> stdin' and the result will print on the terminal, so you could chain it to 
>> other processing commands afterwards.
>>
>> It's very simple thing but I think it's going to be a huge help for me 
>> with handling large codebases and speeding up the process of logically 
>> dividing into files, and potentially, it may make it easier to see that a 
>> package should be split or merged (if the circular imports don't get you 
>> first).
>>
>> On Friday, 15 February 2019 22:08:04 UTC+1, Louki Sumirniy wrote:
>>>
>>> I have written a small source code processing tool which does something 
>>> that I personally think that gofmt should do. It's called tidy, and it's 
>>> hiding inside the embryonic attempt at a 'deal with all those irritating 
>>> application setup startup stuff' project here:
>>>
>>> https://github.com/l0k1verloren/skele/tree/master/cmd/tidy
>>>
>>> All it does is slice a source file up into the root level sections, 
>>> package, imports, type, const, var, func. All of the groups are sorted by 
>>> alphabetical order based on the content of their key line (line starting 
>>> with root level keyword), the 'main' and 'init' functions are pushed to the 
>>> top of the block of functions.
>>>
>>> In theory, it is possible to take an entire package, a whole folder 
>>> containing all the same package name. It only will pick up one of the 
>>> package header parts, I'm not sure about imports, I think gofmt 
>>> automatically consolidates multiple anyway, but yeah it will also stumble 
>>> over those init blocks too for this. I will probably need to fix that since 
>>> exactly this merging of sources is one of my intended uses).
>>>
>>> I don't know why it took me so bleedin long to write it, it was funny, I 
>>> wrote something that worked about 90% in about an hour and then it took 2 
>>> days to redo it, I kept overcomplicating things (ahaha, well, read the 
>>> prose, know the code :)
>>>
>>



[go-nuts] Re: tidy - a super simple source code sorting tool

2019-02-15 Thread Louki Sumirniy
ok never mind the combining files thing... doesn't work yet :) and I'm sure 
the manual cleanup will be super messy

On Friday, 15 February 2019 22:34:19 UTC+1, Louki Sumirniy wrote:
>
> oh, I really didn't introduce that so well...
>
> I fixed the issue about multiple init/main functions by setting it to 
> count how many it finds and just leaving them where they are if there is 
> more than one (they will just end up in arbitrary order next to each other 
> within the func block for this case). 
>
> I also fixed the stdin for stdoutput, now you can go 'cat file.go|tidy 
> stdin' and the result will print on the terminal, so you could chain it to 
> other processing commands afterwards.
>
> It's very simple thing but I think it's going to be a huge help for me 
> with handling large codebases and speeding up the process of logically 
> dividing into files, and potentially, it may make it easier to see that a 
> package should be split or merged (if the circular imports don't get you 
> first).
>
> On Friday, 15 February 2019 22:08:04 UTC+1, Louki Sumirniy wrote:
>>
>> I have written a small source code processing tool which does something 
>> that I personally think that gofmt should do. It's called tidy, and it's 
>> hiding inside the embryonic attempt at a 'deal with all those irritating 
>> application setup startup stuff' project here:
>>
>> https://github.com/l0k1verloren/skele/tree/master/cmd/tidy
>>
>> All it does is slice a source file up into the root level sections, 
>> package, imports, type, const, var, func. All of the groups are sorted by 
>> alphabetical order based on the content of their key line (line starting 
>> with root level keyword), the 'main' and 'init' functions are pushed to the 
>> top of the block of functions.
>>
>> In theory, it is possible to take an entire package, a whole folder 
>> containing all the same package name. It only will pick up one of the 
>> package header parts, I'm not sure about imports, I think gofmt 
>> automatically consolidates multiple anyway, but yeah it will also stumble 
>> over those init blocks too for this. I will probably need to fix that since 
>> exactly this merging of sources is one of my intended uses).
>>
>> I don't know why it took me so bleedin long to write it, it was funny, I 
>> wrote something that worked about 90% in about an hour and then it took 2 
>> days to redo it, I kept overcomplicating things (ahaha, well, read the 
>> prose, know the code :)
>>
>



[go-nuts] Re: tidy - a super simple source code sorting tool

2019-02-15 Thread Louki Sumirniy
oh, I really didn't introduce that so well...

I fixed the issue with multiple init/main functions by having it count how many 
it finds and just leave them where they are if there is more than one (in that 
case they simply end up in arbitrary order next to each other within the func 
block).

I also fixed stdin/stdout handling: now you can run 'cat file.go | tidy stdin' 
and the result prints on the terminal, so you can chain it into other processing 
commands afterwards.

It's a very simple thing, but I think it's going to be a huge help for me in 
handling large codebases and speeding up the process of logically dividing them 
into files, and potentially it may make it easier to see that a package should 
be split or merged (if the circular imports don't get you first).

On Friday, 15 February 2019 22:08:04 UTC+1, Louki Sumirniy wrote:
>
> I have written a small source code processing tool which does something 
> that I personally think that gofmt should do. It's called tidy, and it's 
> hiding inside the embryonic attempt at a 'deal with all those irritating 
> application setup startup stuff' project here:
>
> https://github.com/l0k1verloren/skele/tree/master/cmd/tidy
>
> All it does is slice a source file up into the root level sections, 
> package, imports, type, const, var, func. All of the groups are sorted by 
> alphabetical order based on the content of their key line (line starting 
> with root level keyword), the 'main' and 'init' functions are pushed to the 
> top of the block of functions.
>
> In theory, it is possible to take an entire package, a whole folder 
> containing all the same package name. It only will pick up one of the 
> package header parts, I'm not sure about imports, I think gofmt 
> automatically consolidates multiple anyway, but yeah it will also stumble 
> over those init blocks too for this. I will probably need to fix that since 
> exactly this merging of sources is one of my intended uses).
>
> I don't know why it took me so bleedin long to write it, it was funny, I 
> wrote something that worked about 90% in about an hour and then it took 2 
> days to redo it, I kept overcomplicating things (ahaha, well, read the 
> prose, know the code :)
>



[go-nuts] tidy - a super simple source code sorting tool

2019-02-15 Thread Louki Sumirniy
I have written a small source code processing tool which does something that I 
personally think gofmt should do. It's called tidy, and it's hiding inside the 
embryonic attempt at a 'deal with all those irritating application setup startup 
stuff' project here:

https://github.com/l0k1verloren/skele/tree/master/cmd/tidy

All it does is slice a source file up into the root-level sections: package, 
imports, type, const, var, func. The groups are sorted alphabetically based on 
the content of their key line (the line starting with the root-level keyword); 
the 'main' and 'init' functions are pushed to the top of the block of functions.

In theory it is possible to feed it an entire package, i.e. a whole folder of 
files sharing the same package name. It will only pick up one of the package 
header parts; I'm not sure about imports (I think gofmt automatically 
consolidates multiple import blocks anyway), but it will also stumble over 
multiple init functions in this case. I will probably need to fix that, since 
exactly this merging of sources is one of my intended uses.

I don't know why it took me so bleeding long to write it. It was funny: I wrote 
something that worked about 90% in about an hour, and then it took 2 days to 
redo it because I kept overcomplicating things (ahaha, well, read the prose, 
know the code :)
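
For anyone curious about the general approach, here is a minimal sketch (not the 
tidy tool itself, just an illustration using only the standard library) of 
grouping and ordering top-level declarations with go/parser, given a filename 
argument, with init and main sorted ahead of the rest:

package main

import (
    "fmt"
    "go/ast"
    "go/parser"
    "go/token"
    "os"
    "sort"
)

func main() {
    if len(os.Args) < 2 {
        fmt.Fprintln(os.Stderr, "usage: sortsketch file.go")
        os.Exit(1)
    }
    fset := token.NewFileSet()
    f, err := parser.ParseFile(fset, os.Args[1], nil, parser.ParseComments)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // Collect only the func declarations for this sketch.
    var funcs []*ast.FuncDecl
    for _, d := range f.Decls {
        if fd, ok := d.(*ast.FuncDecl); ok {
            funcs = append(funcs, fd)
        }
    }
    sort.Slice(funcs, func(i, j int) bool {
        ni, nj := funcs[i].Name.Name, funcs[j].Name.Name
        // init and main sort before everything else.
        pi := ni == "init" || ni == "main"
        pj := nj == "init" || nj == "main"
        if pi != pj {
            return pi
        }
        return ni < nj
    })
    // Print the resulting order; a real tool would re-emit the source.
    for _, fd := range funcs {
        fmt.Println(fd.Name.Name)
    }
}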



[go-nuts] Re: inverse of time.Duration?

2019-02-15 Thread Louki Sumirniy
time.Duration is a count of nanoseconds. You can easily get any other unit by 
dividing the duration by another time unit (time.Minute, time.Second, 
time.Millisecond, time.Microsecond).

If that 1 in your formula means time.Second, then you could just replace it with 
that; 5.671ms is 5,671,000ns, i.e. 5671*time.Microsecond (that is, if I read it 
correctly as milliseconds). 
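
A minimal sketch of one way to get a rate (events per second) from a 
time.Duration, using Seconds() to move the arithmetic into float64:

package main

import (
    "fmt"
    "time"
)

func main() {
    d := 5671 * time.Microsecond // i.e. 5.671ms
    rate := 1.0 / d.Seconds()    // events per second
    fmt.Printf("%v -> %.2f per second\n", d, rate)
}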

On Friday, 15 February 2019 20:48:58 UTC+1, Hemant Singh wrote:
>
> This is an example of time.Duration I have: 5.671msec
>
> I need to convert the duration to rate = 1.0/5.671 msec.
>
> However, time.Duration and float in 1.0 do not mix.  How do I get the rate?
>
> thanks,
>
> Hemant
>



Re: [go-nuts] I want to set the position of the window in GUI display of golang

2018-12-12 Thread Louki Sumirniy
I would expect that, at minimum, there is a field you could use in the type 
referenced by the AssignTo field (it is a struct, I am guessing). In VSCode you 
can ctrl-click on type names and it opens the declaration of the type (the same 
works for functions and variables).

On Monday, 10 December 2018 09:10:16 UTC+1, kato masa wrote:
>
> thank you for the advice!
>
> 2018年12月5日水曜日 12時26分03秒 UTC+9 Robert Engels:
>>
>> You should probably file an issue at 
>>
>> http://github.com/lxn/walk
>>
>> They don’t seem to have a community forum, but I think the author could 
>> help you easily. Also you could try stack overflow as there are a few 
>> questions about this library there. 
>>
>> On Dec 4, 2018, at 7:44 PM, mdi@gmail.com wrote:
>>
>> Hi
>> I'm Japanese, so sorry if my English is wrong.
>>
>> I'm trying to display a GUI window with WALK (github.com/lxn/walk).
>> But there is a problem that does not go well.
>>
>> If you run this code, the window is displayed at the upper left of the screen.
>> I want to set the position of the window to the center of the screen.
>> What should I do?
>>
>>
>> import (
>>     "github.com/lxn/walk"
>>     . "github.com/lxn/walk/declarative"
>> )
>>
>> type MyLoadWindow struct {
>>     *walk.MainWindow
>>     progressBar *walk.ProgressBar
>> }
>>
>> func Main() {
>>     mw := &MyLoadWindow{}
>>     // window configuration
>>     MW := MainWindow{
>>         AssignTo: &mw.MainWindow, // assign the widget to the instance
>>         Title:    "コンピュータの情報を取得中", // "Fetching computer information"
>>         Size:     Size{300, 100},
>>         Font:     Font{PointSize: 12},
>>         Layout:   VBox{},
>>         Children: []Widget{ // slice holding the widgets
>>             ProgressBar{
>>                 AssignTo:    &mw.progressBar,
>>                 MarqueeMode: true,
>>             },
>>         },
>>     }
>>     if _, err := MW.Run(); err != nil {
>>         println("Error")
>>         return
>>     }
>> }
>>
>>
>> If someone know solution, please show me.
>> thank you.
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "golang-nuts" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to golang-nuts...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>>



[go-nuts] Re: Json and value

2018-12-12 Thread Louki Sumirniy
I would guess that's because the response is actually an array (square brackets 
around anything mean a list of same-typed values without keys), so you need to 
unmarshal into a slice and use response[0] if there is only one element, or 
iterate over it with a range loop. You could probably also strip the brackets 
using strings.Split(), taking field 1 of a split on '[' and field 0 of a split 
on ']', but decoding into a slice is cleaner.
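
A minimal sketch of that, assuming a hypothetical payload shape: decode a 
[]-wrapped JSON response by unmarshalling into a slice.

package main

import (
    "encoding/json"
    "fmt"
)

type Item struct {
    Name  string `json:"name"`
    Value int    `json:"value"`
}

func main() {
    data := []byte(`[{"name":"a","value":1}]`)
    var items []Item
    if err := json.Unmarshal(data, &items); err != nil {
        fmt.Println("decode error:", err)
        return
    }
    if len(items) > 0 {
        fmt.Println(items[0]) // first (and here only) element
    }
}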

On Wednesday, 12 December 2018 18:30:43 UTC+1, Olivier GALEOTE wrote:
>
> Hi 
>
> Thank you, but i have no response with your example. 
> Maybe because i have [] around the json ?



[go-nuts] Re: Forking/transfering a repository and import paths

2018-12-12 Thread Louki Sumirniy
I have done this many times. Some repositories are more of a pain than others to 
move. I was amused to learn, after doing this with github.com/btcsuite/btcd, 
that the btcjson, btcec, btcutil and btclog repos were all separate but quite 
tightly bound to each other, and it took me about half a day to fix. Part of my 
solution was copying those repos and removing the .git folder. I could have 
forked all of them instead, but those four in particular and the main btcd were 
very tangled and hard to separate.

My opinion is that there should be a simple way to refer to sibling repos and 
sub-folders with relative paths. So instead of 
"github.com/btcsuite/btcd/blockchain" I could just say "./blockchain", or 
instead of "github.com/btcsuite/btclog" I could say "../btclog", and the rest 
would be inferred from the module spec and/or the gopath location. 

I use Visual Studio Code (it has the best Go toolchain integration I am aware 
of) and its search-in-repository functions are quite good, but it's easy to make 
a mistake and accidentally cast too wide a net, and the poor-man's, 
cut-down-for-no-reason regex searching doesn't help either.

It's not difficult, just tedious, and it would be nice if relative paths were 
allowed for imports.
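
With modules there is at least a way to point an existing import path at a fork 
or a local copy without rewriting every import statement: the replace directive. 
A minimal go.mod sketch (module path and version here are purely illustrative):

module github.com/youraccount/btcd

require github.com/btcsuite/btclog v1.0.0

// Redirect the original import path to a local sibling checkout, so the
// source files can keep importing github.com/btcsuite/btclog unchanged.
replace github.com/btcsuite/btclog => ../btclog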

On Wednesday, 12 December 2018 13:12:53 UTC+1, Sotirios Mantziaris wrote:
>
> Hi,
>
> i want to move a repo from my github account to another one. Goal is to 
> have a new import path for the new forked repository.
> There are 2 ways of achieving the move:
>
>- Forking
>- Transfer repository
>
> Is it possible to fork a repo and change the import path of the repository?
>
> If the transfer option is chosen we just have to change all imports in the 
> code, which severs the ties for the originating project.
>
> Is it possible to have:
>
>- both repos
>- every repo with it's own import path
>- code exchange between them
>
> What are the options?
>



[go-nuts] Re: Rethink possibility to make circular imports

2018-12-06 Thread Louki Sumirniy
I forgot to mention that the main way to work around this limitation is to 
separate out the declarations of the types shared by the two otherwise 
circularly importing packages into their own package. It gets tricky again when 
it comes to defining methods on those types, because you can't declare methods 
on non-local types; for that you need to define a new local type around them (a 
plain alias doesn't allow adding methods either).

Like many aspects of Go, you have to think like the computer to express 
your intent correctly.
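
A minimal sketch of that layout, under a hypothetical module example.com/m: a 
third package holds the shared types, so neither of the two original packages 
needs to import the other. (Three separate files are shown together here for 
brevity.)

// shared/shared.go - types used by both a and b.
package shared

type Event struct {
    Name string
}

// a/a.go - produces events; imports only shared.
package a

import "example.com/m/shared"

func Emit() shared.Event { return shared.Event{Name: "from a"} }

// b/b.go - consumes events; imports only shared.
package b

import (
    "fmt"

    "example.com/m/shared"
)

func Handle(e shared.Event) { fmt.Println("handled", e.Name) }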

On Thursday, 6 December 2018 04:05:36 UTC+1, Louki Sumirniy wrote:
>
> The main reason for the prohibition against circular imports is that you 
> have to add overhead in the processing to determine when to stop. From an 
> architectural perspective it also makes sense - most often if you stop for 
> a moment and think about what you are building, usually if you are needing 
> a circular import, the two packages really are part of one package. 
>
> In C/C++ preprocessor, the #pragma once is often liberally spiced all over 
> the package to stop the compiler overrunning the cycles. This makes for a 
> nightmare for attempting to splice out part of it, and obviously is costing 
> a lot of time for every compilation, especially when only part has changed, 
> circular imports require re-parsing the whole thing all over again.
>
> I think this is the one situation where it makes sense to use dot imports.
>
> But the more you learn about go, the more you realise that the main 
> benefits of go come from forcing the programmer to have a reasonable grasp 
> of the underlying mechanisms at play. You can't get such fast compilation 
> when you force the compiler to think like a human, and it is easier to 
> instead get the human to think like the compiler. As a long time low-level 
> programmer, my first languages being BASIC and Assembler, I didn't need to 
> have this explained to me when I saw go. When I really learned what Go is 
> all about, it immediately reminded me of a funny little language from the 
> early 90s called Amiga E. It also used an extremely simple top level 
> structuring, which yes also is where Go shares much with Pascal and its 
> family of languages. Pascal was considered to be a good teaching language 
> for this reason, and Wirth designed these languages with the complexity of 
> the compiler in mind, again, humans can think (slowly) like computers, but 
> computers think even slower, the more human-like you try to make the 
> processing. 
>
> If performance is not an issue, and complexity of managing the codebase is 
> not an issue, you probably also might consider that you don't want to use 
> Go at all. Go has an energetic name because performance is central to its 
> design goals, and in computer programming, performance flows naturally from 
> conforming to the limitations of the hardware. Heuristics can often make 
> good decisions but when it fails, you can be lost for trying to eliminate 
> the problem, and this is not acceptable for high speed, mission critical 
> database systems especially.
>
> On Thursday, 29 November 2018 14:23:04 UTC+1, Michel Levieux wrote:
>>
>> The last few days I've been thinking a lot about the fact that Go does 
>> not allow circular imports.
>> I'm not really sure of why it currently works that way, but from what 
>> I've understood the way the compiler works - which is also the reason why 
>> compilation of Go programs is much faster than other languages - makes it 
>> quite difficult to authorize circular imports.
>>
>> I'm a young developer (not particularly in Go, I have only 2-3 years of 
>> experience at total) so I'm looking forward to hearing your opinion guys, 
>> but I think Go2 should allow importing packages circularly. I have no 
>> practical reason to think that, except I've been tricking many times to 
>> have a structure in my project with which I can at least build.
>>
>> The main reason why I'm strongly convinced forbidding circular imports is 
>> not a good thing is that it separates too much the problem space from the 
>> solution space. In Golang, the majority of the solutions we find are just 
>> the translation of the logic behind our head into a language a computer can 
>> understand - I emphasize this because it might not be true for all 
>> languages (take Prolog for instance). Most of the time when you read a well 
>> written program, you clearly get the underlying logic that led to this 
>> particular solution, AND implementation of the solution.
>>
>> BUT - I think that there are some cases (and not just a few) when from a 
>> logical point of view, the solution is clear, and we have to take the 
>> structure of a project away from that logic, because circular imports are 
>> not permitted. The h

[go-nuts] Re: Rethink possibility to make circular imports

2018-12-05 Thread Louki Sumirniy
The main reason for the prohibition against circular imports is that you 
have to add overhead in the processing to determine when to stop. From an 
architectural perspective it also makes sense - most often if you stop for 
a moment and think about what you are building, usually if you are needing 
a circular import, the two packages really are part of one package. 

In C/C++, #pragma once (or include guards) is liberally sprinkled across headers 
to stop the preprocessor from following include cycles forever. This makes it a 
nightmare to splice out part of a codebase, and it obviously costs time on every 
compilation: even when only part has changed, circular includes require 
re-parsing the whole thing all over again.

I think this is the one situation where it makes sense to use dot imports.

But the more you learn about go, the more you realise that the main 
benefits of go come from forcing the programmer to have a reasonable grasp 
of the underlying mechanisms at play. You can't get such fast compilation 
when you force the compiler to think like a human, and it is easier to 
instead get the human to think like the compiler. As a long time low-level 
programmer, my first languages being BASIC and Assembler, I didn't need to 
have this explained to me when I saw go. When I really learned what Go is 
all about, it immediately reminded me of a funny little language from the 
early 90s called Amiga E. It also used an extremely simple top-level structure, 
which, yes, is also where Go shares much with Pascal and its family of 
languages. Pascal was considered a good teaching language for this reason, and 
Wirth designed these languages with the complexity of the compiler in mind. 
Again, humans can think (slowly) like computers, but computers think even slower 
the more human-like you try to make the processing. 

If performance is not an issue, and complexity of managing the codebase is not 
an issue, you might also consider that you don't want to use Go at all. Go has 
an energetic name because performance is central to its design goals, and in 
computer programming, performance flows naturally from conforming to the 
limitations of the hardware. Heuristics can often make good decisions, but when 
they fail you can be lost trying to eliminate the problem, and that is not 
acceptable for high-speed, mission-critical database systems especially.

On Thursday, 29 November 2018 14:23:04 UTC+1, Michel Levieux wrote:
>
> The last few days I've been thinking a lot about the fact that Go does not 
> allow circular imports.
> I'm not really sure of why it currently works that way, but from what I've 
> understood the way the compiler works - which is also the reason why 
> compilation of Go programs is much faster than other languages - makes it 
> quite difficult to authorize circular imports.
>
> I'm a young developer (not particularly in Go, I have only 2-3 years of 
> experience at total) so I'm looking forward to hearing your opinion guys, 
> but I think Go2 should allow importing packages circularly. I have no 
> practical reason to think that, except I've been tricking many times to 
> have a structure in my project with which I can at least build.
>
> The main reason why I'm strongly convinced forbidding circular imports is 
> not a good thing is that it separates too much the problem space from the 
> solution space. In Golang, the majority of the solutions we find are just 
> the translation of the logic behind our head into a language a computer can 
> understand - I emphasize this because it might not be true for all 
> languages (take Prolog for instance). Most of the time when you read a well 
> written program, you clearly get the underlying logic that led to this 
> particular solution, AND implementation of the solution.
>
> BUT - I think that there are some cases (and not just a few) when from a 
> logical point of view, the solution is clear, and we have to take the 
> structure of a project away from that logic, because circular imports are 
> not permitted. The human brain works in such a manner that circular imports 
> make sense, and I'll get even further, they are what makes the strongest 
> sense of all the solutions it can get to.
>
> That is my one and only point, but I personally think it is enough to at 
> least discuss the issue.
>
> I have many questions from this point :
> - Has there been any discussions about that for Go2 yet? If yes, could any 
> of you point me to them?
> - What do you think about what I just wrote? Is it coherent and relevant 
> or am I missing something?
> - Do you see any alternative to the problem I brought here other than 
> authorizing circular imports? 
> - Can anyone explain me exactly why circular imports are forbidden or is 
> this too complicated to hold in a mail?
>
> Thank you all for reading!
>


Re: [go-nuts] [ANN] fixed point math library

2018-12-05 Thread Louki Sumirniy

Yes, using pure integer values for currency is generally the best way to deal 
with it. The Bitcoin convention is that 1 BTC = 100,000,000 'satoshis'. Many 
rightly point out that this is a stupidly small precision, but for the time 
being it is OK. I personally would expand it and just use a 32/32 split, i.e. a 
full 2^32 range for the fractional part, which also makes for a very easy, fast 
implementation. I don't really think this would cause any issues whatsoever; the 
differences between currencies would then merely be a matter of presentation, as 
32 bits of fractional precision will not be exceeded in any near-future 
situation.
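
A minimal sketch of that 32/32 idea: a Q32.32 value stored in a uint64, with the 
fraction in the low 32 bits. Overflow handling is omitted; this is only to show 
the shape of the arithmetic.

package main

import "fmt"

type Fixed3232 uint64 // 32 integer bits, 32 fractional bits

const fracBits = 32

func FromFloat(f float64) Fixed3232 { return Fixed3232(f * (1 << fracBits)) }

func (x Fixed3232) Float() float64 { return float64(x) / (1 << fracBits) }

// Mul keeps the Q32.32 scale by pre-shifting each operand; this crude version
// drops the low 16 bits of each input and ignores overflow.
func (x Fixed3232) Mul(y Fixed3232) Fixed3232 {
    return Fixed3232((uint64(x) >> 16) * (uint64(y) >> 16))
}

func main() {
    a, b := FromFloat(1.5), FromFloat(2.25)
    fmt.Println(a.Mul(b).Float()) // 3.375
}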

On Friday, 30 November 2018 04:14:23 UTC+1, Bakul Shah wrote:
>
> FWIW, in some code I am writing, I considered using 
> fixed point decimal numbers but ended up defining a
> *currency* type which is an int64 for value + a separate
> unit + currency kind. Even if I use a unit of millicent, this
> will allow handling amounts close to $100 Trillion. I
> don't expect this limit to be a problem for my personal
> finances! Performance is not an issue for my use. I
> even store all the financial data in text files!
>
> Dealing with stuff such  as currency conversion, interest
> rates, stocks etc. gets a bit complicated due to their own
> precision needs but for that one can look at existing
> practices to do the right thing (which is, be able to accurately
> implement the rules your bank etc use).
>
> [Aside:
> Ideally this would be done using a *generic* currency
> type. Something like
>
> import "currency"
> type $ = currency.Type("$")
> type £ = currency.Type("£")
>
> var m1 = $(5)
> var m2 = $(10)
> var m3 = £(2)
>
> m1 + m2 // ok
> m2 + m3 // compile time error
> m1*m2 // compile time error
> m1*5 // ok
> m1+5 // compile time error
>
> I doubt go2 will get generics flexible enough for this!
> ]
>
> On Nov 28, 2018, at 10:47 PM, robert engels  > wrote:
>
> For those interesting in financial apps, I have released ‘fixed' at 
> https://github.com/robaho/fixed a high performance fixed-point math 
> library primarily designed for to work with currencies.
>
> The benchmarks: (Decimal is the shopspring library, big Int/Float are the 
> stdlib)
>
> BenchmarkAddFixed-8   20   0.83 ns/op
> 0 B/op  0 allocs/op
> BenchmarkAddDecimal-8  300   457 ns/op 
> 400 B/op 10 allocs/op
> BenchmarkAddBigInt-8  1   19.2 ns/op 
> 0 B/op  0 allocs/op
> BenchmarkAddBigFloat-82000   110 ns/op  
> 48 B/op  1 allocs/op
> BenchmarkMulFixed-8   1   12.4 ns/op 
> 0 B/op  0 allocs/op
> BenchmarkMulDecimal-8 200094.2 ns/op
> 80 B/op  2 allocs/op
> BenchmarkMulBigInt-8  1   22.0 ns/op 
> 0 B/op  0 allocs/op
> BenchmarkMulBigFloat-8300050.0 ns/op 
> 0 B/op  0 allocs/op
> BenchmarkDivFixed-8   1   19.3 ns/op 
> 0 B/op  0 allocs/op
> BenchmarkDivDecimal-8  100  1152 ns/op 
> 928 B/op 22 allocs/op
> BenchmarkDivBigInt-8  200068.4 ns/op
> 48 B/op  1 allocs/op
> BenchmarkDivBigFloat-81000   151 ns/op  
> 64 B/op  2 allocs/op
> BenchmarkCmpFixed-8   20   0.28 ns/op
> 0 B/op  0 allocs/op
> BenchmarkCmpDecimal-8 1   10.8 ns/op 
> 0 B/op  0 allocs/op
> BenchmarkCmpBigInt-8  28.37 ns/op
> 0 B/op  0 allocs/op
> BenchmarkCmpBigFloat-827.74 ns/op
> 0 B/op  0 allocs/op
> BenchmarkStringFixed-8200099.0 ns/op
> 16 B/op  1 allocs/op
> BenchmarkStringDecimal-8   500   326 ns/op 
> 144 B/op  5 allocs/op
> BenchmarkStringBigInt-8   1000   209 ns/op  
> 80 B/op  3 allocs/op
> BenchmarkStringBigFloat-8  300   571 ns/op 
> 272 B/op  8 allocs/op
>
>
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to golang-nuts...@googlegroups.com .
> For more options, visit https://groups.google.com/d/optout.
>
>
>


Re: [go-nuts] [ANN] fixed point math library

2018-12-05 Thread Louki Sumirniy
I agree about the stutter naming. The name should at least specify the bit 
widths of the integral and fractional parts, following the pattern of the 
existing fixed-width types; that it is a fixed-point type is easy enough to 
infer from the package name. You could also name the types based on a formatting 
convention, such as the precision used in various financial fields, which is 
related to the currency; for example the difference between yen and dollars, the 
former not having a fractional unit at all. (The Japanese are of course ahead of 
most of the rest of the world here; their approach is better and is also used in 
several other currencies such as the Serbian dinar, though in Serbia the 
accounting still carries two decimal places despite the non-existence of 
fractional dinars.)

It also makes sense to me that you would embed the value inside a struct, since 
these parameters would need to be available in order to provide multiple 
bit-width parameterisations (necessary, for example, in forex trading, where 
there is a term, 'pip', meaning some number of decimal places). And obviously, 
if one were instead to use an alias for uint64 or whatever, users of the library 
could make the mistake of applying the built-in math operators to it and get 
entirely wrong results, whereas the struct type will simply refuse to compile 
that at all.
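
A minimal sketch of that last point: wrapping the scaled integer in a struct 
makes the built-in operators a compile-time error, so callers have to go through 
the library's methods. Names here are purely illustrative.

package main

import "fmt"

type Fixed struct{ fp int64 } // value scaled by 10000, i.e. 4 decimal places

func New(units int64) Fixed       { return Fixed{units * 10000} }
func (a Fixed) Add(b Fixed) Fixed { return Fixed{a.fp + b.fp} }
func (a Fixed) String() string    { return fmt.Sprintf("%d.%04d", a.fp/10000, a.fp%10000) }

func main() {
    a, b := New(1), New(2)
    fmt.Println(a.Add(b)) // 3.0000
    // fmt.Println(a + b) // compile error: operator + not defined on Fixed
}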

On Thursday, 29 November 2018 14:21:21 UTC+1, Jan Mercl wrote:
>
> On Thu, Nov 29, 2018 at 2:00 PM Robert Engels  > wrote:
>
>
> >> - To me type name 'fixed.Fixed' sounds like Javaism. Go code usually 
> tries to avoid such stutter: 'sort.Interface', 'big.Int' etc.
> > To me that’s a limitation of Go with small packages like this that only 
> have a single public struct. It is based on decimal.Decimal so I’m not the 
> only one who thinks this
>
> I don't think we are talking about the same thing here. Go idiom is to 
> name types such that they are not the same as the package qualifier (modulo 
> case) at the caller site. So the exported type should be 'Int', or 'Float' 
> or 'Real' or 'Number', etc., not 'FIxed' to avoid 'fixed.Fixed' at caller 
> site. `var n fixed.Number` looks better to me, for example, than `var n 
> fixed.Fixed`. The later actually does not even communicate any hint what 
> the type could possibly be.
>
> >> - A struct with a single field could be replaced by the field itself. 
> OTOH, it would enable coding errors by applying arithmetic operators to it 
> directly, so it's maybe justified in this case if that was the intention.
> > It was the intention. The Raw methods are there temporarily and will be 
> removed for direct serialization via a Writer. 
>
> Then it looks strange that to construct a Fixed from int64 one has to 
> write 'fixed.NewF(0).FromRaw(42)'. Check the big.{Int,Float,Rat) 
> constructors and setters, they are much more natural to use.
>
> >> - I'd prefer a single constructor 'New(int64)' and methods 'SetString', 
> 'SetFloat' etc.
> > Not possible. The caller doesn’t know the int64 value. Also, think of 
> how that would look in a chained math statement. Horrible. 
>
> It _is_ possible. You've misunderstood. New(n int64) returns a Fixed that 
> has the _value_ of n, which of course has a different underlying int64 bit 
> pattern in the private Fixed field. The caller want New(42) meaning 42 and 
> does not casre about the internal, scaled value, that's just an 
> implementation detail and no business of the caller. BTW: Chained math 
> statements where the operators are written as function calls, above chains 
> of length 2 are not seen very often. Longer ones, in many cases, well, 
> that's what I'd call horrible.
>
> >> I don't consider comparing performances of 64 bit integer arithmetic 
> and arbitrary sized arithmetic very useful.
> > Those are the alternatives to use when performing fixed place 
> arithmetic. In fact decimal.Decimal uses big Int... so it is included for 
> reference. 
>
> The point being made here is fixed size fitting to a machine word on a 64 
> bit CPU vs arbitrary sizes math libs implemented inevitably by multiple 
> word structs with pointers to backing storage and the necessary allocation 
> overhead. Apples to oranges. Not even in the same league.
>
> -- 
>
> -j
>



[go-nuts] Re: convert *byte to []byte

2018-12-05 Thread Louki Sumirniy
The C/C++ way of working with these things requires specifying the start and end 
points using pointer arithmetic, which is pretty cumbersome considering how 
often it is needed. I think what Ian said is probably the most complete way to 
deal with it.

You can probably also do it with the unsafe package, if you can grab the length 
from the C/C++ side: convert the *byte to an unsafe.Pointer, then to a pointer 
to a fixed-size array type, and slice that with name[:length] so you never touch 
memory past the allocated end of the buffer. Then you can use it like a regular 
slice. Of course, if performance is critical, you need to be careful that 
whatever conversion you use doesn't introduce a copy you didn't intend.
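
A minimal sketch of that unsafe idiom (the common pattern before unsafe.Slice 
existed); the backing bytes are created in Go here only to make the example 
self-contained, but p and n stand in for the pointer and length coming from the 
swig side:

package main

import (
    "fmt"
    "unsafe"
)

// bytesFrom reinterprets p as a []byte of length n without copying.
func bytesFrom(p *byte, n uint) []byte {
    const max = 1 << 30 // upper bound for the phantom array type only
    return (*[max]byte)(unsafe.Pointer(p))[:n:n]
}

func main() {
    backing := []byte("hello")
    b := bytesFrom(&backing[0], uint(len(backing)))
    fmt.Println(string(b)) // hello
}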

On Saturday, 1 December 2018 18:39:45 UTC+1, xiang liu wrote:
>
>
>
> Hi:
>
> I am using swig wrap a c++ module , the generated go code is like this:
>
> type  MediaFrame interface {
>  GetLength()  uint   
>  GetData()  (*byte)
> }
>
> I want to convert the *byte  to []byte,  How to do this?
>



[go-nuts] Re: Strange behaviour of left shift

2018-12-05 Thread Louki Sumirniy
The default type for an untyped integer constant is 'int'. The constant itself 
has arbitrary precision, which is why the declaration in const.go is fine; the 
overflow error only appears when the constant has to take its default type at 
the point of use, and a signed int, even at 64 bits, can only hold 63 bits of 
magnitude. You should also not assume the sign or width of 'int' and 'uint', as 
they default to the size of a processor register, so code that assumes 64 bits 
will behave differently on 32-bit platforms.

I encountered this problem with cross-compilation: the program had an `int` that 
was assumed to be more than 32 bits wide, and it refused to compile for the 
32-bit target until the type was made explicit.

I have a general policy with integers in Go: if I know a value might be bigger 
than 32 bits, I specify the type. If it's a simple counter and unlikely to get 
anywhere near that, I leave the type spec out. Also, to reduce confusion and 
ambiguity, if I am using shift operators I specify unsigned. At the hardware 
level shifts are cheap, and compilers will happily turn multiplication or 
division by a power of two into a shift anyway; writing >> and << directly just 
makes the intent explicit.
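
A minimal sketch of the point about giving the constant an explicit unsigned 
type: the untyped constant itself is fine, but it overflows as soon as its 
default type (int) is forced.

package main

import "fmt"

const maxU64 uint64 = 1<<64 - 1 // ok: the constant is typed uint64

func main() {
    fmt.Println(maxU64)
    // fmt.Println(1<<64 - 1) // does not compile: constant overflows int
}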

On Wednesday, 5 December 2018 17:36:32 UTC+1, Michel Levieux wrote:
>
> Hi guys,
>
> With a colleague of mine, we've run into a strange issue today. When we 
> look at package math, in const.go, we can see this line :
>
> MaxUint64 = 1<<64 - 1
>>
>>
> which I assume works pretty well. But if I do the same in a test main and 
> try to 'go run' it, with this line :
>
> const v = 1 << 64 - 1
>>
>>
> I get the following error :
>
> ./testmain.go:8:13: constant 18446744073709551615 overflows int
>>
>
> I think this is not a bug and we're just missing something here. Could 
> anyone explain this behaviour / point me to a documentation that explains 
> it?
>
> Thank you guys
>



[go-nuts] Re: Updating a struct from goroutines

2018-09-25 Thread Louki Sumirniy
A map[string]interface{} can hold an arbitrary, string-labelled collection of 
any types. You just need a type switch that covers all the possible inputs.

The concurrency is only safe if access is synchronised: at most one goroutine 
writing at a time, and readers synchronised with that writer (for a plain map, 
unsynchronised concurrent read and write is still a data race). If you need 
multiple concurrent writers, consider what the 'atomic' unit of change is in 
your system and put mutex locks around it, so one goroutine cannot change data 
while another is accessing it. Putting sequence numbers on records to track the 
revision count can also help you resolve conflicts.
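
A minimal sketch of that, assuming string keys and a small set of value types 
handled by a type switch, with a mutex guarding the map:

package main

import (
    "fmt"
    "sync"
)

type State struct {
    mu sync.Mutex
    m  map[string]interface{}
}

func (s *State) Set(k string, v interface{}) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.m[k] = v
}

func (s *State) Describe(k string) string {
    s.mu.Lock()
    defer s.mu.Unlock()
    switch v := s.m[k].(type) {
    case int:
        return fmt.Sprintf("%s: int %d", k, v)
    case string:
        return fmt.Sprintf("%s: string %q", k, v)
    default:
        return fmt.Sprintf("%s: %v", k, v)
    }
}

func main() {
    s := &State{m: make(map[string]interface{})}
    s.Set("a", 1)
    s.Set("b", "foo")
    fmt.Println(s.Describe("a"), s.Describe("b"))
}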

On Tuesday, 25 September 2018 20:39:07 UTC+2, Michael Ellis wrote:
>
> Hi, new gopher here. 
> I considered asking this on SO, but they (rightly, IMO) discourage "Is 
> this a good way to do it?" questions.  Hope that's ok here.
>
> By way of background, I'm porting a largish industrial control application 
> from Python to Go.  The Python version uses multiple processes (about a 
> dozen in all) communicating over ZeroMQ.  One process, called the 
> statehouse,  controls access to the application state.  The others obtain 
> copies and send updates over REQ sockets.  The data are serialized as JSON 
> objects that map nicely to Python dicts.
>
> Since there's no direct equivalent in Go to a Python dict that can hold a 
> mixture of arbitrary types,  I need to use a struct to represent the state. 
> No problem with that but I've been struggling with how to allow the 
> goroutines that will replace the Python processes to read and write to the 
> state struct with concurrency safety.  
>
> This morning I came up with an idea to send functions over a channel to 
> the main routine.  I put together a little test program and after some 
> refinements it looks promising.  Some rough benchmarking shows I can get a 
> million updates in under 1 second on a 2012 vintage Mac Mini.  That's more 
> than good enough for this application where the time between events is 
> usually more than 100 milliseconds.
>
> Here's the link to my test on the Go Playground: 
> https://play.golang.org/p/8iWvwnqBNYl . It runs there except that the 
> elapsed time comes back 0 and the prints from the second goroutine don't 
> show up. I think that's got something to do with the artificial clock in 
> the playground.  It works fine when I run it locally.  I've pasted the code 
> at the bottom of this message.
>
> So my big questions are:
>
>- Is this actually concurrency safe as long as all goroutines only use 
>the update mechanism to read and write?
>- Is there a more idiomatic way to do it that performs as well or 
>better?
>- What are the potential problems if this is scaled to a couple dozen 
>goroutines?
>- Does it sacrifice clarity for cleverness? (not that it's all that 
>clever, mind you, but I need to think about handing this off to my 
> client's 
>staff.)
>
>
> Thanks very much,
> Mike Ellis
>
> code follows ... 
>
> package main
>
> import (
>  "fmt"
>  "time"
> )
>
>
> // Big defines the application's state variables
> type Big struct {
>  A int
>  B string
>  /* and hundreds more */
> }
>
>
> // update is a struct that contains a function that updates a Big and
> // a signal channel to be closed when the update is complete. An update
> // may also be used to obtain a current copy of a Big by coding f to
> // do so.  (See gopher2 below.)
> type update struct {
>  done chan struct{}
>  f    func(*Big)
> }
>
>
> // upch is a channel from which main receives updates.
> var upch = make(chan update)
>
>
> // gopher defines a function that updates a member of a Big and
> // sends updates via upch. After each send it waits for main to
> // close the update's done channel.
> func gopher() {
>  var newA int
>  f := func(b *Big) {
>  b.A = newA
>  }
>  for i := 0; i < n; i++ {
>  newA = i
>  u := update{make(chan struct{}), f}
>  upch <- u
>  <-u.done
>  }
> }
>
>
> // gopher2 uses an update struct to obtain a current copy of a Big
> // every 100 microseconds.
> func gopher2() {
>  var copied Big
>  f := func(b *Big) {
>  copied = *b
>  }
>  for {
>  time.Sleep(100 * time.Microsecond)
>  u := update{make(chan struct{}), f}
>  upch <- u
>  <-u.done
>  fmt.Println(copied)
>  }
> }
>
>
> // main creates a Big, launches gopher and waits on the update channel. 
> When
> // an update, u, arrives it runs u.f and then closes u.done.
> func main() {
>  var state = Big{-1, "foo"}
>  fmt.Println(state) // --> {-1, "foo"}
>  go gopher()
>  go gopher2()
>  start := time.Now()
>  for i := 0; i < n; i++ {
>  u := <-upch
>  u.f(&state)
>  close(u.done)
>  }
>  perUpdate := time.Since(start).Nanoseconds() / int64(n) // Note: always 
> 0 in playground
>  fmt.Printf("%d updates, %d ns per update.\n", n, perUpdate)
>  fmt.Println(state) // --> {n-1, "foo"}
> }
>
>
> var n = 1000 // number of updates to send and receive
>
>
>
>  
>


[go-nuts] Re: ticker for poisson processes?

2018-09-25 Thread Louki Sumirniy
If you don't mind a small amount of background processing overhead, you could 
use a goroutine running a hashcash-style iterative hash chain, searching for a 
specified number of zero bits at one (or both) ends of the resulting hash. 
Started from a proper random seed this is extremely random, and the goroutine 
would send a signal through a channel to trigger the tick.

The search would not have to be that intensive: you could, for example, run it 
on a regular ticker (say every 100ms) to keep the processing load down, and 
within a fairly wide margin the result will still be quite random.
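
For comparison, a minimal sketch of the simpler approach described in the 
question itself: a goroutine sleeping rand.ExpFloat64()-scaled intervals and 
sending on a channel, roughly mirroring the shape of time.Ticker.

package main

import (
    "fmt"
    "math/rand"
    "time"
)

// poissonTicker delivers ticks whose inter-arrival times are exponentially
// distributed with the given mean.
func poissonTicker(mean time.Duration) <-chan time.Time {
    ch := make(chan time.Time, 1)
    go func() {
        for {
            time.Sleep(time.Duration(rand.ExpFloat64() * float64(mean)))
            select {
            case ch <- time.Now():
            default: // drop the tick if the receiver is slow, as time.Ticker does
            }
        }
    }()
    return ch
}

func main() {
    ticks := poissonTicker(100 * time.Millisecond)
    for i := 0; i < 5; i++ {
        fmt.Println(<-ticks)
    }
}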

On Tuesday, 25 September 2018 11:34:14 UTC+2, David Wahlstedt wrote:
>
> Hi,
> What would be a nice way to implement a ticker that generates events 
> according to a Poisson process?
> The built-in Ticker in ticker.go uses a runtimeTimer that has a field 
> called period.
> I would like to implement a "random ticker" such that each tick interval 
> is random, using ExpFloat64() * d, with average duration d, instead of a 
> fixed interval.
> I could have a go routine that sleeps a random amount of time in a loop, 
> but it would be nice to use something similar to the ticker.
>
> BR,
> David
>
>



Re: [go-nuts] The If Statement: Switches are better except for a single condition (change my mind)

2018-09-24 Thread Louki Sumirniy
Goto isn't used often in Go because it has to be invoked on its own line inside 
a conditional block, and often that decision can be expressed just as well by 
restructuring, or by moving shared code into functions that share a receiver or 
a common interface. A goto that branched directly on a prior statement setting 
an error would be far more useful, which gets back to my conditional return 
idea; the same pattern could then apply to these scope-jumping statements in 
general, letting them be guarded by a boolean. Goto is already bound within 
function scope anyway, so making it more useful couldn't do much real harm, and 
conditional branching would be the way to do it.

It's kind of ironic to me that such a simple idea is so infrequent in high-level 
languages, yet present everywhere from the assembler macro preprocessor on down. 
Break, continue, goto, return: all of them would become far more powerful with 
conditional execution.

For me, the novel if and for syntax was the biggest stand-out thing I noticed 
about Go at the beginning. I was already familiar with closures, and it took 
some time to absorb exactly what interfaces are about. They are amazing, but the 
various features of if's pre-condition statement and the condition-only for 
(removing the need for the semicolons especially) are a break away from the very 
tired conventions of for/if/while/wend/foreach, and they let you structure the 
logic flow more intuitively and visually. Switches look nice and read well, but 
the keyword and block boundary raise the cost of casual use.

On Tuesday, 25 September 2018 04:15:02 UTC+2, Robert Engels wrote:
>
> Pretty sure that is what I said... duplicate the work in every case is 
> silly, thus the goto... if no work, no need for goto 
>
> > On Sep 24, 2018, at 9:10 PM, Dan Kortschak  > wrote: 
> > 
> > Rule: All rules are bad (including this one). 
> > 
> > goto is useful when there is a need to do some elaborate clean up (or 
> > other work) at the postamble of a function, but in many cases it 
> > becomes clearer to have the work done at the location (say in the 
> > switch in the example in this thread). Use of judgement is worthwhile 
> > as to whether this is true for any particular situation. 
> > 
> >> On Mon, 2018-09-24 at 20:11 -0500, robert engels wrote: 
> >> You should always return from the place of return, or goto 
> >> return_label, when a result/error needs to be formatted. 
> >> 
> >> See the Knuth paper I posted a while ago on using goto... 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> Groups "golang-nuts" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to golang-nuts...@googlegroups.com . 
> > For more options, visit https://groups.google.com/d/optout. 
>
>



Re: [go-nuts] The If Statement: Switches are better except for a single condition (change my mind)

2018-09-24 Thread Louki Sumirniy
Using named return values and this construction, you can drop all those per-case 
returns and put a single return after the switch. You only spend an extra line 
when a case has to break out early with its own return or break.
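
A minimal sketch of that pattern against the quoted example below: the cases 
only set the named results, and one return closes the function.

package main

import (
    "fmt"
    "io"
)

func classify(err error) (msg string, fatal bool) {
    switch {
    case err == io.EOF:
        msg = "done"
    case err != nil:
        msg, fatal = err.Error(), true
    default:
        msg = "ok"
    }
    return
}

func main() {
    fmt.Println(classify(io.EOF)) // done false
    fmt.Println(classify(nil))    // ok false
}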

On Monday, 24 September 2018 16:01:23 UTC+2, Lucio wrote:
>
> You never used:
>
> switch {
> case err == io.EOF:
>...
>return
> case err != nil:
>   ...
>   return
> default:
>   ...
> }
>
> or similar (the default portion has a few, slightly different options, 
> depending on preceding code)???
>
> Lucio.
>
>>
>>>



Re: [go-nuts] The If Statement: Switches are better except for a single condition (change my mind)

2018-09-24 Thread Louki Sumirniy
Ah, I wasn't quite clear on that. That does make them even more useful.

On Monday, 24 September 2018 11:12:38 UTC+2, ohir wrote:
>
> On Mon, 24 Sep 2018 01:37:56 -0700 (PDT) 
> Louki Sumirniy > wrote: 
>
> > I am quite a fan of switch statements, they can make a list of responses 
> to 
> > a change in state very readable and orderly. 
> > But you have to remember a few  things about them. 
>
> > They don't evaluate in any definite order, 
>
> I did not quite follow the whole post but expression switch 
> **is evaluated in an exact order**: 
>
> [Switch Statements](https://golang.org/ref/spec#Switch_statements) 
> :: In an expression switch, the switch expression is evaluated and the 
> case 
> :: expressions, which need not be constants, are evaluated left-to-right 
> and 
> :: top-to-bottom; the first one that equals the switch expression triggers 
> :: execution of the statements of the associated case; the other cases are 
> :: skipped. If no case matches and there is a "default" case, its 
> statements 
> :: are executed. There can be at most one default case and it may appear 
> :: anywhere in the "switch" statement. A missing switch expression is 
> :: equivalent to the boolean value true. 
>
>
> -- 
> Wojciech S. Czarnecki 
>  << ^oo^ >> OHIR-RIPE 
>



[go-nuts] The If Statement: Switches are better except for a single condition (change my mind)

2018-09-24 Thread Louki Sumirniy
I am quite a fan of switch statements; they can make a list of responses to a 
change in state very readable and orderly. But you have to remember a few things 
about them. They don't evaluate in any definite order, so any conditions that 
need more than one response need to use a fallthrough, which evens out the gap 
between if and switch.

But for precedence-ordered or exclusive conditions, and more than one condition 
in general, it's neater to use a switch:

if <condition> {
    <response>
} else {
    <alternative>
}

versus

switch {
case <condition>:
    <response>
default:
    <alternative>
}

It takes the same number of lines and about the same number of characters, but 
the switch makes the relationship between conditions more obvious (no 
fallthroughs, meaning each case is exclusive), and from there on, each 
additional exclusive case saves one line compared to a wordier chain of if {} 
else if {}.

In the libraries I am writing at the moment there is also another issue to deal 
with. Working with pointer receivers means always being at risk of being called 
on a nil, unallocated value. As this would be a fallthrough case, it doesn't 
particularly benefit from a case of its own, nor is it harmed by one. But what I 
have found is that I can instead push the allocate-on-demand logic (and perhaps 
the recording that the receiver was nil in a status/error) into a function, so a 
case with three lines of statements collapses to one assignment that hides the 
conditional inside the function, drastically reducing the repetition. Even when 
the nil condition requires a different response (such as for an array length 
query), it's not harmful to use this to avoid the nil panic.
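
A minimal sketch of that allocate-on-demand idea (names here are purely 
illustrative): the nil check lives in one helper instead of being repeated at 
every call site.

package main

import "fmt"

type List struct {
    items []int
}

// ensure hides the nil check: it returns a usable receiver, allocating one if
// the caller's pointer was nil.
func (l *List) ensure() *List {
    if l == nil {
        return &List{}
    }
    return l
}

func (l *List) Add(v int) *List {
    l = l.ensure()
    l.items = append(l.items, v)
    return l
}

func main() {
    var l *List // deliberately nil
    l = l.Add(1).Add(2)
    fmt.Println(l.items) // [1 2]
}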

These kinds of repeated test-and-call patterns are part of why I have proposed 
conditional returns, and a third kind of switch section header that implies 
fallthrough instead of requiring it to be written out. A conditional return 
functions like an if statement containing only a return, and a fallthrough case 
keyword would make the embedded allocation conditional described above mostly 
redundant, since its case could be concisely specified to fall through; it would 
also make the intent more obvious than my hackish solution does.



Re: [go-nuts] Re: Generic alternatives: new basic types?

2018-09-23 Thread Louki Sumirniy
ies of Go compared to Java. 
>> The
>> >> Java designers understood languages far better, and from the start 
>> realized
>> >> that identity and reference equality were different concepts. Everyone 
>> in Go
>> >> land are debating these solved issues. Pick and chose what you want to
>> >> implement but there doesn’t really need to be a debate on how to do it.
>> >> 
>> >> Sent from my iPhone
>> >> 
>> >>>> On Sep 22, 2018, at 8:52 PM, Ian Denhardt > > wrote:
>> >>>> 
>> >>>> On Saturday, 22 September 2018 16:47:00 UTC+2, Louki Sumirniy wrote:
>> >>>> 
>> >>>>  I think the thing everyone who likes operator overloading like 
>> mainly
>> >>>>  is being able to do infix and postfix syntax, instead of only prefix
>> >>>>  (function).
>> >>> 
>> >>> My own reason for wanting this is not really about syntax, so much as
>> >>> being able to define functions etc. which e.g. check for equality,
>> >>> without having to write too versions -- one that uses `==` and one
>> >>> that calls some method custom types. The syntax isn't really the 
>> point;
>> >>> there's an underlying notion of equality that we want to be able to 
>> talk
>> >>> about for more than just built-in types. We could define an interface
>> >>> for this:
>> >>> 
>> >>>   // The Equatable interface wraps the basic Equals method.
>> >>>   //
>> >>>   // x.Equals(y) tests whether x and y are "the same." The predicate
>> >>>   // Equals should obey a few common sense rules:
>> >>>   //
>> >>>   // 1. It should be reflexive: x.Equals(x) should always return true
>> >>>   //(for any x).
>> >>>   // 2. It should be symmetric: x.Equals(y) should be the same as
>> >>>   //y.Equals(x)
>> >>>   // 3. It should be transitive: if x.Equals(y) and y.Equals(z), then
>> >>>   //x.Equals(z).
>> >>>   //
>> >>>   // It generally does not make sense for a type to implement
>> >>>   // Equatable where the type parameter T is something other than
>> >>>   // itself.
>> >>>   type Equatable(T) interface {
>> >>>   Equals(T) bool
>> >>>   }
>> >>> 
>> >>> What I am suggesting is merely that `==` desugars to a use of this
>> >>> interface.
>> >>> 
>> >>> An important litmus test for any operator we consider for overloading 
>> is
>> >>> whether we can come up with a clearly specified interface for it like
>> >>> the above. If not, it does not make sense to allow the operator to be
>> >>> overloaded, since it is not clear what overloaders should do. I 
>> believe
>> >>> this is the source of most of the problems with operator overloading 
>> in
>> >>> other languages.
>> >>> 
>> >>> I think if we stick to this things will stay under control; there's
>> >>> currently nothing stopping folks from defining an instance of
>> >>> io.Writer that does something utterly in conflict with what is 
>> described
>> >>> in its documentation -- but that hasn't seemed to be a problem in
>> >>> practice.
>> >>> 
>> >>> Quoting Michael Jones (2018-09-22 13:14:21)
>> >>>>  the reason i wrote something like "...operator overloading, but 
>> wait,
>> >>>>  don't get excited..." was to bring awareness of a core problem 
>> without
>> >>>>  (hopefully) having people bring the burden of experience. i say 
>> burden
>> >>>>  because bad experiences can shadow the 'why' that was good with the
>> >>>>  'how' that was bad. let the why foremost to "break these chains, 
>> rise
>> >>>>  up, and move beyond" as in Callicles' famous speech.
>> >>>>  the essential meaning of operator overloading and go interfaces and
>> >>>>  Smalltalk messaging is a way to bind code to intent. (in general
>> >>>> intent
>> >>>>  is named uniquely ("==") for simplicity but that is independent.)
>> >>>>  Generics raise the need a way to say how the standard intentions 
>> play
>> >>
