I hadn't even thought of using buffered channels; I was trying not to lean 
too far in that direction. But it's good you mention it: it would be simple 
to pre-allocate a buffer and trigger the print call only when that buffer 
fills up (say half a screenful, maybe 4 KB, to allow for stupidly heavy 
logging output).

As you point out, there is already some threading and buffering going on in 
the writer.

I do think, though, that while you are correct that the load is shifted 
rather than reduced, the scheduler can keep the threads more cleanly 
separated. In my use case there is a single main thread processing a lot of 
crypto-heavy virtual machine work, and the important thing is that 
diversions out of that thread have low overhead, not that the total load is 
reduced.

As I mentioned, I wrote a logger that defers processing until two hops 
downstream to the root. I am going to look at buffering it, and I think it 
would be good to pace it by time rather than by line-printing speed. When 
the heavy debug printing is in use, performance is not a concern at all - 
but being single-threaded, the less time that thread spends talking to 
other threads, the better. The loops are very short, under 100 ms most of 
the time, and making direct calls to log.* or fmt.Print functions adds more 
overhead to the main thread than loading a channel and dispatching the 
message.

On my main workstation there are another 11 cores, and they can be busy 
doing things without slowing the central process, as long as they don't 
need to synchronise with it or put the loop on hold longer than necessary. 
Using a buffer and a fast ticker to fill it and bulk-send to the output 
sounds like a sensible idea to me. Since slicing strings is cheap, it is 
easy to design this to flow as a stream; the messages only need one 
bottleneck as they are streamed into the buffer.

On Sunday, 17 March 2019 10:15:29 UTC+1, Christophe Meessen wrote:
>
> What you did is decouple production from consumption. You can smooth out 
> the production goroutine if the rate is irregular. But if, on average, 
> consumption is too slow, the list will grow until it runs out of memory. 
>
> If you want to speed up consumption, you may group the strings into one 
> big string and print it once. This reduces the rate of system calls 
> compared to printing each string individually.
>
> Something like this (warning: raw untested code)
>
>
> buf := make([]byte, 0, 1024)
> ticker := time.NewTicker(1 * time.Second)
> for {
>     select {
>     case <-ticker.C:
>         if len(buf) > 0 {
>             fmt.Print(string(buf))
>             buf = buf[:0]
>         }
>     case m := <-buffChan:
>         buf = append(buf, m...)
>         buf = append(buf, '\n')
>     }
> }
>
>
> Adjust the ticker period and initial buffer size to match your needs. 
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.