Re: [go-nuts] times of stw in one gc cycle?

2021-06-12 Thread Rick Hudson
It won't go on forever. The formal proofs are in the original Sapphire and
distributed Train algorithm papers. Informally, the proofs show that there
is no way to create a new white object, no way to pass a white object between
threads more than a bounded number of times, that reachable non-black objects
are bounded, and that each loop discovers and extinguishes at least one
non-black reachable object.

Richard L. Hudson and J. Eliot B. Moss, "Sapphire: Copying GC Without
Stopping the World," *Concurrency and Computation: Practice and Experience*,
Volume 15, Issue 3-5, pp. 223-261, John Wiley and Sons, 2003.
http://dx.doi.org/10.1002/cpe.712
On Monday, June 7, 2021 at 6:00:11 PM UTC-4 Ian Lance Taylor wrote:

> On Sun, Jun 6, 2021 at 4:19 AM xie cui  wrote:
> >
> > https://github.com/golang/go/blob/master/src/runtime/mgc.go#L858-L876
> > due to these code lines, stw in one gc cycle may happen more than 2
> times. So the number of stw pauses in one gc cycle could be 2 (the general
> case), 3, 4, or even unbounded?
>
> Theoretically, yes. In practice, this is not a problem.
>
> Ian
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/29b00794-d452-4706-8cf4-45570bddad13n%40googlegroups.com.


Re: [go-nuts] Question about the zero-value design: Why pointers zero value is not the zero value of what they points to?

2020-02-17 Thread Rick Hudson
> type Treenode struct {
>  left *Treenode
>  right *Treenode
> }

One could of course design a language where Treenode is called cons,
left is called car, right is called cdr, and (car nil) is nil and (cdr
nil) is nil. You could implement such a language by putting 2 words of 0 at
location 0 and adding a write barrier or page protection to keep nil
immutable.

One could do all of this but Lisp did it a long time ago.
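Something close to the Lisp behavior is available in Go as it stands, without
any runtime tricks: a method on a pointer receiver may be called on a nil
pointer, so nil-tolerant accessors can make a nil *Treenode act like Lisp's
nil. A minimal sketch (the Left/Right accessors are illustrative helpers, not
a proposal):

```go
package main

import "fmt"

type Treenode struct {
	left  *Treenode
	right *Treenode
}

// Left returns the left child; calling it on a nil node yields nil,
// mirroring (car nil) => nil in Lisp.
func (t *Treenode) Left() *Treenode {
	if t == nil {
		return nil
	}
	return t.left
}

// Right behaves like (cdr nil) => nil for a nil receiver.
func (t *Treenode) Right() *Treenode {
	if t == nil {
		return nil
	}
	return t.right
}

func main() {
	var root *Treenode // zero value: nil
	// Chained access never panics; it just keeps yielding nil.
	fmt.Println(root.Left().Right().Left() == nil) // prints true
}
```

The cost is an explicit accessor call instead of a bare field dereference,
which is the same level-of-indirection trade the Lisp implementation hides in
its runtime.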




Re: [go-nuts] CGO - Passing pointer to C

2019-12-04 Thread Rick Hudson
Breaking the Go CGO pointer rules comes up periodically, and the rules
have not changed. Applications have lived with the rules simply
because breaking them means revisiting the application code
every time a new Go release comes out. Did the compiler improve so that
some object is now allocated on the stack instead of the heap? Did the
runtime borrow some MESH [1] virtual memory page fragmentation
techniques but improve them for Go by updating pointers to reduce
TLB pressure? Is there value in moving objects from NVRAM to DRAM
and updating pointers? And so forth and so on. Nobody knows if any of
this will ever happen, but the Go CGO pointer rules leave open the
possibility.


[1] Bobby Powers, David Tench, Emery D. Berger, and Andrew McGregor. 2019.
Mesh: Compacting Memory Management for C/C++ Applications. In
Proceedings of the 40th ACM SIGPLAN Conference on Programming Language
Design and Implementation (PLDI '19), June 22-26, 2019, Phoenix, AZ, USA.
ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3314221.3314582


On Wednesday, December 4, 2019 at 11:14:17 AM UTC-5, Ian Lance Taylor wrote:
>
> On Wed, Dec 4, 2019 at 6:48 AM Robert Johnstone wrote:
> > 
> > Thanks for the quick reply.  I had not considered the write barriers,
> but if the Go objects are "live" longer than those held in C, it would work.
>
> The write barriers do not look only at the pointer being stored, they 
> also look at the contents of memory being stored into.  That is why C 
> code must never store a Go pointer into Go memory. 
>
>
> > I definitely agree that there are risks associated with this approach. 
>  We are giving up some of the safety of Go.  Unfortunately, we are using 
> cgo for more than some computation, so we have live objects in C as well, 
> and so we cannot completely escape the manual memory management required. 
> > 
> > What is the state of plans for a moving garbage collector?  This would
> definitely wreak havoc if any pointers to Go memory were held in C.
>
> There are no current plans for a moving garbage collector. 
>
> I cannot promise that no other changes will break this approach. 
> Obviously we won't consider bug reports for code that breaks the cgo 
> rules. 
>
> Ian 
>



[go-nuts] Re: Roadblock for user-implemented JIT compiler

2019-10-23 Thread Rick Hudson


One approach is to maintain a shadow stack holding the pointers in a place
the GC already knows about, like an array allocated in the heap. This can
be done in Go, the language. Dereferences would use a level of indirection:
perhaps one would pass an index into the array instead of the pointer and
somehow know the location of the shadow stack from the VM structures. This
way the call stack contains no pointers _into the heap_, so the _GC_ is happy.
You might still have to deal with pointers to stack-allocated objects, since
stacks can be moved and so forth, but that is not the problem being
discussed.

Go the implementation, as of Go 1.13, has a GC that does not move
heap objects. This means that to keep a heap object live the GC only needs
to know about a single pointer to it. That's sort of handy, since now you can
push the pointer onto the shadow stack and also onto the call stack: as
long as the shadow stack is visible, the object will not be collected. I
note that this involves a barrier on all pointer writes, so it is more than
just a change to the calling conventions. Reads, on the other hand, would be
full speed and not require a level of indirection or barrier unless and
until Go the implementation moved to a moving collector.

I would explore this approach first since all of the pieces are under your 
control. Developing an ABI for stack maps would include other people with 
differing agendas and would likely slow you down. Likewise forking would 
come with the usual maintenance/merge headaches.
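The shadow-stack idea can be sketched in plain Go, with ordinary functions
standing in for JIT frames (the ShadowStack type and its Push/Get/Pop names
are mine, not an established API):

```go
package main

import "fmt"

// ShadowStack keeps heap pointers visible to the GC in an ordinary
// heap-allocated slice, so JIT frames can carry bare indices instead
// of pointers the GC cannot see.
type ShadowStack struct {
	slots []interface{}
}

// Push roots obj and returns the index the JIT code passes around.
func (s *ShadowStack) Push(obj interface{}) int {
	s.slots = append(s.slots, obj)
	return len(s.slots) - 1
}

// Get dereferences through the extra level of indirection.
func (s *ShadowStack) Get(i int) interface{} { return s.slots[i] }

// Pop unroots the most recent object on frame exit.
func (s *ShadowStack) Pop() { s.slots = s.slots[:len(s.slots)-1] }

func main() {
	ss := &ShadowStack{}

	// "JIT" frame: allocate, root the object, and carry only the index.
	idx := ss.Push(&struct{ x int }{x: 7})

	// The object stays live across any GC, because the shadow stack
	// slice is reachable from ss even though the frame holds no pointer.
	obj := ss.Get(idx).(*struct{ x int })
	fmt.Println(obj.x) // prints 7

	ss.Pop() // frame exits; the object may now be collected
}
```

A real JIT would locate the shadow stack through its VM state rather than a
local variable, but the GC-visibility argument is the same.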



On Wednesday, October 23, 2019 at 1:50:51 PM UTC+1, Max wrote:
>
> Hello gophers,
>
> My recent attempt at creating a JIT compiler in Go to speed up my 
> interpreter https://github.com/cosmos72/gomacro hit an early roadblock.
>
> In its current status, it can compile integer arithmetic and 
> struct/array/slice/pointer access for amd64 and arm64, but it cannot 
> allocate memory or call other functions, which severely limits its 
> usefulness (and is thus not yet used by gomacro).
>
> The reason is: there is a requirement that Go functions must have "stack 
> frame descriptors registered with the runtime", in brief a "stack map" that 
> tells which bits on the stack are pointers and which ones are not.
> See https://github.com/golang/go/issues/20123 for details.
>
> But there is no API to associate a stack map to functions generated at 
> runtime and running on the Go stack - currently the only supported 
> mechanism to load Go code at runtime is to open a shared library file with 
> `plugin.Open()`
>
> Thus JIT-generated functions must avoid triggering the garbage collector, 
> as it would panic as described in the link above.
> In turn, this means they cannot:
> * allocate memory
> * call other functions
> * grow the stack
> or do anything else that may start the GC.
>
> Now, I understand *why* Go functions must currently have a stack map, and 
> I see at least two possible solutions:
>
> 1. implement an API to associate a stack map to functions generated at 
> runtime - possibly by forking the Go compiler and/or standard library
> 2. replace Go GC and allocator with an alternative that does not require 
> stack maps - for example Boehm GC https://www.hboehm.info/gc/
> Here too, forking the Go compiler and/or standard library if needed.
>
> My questions are:
>
> a. which one of the two solutions above is easier, and how long could it 
> take for a full-time expert?
> b. does anyone have an easier solution or workaround to achieve the same 
> goal?
>
> Regards,
> cosmos72
>



[go-nuts] Re: How can I debug high garbage collection rate cause high CPU usage issue in golang

2019-06-05 Thread Rick Hudson
When you say "set up GC rate (10%) to reduce memory usage down to normal",
what exactly did the program do?

Compute (CPU) costs money and heap memory (DRAM) costs money. Minimizing
the sum should be the goal. This requires a model of the relative costs of
CPU vs. RAM. HW folks balance these costs when they spec a machine, so you
could steal that model: right now I'm on a 4 core (8 HW thread) machine with
16 GBytes of memory, so that's 2 GBytes per P. Cloud providers have price
lists that can also be used to build a model. Once there is a model, set
GODEBUG=gctrace=1 and adjust GOGC to find the balance that minimizes the sum
of CPU and heap size based on your model.

Alternatively your manager may give out bonuses for improving benchmarks 
that only look at CPU times. This produces a model where memory is free as 
long as one doesn't OOM. I've used that model but it brought little job 
satisfaction.

On Tuesday, June 4, 2019 at 8:55:39 AM UTC-4, Joseph Wang wrote:
>
> Hello everyone
>
> I just have a question about my golang code debugging. It's not specific 
> code question. But I never met this issue before. 
>
> The problem is like this. I replaced our back-end system's cache from 
> single node cache to groupcache that is a kind mem cache. Then I met high 
> memory usage issue at beginning. I use pprof and online resource then I set 
> up GC rate(10%) to reduce memory usage down to normal. But then this will 
> cause high CPU usage, because heavy GC operations. I use pprof to get the 
> hot spot is runtime.ScanObject() this function, which is out of scope of my 
> code base. 
>
> So I don't know if anyone can give me some suggestions about how to fix 
> this kind issue. I know the issue should come from my code base. But I 
> don't know how to find out the issue and fix it ASAP.
>
> Best,
>
> Joseph
>



Re: [go-nuts] Re: GC SW times on Heroku (Beta metrics)

2017-12-05 Thread Rick Hudson
Henrik,
Thanks for the kind offer but there isn't much the runtime team can do with
the logs since 1.9 isn't likely to be changed due to this issue.



On Tue, Dec 5, 2017 at 10:43 AM, Henrik Johansson <dahankz...@gmail.com>
wrote:

> I would gladly help with this but afaik Heroku only makes stable versions
> available: https://github.com/heroku/heroku-buildpack-go/blob/master/data.json
> I guess I could deploy a docker container but I don't know if it changes
> everything and I doubt I have time before christmas at least.
>
> Maybe someone more versed in Herokus Go support can chime in on if it is
> possible.
>
> I will provide the logs from tonight though. Do you want them zipped here
> in the thread?
>
>
> tis 5 dec. 2017 kl 15:37 skrev Rick Hudson <r...@golang.org>:
>
>> Glad to have helped. The runtime team would be interested in seeing what
>> these pauses look like in the beta. If you have the time could you send
>> them to us after the beta comes out.
>>
>>
>> On Tue, Dec 5, 2017 at 9:06 AM, Henrik Johansson <dahankz...@gmail.com>
>> wrote:
>>
>>> Ok so it's not bad, thats good!
>>>
>>> The inital ~20 sec numbers come from the graphs that Herokus Go Metrics
>>> (Beta) provides.
>>> These must be sums in the given graph bucket which may for a 24H period
>>> add up to the high numbers I guess.
>>>
>>> I will let it run over night and see what it looks like tomorrow, thx
>>> for your thoughts on this!
>>>
>>> tis 5 dec. 2017 kl 14:58 skrev <r...@golang.org>:
>>>
>>>> The wall clock is the first set of numbers, the second set is CPU. So
>>>> 8P running for 8ms wall clock will result in 64ms CPU. The word "wall" was
>>>> dropped to keep the line short.
>>>>
>>>> There will be a beta out in the proverbial next few days that could
>>>> help reduce even these STW times. The original post talked about 20 second
>>>> and 400 and 900 ms pauses. From what I'm seeing here it is hard to
>>>> attribute them to GC STW pauses.
>>>>
>>>> Also the GC is taking up (a rounded) 0% of the CPU which is pretty good
>>>> (insert fancy new emoji).  It is also doing it with a budget of 10 or 11
>>>> MBtyes on a machine that likely has 8 GB of Ram. To further test whether
>>>> this is a GC issue or not try increasing GOGC until the MB goal on the
>>>> gctrace line is 10x or 100x larger. This will reduce GC frequency by 10x or
>>>> 100x and if your tail latency is a GC problem the 99%tile latency numbers
>>>> will become 99.9%tile or 99.99%tile numbers.
>>>>
>>>> On Tuesday, December 5, 2017 at 2:39:53 AM UTC-5, Henrik Johansson
>>>> wrote:
>>>>
>>>>> I am watching with childlike fascination...
>>>>> This is interesting perhaps:
>>>>>
>>>>> gc 130 @2834.158s 0%: 0.056+3.4+2.9 ms clock, 0.45+2.8/5.6/0+23 ms
>>>>> cpu, 8->8->4 MB, 9 MB goal, 8 P
>>>>> gc 131 @2834.178s 0%: 0.023+7.3+0.12 ms clock, 0.18+1.2/5.4/9.2+1.0 ms
>>>>> cpu, 9->9->5 MB, 10 MB goal, 8 P
>>>>>
>>>>> ---> gc 132 @2836.882s 0%: 3.5+34+8.0 ms clock, 28+1.6/3.8/27+64 ms
>>>>> cpu, 10->11->4 MB, 11 MB goal, 8 P
>>>>>
>>>>> gc 133 @2836.961s 0%: 0.022+14+1.0 ms clock, 0.18+2.1/12/0+8.4 ms cpu,
>>>>> 8->9->5 MB, 9 MB goal, 8 P
>>>>> gc 134 @2837.010s 0%: 7.0+18+0.16 ms clock, 56+14/21/1.6+1.2 ms cpu,
>>>>> 9->10->5 MB, 10 MB goal, 8 P
>>>>>
>>>>> 28 + 64 ms SW (if I understand this correctly) to collect what 6-7 MB?
>>>>>
>>>>>
>>>>>
>>>>> tis 5 dec. 2017 kl 08:25 skrev Dave Cheney <da...@cheney.net>:
>>>>>
>>>> Oh yeah, I forgot someone added that a while back. That should work.
>>>>>>
>>>>>> On Tue, Dec 5, 2017 at 6:23 PM, Henrik Johansson <dahan...@gmail.com>
>>>>>> wrote:
>>>>>> > So it has to run the program? I thought I saw "logfile" scenario in
>>>>>> the
>>>>>> > examples?
>>>>>>

Re: [go-nuts] Re: GC SW times on Heroku (Beta metrics)

2017-12-05 Thread Rick Hudson
Glad to have helped. The runtime team would be interested in seeing what
these pauses look like in the beta. If you have the time could you send
them to us after the beta comes out.


On Tue, Dec 5, 2017 at 9:06 AM, Henrik Johansson 
wrote:

> Ok so it's not bad, thats good!
>
> The inital ~20 sec numbers come from the graphs that Herokus Go Metrics
> (Beta) provides.
> These must be sums in the given graph bucket which may for a 24H period
> add up to the high numbers I guess.
>
> I will let it run over night and see what it looks like tomorrow, thx for
> your thoughts on this!
>
> tis 5 dec. 2017 kl 14:58 skrev :
>
>> The wall clock is the first set of numbers, the second set is CPU. So 8P
>> running for 8ms wall clock will result in 64ms CPU. The word "wall" was
>> dropped to keep the line short.
>>
>> There will be a beta out in the proverbial next few days that could help
>> reduce even these STW times. The original post talked about 20 second and
>> 400 and 900 ms pauses. From what I'm seeing here it is hard to attribute
>> them to GC STW pauses.
>>
>> Also the GC is taking up (a rounded) 0% of the CPU which is pretty good
>> (insert fancy new emoji).  It is also doing it with a budget of 10 or 11
>> MBtyes on a machine that likely has 8 GB of Ram. To further test whether
>> this is a GC issue or not try increasing GOGC until the MB goal on the
>> gctrace line is 10x or 100x larger. This will reduce GC frequency by 10x or
>> 100x and if your tail latency is a GC problem the 99%tile latency numbers
>> will become 99.9%tile or 99.99%tile numbers.
>>
>> On Tuesday, December 5, 2017 at 2:39:53 AM UTC-5, Henrik Johansson wrote:
>>
>>> I am watching with childlike fascination...
>>> This is interesting perhaps:
>>>
>>> gc 130 @2834.158s 0%: 0.056+3.4+2.9 ms clock, 0.45+2.8/5.6/0+23 ms cpu,
>>> 8->8->4 MB, 9 MB goal, 8 P
>>> gc 131 @2834.178s 0%: 0.023+7.3+0.12 ms clock, 0.18+1.2/5.4/9.2+1.0 ms
>>> cpu, 9->9->5 MB, 10 MB goal, 8 P
>>>
>>> ---> gc 132 @2836.882s 0%: 3.5+34+8.0 ms clock, 28+1.6/3.8/27+64 ms cpu,
>>> 10->11->4 MB, 11 MB goal, 8 P
>>>
>>> gc 133 @2836.961s 0%: 0.022+14+1.0 ms clock, 0.18+2.1/12/0+8.4 ms cpu,
>>> 8->9->5 MB, 9 MB goal, 8 P
>>> gc 134 @2837.010s 0%: 7.0+18+0.16 ms clock, 56+14/21/1.6+1.2 ms cpu,
>>> 9->10->5 MB, 10 MB goal, 8 P
>>>
>>> 28 + 64 ms SW (if I understand this correctly) to collect what 6-7 MB?
>>>
>>>
>>>
>>> tis 5 dec. 2017 kl 08:25 skrev Dave Cheney :
>>>
>> Oh yeah, I forgot someone added that a while back. That should work.

 On Tue, Dec 5, 2017 at 6:23 PM, Henrik Johansson 
 wrote:
 > So it has to run the program? I thought I saw "logfile" scenario in
 the
 > examples?
 >
 > GODEBUG=gctrace=1 godoc -index -http=:6060 2> stderr.log
 > cat stderr.log | gcvis
 >
 > I have shuffled the Heroku logs into Papertrail so I should be able to
 > extract the log lines from there.
 >
 >
 > tis 5 dec. 2017 kl 08:10 skrev Dave Cheney :
 >>
 >> Probably not for your scenario, gcviz assumes it can run your program
 >> as a child.
 >>

>>> >> On Tue, Dec 5, 2017 at 6:07 PM, Henrik Johansson 
>>>
>>>
 >> wrote:
 >> > I found https://github.com/davecheney/gcvis from +Dave Cheney is
 it a
 >> > good
 >> > choice for inspecting the gc logs?
 >> >
 >> > tis 5 dec. 2017 kl 07:57 skrev Henrik Johansson <
 dahan...@gmail.com>:
 >> >>
 >> >> I have just added the gc tracing and it looks like this more or
 less
 >> >> all
 >> >> the time:
 >> >>
 >> >> gc 78 @253.095s 0%: 0.032+3.3+0.46 ms clock, 0.26+0.24/2.6/2.4+3.6 ms
 >> >> cpu, 11->12->4 MB, 12 MB goal, 8 P
 >> >> gc 79 @253.109s 0%: 0.021+2.1+0.17 ms clock, 0.16+0.19/3.6/1.2+1.3 ms
 >> >> cpu, 9->9->4 MB, 10 MB goal, 8 P
 >> >> gc 80 @253.120s 0%: 0.022+2.8+2.2 ms clock, 0.17+0.27/4.8/0.006+18 ms
 >> >> cpu, 8->8->4 MB, 9 MB goal, 8 P
 >> >> gc 81 @253.138s 0%: 0.019+2.3+0.10 ms clock, 0.15+0.73/3.9/3.1+0.81 ms
 >> >> cpu, 9->9->5 MB, 10 MB goal, 8 P
 >> >>
 >> >> Heroku already reports a SW of 343 ms but I can't find it by
 manual
 >> >> inspection. I will download the logs later today and try to
 generate
 >> >> realistic load.
 >> >> What is the overhead of running like this, aside from the obvious
 extra
 >> >> logging?
 >> >> Are there any automatic tools to analyze these logs?
 >> >>

Re: [go-nuts] Latency spike during GC

2017-05-31 Thread Rick Hudson
gc 347 @6564.164s 0%: 0.89+518+1.0 ms clock, 28+3839/4091/3959+33 ms cpu,
23813->23979->12265 MB, 24423 MB goal, 32 P

What I'm seeing here is that you have 32 HW threads and you spend 0.89+518+1.0,
or about 520 ms wall clock, in the GC. You also spend 28+3839+4091+3959+33, or
11950 ms of CPU time, out of a total of 520*32, or 16640 ms of CPU time
available while the GC is running. The GC will reserve 25% of the CPU to do its
work. That's 16640*.25, or 4160 ms. If the GC finds any of the remaining 24
threads idle it will aggressively enlist them to do GC work.

The graph only shows the 8 HW threads but not the other 24 so it is hard to
tell what is going on with them.

This may well be related to issue 12812, so you might want to read that
thread for more insight.
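The arithmetic above can be replayed mechanically from the fields of the
gctrace line; a small sketch with the numbers hard-coded for illustration:

```go
package main

import "fmt"

func main() {
	// Fields from: gc 347 @6564.164s 0%: 0.89+518+1.0 ms clock,
	//              28+3839/4091/3959+33 ms cpu, ..., 32 P
	clockMS := 0.89 + 518 + 1.0             // wall clock across GC phases
	cpuMS := 28.0 + 3839 + 4091 + 3959 + 33 // CPU time across GC phases
	procs := 32.0

	available := clockMS * procs // total CPU-ms while the GC ran
	reserved := available * 0.25 // the GC's 25% CPU reservation

	fmt.Printf("wall: %.0f ms, GC used %.0f of %.0f CPU-ms, reservation: %.0f ms\n",
		clockMS, cpuMS, available, reserved)
}
```

The gap between the ~11950 ms actually used and the ~4160 ms reservation is
the extra work picked up by assists and idle workers.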

On Wed, May 31, 2017 at 10:25 AM, Xun Liu  wrote:

> $ go version
>
> go version go1.8 linux/amd64
>
> On Wednesday, May 31, 2017 at 7:13:38 AM UTC-7, Ian Lance Taylor wrote:
>>
>> [ +rlh, austin ]
>>
>> Which version of Go are you running?
>>
>> Ian
>>
>> On Tue, May 30, 2017 at 10:01 PM, Xun Liu  wrote:
>>
>>> Hi, we see a clear correlation between GC and latency spike in our Go
>>> server. The server uses fairly large amount of memory (20G) and does mostly
>>> CPU work. The server runs on a beefy box with 32 cores and the load is
>>> pretty light (average CPU 20-30%).  GC kicks in once every 10-20 seconds
>>> and whenever it runs we observe pretty big latency spike ranging from 30%
>>> to 100% across p50-p90 percentiles (e.g. p90 can jump from 100-120ms to
>>> 160-250ms)
>>>
>>> I captured a trace of a gc and noticed the following:
>>>
>>> 1. user gorountines seem run longer during gc. This is through ad-hoc
>>> check. I don't really know how to get stats to confirm this.
>>> The gc log is as following (tiny pauses but is very aggressive in assist
>>> and idle time)
>>> gc 347 @6564.164s 0%: 0.89+518+1.0 ms clock, 28+3839/4091/3959+33 ms
>>> cpu, 23813->23979->12265 MB, 24423 MB goal, 32 P
>>>
>>> 2. during gc, goroutines can queue up. In this particular case there is
>>> a stretch of time (~20ms) where we see many goroutines are GCWaiting. See
>>> below -- the second row is goroutines with light grey indicating GCWaiting
>>> count and light green Runnable.
>>>
>>>
>>> [two trace screenshots omitted: goroutine counts over time, with light
>>> grey indicating GCWaiting and light green Runnable]
>>>
>>>
>>> Any idea what's going on here? What can I do to reduce the spikes?
>>>
>>>
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "golang-nuts" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to golang-nuts...@googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>



Re: [go-nuts] runtime.GC - documentation

2016-11-29 Thread Rick Hudson
That is correct.

On Tuesday, November 29, 2016, Josh Hoak <jrh...@gmail.com> wrote:

> To clarify, the GC method has different behavior than how the background
> garbage collector works. GC calls gcStart with blocking mode, whereas the
> normal background garbage collector is called from proc.go
> <https://golang.org/src/runtime/proc.go> and malloc.go
> <https://golang.org/src/runtime/malloc.go> with non-blocking mode.  As I
> read the code, GC eschews the fancy concurrent behavior of the new garbage
> collector.
>
> On Tue, Nov 29, 2016 at 11:46 AM, Rick Hudson <r...@golang.org> wrote:
>
>> The documentation is correct. The current runtime.GC() implementation
>> invokes a Stop The World (STW) GC that completes before runtime.GC()
>> returns. It is useful when doing benchmarking to avoid some of the
>> non-determinism caused by the GC.
>>
>>
>>
>>
>> On Tue, Nov 29, 2016 at 1:15 PM, Ian Lance Taylor <i...@golang.org> wrote:
>>
>>> [ +rlh, austin ]
>>>
>>> On Tue, Nov 29, 2016 at 7:29 AM, Carlos <uldericofi...@gmail.com> wrote:
>>> > Hi,
>>> >
>>> >
>>> > In https://golang.org/pkg/runtime/#GC it says:
>>> >
>>> >> It may also block the entire program.
>>> >
>>> >
>>> > Is this still correct? I understand that GC still pauses, but being
>>> under
>>> > 100us mark I wonder this affirmative still makes sense.
>>> >
>>> > All in all, if it does block, it will block no longer than 100us.
>>> >
>>> >
>>> >
>>> > - cc
>>> >
>>> > --
>>> > You received this message because you are subscribed to the Google
>>> Groups
>>> > "golang-nuts" group.
>>> > To unsubscribe from this group and stop receiving emails from it, send
>>> > an email to golang-nuts+unsubscr...@googlegroups.com.
>>> > For more options, visit https://groups.google.com/d/optout.
>>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "golang-nuts" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to golang-nuts+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: [go-nuts] runtime.GC - documentation

2016-11-29 Thread Rick Hudson
The documentation is correct. The current runtime.GC() implementation
invokes a Stop The World (STW) GC that completes before runtime.GC()
returns. It is useful when doing benchmarking to avoid some of the
non-determinism caused by the GC.




On Tue, Nov 29, 2016 at 1:15 PM, Ian Lance Taylor  wrote:

> [ +rlh, austin ]
>
> On Tue, Nov 29, 2016 at 7:29 AM, Carlos  wrote:
> > Hi,
> >
> >
> > In https://golang.org/pkg/runtime/#GC it says:
> >
> >> It may also block the entire program.
> >
> >
> > Is this still correct? I understand that GC still pauses, but being under
> > 100us mark I wonder this affirmative still makes sense.
> >
> > All in all, if it does block, it will block no longer than 100us.
> >
> >
> >
> > - cc
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "golang-nuts" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to golang-nuts+unsubscr...@googlegroups.com.
> > For more options, visit https://groups.google.com/d/optout.
>
