[go-nuts] Re: Persistent memory support for Go

2019-04-06 Thread rlh
Out of curiosity, what HW/OS is this being developed on? I need new HW and might as well get the same, since it will make playing around with this smoother. On Wednesday, April 3, 2019 at 6:35:13 PM UTC-4, Jerrin Shaji George wrote: > > Hi, > > > > I am part of a small team at VMware working on

Re: [go-nuts] Memory limits

2018-11-21 Thread rlh
If this is important and wasn't fixed in 1.11 or at tip then please file a bug report with a reproducer at https://github.com/golang/go/issues. An issue number will result in the Go team investigating. On Friday, November 16, 2018 at 1:12:58 PM UTC-5, Robert Engels wrote: > > This article >

Re: [go-nuts] ROC (Request-Oriented Collector)

2018-05-15 Thread rlh via golang-nuts
The current plan is to polish and publish our learnings by the end of next month (June 2018). On Sunday, May 13, 2018 at 11:48:45 PM UTC-4, Ian Lance Taylor wrote: > > [ +rlh, austin] > > On Sun, May 13, 2018 at 11:24 AM, Tanya Borisova <tany...@gmail.com >

Re: [go-nuts] Re: GC SW times on Heroku (Beta metrics)

2017-12-05 Thread rlh
> > cpu, 9->9->5 MB, 10 MB goal, 8 P > > Heroku already reports a SW of 343 ms but I can't find it by manual inspection. I will download the logs later today and try to generate

[go-nuts] Re: GC SW times on Heroku (Beta metrics)

2017-12-02 Thread rlh via golang-nuts
Hard to tell what is going on. 35MB, even for 1 CPU, seems very small. Most modern systems provision more than 1GB per HW thread, though I've seen some provision as little as 512MB. GOGC (SetGCPercent) can be adjusted so that the application uses more of the available RAM. Running with
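
A minimal sketch of the SetGCPercent knob mentioned above, assuming the application can afford a larger heap; the value 400 is only illustrative:

package main

import (
    "fmt"
    "runtime/debug"
)

func main() {
    // Equivalent to running with GOGC=400: allow the heap to grow to
    // roughly 5x the live data before the next collection, trading RAM
    // for fewer GC cycles.
    old := debug.SetGCPercent(400)
    fmt.Println("previous GOGC value:", old)
}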

[go-nuts] Re: Running Go binary on a 56 core VM

2017-09-03 Thread rlh via golang-nuts
Without building and measuring it is impossible to know which of these approaches, or even a third one where you simply run a single instance, is best for your application. Each approach has upsides and downsides. The GC believes GOMAXPROCS and will use as much CPU as it believes is

[go-nuts] Re: 10x latency spikes during GC alloc assist phase

2017-07-26 Thread rlh
I would add to issue 14812. The report should include the environment variables, HW, and RAM. The report should indicate if any environment variables are not set to the

Re: [go-nuts] Latency spike during GC

2017-06-12 Thread rlh via golang-nuts
> > ... regardless if they are idle. Thanks, looking forward to seeing the percentile numbers and graphs.

Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread rlh via golang-nuts
The Johnstone / Wilson paper "The memory fragmentation problem: solved?" [1] is the original source. Modern malloc systems, including Google's TCMalloc, Hoard [2], and Intel's Scalable Malloc (aka McRT Malloc [3]), all owe much to that paper, and along with other memory managers they all segregate
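
A toy Go sketch of the segregated size-class idea those allocators share; the class table below is made up for illustration and is not the Go runtime's (or TCMalloc's) real table:

package main

import "fmt"

// A toy set of size classes; real allocators use much finer-grained,
// carefully tuned tables.
var sizeClasses = []int{8, 16, 32, 48, 64, 96, 128, 256, 512, 1024}

// sizeClassFor rounds a request up to the smallest class that fits, so
// objects of similar size share the same spans/pages and external
// fragmentation stays low.
func sizeClassFor(n int) int {
    for _, c := range sizeClasses {
        if n <= c {
            return c
        }
    }
    return n // "large" object: handled outside the size classes
}

func main() {
    for _, n := range []int{1, 24, 100, 500, 4096} {
        fmt.Printf("request %4d B -> class %4d B\n", n, sizeClassFor(n))
    }
}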

[go-nuts] Re: Large GC pauses with large map

2017-04-21 Thread rlh
Lee, as far as I can tell this is resolved. Thanks for the discussion and for working with StackImpact to fix the root cause. On Friday, April 21, 2017 at 3:52:55 PM UTC-4, Keith Randall wrote: > > It is almost never a good idea to call runtime.GC explicitly. > It does block until a garbage
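
A small sketch that makes the quoted advice concrete: runtime.GC blocks the calling goroutine until the collection finishes, so its cost is easy to observe directly (the allocation sizes and timings are illustrative):

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    // Create some garbage so the forced collection has work to do.
    junk := make([][]byte, 1024)
    for i := range junk {
        junk[i] = make([]byte, 64<<10)
    }
    junk = nil // drop the references so the GC can reclaim them

    start := time.Now()
    runtime.GC() // blocks until a full collection completes
    fmt.Println("forced GC blocked for", time.Since(start))
}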

[go-nuts] Re: Large GC pauses with large map

2017-04-21 Thread rlh
How did you generate the GC pause graphs? Could you also provide the output from "GODEBUG=gctrace=1 yourApp"? It would help confirm that it is a GC pause problem. Also, some insight into the number of cores / HW threads and the value of GOMAXPROCS could eliminate some possibilities. A reproducer
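
For reference, a minimal way to gather what is being asked for; the shell redirection and log file name are just one possible setup (gctrace output goes to stderr):

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Collect the GC trace from the real application with something like:
    //   GODEBUG=gctrace=1 ./yourApp 2> gctrace.log
    // and report these two values alongside it.
    fmt.Println("NumCPU (cores / HW threads):", runtime.NumCPU())
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}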

Re: [go-nuts] Re: too many runtime.gcBgMarkStartWorkers ?

2017-01-01 Thread rlh
Will making tight loops preemptable (CL 10958) resolve this use case? Is it true that some of the goroutines are compute bound while others have to respond to some sort of stimulus? What response times do these goroutines require? 1ms, 10ms, 100ms,
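
For context, a sketch of the kind of tight, call-free loop that CL 10958 addresses; in Go releases of that era such a loop could not be preempted, so a GC waiting to stop the world had to wait for it to finish (the loop body is arbitrary):

package main

import "fmt"

// spin is a compute-bound loop with no function calls or allocations,
// hence no preemption points before loop preemption landed.
func spin(n int) uint64 {
    var x uint64 = 1
    for i := 0; i < n; i++ {
        x = x*6364136223846793005 + 1442695040888963407
    }
    return x
}

func main() {
    fmt.Println(spin(1 << 30))
}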

[go-nuts] Re: too many runtime.gcBgMarkStartWorkers ?

2016-12-30 Thread rlh
The default is GOMAXPROCS == numCPU and the runtime is optimized and tested for this. There are use cases involving co-tenancy where setting GOMAXPROCS < numCPU limits the HW threads the program will use and improves overall throughput when several programs are running concurrently. Setting
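
A minimal sketch of lowering it for a co-tenant machine; the value 4 is purely illustrative:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Equivalent to exporting GOMAXPROCS=4 in the environment before
    // starting the program: the scheduler (and the GC, which scales with
    // GOMAXPROCS) will use at most 4 HW threads for running Go code.
    prev := runtime.GOMAXPROCS(4)
    fmt.Println("previous GOMAXPROCS:", prev, "of", runtime.NumCPU(), "CPUs")
}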

[go-nuts] Re: Strange, intermittent panic issues

2016-12-06 Thread rlh
0xb01dfacedebac1e is a poison pill that usually indicates misuse of unsafe.Pointer. If there is any use of unsafe.Pointer or cgo in the program, that would be a good place to start looking. You can google "0xb01dfacedebac1e" for more details. On Tuesday, December 6, 2016 at 5:00:37 PM
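
A deliberately broken sketch of the kind of unsafe.Pointer misuse meant here; do not copy this pattern, it exists only to show what to look for:

package main

import (
    "fmt"
    "runtime"
    "unsafe"
)

func main() {
    v := new(int)
    *v = 42

    // WRONG: a uintptr is just a number, not a reference. Once the only
    // reference to v is this integer, the GC is free to reclaim (or, in
    // principle, move) the object.
    addr := uintptr(unsafe.Pointer(v))

    runtime.GC() // nothing keeps v alive across this point

    p := (*int)(unsafe.Pointer(addr)) // converting back is undefined behavior
    fmt.Println(*p)                   // may print 42, garbage, or crash
}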

[go-nuts] Re: Is ROC part of 1.8?

2016-10-28 Thread rlh
We are still actively working on ROC. Unfortunately it will not be part of 1.8. On Thursday, October 27, 2016 at 1:17:44 PM UTC-4, Chandra Sekar S wrote: > > Is the request-oriented GC slated to be included in the 1.8 release? > > -- > Chandra Sekar.S > -- You received this message because

[go-nuts] Re: Why does a "concurrent" Go GC phase appear to be stop-the-world?

2016-10-19 Thread rlh
This is likely 23540. On Wednesday, October 19, 2016 at 8:32:18 AM UTC-4, Will Sewell wrote: > > Hey, I previously posted this on StackOverflow, but I was told this > mailing list would be a better forum for discussion. > > I am attempting to

Re: [go-nuts] Excessive garbage collection

2016-10-18 Thread rlh
From the trace (4->4->0) it looks like the app is allocating about 4MB every 10ms. The app also has little (0 rounded) reachable data, sometimes called heap ballast. Since there is little ballast the GC is attempting to keep the heap from growing beyond 5MB. The GC is using about 2% of the CPU
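
Not part of the thread, but a sketch of the usual workaround when the live heap ("ballast") is tiny: keep a large, never-touched allocation reachable so the GC's heap goal rises and collections become far less frequent (the 256 MB figure is arbitrary; GOGC is often the better knob):

package main

var ballast []byte // reachable for the life of the program

func main() {
    // With GOGC=100 the GC aims for roughly live heap * 2, so adding
    // 256 MB of live (but never written) ballast raises the goal to
    // ~512 MB instead of a few MB. Because the bytes are never touched,
    // most OSes never commit the pages.
    ballast = make([]byte, 256<<20)

    // ... real application work ...
}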

[go-nuts] Re: GC direct bitmap allocation and Intel Cache Allocation Technology

2016-08-01 Thread rlh
Thanks for pointing this out. While it isn't clear how this is applicable to the sweep-free alloc work, it does seem relevant to the mark worker's heap tracing, which can charitably be described as a cache-smashing machine. The mark worker loads an object from a random location in memory,

[go-nuts] Re: How Go GC perform with 128GB+ heap

2016-08-01 Thread rlh
I think the high bit here is that the Go community is very aggressive about GC latency. Go has large users with large heaps, lots of goroutines, and SLOs similar to those being discussed here. When they run into GC related latency problems the Go team works with them to root cause and address