I tried with a more complex application and have already hit constant 100% CPU usage twice.
When I attach strace I see the same as with the minimal demo, but at much higher frequency:

[pid 42659] 23:36:49.035120 epoll_pwait(3, [], 128, 0, NULL, 140341803388040) = 0
[pid 42659] 23:36:49.035133 epoll_pwait(3, [], 128, 0, NULL, 18704162493558) = 0
[pid 42659] 23:36:49.035145 epoll_pwait(3, [], 128, 0, NULL, 140341803388040) = 0
[pid 42659] 23:36:49.035158 epoll_pwait(3, [], 128, 0, NULL, 18704162493558) = 0
[pid 42659] 23:36:49.035170 epoll_pwait(3, [], 128, 0, NULL, 140341803388040) = 0
[pid 42659] 23:36:49.035182 epoll_pwait(3, [], 128, 0, NULL, 18704162493558) = 0
[pid 42659] 23:36:49.035195 epoll_pwait(3, [], 128, 0, NULL, 140341803388040) = 0
[pid 42659] 23:36:49.035207 epoll_pwait(3, [], 128, 0, NULL, 18704162493558) = 0
[pid 42659] 23:36:49.035219 epoll_pwait(3, [], 128, 0, NULL, 140341803388040) = 0
[pid 42659] 23:36:49.035232 epoll_pwait(3, [], 128, 0, NULL, 18704162493558) = 0
[pid 42659] 23:36:49.035244 epoll_pwait(3, [], 128, 0, NULL, 140341803388040) = 0
[pid 42659] 23:36:49.035257 epoll_pwait(3, [], 128, 0, NULL, 18704162493558) = 0
[pid 42659] 23:36:49.035269 epoll_pwait(3, [], 128, 0, NULL, 140341803388040) = 0
[pid 42659] 23:36:49.035281 epoll_pwait(3, [], 128, 0, NULL, 18704162493558) = 0
[pid 42659] 23:36:49.035293 epoll_pwait(3, [], 128, 0, NULL, 140341803388040) = 0
[pid 42659] 23:36:49.035306 epoll_pwait(3, [], 128, 0, NULL, 18704162493558) = 0

The program is too large to post here. I have tested it with the race detector (-race) and it doesn't report any problems. Any idea of what to look for? Thank you!

On Monday, April 11, 2022 at 17:08:40 UTC+2, Santiago Corredoira wrote:

> I tried with the stripped tmp version from "go run" and after a while I was
> able to reproduce it too. It definitely took me more time, but that is
> probably just random.
>
> I tried "$ GODEBUG=asyncpreemptoff=1 ./test" and I kept seeing the
> calls every 10ms. Is this expected to happen?
>
> I compiled the test with "$ go1.16 build test.go" and I tried hard to
> reproduce it, but I couldn't. It is very strange, because this behaviour was
> introduced in Go 1.14 and I only see it in Go 1.17. Also, asyncpreemptoff=1
> seems to make no difference.
>
> I am going to work with Go 1.18 by default again to see if I get the 100%
> CPU usage.
>
> On Monday, April 11, 2022 at 13:28:48 UTC+2, Brian Candler wrote:
>
>> On Monday, 11 April 2022 at 09:26:28 UTC+1 scorr...@gmail.com wrote:
>>
>>> and the output keeps growing like that with every new curl request until
>>> eventually it starts constantly printing output like this:
>>>
>>>> [pid 143818] 10:02:22.697120 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.707609 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.718125 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.728753 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.739195 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.749618 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.760160 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.770669 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.781137 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.791581 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.802060 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.812540 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.822981 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>> [pid 143818] 10:02:22.833468 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
>>>
>> That is normal, I believe, for a process with multiple runnable
>> goroutines - one alarm/wakeup call every 10ms.
>>
>> What is odd is the 100% CPU utilisation. Can you attach an strace when
>> that happens?
>>
>> As for the binary produced by "go run" - use "ps" to see where it is.
>> For example:
>>
>> ==> sleepy.go <==
>> package main
>>
>> import (
>>     "sync"
>>     "time"
>> )
>>
>> func main() {
>>     var wg sync.WaitGroup
>>     wg.Add(1)
>>     go func() {
>>         time.Sleep(30 * time.Second)
>>         wg.Done()
>>     }()
>>     wg.Wait()
>> }
>> ==> end <==
>>
>> $ go run sleepy.go
>>
>> ... in another window ...
>> $ ps auxwww | grep sleepy
>> ubuntu 17743 0.0 0.1 1237424 12428 pts/1 Sl+ 11:23 0:00 go run sleepy.go
>> ubuntu 17782 0.0 0.0 702468 1052 pts/1 Sl+ 11:23 0:00 /tmp/go-build2834561198/b001/exe/sleepy
>>
>> You can 'cp' it somewhere to keep it safe.
>>
>> If I compare this with the output of "go build sleepy.go", I see that the
>> "go build" version is larger and unstripped:
>>
>> $ cp /tmp/go-build2834561198/b001/exe/sleepy sleepy-run
>> $ file sleepy-run
>> sleepy-run: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
>> $ go build sleepy.go
>> $ file sleepy
>> sleepy: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
>>
>> Then "strip sleepy" gives me a binary which is almost, but not quite, the
>> same size as the one from "go run". So that gives you another way to try
>> to reproduce the problem - although if the difference is just between
>> stripped and unstripped, then it's most likely some sort of edge timing
>> case.
>

-- 
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/golang-nuts/32850774-d5b1-45ad-af4d-134c2df519bdn%40googlegroups.com.