Those nanosleep calls are at 10ms intervals - i.e. 100 calls per second.  I 
think this is normal behaviour of the Go scheduler, which signals itself 
every 10ms to force a preemptive context switch.  That feature was 
introduced in Go 1.14, at which point you were able to disable it at 
runtime by setting the environment variable GODEBUG=asyncpreemptoff=1 (I 
haven't tested whether you can still do this).
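If you want to rule that in or out, the runtime reads this setting from the 
environment once at startup, so it has to be set before the program 
launches.  A minimal sketch ("./server" below is just a placeholder for 
your binary):

```shell
# Asynchronous preemption is controlled by GODEBUG, which the Go runtime
# reads once at process startup - it must be in the environment before
# the program launches.
export GODEBUG=asyncpreemptoff=1
echo "$GODEBUG"            # asyncpreemptoff=1

# Then start the server from this shell ("./server" is a placeholder):
# ./server
```

If the nanosleep calls disappear with that set, sysmon preemption is the 
source of them.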

There were some problems when this feature was first introduced (see this 
<https://github.com/kubernetes/kubernetes/issues/92521> and this 
<https://github.com/cockroachdb/cockroach/pull/52348>) but improvements 
were made in 1.15 and 1.16.  I don't see anything in the 1.17 release notes 
<https://go.dev/doc/go1.17> which would affect this.

But in any case:
1. That won't cause CPU to go to 100%.
2. I can't think why there should be any difference between "go run" and 
"go build" followed by running the resulting executable.
3. I can't think why changing HTTP timeouts from 30 seconds to 1 second 
should make any difference.

Point (2) is going to make the problem hard to reproduce for other people.

To start with, I think you need to be more specific about:
- exactly which versions of Go you're comparing ("go version"), and where 
you got them from (e.g. directly from the Go downloads page or from OS 
vendor packages)
- the exact Linux kernel you're running (uname -a)
- the exact version and distro of Linux you're using
- your CPU type (grep "model" /proc/cpuinfo | sort | uniq -c)
- anything that's potentially different about your environment - e.g. are 
you running Linux under an emulation layer like WSL (Windows)?
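All of those can be gathered with a few one-liners (a sketch; 
/etc/os-release is standard on most distros but the path may vary on 
yours):

```shell
# Collect the environment details for the report.
go version                                   # exact Go toolchain version
uname -a                                     # exact kernel and architecture
cat /etc/os-release                          # distro name and version
grep "model" /proc/cpuinfo | sort | uniq -c  # CPU type and core count
```

Paste the output of those into your reply and it will be much easier for 
someone to try to reproduce this.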

On Sunday, 10 April 2022 at 23:02:40 UTC+1 scorr...@gmail.com wrote:

> Hi,
>
> On Linux, if I compile and run this simple program:
>
> package main
>
> import (
>     "fmt"
>     "net/http"
>     "time"
> )
>
> type x struct{}
>
> func (x) ServeHTTP(w http.ResponseWriter, r *http.Request) {
>     w.Write([]byte("aaaa"))
> }
>
> func main() {
>     sp := &http.Server{
>         ReadHeaderTimeout: 5 * time.Second,
>         ReadTimeout:       5 * time.Second,
>         WriteTimeout:      5 * time.Second,
>         IdleTimeout:       30 * time.Second,
>         Addr:              ":8080",
>         Handler:           x{},
>     }
>
>     fmt.Println(sp.ListenAndServe())
> }
>
> After a while I start to see constant CPU activity (1% to 7%) even when 
> the server is idle.
>
> If I run "sudo strace -f -c -p <pid>" I get constant very fast calls like:
>
> [pid 131277] 23:42:16 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
> [pid 131277] 23:42:16 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
> [pid 131277] 23:42:16 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
> [pid 131277] 23:42:16 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
> [pid 131277] 23:42:16 nanosleep({tv_sec=0, tv_nsec=10000000}, NULL) = 0
> ....
>
> It never stops.  Sometimes the CPU goes to 100% for a while and then goes 
> back down to 1%.  This happens on only one or two cores.
>
> This doesn't happen with Go 1.16.  I have checked all versions, and it 
> starts in Go 1.17.
>
> If I run the program with go run instead of compiling and running the 
> binary, I can't reproduce it.
>
> Sometimes it starts after a couple of HTTP calls, and sometimes I do a few 
> "wrk -d2 http://localhost:8080" calls to trigger it (wrk is a web stress 
> tool).  But as soon as it starts making the nanosleep calls, it doesn't 
> stop, no matter how long the server is idle.
>
> If I remove the timeout values or set them to a short period, like one 
> second, I am also not able to reproduce it.  But with longer values it 
> happens very quickly.  I am surprised not to have seen it reported.
>
> Do you know what can be going on?
>
> Thank you!
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/7b4b7f21-8587-4936-bc29-d5cc58206d18n%40googlegroups.com.
