fishy commented on a change in pull request #2497:
URL: https://github.com/apache/thrift/pull/2497#discussion_r779887382
##########
File path: lib/go/thrift/simple_server.go
##########
@@ -193,7 +195,13 @@ func (p *TSimpleServer) innerAccept() (int32, error) {
 	}
 	if client != nil {
 		p.wg.Add(1)
+		p.clients[client] = struct{}{}
 		go func() {
+			defer func() {
+				p.mu.Lock()
+				defer p.mu.Unlock()
+				delete(p.clients, client)
Review comment:
> another potential issue with the approach: closing the socket in one goroutine while another goroutine is reading from/writing to the socket might cause race conditions?
From the doc of `net.Conn`:
```go
// Close closes the connection.
// Any blocked Read or Write operations will be unblocked and return errors.
Close() error
```
So I guess this is fine.
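To double check that behavior, here's a tiny standalone sketch (not part of the PR, and it uses `net.Pipe` as a stand-in for a real socket) showing a `Close` from one goroutine unblocking a `Read` in another:
```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	server, client := net.Pipe()
	defer client.Close()

	done := make(chan error, 1)
	go func() {
		buf := make([]byte, 1)
		_, err := server.Read(buf) // blocks: nothing is ever written to this end
		done <- err
	}()

	time.Sleep(100 * time.Millisecond) // give the Read time to block
	server.Close()                     // unblocks the Read in the other goroutine

	fmt.Println("Read returned:", <-done) // prints an error such as io.ErrClosedPipe
}
```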
But in that case we don't even need a map? We can just add one additional goroutine per client connection, basically:
1. In `TSimpleServer.Serve` we create a `context.WithCancel` and store both the ctx and the cancel function in `TSimpleServer` (steps 1 and 2 are sketched at the end of this comment).
2. Cancel that ctx in `TSimpleServer.Close`.
3. In every client accept goroutine, start another goroutine that waits on the server ctx and closes the connection when it's cancelled, basically changing
https://github.com/apache/thrift/blob/999e6e3bce217acb35b44440fd656cf169d47ed8/lib/go/thrift/simple_server.go#L196-L201
into:
```go
go func() {
	defer p.wg.Done()
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	go func() {
		select {
		case <-ctx.Done():
			// client exited, do nothing
		case <-p.ctx.Done():
			// TSimpleServer.Close called, close the client connection
			client.Close()
		}
	}()
	if err := p.processRequests(client); err != nil {
		p.logger(fmt.Sprintf("error processing request: %v", err))
	}
}()
```
Basically we trade a (kinda unbounded) map for one additional goroutine per client connection.
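For completeness, a rough sketch of what steps 1 and 2 might look like; this is just an illustration, and the `ctx`/`cancel` fields and the exact method bodies are placeholders rather than the PR's actual code (only `p.ctx` is implied by the snippet above):
```go
package thrift

import "context"

// Hypothetical sketch: fields added to TSimpleServer (names are placeholders).
type TSimpleServer struct {
	// ... existing fields would go here ...
	ctx    context.Context
	cancel context.CancelFunc
}

// Step 1: create the server-lifetime ctx when serving starts.
func (p *TSimpleServer) Serve() error {
	p.ctx, p.cancel = context.WithCancel(context.Background())
	// ... existing listen/accept loop would go here ...
	return nil
}

// Step 2: cancel the ctx on shutdown; every per-client watcher goroutine
// from the snippet above is then unblocked and closes its client connection.
func (p *TSimpleServer) Close() error {
	p.cancel()
	// ... existing shutdown logic would go here ...
	return nil
}
```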