zeroshade commented on code in PR #828:
URL: https://github.com/apache/arrow-adbc/pull/828#discussion_r1237185546


##########
go/adbc/driver/snowflake/record_reader.go:
##########
@@ -297,39 +298,41 @@ func newRecordReader(ctx context.Context, alloc memory.Allocator, ld gosnowflake
        }
 
        lastChannelIndex := len(chs) - 1
-       for i, b := range batches[1:] {
-               batch, batchIdx := b, i+1
-               chs[batchIdx] = make(chan arrow.Record, bufferSize)
-               group.Go(func() error {
-                       // close channels (except the last) so that Next can move on to the next channel properly
-                       if batchIdx != lastChannelIndex {
-                               defer close(chs[batchIdx])
-                       }
-
-                       rdr, err := batch.GetStream(ctx)
-                       if err != nil {
-                               return err
-                       }
-                       defer rdr.Close()
+       go func() {

Review Comment:
   You're right, looking at it now we do need to add the close for `ch` in flightsql.
   
   The shift to put the loop into a goroutine here is preparation for adding a `SetLimit` so users can control how many goroutines get created. Right now we create one goroutine for *every* batch returned by Snowflake, and I hadn't realized just how many batches that could be: it could end up being hundreds of goroutines or more. In theory that's not a huge deal, since goroutines are fairly lightweight and this is more I/O bound than CPU bound, but it would still be good to let a user put a limit on the parallelism.
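   
   As an aside, here is a minimal standalone sketch of the kind of usage this is preparing for, using `errgroup.Group.SetLimit` from golang.org/x/sync/errgroup (presumably what's meant here; the names, the limit value, and the toy workload are illustrative, not the actual driver code). Because `Go` blocks once the limit is reached, the loop that launches the workers has to run on its own goroutine so the caller can keep draining results, which is exactly why the loop moves into `go func() { ... }()` in the diff above.
   
   ```go
   package main
   
   import (
       "fmt"
   
       "golang.org/x/sync/errgroup"
   )
   
   func main() {
       var group errgroup.Group
       group.SetLimit(4) // hypothetical user-configurable cap on concurrent workers
   
       results := make([]int, 100)
   
       // With a limit set, group.Go blocks until a slot frees up, so the launch
       // loop runs in its own goroutine and the caller stays free to consume
       // results while later "batches" are still being scheduled.
       launched := make(chan struct{})
       go func() {
           defer close(launched)
           for i := range results {
               i := i // capture the loop variable (needed before Go 1.22)
               group.Go(func() error {
                   results[i] = i * i // stand-in for fetching/decoding one batch
                   return nil
               })
           }
       }()
   
       <-launched // all workers have been scheduled
       if err := group.Wait(); err != nil {
           fmt.Println("error:", err)
       }
       fmt.Println(results[:5])
   }
   ```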
   
   I'll update this PR for the FlightSQL impl to add the proper close for the channel.
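   
   For readers skimming, a tiny standalone sketch of why that close matters (illustrative names only, not the flightsql driver's actual code): each per-batch channel is fed by its own goroutine, and the consumer can only move past a channel once that channel has been closed and drained.
   
   ```go
   package main
   
   import "fmt"
   
   func main() {
       // Toy version of the per-batch channel layout: one channel per "batch",
       // each fed by its own goroutine.
       chs := make([]chan int, 3)
       for i := range chs {
           chs[i] = make(chan int, 2)
           i := i // capture the loop variable (needed before Go 1.22)
           go func() {
               // Closing the channel is what lets the consumer detect that
               // this batch is exhausted and advance to the next channel;
               // forgetting it leaves the consumer blocked forever.
               defer close(chs[i])
               for v := 0; v < 2; v++ {
                   chs[i] <- i*10 + v
               }
           }()
       }
   
       // Next()-style consumer: drain the current channel, then move on once
       // it is closed and empty.
       for cur := 0; cur < len(chs); cur++ {
           for v := range chs[cur] {
               fmt.Println(v)
           }
       }
   }
   ```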


