dimakuz opened a new pull request, #717:
URL: https://github.com/apache/arrow-go/pull/717

   The zstdEncoderPool is used exclusively by EncodeAll(), which is a 
single-shot synchronous call that uses exactly one inner block encoder. 
However, zstd.NewWriter defaults its concurrency to runtime.GOMAXPROCS(0), 
pre-allocating that many inner block encoders, each with its own ~1 MiB 
history buffer (ensureHist). On a 10-core machine, every pooled Encoder 
therefore allocates 10 inner encoders when EncodeAll only ever uses 1.
   
   With WithEncoderConcurrency(1), each pooled encoder creates a single inner 
encoder, matching actual usage. The streaming Write/Close path is unaffected, 
since it does not use the pool.
   
   Benchmark results (Apple M4 Pro, arm64, 256 KiB semi-random data):
   
       BenchmarkZstdPooledEncodeAll/Default-14        11000 B/op   5250 MB/s
       BenchmarkZstdPooledEncodeAll/Concurrency1-14     810 B/op   5500 MB/s
   
   ~14x less memory per operation and ~5% higher throughput, driven by 
reduced GC pressure.
   
   In a parquet write workload (1 GiB Arrow data, ZSTD level 3), this reduced 
ensureHist allocations from 22 GiB to 7 GiB and madvise kernel CPU from 4.6s to 
2.3s (10% wall-time improvement).
   
   ### Rationale for this change
   
   High memory churn during parquet encoding
   
   ### What changes are included in this PR?
   
   A change to the zstd encoder concurrency, plus a benchmark to reproduce 
the results.
   
   ### Are these changes tested?
   
   Yes
   
   ### Are there any user-facing changes?
   
   No


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
