junyan-ling commented on code in PR #699: URL: https://github.com/apache/arrow-go/pull/699#discussion_r2914120429
##########
arrow/array/builder_prealloc_bench_test.go:
##########
@@ -0,0 +1,330 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package array_test
+
+import (
+	"testing"
+
+	"github.com/apache/arrow-go/v18/arrow/array"
+	"github.com/apache/arrow-go/v18/arrow/memory"
+)
+
+// BenchmarkBuilder_AppendOne_Int64 measures baseline single-append performance.
+func BenchmarkBuilder_AppendOne_Int64(b *testing.B) {
+	mem := memory.NewGoAllocator()
+	builder := array.NewInt64Builder(mem)
+	defer builder.Release()
+
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		builder.Append(int64(i))
+	}
+}
+
+// BenchmarkBuilder_AppendBulk_Int64 measures the bulk AppendValues method.
+func BenchmarkBuilder_AppendBulk_Int64(b *testing.B) {
+	mem := memory.NewGoAllocator()
+	builder := array.NewInt64Builder(mem)
+	defer builder.Release()
+
+	// Prepare data
+	const batchSize = 1000
+	data := make([]int64, batchSize)
+	for i := range data {
+		data[i] = int64(i)
+	}
+
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		builder.AppendValues(data, nil)
+	}
+}
+
+// BenchmarkBuilder_PreReserved_Int64 measures appends after a manual Reserve().
+func BenchmarkBuilder_PreReserved_Int64(b *testing.B) {
+	mem := memory.NewGoAllocator()
+
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		b.StopTimer()
+		builder := array.NewInt64Builder(mem)
+		builder.Reserve(1000)

Review Comment:
   I actually tried a similar pre-allocation approach and ran end-to-end production benchmarks for Parquet reads. Interestingly, there was no measurable change in throughput. After digging into it, I realized the Parquet reader's hot path goes through single `Append(val)` calls in `ReadValuesDense`, so `AppendValues` / `AppendStringValues` never gets hit during Parquet reads.
   
   Looking at the actual callers of `BinaryBuilder.AppendValues` in the codebase, the main ones appear to be `scalar/parse.go` and `arrjson/arrjson.go`. Both are relatively low-volume paths. So the benchmark scenario (bulk-appending a large pre-built slice) may not have many real-world counterparts in the current codebase, though external users building Arrow arrays from in-memory slices could definitely benefit.
   
   Not a blocker at all, just something worth calling out so expectations around the performance impact are well-calibrated. It might also be worth trimming the benchmarks a bit to focus on the paths that directly exercise the change.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
