CurtHagenlocher commented on code in PR #3654: URL: https://github.com/apache/arrow-adbc/pull/3654#discussion_r2500816828
##########
csharp/src/Drivers/Databricks/CustomLZ4FrameReader.cs:
##########
@@ -0,0 +1,83 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+*    http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+using System;
+using System.Buffers;
+using System.IO;
+using K4os.Compression.LZ4.Encoders;
+using K4os.Compression.LZ4.Streams;
+using K4os.Compression.LZ4.Streams.Frames;
+using K4os.Compression.LZ4.Streams.Internal;
+
+namespace Apache.Arrow.Adbc.Drivers.Databricks
+{
+    /// <summary>
+    /// Custom LZ4 frame reader that uses a custom ArrayPool to support pooling of 4MB+ buffers.
+    /// This solves the issue where Databricks LZ4 frames declare maxBlockSize=4MB but .NET's
+    /// default ArrayPool only pools buffers up to 1MB, causing 900MB of fresh LOH allocations.
+    /// </summary>
+    internal sealed class CustomLZ4FrameReader : StreamLZ4FrameReader
+    {
+        /// <summary>
+        /// Custom ArrayPool that supports pooling of buffers up to 4MB.
+        /// This allows the 4MB buffers required by Databricks LZ4 frames to be pooled and reused.
+        /// maxArraysPerBucket=10 means we keep up to 10 buffers of each size in the pool.
+        /// </summary>
+        private static readonly ArrayPool<byte> LargeBufferPool =

Review Comment:
   I'm going to echo my comment on the other PR and point out that this will keep the memory allocated for the remainder of the process lifetime. It's not quite as bad as `RecyclableMemoryStream` because it has a default per-bucket limit and will discard memory past that point once it's freed, but with ten arrays per bucket and a maximum bucket size of 40MB, it could (in the worst case) still end up being over 70MB. Assuming that testing doesn't show a problem, I'd recommend a similar strategy here as there -- which is to associate the pool with either the driver, the database, or the connection so that it will eventually get freed.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
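A minimal sketch of the reviewer's suggestion, assuming the pool is owned by a disposable object such as the connection rather than held in a `static readonly` field. The `PooledBufferOwner` type and its members are illustrative, not the actual driver API; the point is that dropping the pool reference on `Dispose` lets the GC eventually reclaim the pooled 4MB buffers:

```csharp
using System;
using System.Buffers;

// Hypothetical sketch: tie the large-buffer pool to an owner's lifetime
// (driver, database, or connection) instead of a process-wide static.
internal sealed class PooledBufferOwner : IDisposable
{
    // ArrayPool<T>.Create caps each power-of-two bucket at maxArraysPerBucket
    // arrays, so the worst-case retained size while this owner is alive is
    // roughly 10 * (16 B + 32 B + ... + 4 MB), on the order of 80 MB.
    private ArrayPool<byte>? _largeBufferPool =
        ArrayPool<byte>.Create(maxArrayLength: 4 * 1024 * 1024, maxArraysPerBucket: 10);

    public ArrayPool<byte> BufferPool =>
        _largeBufferPool ?? throw new ObjectDisposedException(nameof(PooledBufferOwner));

    public void Dispose()
    {
        // Dropping the reference makes the pooled buffers collectible;
        // a static pool would keep them for the remainder of the process.
        _largeBufferPool = null;
    }
}
```

Readers created for this owner would rent from `owner.BufferPool` instead of a static field; once the owner is disposed, no live reference keeps the 4MB arrays rooted.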
