eric-wang-1990 opened a new pull request, #3649:
URL: https://github.com/apache/arrow-adbc/pull/3649

   ## Summary
   This PR consolidates LZ4 decompression code paths and ensures proper 
resource cleanup across both CloudFetch and non-CloudFetch readers in the 
Databricks C# driver.
   
   ## Changes
   - **Lz4Utilities.cs**
     - Add configurable buffer size parameter to `DecompressLz4()`
     - Add async variant `DecompressLz4Async()` for the CloudFetch pipeline (see the sketch after this list)
     - Add proper `using` statements for MemoryStream disposal
     - Add default buffer size constant (80KB)
   
   - **CloudFetchDownloader.cs**
     - Update to use shared `Lz4Utilities.DecompressLz4Async()`
     - Reduce code duplication (~12 lines consolidated)
     - Improve telemetry with compression ratio calculation
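
   For illustration, a minimal sketch of what the consolidated utility could
   look like, assuming the K4os.Compression.LZ4 package; the method names
   follow this description, but the bodies are approximations, not the merged
   code:

   ```csharp
   // Sketch only: names follow the PR description, bodies are assumptions.
   using System;
   using System.IO;
   using System.Threading;
   using System.Threading.Tasks;
   using K4os.Compression.LZ4.Streams; // assumed LZ4 dependency

   internal static class Lz4Utilities
   {
       // 80KB (81920 bytes), the same default Stream.CopyTo uses internally.
       internal const int DefaultBufferSize = 81920;

       // Synchronous path used by the non-CloudFetch reader.
       internal static byte[] DecompressLz4(byte[] compressed, int bufferSize = DefaultBufferSize)
       {
           using var input = new MemoryStream(compressed);
           using var decoder = LZ4Stream.Decode(input);
           using var output = new MemoryStream();
           decoder.CopyTo(output, bufferSize);
           return output.ToArray();
       }

       // Async variant for the CloudFetch pipeline. Returns the backing buffer
       // plus its valid length so the caller can wrap it without another copy.
       internal static async Task<(byte[] Buffer, int Length)> DecompressLz4Async(
           byte[] compressed,
           int bufferSize = DefaultBufferSize,
           CancellationToken cancellationToken = default)
       {
           using var input = new MemoryStream(compressed);
           using var decoder = LZ4Stream.Decode(input);
           using var output = new MemoryStream();
           await decoder.CopyToAsync(output, bufferSize, cancellationToken).ConfigureAwait(false);
           // GetBuffer() is safe here: MemoryStream.Dispose does not free the
           // managed array, and the returned reference keeps it alive.
           return (output.GetBuffer(), (int)output.Length);
       }
   }
   ```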
   
   ## Benefits
   - **Code Quality**: Both code paths now share the same decompression 
implementation, reducing duplication
   - **Resource Management**: Explicit `using` statements dispose each 
MemoryStream deterministically rather than leaving cleanup to the GC
   - **Maintainability**: Single source of truth for LZ4 decompression logic
   - **Consistency**: Same error handling and telemetry patterns across both 
paths
   
   ## Technical Details
   - Default buffer size remains 80KB (81920 bytes) - no behavioral changes
   - Async version returns a `(byte[] buffer, int length)` tuple so CloudFetch 
can wrap the decompressed bytes in a MemoryStream without an extra copy 
(illustrated in the sketch after this list)
   - The returned buffer remains valid after the MemoryStream is disposed: 
disposing a MemoryStream does not release its underlying managed array, and 
the GC keeps the array alive as long as a reference to it exists
   - Maintains cancellation token support in the async path
   - No performance impact - purely refactoring and cleanup
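
   To illustrate the tuple contract and the compression-ratio telemetry, here
   is a hypothetical caller-side flow, reusing the `Lz4Utilities` sketch
   above; all names and the ratio definition are invented for the example,
   not the driver's actual code:

   ```csharp
   // Hypothetical CloudFetch caller; not the driver's actual code.
   using System;
   using System.IO;
   using System.Threading;
   using System.Threading.Tasks;

   internal static class CloudFetchDecompressExample
   {
       internal static async Task<Stream> DecompressForReaderAsync(
           byte[] compressed, CancellationToken cancellationToken)
       {
           var (buffer, length) = await Lz4Utilities.DecompressLz4Async(
               compressed, cancellationToken: cancellationToken);

           // One plausible ratio definition: decompressed size / wire size.
           double compressionRatio = compressed.Length > 0
               ? (double)length / compressed.Length
               : 1.0;
           Console.WriteLine($"LZ4 compression ratio: {compressionRatio:F2}");

           // Only the first `length` bytes of the returned buffer are valid,
           // so the MemoryStream is bounded explicitly and kept read-only.
           return new MemoryStream(buffer, 0, length, writable: false);
       }
   }
   ```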
   
   ## Testing
   - Existing unit tests pass
   - No functional changes to decompression logic
   - Telemetry output remains consistent
   
   🤖 Generated with [Claude Code](https://claude.com/claude-code)

