Djjanks commented on code in PR #14:
URL: https://github.com/apache/arrow-js/pull/14#discussion_r2113496698


##########
src/ipc/writer.ts:
##########
@@ -251,8 +269,12 @@ export class RecordBatchWriter<T extends TypeMap = any> extends ReadableInterop<
     }
 
     protected _writeRecordBatch(batch: RecordBatch<T>) {
+        if (this._compression != null) {
+            return this._writeCompressedRecordBatch(batch);
+        }
+
        const { byteLength, nodes, bufferRegions, buffers } = VectorAssembler.assemble(batch);
-        const recordBatch = new metadata.RecordBatch(batch.numRows, nodes, bufferRegions);
+        const recordBatch = new metadata.RecordBatch(batch.numRows, nodes, bufferRegions, null);

Review Comment:
   Initially, I considered integrating this with the existing `_writeBodyBuffers`. However, I took a different approach, believing that separating the logic makes the code more transparent to readers. If you add a stride, the question immediately arises of why two arrays are used for compressed data; with the type `LengthPrefixedBuffer = [lengthPrefix: Uint8Array, body: Uint8Array]`, it is immediately clear that compressed data has a slightly different storage structure.
   
   However, I understand that having two separate methods (`_writeBodyBuffers` and `_writeCompressedBodyBuffers`) might require parallel maintenance, which may not be convenient. Wouldn't your approach hurt the readability of the code?
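
   The `LengthPrefixedBuffer` pair mirrors the Arrow IPC format, where each compressed buffer body is preceded by an 8-byte little-endian int64 holding the uncompressed length. A minimal sketch of how such a pair might be built (the `toLengthPrefixedBuffer` helper and `compress` callback are illustrative, not part of this PR):

   ```typescript
   // The two-part layout discussed above: an 8-byte length prefix plus the body.
   type LengthPrefixedBuffer = [lengthPrefix: Uint8Array, body: Uint8Array];

   // `compress` stands in for the actual codec (e.g. LZ4 frame or ZSTD).
   function toLengthPrefixedBuffer(
       raw: Uint8Array,
       compress: (data: Uint8Array) => Uint8Array
   ): LengthPrefixedBuffer {
       // Arrow IPC stores the *uncompressed* length as a little-endian int64.
       const prefix = new Uint8Array(8);
       new DataView(prefix.buffer).setBigInt64(0, BigInt(raw.byteLength), true);
       return [prefix, compress(raw)];
   }
   ```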



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@arrow.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
