mcvsubbu commented on a change in pull request #7930:
URL: https://github.com/apache/pinot/pull/7930#discussion_r772655331
##########
File path:
pinot-segment-local/src/main/java/org/apache/pinot/segment/local/io/writer/impl/BaseChunkSVForwardIndexWriter.java
##########
@@ -74,15 +71,13 @@ protected BaseChunkSVForwardIndexWriter(File file, ChunkCompressionType compress
       int numDocsPerChunk, int chunkSize, int sizeOfEntry, int version)
       throws IOException {
     Preconditions.checkArgument(version == DEFAULT_VERSION || version == CURRENT_VERSION);
-    _file = file;
-    _headerEntryChunkOffsetSize = getHeaderEntryChunkOffsetSize(version);
-    _dataOffset = headerSize(totalDocs, numDocsPerChunk, _headerEntryChunkOffsetSize);
     _chunkSize = chunkSize;
     _chunkCompressor = ChunkCompressorFactory.getCompressor(compressionType);
+    _headerEntryChunkOffsetSize = getHeaderEntryChunkOffsetSize(version);
+    _dataOffset = writeHeader(compressionType, totalDocs, numDocsPerChunk, sizeOfEntry, version);
     _chunkBuffer = ByteBuffer.allocateDirect(chunkSize);
-    _dataChannel = new RandomAccessFile(file, "rw").getChannel();
-    _header = _dataChannel.map(FileChannel.MapMode.READ_WRITE, 0, _dataOffset);
-    writeHeader(compressionType, totalDocs, numDocsPerChunk, sizeOfEntry, version);
+    _compressedBuffer = ByteBuffer.allocateDirect(chunkSize * 2);
Review comment:
Do we have these bounds available at run time (preferably via an API)?
Otherwise, instead of optimizing for a bound we don't know, we should
revert to the code that we know works in production environments.
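
For context on the reviewer's question: `chunkSize * 2` is a guessed upper bound on
the compressed output size, and compression codecs can in rare cases expand
incompressible input, so a guessed multiplier either wastes memory or risks being
too small. Many codec libraries expose the worst-case output size as an API (e.g.
zlib's `compressBound()`, and `maxCompressedLength()` in the lz4-java and
snappy-java bindings). The sketch below is a hypothetical illustration, not Pinot
code: it sizes the buffer from DEFLATE's documented worst-case bound and verifies
that even random (incompressible) input fits.

```java
import java.util.Random;
import java.util.zip.Deflater;

public final class CompressedBufferSizing {

  // Worst-case DEFLATE output size for `sourceLen` input bytes.
  // This mirrors zlib's documented compressBound() formula; other codecs
  // (LZ4, Snappy) expose their own maxCompressedLength() equivalents.
  public static int deflateBound(int sourceLen) {
    return sourceLen + (sourceLen >> 12) + (sourceLen >> 14) + (sourceLen >> 25) + 13;
  }

  public static void main(String[] args) {
    int chunkSize = 4096;

    // Random bytes are effectively incompressible, so DEFLATE falls back to
    // stored blocks and the output can exceed the input size slightly.
    byte[] input = new byte[chunkSize];
    new Random(42).nextBytes(input);

    // Size the output buffer from the codec's bound, not a guessed multiplier.
    byte[] output = new byte[deflateBound(chunkSize)];

    Deflater deflater = new Deflater();
    deflater.setInput(input);
    deflater.finish();
    int written = deflater.deflate(output);
    deflater.end();

    // The bound guarantees the compressed output fits in one pass.
    System.out.println(written > 0 && written <= output.length);
  }
}
```

The advantage over a fixed `chunkSize * 2` is that the bound is tight (a few bytes
over the input size for DEFLATE) while still being provably safe, so the direct
buffer can be much smaller.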
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]