[ https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553322#comment-16553322 ]
ASF GitHub Bot commented on CASSANDRA-14556:
--------------------------------------------

Github user iamaleksey commented on a diff in the pull request:

    https://github.com/apache/cassandra/pull/239#discussion_r204531887

    --- Diff: src/java/org/apache/cassandra/db/streaming/CassandraOutgoingFile.java ---
    @@ -114,13 +155,51 @@ public void write(StreamSession session, DataOutputStreamPlus out, int version)
             CassandraStreamHeader.serializer.serialize(header, out, version);
             out.flush();

    -        CassandraStreamWriter writer = header.compressionInfo == null ?
    -                                       new CassandraStreamWriter(sstable, header.sections, session) :
    -                                       new CompressedCassandraStreamWriter(sstable, header.sections,
    -                                                                           header.compressionInfo, session);
    +        IStreamWriter writer;
    +        if (shouldStreamFullSSTable())
    +        {
    +            writer = new CassandraBlockStreamWriter(sstable, session, components);
    +        }
    +        else
    +        {
    +            writer = (header.compressionInfo == null) ?
    +                     new CassandraStreamWriter(sstable, header.sections, session) :
    +                     new CompressedCassandraStreamWriter(sstable, header.sections,
    +                                                         header.compressionInfo, session);
    +        }
             writer.write(out);
         }

    +    @VisibleForTesting
    +    public boolean shouldStreamFullSSTable()
    +    {
    +        return isFullSSTableTransfersEnabled && isFullyContained;
    +    }
    +
    +    @VisibleForTesting
    +    public boolean fullyContainedIn(List<Range<Token>> normalizedRanges, SSTableReader sstable)
    +    {
    +        if (normalizedRanges == null)
    +            return false;
    +
    +        RangeOwnHelper rangeOwnHelper = new RangeOwnHelper(normalizedRanges);
    +        try (KeyIterator iter = new KeyIterator(sstable.descriptor, sstable.metadata()))
    +        {
    +            while (iter.hasNext())
    +            {
    +                DecoratedKey key = iter.next();
    +                try
    +                {
    +                    rangeOwnHelper.check(key);
    +                } catch(RuntimeException e)
    --- End diff --

    On a more general note, this is potentially quite an expensive thing to do, especially for big sstables with skinny partitions, and in some cases this will introduce a performance regression.
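To make the cost contrast concrete, here is a minimal, hypothetical Java sketch of a metadata-based containment check, which is O(number of owned ranges) instead of a scan over every key in the primary index. The `Range` class and tokens-as-longs here are simplifications for illustration only, not Cassandra's real `Range<Token>` or sstable metadata API:

```java
// Hypothetical sketch only: Range and fullyContained are simplified stand-ins,
// not Cassandra's actual classes; tokens are modeled as plain longs.
import java.util.List;

public class ContainmentCheck
{
    // A half-open token range (left, right], mirroring Range<Token> semantics.
    static class Range
    {
        final long left, right;

        Range(long left, long right)
        {
            this.left = left;
            this.right = right;
        }

        // True when the whole span [first, last] lies inside this range.
        boolean containsSpan(long first, long last)
        {
            return first > left && last <= right;
        }
    }

    // If the sstable's min/max tokens were kept in its metadata, the check
    // becomes O(|ranges|): the sstable is fully owned when its entire token
    // span fits inside a single normalized (deoverlapped, merged) range.
    static boolean fullyContained(List<Range> normalizedRanges, long firstToken, long lastToken)
    {
        if (normalizedRanges == null)
            return false;

        for (Range r : normalizedRanges)
            if (r.containsSpan(firstToken, lastToken))
                return true;

        return false;
    }

    public static void main(String[] args)
    {
        List<Range> owned = List.of(new Range(0, 100), new Range(200, 300));
        System.out.println(fullyContained(owned, 10, 90));   // span inside (0, 100]: true
        System.out.println(fullyContained(owned, 90, 250));  // straddles unowned gap: false
    }
}
```

Because normalized ranges are merged, a span that fits no single range necessarily crosses an unowned gap, so this check is conservative but correct without ever touching the key index.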
    The whole optimisation is realistically only useful for bootstrap, decom, and rebuild, with LCS (which is still plenty useful and impactful and worth having). But it wouldn't normally kick in for regular repairs because of the full-cover requirement, and it won't normally kick in for STCS until CASSANDRA-10540 (range-aware compaction) is implemented. In those cases, having to read through the whole primary index is a perf regression that we shouldn't allow to happen.

    The easiest way to avoid it would be to store the sstable's effective token ranges in the sstable metadata, in relation to the node's ranges, making this check essentially free. Otherwise we should probably disable complete sstable streaming for STCS tables, at least until CASSANDRA-10540 is implemented. That, however, wouldn't address the regression to regular streaming, so keeping the ranges in the metadata would be my preferred way to go.

> Optimize streaming path in Cassandra
> -------------------------------------
>
>                 Key: CASSANDRA-14556
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Streaming and Messaging
>            Reporter: Dinesh Joshi
>            Assignee: Dinesh Joshi
>            Priority: Major
>              Labels: Performance
>             Fix For: 4.x
>
> During streaming, Cassandra reifies the sstables into objects. This creates
> unnecessary garbage and slows down the whole streaming process as some
> sstables can be transferred as a whole file rather than individual
> partitions. The objective of the ticket is to detect when a whole sstable can
> be transferred and skip the object reification. We can also use a zero-copy
> path to avoid bringing data into user-space on both sending and receiving
> side.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)