[
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297595#comment-15297595
]
Stefania commented on CASSANDRA-11521:
--------------------------------------
Thank you for following up, Benedict.
Regarding holding {{OpOrder}} and isolation: it would be nice to offer
isolation at the partition level. I don't think we can offer total isolation if
we release sstables periodically, but if we release them only at partition
boundaries, then isolation at the partition level should be possible. To recap,
one option is to copy the entire memtable sub-maps up front, but this increases
memory usage, and we may end up holding partitions that are no longer relevant
if the memtable is flushed in the meantime and gets picked up when we
periodically refresh sstables. Another option is to copy a partition; or to
reference it, which is quite hard; or to hold the {{OpOrder}} only while a
specific partition is about to be iterated by the sstables merge iterator.
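To make the first option concrete, here is a toy sketch of copying a memtable sub-map up front, assuming the memtable is modelled as a sorted map of partition key to partition data. All class and method names here are hypothetical illustrations, not Cassandra's actual API:

```java
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Toy model of the "copy the sub-map up front" option: the reader gets a
// stable, isolated view at the cost of extra memory, and it may keep
// holding partitions that a concurrent memtable flush has since made
// irrelevant. Names are hypothetical, not Cassandra's actual API.
class MemtableSnapshotDemo {
    static NavigableMap<String, String> snapshotRange(
            ConcurrentSkipListMap<String, String> memtable,
            String fromKey, String toKey) {
        // Eager copy: later writes to the live memtable are not visible
        // through the returned snapshot.
        return new TreeMap<>(memtable.subMap(fromKey, true, toKey, true));
    }
}
```

The trade-off in the sketch is the same as above: the copy pins memory for the whole range until the reader is done, even if a flush has already superseded it.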
Regarding referencing sstables, the ticket you are referring to is probably
CASSANDRA-11552, and the problem is clear now. I don't know how to reproduce it
or exactly where the bug is yet, but I understand that if we call
{{CFS.selectAndReference()}} rather than {{CFS.select()}} (because we no longer
hold the {{OpOrder}}), then we might spin trying to reference sstables, due to a
bug that causes sstables to be released while they are still visible. I will
try to debug further if I can reproduce it.
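To illustrate why that spin can happen, here is a minimal sketch of the acquire-or-retry pattern behind {{CFS.selectAndReference()}}: re-read the current view and retry until every object in it can be referenced. This is a simplified stand-in, not Cassandra's actual implementation, and the class names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Sketch of an acquire-or-retry reference loop. If a bug releases an
// object that is still visible in the view, tryRef() keeps failing for
// that object and selectAndReference() spins forever.
final class RefDemo {
    static final class RefCounted {
        private final AtomicInteger refs = new AtomicInteger(1);

        // Acquire a reference, unless the object was already released.
        boolean tryRef() {
            while (true) {
                int n = refs.get();
                if (n <= 0)
                    return false;          // released: cannot reference
                if (refs.compareAndSet(n, n + 1))
                    return true;
            }
        }

        void release() {
            refs.decrementAndGet();
        }
    }

    // Keep retrying until every object in the current view is referenced.
    static List<RefCounted> selectAndReference(Supplier<List<RefCounted>> view) {
        while (true) {
            List<RefCounted> acquired = new ArrayList<>();
            boolean ok = true;
            for (RefCounted r : view.get()) {
                if (r.tryRef()) {
                    acquired.add(r);
                } else {
                    ok = false;
                    break;
                }
            }
            if (ok)
                return acquired;
            for (RefCounted r : acquired)
                r.release();               // undo partial acquisition, retry
        }
    }
}
```

If the view keeps returning an already-released object, no iteration of the outer loop can complete, which matches the spinning behaviour described above.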
> Implement streaming for bulk read requests
> ------------------------------------------
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
> Issue Type: Sub-task
> Components: Local Write-Read Paths
> Reporter: Stefania
> Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer
> and eliminating the need to query individual pages one by one.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)