[
https://issues.apache.org/jira/browse/CASSANDRA-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17722803#comment-17722803
]
Jeff Jirsa commented on CASSANDRA-8460:
---------------------------------------
I think a lot of people would still find it useful; however, since 2014 the
way most people think about storage has changed.
Tiering to spinning disk or cheaper block devices is fine. It's a win. It's
easy to reason about: probably just implement it via compaction, and the read
and write paths stay exactly the same.
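(A very rough sketch of what that compaction-only approach could look like: a
compaction task picks a destination directory based on the age of the data it
is writing. The class and helper names below are made up for illustration,
not Cassandra's actual internals.)

{code:java}
// Hypothetical sketch only, not Cassandra's real API: the compaction path
// alone decides where a newly written sstable lands, so the normal read and
// write paths don't need to change. Directory layout is an assumption.
import java.io.File;
import java.util.concurrent.TimeUnit;

final class TieredDirectorySelector
{
    private final File hotDir;        // fast SSD for new, hot data
    private final File coldDir;       // big/slow disk for old, cold data
    private final long maxAgeMillis;  // derived from max_sstable_age_days

    TieredDirectorySelector(File hotDir, File coldDir, long maxSstableAgeDays)
    {
        this.hotDir = hotDir;
        this.coldDir = coldDir;
        this.maxAgeMillis = TimeUnit.DAYS.toMillis(maxSstableAgeDays);
    }

    // Pick a destination for a compacted sstable based on the newest
    // timestamp (epoch millis) of the data it will contain.
    File directoryFor(long maxDataTimestampMillis)
    {
        long age = System.currentTimeMillis() - maxDataTimestampMillis;
        return age > maxAgeMillis ? coldDir : hotDir;
    }
}
{code}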
But I think industry trends suggest this is suboptimal - moving this to a fast
object store (e.g. S3) would be even better. It's lower cost and higher
durability, and it allows for other things "eventually", like sharing one
sstable between replicas (or eventually erasure coding pieces of data).
That turns this ticket from ~easy to ~hard, because you also have to touch the
read path (or, more likely, change / add a new sstablereader that can read from
object storage, and then figure out how you want to upload to object storage).
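(Again, just a hand-wavy sketch: one way to think about that "new
sstablereader" is an abstraction over where sstable bytes live, with a
local-disk implementation today and an object-store-backed one later. The
interface and class names below are invented for illustration, not an actual
Cassandra or S3 SDK API.)

{code:java}
// Hypothetical sketch, not real Cassandra internals: the read path would go
// through an abstraction like this instead of assuming a local file.
import java.io.IOException;
import java.io.RandomAccessFile;

interface SSTableByteSource
{
    // Read up to 'length' bytes starting at 'offset' into 'buffer';
    // returns the number of bytes actually read.
    int read(long offset, byte[] buffer, int length) throws IOException;

    long length() throws IOException;
}

// Local-disk implementation, roughly what the current read path assumes.
final class LocalFileByteSource implements SSTableByteSource
{
    private final RandomAccessFile file;

    LocalFileByteSource(String path) throws IOException
    {
        this.file = new RandomAccessFile(path, "r");
    }

    @Override
    public int read(long offset, byte[] buffer, int length) throws IOException
    {
        file.seek(offset);
        return file.read(buffer, 0, length);
    }

    @Override
    public long length() throws IOException
    {
        return file.length();
    }
}

// An object-store implementation would answer the same calls with ranged
// GETs (e.g. S3 byte-range requests), almost certainly with caching, which
// is where most of the "~hard" above comes from.
{code}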
So "is there interest", probably, but in an s3 version of this feature, vs
spinning disk.
> Make it possible to move non-compacting sstables to slow/big storage in DTCS
> ----------------------------------------------------------------------------
>
> Key: CASSANDRA-8460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8460
> Project: Cassandra
> Issue Type: Improvement
> Components: Local/Compaction
> Reporter: Marcus Eriksson
> Assignee: Lerh Chuan Low
> Priority: Normal
> Labels: doc-impacting, dtcs
> Fix For: 5.x
>
>
> It would be nice if we could configure DTCS to have a set of extra data
> directories where we move the sstables once they are older than
> max_sstable_age_days.
> This would enable users to have a quick, small SSD for hot, new data, and big
> spinning disks for data that is rarely read and never compacted.