[ https://issues.apache.org/jira/browse/CASSANDRA-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414484#comment-16414484 ]

Jon Haddad edited comment on CASSANDRA-8460 at 3/26/18 9:05 PM:
----------------------------------------------------------------

Hey [~Lerh Low], sorry for the delay. I've been running some tests on LVM cache 
using fio for benchmarking rather than C*. Cassandra adds a layer of complexity 
that won't help when it comes to raw benchmarks.

I ran some tests on EBS, SSD (i2.large), and EBS using SSD as a cache volume. I 
ran with this simple configuration to start:
{code}
[global]
size=10G
runtime=30m
directory=/bench/
bs=4k

[random-read]
rw=randread
numjobs=4

[sequential-write]
rw=write
{code}
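To reproduce the run: a minimal sketch, assuming the job file above is saved as 
{{bench.fio}} (the filename is my choice) and that {{/bench/}} lives on the 
volume under test:
{code}
# the directory named in the job file must exist on the target volume
mkdir -p /bench

# run the jobs in the file; fio reports per-job IOPS and bandwidth
fio bench.fio
{code}
Results from the three setups: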
||Metric||EBS||SSD||EBS + Cache||
|Random Read IOPS|1509|5748|5347|
|Random Read Bandwidth|6MB/s|22MB/s|21MB/s|
|Seq Write IOPS|40K|145K|39K|
|Seq Write Bandwidth|163MB/s|580MB/s|156MB/s|

I've set up the cache as writethrough, meaning we're going to be bottlenecked 
on the slow disk for writes. Here's the setup:
{code}
root@ip-172-31-45-143:~# lvs -a
  LV              VG   Attr       LSize   Pool    Origin         Data%  Meta%  Move Log Cpy%Sync Convert
  [cache]         test Cwi---C--- 700.00g                        7.25   0.55            0.00
  [cache_cdata]   test Cwi-ao---- 700.00g
  [cache_cmeta]   test ewi-ao----  40.00g
  [lvol0_pmspare] test ewi-------  40.00g
  origin          test Cwi-aoC---   1.50t [cache] [origin_corig] 7.25   0.55            0.00
  [origin_corig]  test owi-aoC---   1.50t
{code}
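For completeness, a sketch of how a setup like this can be created, assuming 
{{/dev/xvdb}} is the 1.5T EBS volume and {{/dev/xvdc}} is the instance-store 
SSD (device names are assumptions on my part; the sizes match the {{lvs}} 
output above):
{code}
# put both devices into one volume group (device names are assumptions)
pvcreate /dev/xvdb /dev/xvdc
vgcreate test /dev/xvdb /dev/xvdc

# origin LV on the slow EBS device
lvcreate -n origin -L 1.5T test /dev/xvdb

# cache data and metadata LVs on the fast SSD
lvcreate -n cache -L 700G test /dev/xvdc
lvcreate -n cache_meta -L 40G test /dev/xvdc

# turn them into a cache pool in writethrough mode and attach it to origin
lvconvert --type cache-pool --cachemode writethrough \
    --poolmetadata test/cache_meta test/cache
lvconvert --type cache --cachepool test/cache test/origin
{code}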
Generally speaking, TWCS uses considerably less I/O than any other strategy, 
and it already works fine with spinning disks on EBS, so I'm _personally_ 
inclined to lean towards using LVM rather than adding this as a Cassandra 
feature. It doesn't require any additional configuration once the volume is 
set up, and as I mentioned previously, it's been baked into the Linux kernel 
for a long time now. I haven't researched what's available on Windows, so 
that's something to keep in mind.
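If anyone wants to sanity-check hit rates while testing, a sketch, assuming a 
reasonably recent lvm2 that exposes the cache reporting fields:
{code}
# hit/miss counters for the cached LV
lvs -o +cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses test/origin

# lower-level counters straight from device-mapper
dmsetup status test-origin
{code}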

I'm not opposed to research or new features, but this seems to me to be adding 
complexity to solve a problem that's already been solved.


> Make it possible to move non-compacting sstables to slow/big storage in DTCS
> ----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-8460
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8460
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Marcus Eriksson
>            Assignee: Lerh Chuan Low
>            Priority: Major
>              Labels: doc-impacting, dtcs
>             Fix For: 4.x
>
>
> It would be nice if we could configure DTCS to have a set of extra data 
> directories where we move the sstables once they are older than 
> max_sstable_age_days. 
> This would enable users to have a quick, small SSD for hot, new data, and big 
> spinning disks for data that is rarely read and never compacted.


