I agree with Jeff - I usually advise teams to cap their density around 3 TB, 
especially with TWCS. Read-heavy workloads tend to use smaller datasets, and 
ring size ends up being a function of performance tuning.

Since 2.2, bootstrap can be resumed, which helps quite a bit with the 
streaming problem; see CASSANDRA-8838.
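
If a bootstrap does get interrupted part way through, it can be picked up 
again (on 2.2+) from the joining node:

    # run on the node whose bootstrap was interrupted
    nodetool bootstrap resume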

Jon


> On Mar 9, 2018, at 7:39 AM, Jeff Jirsa <jji...@gmail.com> wrote:
> 
> 1.5 TB sounds very, very conservative - 3-4 TB is where I set the limit at past 
> jobs. I have heard of people doing twice that (6-8 TB). 
> 
> -- 
> Jeff Jirsa
> 
> 
> On Mar 8, 2018, at 11:09 PM, Niclas Hedhman <nic...@apache.org> wrote:
> 
>> I am curious about the side comment: "Depending on your use case you may not
>> want to have a data density over 1.5 TB per node."
>> 
>> Why is that? I am planning much bigger than that, and now you give me
>> pause...
>> 
>> 
>> Cheers
>> Niclas
>> 
>> On Wed, Mar 7, 2018 at 6:59 PM, Rahul Singh <rahul.xavier.si...@gmail.com> wrote:
>> Are you putting both the commitlogs and the SSTables on the same SSDs? Consider 
>> moving snapshots off the data disks regularly if they're also taking up space. You 
>> may be able to free some space before you add drives.
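>> 
>> For example, on each node (the tag below is just a placeholder):
>> 
>>     nodetool listsnapshots
>>     nodetool clearsnapshot -t <snapshot_tag>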
>> 
>> You should be able to add these new drives and mount them without an issue. 
>> Try to avoid a different number of data directories across nodes; it makes 
>> automating operational processes a little harder.
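>> 
>> Once the new disk is mounted, it's just another entry in cassandra.yaml 
>> (the second mount point here is only an example path), followed by a restart 
>> of that node:
>> 
>>     data_file_directories:
>>         - /var/lib/cassandra/data
>>         - /mnt/disk2/cassandra/data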
>> 
>> As an aside, depending on your use case you may not want to have a data 
>> density over 1.5 TB per node.
>> 
>> --
>> Rahul Singh
>> rahul.si...@anant.us
>> 
>> Anant Corporation
>> 
>> On Mar 7, 2018, 1:26 AM -0500, Eunsu Kim <eunsu.bil...@gmail.com> wrote:
>>> Hello,
>>> 
>>> I run a Cassandra cluster of 5 nodes (1 TB SSD each).
>>> 
>>> I'm trying to mount an additional disk (1 TB SSD) on each node because disk 
>>> usage is growing faster than I expected. Then I will add the new directory to 
>>> data_file_directories in cassandra.yaml.
>>> 
>>> Can I get advice from anyone who has experienced this situation?
>>> If we go through the above steps one by one, will we be able to complete 
>>> the upgrade without losing data?
>>> The replication strategy is SimpleStrategy, RF 2.
>>> 
>>> Thank you in advance
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>> 
>> 
>> 
>> 
>> -- 
>> Niclas Hedhman, Software Developer
>> http://zest.apache.org - New Energy for Java
