There is no shuffling as the servers go up and down. Cassandra doesn’t do that. 

However, RF=2 is atypical and sometimes problematic.

If you read or write at QUORUM / TWO / ALL, you’ll get UnavailableExceptions 
during the restart.

If you read or write at CL ONE, you may not see previously written data 
(with or without the restart).

This is all just normal eventual consistency stuff, but be sure you understand 
it - RF=3 may be a better choice.
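The quorum arithmetic behind this can be sketched in a few lines of Python 
(the formula floor(RF/2)+1 is the standard quorum size):

```python
# Why RF=2 hurts during a rolling restart: QUORUM needs floor(RF/2)+1
# replicas to respond, so with RF=2 a quorum is 2 of 2 -- taking one
# replica down makes quorum requests unavailable for its token ranges.

def quorum(rf: int) -> int:
    """Number of replicas a QUORUM read/write must reach."""
    return rf // 2 + 1

def tolerated_down(rf: int) -> int:
    """Replicas that can be down while QUORUM still succeeds."""
    return rf - quorum(rf)

print(quorum(2), tolerated_down(2))  # 2 0 -> RF=2 cannot lose a replica
print(quorum(3), tolerated_down(3))  # 2 1 -> RF=3 tolerates one node down
```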

When restarting each node, be sure you shut down cleanly - nodetool flush and 
then immediately nodetool drain.  Beyond that I’d expect you to be fine.
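The per-node sequence could be sketched as below. This is a dry-run sketch 
only; the systemctl service name ("cassandra") is an assumption about the 
deployment, and nodetool must be on PATH on a real node:

```python
# Dry-run sketch of the clean per-node shutdown/restart sequence.
import subprocess

STEPS = [
    ["nodetool", "flush"],                        # flush memtables to SSTables
    ["nodetool", "drain"],                        # stop accepting writes, flush again
    ["sudo", "systemctl", "stop", "cassandra"],   # assumed service name
    # ... mount the new SSD and add its path to data_file_directories ...
    ["sudo", "systemctl", "start", "cassandra"],
]

def run_steps(dry_run: bool = True) -> None:
    for cmd in STEPS:
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

run_steps()  # dry run: prints the commands without executing them
```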

-- 
Jeff Jirsa


> On Mar 8, 2018, at 9:52 PM, Eunsu Kim <eunsu.bil...@gmail.com> wrote:
> 
> There are currently 50,000 writes per second. I was worried that the server 
> downtime would be quite long during the disk mount operations.
> If the data shuffling that occurs when a server goes down or comes up works 
> as expected, then my concern seems unnecessary.
> 
> 
>> On 9 Mar 2018, at 2:19 PM, Jeff Jirsa <jji...@gmail.com> wrote:
>> 
>> I see no reason to believe you’d lose data doing this - why do you suspect 
>> you may? 
>> 
>> -- 
>> Jeff Jirsa
>> 
>> 
>>> On Mar 8, 2018, at 8:36 PM, Eunsu Kim <eunsu.bil...@gmail.com> wrote:
>>> 
>>> The auto_snapshot setting is disabled. And the directory architecture on 
>>> the five nodes will match exactly.
>>> 
>>> (Cassandra/Server shutdown -> Mount disk -> Add directory to 
>>> data_file_directories -> Start Cassandra) * 5 rolling
>>> 
>>> Is it possible to add disks without losing data by doing the above 
>>> procedure?
>>> 
>>> 
>>> 
>>>> On 7 Mar 2018, at 7:59 PM, Rahul Singh <rahul.xavier.si...@gmail.com> 
>>>> wrote:
>>>> 
>>>> Are you putting both the commitlogs and the SSTables on the added drives? 
>>>> Consider moving your snapshots off often if those are also taking up 
>>>> space. You may be able to save some space before you add drives.
>>>> 
>>>> You should be able to add these new drives and mount them without an 
>>>> issue. Try to avoid a different number of data dirs across nodes. It 
>>>> makes automation of operational processes a little harder.
>>>> 
>>>> As an aside, depending on your use case you may not want to have a data 
>>>> density over 1.5 TB per node.
>>>> 
>>>> --
>>>> Rahul Singh
>>>> rahul.si...@anant.us
>>>> 
>>>> Anant Corporation
>>>> 
>>>>> On Mar 7, 2018, 1:26 AM -0500, Eunsu Kim <eunsu.bil...@gmail.com>, wrote:
>>>>> Hello,
>>>>> 
>>>>> I use 5 nodes to create a cluster of Cassandra. (SSD 1TB)
>>>>> 
>>>>> I'm trying to mount an additional disk (SSD 1TB) on each node because 
>>>>> disk usage is growing faster than I expected. Then I will add the new 
>>>>> directory to data_file_directories in cassandra.yaml.
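A hypothetical cassandra.yaml fragment for that step (both paths are examples, 
with the second entry being the newly mounted SSD):

```yaml
# cassandra.yaml -- example paths only
data_file_directories:
    - /var/lib/cassandra/data
    - /mnt/ssd2/cassandra/data
```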
>>>>> 
>>>>> Can I get advice from anyone who has experienced this situation?
>>>>> If we go through the above steps one by one, will we be able to complete 
>>>>> the upgrade without losing data?
>>>>> The replication strategy is SimpleStrategy, RF 2.
>>>>> 
>>>>> Thank you in advance
>>>>> ---------------------------------------------------------------------
>>>>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>>>>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>>>> 
>>> 
>>> 
>>> 
>> 
>> 
> 
> 
> 

