I have found in (limited) practice that it's fairly hard to estimate
due to compression and compaction behaviour. I think measuring and
extrapolating (with an understanding of the data structures) is the most
effective approach.
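For the "understanding of the data structures" part, here is a back-of-the-envelope sketch in Python. The helper names are my own, the Bloom filter sizing is the standard textbook formula (Cassandra builds one filter per SSTable over partition keys, but rounds and versions the on-disk format), and the 15-byte regular-column overhead is the constant from the 1.2 sizing docs linked below, so treat all of these as lower-bound approximations rather than exact on-disk accounting:

```python
import math

def bloom_filter_bytes(num_partitions, fp_chance=0.01):
    """Textbook Bloom filter sizing: m = -n * ln(p) / (ln 2)^2 bits.

    fp_chance=0.01 mirrors a common bloom_filter_fp_chance default;
    the real filter is rounded up and carries a header, so this is
    a lower bound."""
    bits = -num_partitions * math.log(fp_chance) / (math.log(2) ** 2)
    return int(bits / 8)

def partition_index_bytes(num_partitions, key_bytes=4, position_bytes=8):
    """Rough per-SSTable partition index: one (key, file offset) entry
    per partition. key_bytes=4 matches the row key in this thread;
    real entries carry extra framing."""
    return num_partitions * (key_bytes + position_bytes)

def row_data_bytes(num_columns, name_bytes=8, value_bytes=8, col_overhead=15):
    """Raw column data for one row, per the 1.2 docs' 15-byte
    regular-column overhead; excludes row-level overhead,
    compression, and compaction effects."""
    return num_columns * (name_bytes + value_bytes + col_overhead)

# e.g. 1 million partitions, each with the 20160 columns from the thread
print(bloom_filter_bytes(1_000_000))     # ~1.2 MB of filter
print(partition_index_bytes(1_000_000))  # ~12 MB of index
print(row_data_bytes(20160))             # ~625 KB of raw column data per row
```

Numbers like these are only a floor; measuring a real SSTable and extrapolating, as above, is still the way to calibrate them.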

Tim

Sent from my phone
On 6 Dec 2013 20:54, "John Sanda" <john.sa...@gmail.com> wrote:

> I have done that, but it only gets me so far because the cluster and the app
> that manages it are run by 3rd parties. Ideally, I would like to provide my
> end users with a formula or heuristic for establishing some sort of
> baselines that at least give them a general idea for planning. Generating
> data as you have suggested and as I have done is helpful, but it is hard
> for users to extrapolate out from that.
>
>
> On Fri, Dec 6, 2013 at 3:47 PM, Jacob Rhoden <jacob.rho...@me.com> wrote:
>
>> Not sure what your end setup will be, but I would probably just spin up a
>> cluster, fill it with typical data, and measure the size on disk.
>>
>> ______________________________
>> Sent from iPhone
>>
>> On 7 Dec 2013, at 6:08 am, John Sanda <john.sa...@gmail.com> wrote:
>>
>> I am trying to do some disk capacity planning. I have been referring to the
>> DataStax docs[1] and this older blog post[2]. I have a column family with
>> the following:
>>
>> row key - 4 bytes
>> column name - 8 bytes
>> column value - 8 bytes
>> max number of non-deleted columns per row - 20160
>>
>> Is there an effective way to calculate the sizes (or at least a decent
>> approximation) of the bloom filters and partition indexes on disk?
>>
>> [1] Calculating user data 
>> size<http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html?pagename=docs&version=1.2&file=index#cassandra/architecture/../../cassandra/architecture/architecturePlanningUserData_t.html>
>> [2] Cassandra Storage Sizing <http://btoddb-cass-storage.blogspot.com/>
>>
>> --
>>
>> - John
>>
>>
>
>
> --
>
> - John
>