ok so it would be better to split those large rows, inserting rows keyed by
row+monthId or row+week, with the corresponding columns inside each bucket.
That will drastically reduce row size, but to retrieve results overlapping
weeks or months I have to do a multiget, which is less simple than a single get

thx for answer

2012/6/18 aaron morton <aa...@thelastpickle.com>

> It's not an exact science. Some general guidelines though:
>
> * A row normally represents an entity
> * Rows wider than the thrift_max_message_length_in_mb (16MB) cannot be
> retrieved in a single call
> * Wide rows (in the 10's of MB) can make repair do more work than
> is needed.
> * Rows wider than in_memory_compaction_limit_in_mb (64) make compaction
> run slower
>
>
> Cheers
>
>   -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 19/06/2012, at 5:18 AM, Cyril Auburtin wrote:
>
> To what extent will having possibly large rows (many columns, sorted by
> timestamp, or geohash, or ...) be harmful for a multi-node ring?
> I guess a row can only be read/written on one node (replica set); if so,
> it's more likely to fail (than having one row per timestamp ...)
>
> thanks for explanations
>
>
>
