Re: Partition key with 300K rows can it be queried and distributed using Spark

2019-01-17 Thread Goutham reddy
Thanks Jeff, yes, we have 18 columns in total. But my question was: can
Spark retrieve the data by distributing those 300k rows across the Spark
nodes?
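
For context, a minimal sketch of the kind of read I mean, using the DataStax
Spark Cassandra Connector (the keyspace, table, column, and host below are
made-up placeholders, not our real schema):

import org.apache.spark.sql.SparkSession

object ReadOnePartition {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("read-single-cassandra-partition")
      // placeholder contact point, not a real host
      .config("spark.cassandra.connection.host", "127.0.0.1")
      .getOrCreate()

    // Load the table through the connector and restrict it to one partition
    // key; the connector pushes a partition-key equality predicate down to
    // Cassandra rather than scanning the whole table.
    val rows = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "my_ks", "table" -> "my_table")) // placeholders
      .load()
      .filter("pk = 'some-key'") // placeholder partition-key column

    println(rows.count())
    spark.stop()
  }
}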

On Thu, Jan 17, 2019 at 1:30 PM Jeff Jirsa  wrote:

> The reason big partitions are painful in Cassandra is that, by default,
> we index them every 64KB. With 300k objects, the partition may or may not
> have a lot of those little index blocks/objects. How big is each row?
>
> If you try to read it and it's very wide, you may see heap pressure / GC.
> If so, you could try changing the column index size from 64KB to something
> larger (128KB, 256KB, etc.) - small point reads will cost more disk IO,
> but less heap pressure.
>
>
>
> On Thu, Jan 17, 2019 at 12:15 PM Goutham reddy 
> wrote:
>
>> Hi,
>> Although each partition key can hold up to 2 billion rows, it is still
>> an anti-pattern to keep such a huge data set under one partition key. In
>> our case it is only 300k rows, but when we try to query one particular
>> key we get a timeout exception. If I use Spark to fetch the 300k rows for
>> a particular key, does that solve the timeout problem and distribute the
>> data across the Spark nodes, or will it still throw timeout exceptions?
>> Can you please help me with the best practice for retrieving the data for
>> a key with 300k rows. Any help is highly appreciated.
>>
>> Regards
>> Goutham.
>>
> --
Regards
Goutham Reddy


Re: Partition key with 300K rows can it be queried and distributed using Spark

2019-01-17 Thread Jeff Jirsa
The reason big partitions are painful in Cassandra is that, by default, we
index them every 64KB. With 300k objects, the partition may or may not have
a lot of those little index blocks/objects. How big is each row?

If you try to read it and it's very wide, you may see heap pressure / GC.
If so, you could try changing the column index size from 64KB to something
larger (128KB, 256KB, etc.) - small point reads will cost more disk IO, but
less heap pressure.
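
For reference, a sketch of where that knob lives, assuming a 3.x-style
cassandra.yaml (the value below is only an example):

# column_index_size_in_kb controls how often Cassandra adds an entry to a
# partition's column index; the default of 64 (KB) is what produces the many
# small index objects mentioned above. Raising it means fewer index entries
# held on heap for a wide partition, at the cost of more disk read per small
# point lookup.
column_index_size_in_kb: 256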



On Thu, Jan 17, 2019 at 12:15 PM Goutham reddy 
wrote:

> Hi,
> Although each partition key can hold up to 2 billion rows, it is still an
> anti-pattern to keep such a huge data set under one partition key. In our
> case it is only 300k rows, but when we try to query one particular key we
> get a timeout exception. If I use Spark to fetch the 300k rows for a
> particular key, does that solve the timeout problem and distribute the
> data across the Spark nodes, or will it still throw timeout exceptions?
> Can you please help me with the best practice for retrieving the data for
> a key with 300k rows. Any help is highly appreciated.
>
> Regards
> Goutham.
>


Re: Partition key with 300K rows can it be queried and distributed using Spark

2019-01-17 Thread Nitan Kainth
Not sure about the Spark data distribution, but yes, Spark can be used to
retrieve such data from Cassandra.


Regards,
Nitan
Cell: 510 449 9629

> On Jan 17, 2019, at 2:15 PM, Goutham reddy  wrote:
> 
> Hi,
> Although each partition key can hold up to 2 billion rows, it is still an 
> anti-pattern to keep such a huge data set under one partition key. In our 
> case it is only 300k rows, but when we try to query one particular key we 
> get a timeout exception. If I use Spark to fetch the 300k rows for a 
> particular key, does that solve the timeout problem and distribute the data 
> across the Spark nodes, or will it still throw timeout exceptions? Can you 
> please help me with the best practice for retrieving the data for a key 
> with 300k rows. Any help is highly appreciated.
> 
> Regards
> Goutham.


Partition key with 300K rows can it be queried and distributed using Spark

2019-01-17 Thread Goutham reddy
Hi,
Although each partition key can hold up to 2 billion rows, it is still an
anti-pattern to keep such a huge data set under one partition key. In our
case it is only 300k rows, but when we try to query one particular key we
get a timeout exception. If I use Spark to fetch the 300k rows for a
particular key, does that solve the timeout problem and distribute the data
across the Spark nodes, or will it still throw timeout exceptions? Can you
please help me with the best practice for retrieving the data for a key
with 300k rows. Any help is highly appreciated.

Regards
Goutham.