Help in c* Data modelling

2017-07-22 Thread techpyaasa .
Hi ,

We have a table like below :

CREATE TABLE ks.cf (
    accountId bigint,
    pid bigint,
    dispName text,
    status int,
    PRIMARY KEY (accountId, pid)
) WITH CLUSTERING ORDER BY (pid ASC);



We would like the following queries to be possible on the above table:

select * from ks.cf where accountId=1 and pid=1;
select * from ks.cf where accountId=1 order by dispName asc;
select * from ks.cf where accountId=1 and status=0 order by dispName asc;

I know the first query works by default, but I want the last two queries
to work as well.

Can someone please let me know how I can achieve this in
Cassandra (2.1.17)? I'm OK with applying indexes etc.
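In C* 2.1 there are no materialized views yet, so the usual approach is one denormalized table per query pattern, kept in sync by the application at write time. A sketch reusing the column names from the schema above (the companion table names are made up):

```sql
-- Query 2: rows for an account ordered by dispName.
CREATE TABLE ks.cf_by_dispname (
    accountId bigint,
    dispName  text,
    pid       bigint,
    status    int,
    PRIMARY KEY (accountId, dispName, pid)
) WITH CLUSTERING ORDER BY (dispName ASC, pid ASC);
-- select * from ks.cf_by_dispname where accountId=1;
-- rows come back already sorted by dispName

-- Query 3: filter by status, still ordered by dispName within a status.
CREATE TABLE ks.cf_by_status (
    accountId bigint,
    status    int,
    dispName  text,
    pid       bigint,
    PRIMARY KEY (accountId, status, dispName, pid)
) WITH CLUSTERING ORDER BY (status ASC, dispName ASC, pid ASC);
-- select * from ks.cf_by_status where accountId=1 and status=0;
```

A secondary index on status would also allow the filter, but it cannot provide the ORDER BY dispName, so the denormalized tables are the safer bet.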

Thanks
TechPyaasa


Re: Understanding gossip and seeds

2017-07-22 Thread Daniel Hölbling-Inzko
Seeds are there to bootstrap a node for the very first time, when it has
zero knowledge about the ring.

I think I also read somewhere that seed nodes are periodically queried for
some sanity checks and therefore one should not include too many nodes in
the seed list.
kurt greaves  wrote on Sat., 22 July 2017 at 01:48:

> Haven't checked the code, but I'm pretty sure it's because a node will
> always use the known state stored in its system tables. The seeds in the
> yaml are mostly for initial setup, used to discover the rest of the nodes
> in the ring.
>
> Once that's done there is little reason to refer to them again, unless
> forced.
>
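For reference, the seed list lives in cassandra.yaml; a minimal sketch (the addresses are placeholders):

```yaml
# Hypothetical cassandra.yaml fragment; two or three seeds per datacenter
# is a common recommendation, and all nodes should use the same seed list.
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1,10.0.0.2"
```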


Re: Cassandra seems slow when having many read operations

2017-07-22 Thread benjamin roth
Chunk size:
For us it made a 20x difference in read io. But it depends a lot on the use
case.
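For anyone wanting to try this, a sketch of the 2.1-era syntax (table name is a placeholder):

```sql
-- Shrink the compression chunk size from the 64kb default to 16kb.
ALTER TABLE ks.cf
  WITH compression = {'sstable_compression': 'LZ4Compressor',
                      'chunk_length_kb': '16'};
-- Existing SSTables keep the old chunk size until they are rewritten,
-- e.g. by running: nodetool upgradesstables -a ks cf
```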

On 22.07.2017 08:32, "Fay Hou [Storage Service]" <
fay...@coupang.com> wrote:

> Hey Felipe:
>
> When you say you increased memory from 16GB to 24GB, I think you meant you
> increased the heap to 24GB. Do you use CMS or G1GC?
> Did you change any other parameters?
> As for the chunk size, we found that changing 64kb to 16kb didn't make a
> difference in a low key-cache hit-rate environment.
>
>
>
> On Fri, Jul 21, 2017 at 9:27 PM, benjamin roth  wrote:
>
>> Apart from all that, you can try to reduce the compression chunk size from
>> the default 64kb to 16kb, or even down to 4kb. This can help a lot if your
>> read IO on disk is very high and the page cache is not efficient.
>>
>> On 21.07.2017 23:03, "Petrus Gomes"  wrote:
>>
>>> Thanks a lot for sharing the result.
>>>
>>> Good luck!
>>> ;-)
>>> Take care.
>>> Petrus Silva
>>>
>>> On Fri, Jul 21, 2017 at 12:19 PM, Felipe Esteves <
>>> felipe.este...@b2wdigital.com> wrote:
>>>
 Hi, Petrus,

 Seems we've solved the problem, but it wasn't related to repairing the
 cluster or disk latency.
 I've increased the memory available for Cassandra from 16GB to 24GB and
 the performance improved a lot!
 The main symptom we observed in OpsCenter was a
 significant decrease in the total compactions graph.

 Felipe Esteves

 Tecnologia

 felipe.este...@b2wdigital.com 



 2017-07-15 3:23 GMT-03:00 Petrus Gomes :

> Hi Felipe,
>
> Yes, try it and let us know how it goes.
>
> Thanks,
> Petrus Silva.
>
> On Fri, Jul 14, 2017 at 11:37 AM, Felipe Esteves <
> felipe.este...@b2wdigital.com> wrote:
>
>> Hi Petrus, thanks for the feedback.
>>
>> I couldn't find the Percent Repaired in nodetool info; the C* version is
>> 2.1.8, so maybe it's something newer than that?
>>
>> I'm analyzing this thread about num_token.
>>
>> Compaction is "compaction_throughput_mb_per_sec: 16"; I don't see
>> pending compactions in Opscenter.
>>
>> One point I've noticed is that Opscenter shows "OS: Disk Latency" max
>> with high values when the problem occurs, but this isn't reflected when
>> monitoring the servers directly; in those tools the disk IO and latency
>> seem OK.
>> But it seems to me that "read repair attempted" is a bit high; maybe that
>> explains the read latency. I will try to run a repair on the cluster to
>> see how it goes.
>>
>> Felipe Esteves
>>
>> Tecnologia
>>
>> felipe.este...@b2wdigital.com 
>>
>> Tel.: (21) 3504-7162 ramal 57162
>>
>> Skype: felipe2esteves
>>
>> 2017-07-13 15:02 GMT-03:00 Petrus Gomes :
>>
>>> What is your Percent Repaired when you run "nodetool info"?
>>>
>>> Search for the
>>> "reduced num_token = improved performance ??" topic.
>>> People were discussing that there.
>>>
>>> How is your compaction configured?
>>>
>>> Could you run the same process on the command line to get a measurement?
>>>
>>> Thanks,
>>> Petrus Silva
>>>
>>>
>>>
>>> On Thu, Jul 13, 2017 at 7:49 AM, Felipe Esteves <
>>> felipe.este...@b2wdigital.com> wrote:
>>>
 Hi,

 I have a Cassandra 2.1 cluster running on AWS that receives high
 read loads, jumping from 100k requests to 400k requests, for example; then
 it normalizes, and later another burst of high throughput comes.

 To the application, it appears that Cassandra is slow. However, CPU
 and disk usage are OK on every instance, and the row cache is enabled with
 an almost 100% hit rate.

 The logs from the Cassandra instances don't have any errors, tombstone
 messages, or anything like that. It's mostly compactions and
 G1GC operations.

 Any hints on where to investigate more?


 Felipe Esteves





>>>
>>> --
>>>
>>> This message may include confidential information and only the intended
>>> recipient may use it. A wrong transmission does not break its
>>> confidentiality. If you have received this message by mistake, please
>>> notify the sender and delete it from your system immediately. No one
>>> other than the intended recipient may use, disclose, distribute, or copy
>>> any part of this message. This communication environment is subject to
>>> monitoring.

Re: Cassandra seems slow when having many read operations

2017-07-22 Thread Fay Hou [Storage Service]
Hey Felipe:

When you say you increased memory from 16GB to 24GB, I think you meant you
increased the heap to 24GB. Do you use CMS or G1GC?
Did you change any other parameters?
As for the chunk size, we found that changing 64kb to 16kb didn't make a
difference in a low key-cache hit-rate environment.
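For reference, in 2.1 the heap is set in cassandra-env.sh, not cassandra.yaml; a hypothetical fragment:

```shell
# cassandra-env.sh: pin the heap explicitly instead of letting the
# script auto-size it from system memory.
MAX_HEAP_SIZE="24G"
HEAP_NEWSIZE="2G"   # used by CMS; with G1GC, leave new-gen sizing to the GC
```

With CMS, heaps much beyond 8GB tend to lengthen GC pauses, which is why the CMS-vs-G1GC question matters here.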



On Fri, Jul 21, 2017 at 9:27 PM, benjamin roth  wrote:

> Apart from all that, you can try to reduce the compression chunk size from
> the default 64kb to 16kb, or even down to 4kb. This can help a lot if your
> read IO on disk is very high and the page cache is not efficient.
>
> On 21.07.2017 23:03, "Petrus Gomes"  wrote:
>
>> Thanks a lot for sharing the result.
>>
>> Good luck!
>> ;-)
>> Take care.
>> Petrus Silva
>>
>> On Fri, Jul 21, 2017 at 12:19 PM, Felipe Esteves <
>> felipe.este...@b2wdigital.com> wrote:
>>
>>> Hi, Petrus,
>>>
>>> Seems we've solved the problem, but it wasn't related to repairing the
>>> cluster or disk latency.
>>> I've increased the memory available for Cassandra from 16GB to 24GB and
>>> the performance improved a lot!
>>> The main symptom we observed in OpsCenter was a
>>> significant decrease in the total compactions graph.
>>>
>>> Felipe Esteves
>>>
>>> Tecnologia
>>>
>>> felipe.este...@b2wdigital.com 
>>>
>>>
>>>
>>> 2017-07-15 3:23 GMT-03:00 Petrus Gomes :
>>>
 Hi Felipe,

 Yes, try it and let us know how it goes.

 Thanks,
 Petrus Silva.

 On Fri, Jul 14, 2017 at 11:37 AM, Felipe Esteves <
 felipe.este...@b2wdigital.com> wrote:

> Hi Petrus, thanks for the feedback.
>
> I couldn't find the Percent Repaired in nodetool info; the C* version is
> 2.1.8, so maybe it's something newer than that?
>
> I'm analyzing this thread about num_token.
>
> Compaction is "compaction_throughput_mb_per_sec: 16"; I don't see
> pending compactions in Opscenter.
>
> One point I've noticed is that Opscenter shows "OS: Disk Latency" max
> with high values when the problem occurs, but this isn't reflected when
> monitoring the servers directly; in those tools the disk IO and latency
> seem OK.
> But it seems to me that "read repair attempted" is a bit high; maybe that
> explains the read latency. I will try to run a repair on the cluster to
> see how it goes.
>
> Felipe Esteves
>
> Tecnologia
>
> felipe.este...@b2wdigital.com 
>
> Tel.: (21) 3504-7162 ramal 57162
>
> Skype: felipe2esteves
>
> 2017-07-13 15:02 GMT-03:00 Petrus Gomes :
>
>> What is your Percent Repaired when you run "nodetool info"?
>>
>> Search for the
>> "reduced num_token = improved performance ??" topic.
>> People were discussing that there.
>>
>> How is your compaction configured?
>>
>> Could you run the same process on the command line to get a measurement?
>>
>> Thanks,
>> Petrus Silva
>>
>>
>>
>> On Thu, Jul 13, 2017 at 7:49 AM, Felipe Esteves <
>> felipe.este...@b2wdigital.com> wrote:
>>
>>> Hi,
>>>
>>> I have a Cassandra 2.1 cluster running on AWS that receives high
>>> read loads, jumping from 100k requests to 400k requests, for example; then
>>> it normalizes, and later another burst of high throughput comes.
>>>
>>> To the application, it appears that Cassandra is slow. However, CPU
>>> and disk usage are OK on every instance, and the row cache is enabled with
>>> an almost 100% hit rate.
>>>
>>> The logs from the Cassandra instances don't have any errors, tombstone
>>> messages, or anything like that. It's mostly compactions and
>>> G1GC operations.
>>>
>>> Any hints on where to investigate more?
>>>
>>>
>>> Felipe Esteves
>>>
>>>
>>>
>>>
>>>
>>