LDAP/AD Authentication

2022-09-15 Thread Abdul Patel
Hi All,

Do we have any open-source LDAP/AD packages or software for Cassandra?
I see Instaclustr has one, but it seems to be a paid offering.


Cassandra 4.0 upgrade from Cassandra 3.x

2022-02-10 Thread Abdul Patel
Hi,
Apart from the standard upgrade process, is there anything specific that needs
to be handled separately for this upgrade?

Are any changes needed on the client side w.r.t. drivers?


Log4j vulnerability

2021-12-11 Thread Abdul Patel
Hi all,

Any idea whether any open-source Cassandra versions are impacted by the Log4j
vulnerability reported on Dec 9th?


Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Abdul Patel
TWCS is best for TTL'd data, not for explicit deletes, correct?
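For reference, a table shaped for this access pattern might use TWCS with a table-level TTL, as in the sketch below (keyspace, table, and window size are assumptions, not from the thread). TWCS can drop whole SSTables once every cell in a time window has expired, which is why it suits TTL'd data better than explicit deletes:

```sql
-- Hypothetical table: every write expires after 48 hours via the table TTL,
-- so entire time windows age out together and TWCS can drop whole SSTables
-- instead of compacting away tombstones left by explicit DELETEs.
CREATE TABLE my_ks.events (
    id bigint,
    ts timestamp,
    payload text,
    PRIMARY KEY (id, ts)
) WITH compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'compaction_window_unit': 'HOURS',
      'compaction_window_size': '4'
  }
  AND default_time_to_live = 172800;  -- 48 hours, matching the retention above
```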


On Friday, September 17, 2021, Abdul Patel  wrote:

> 48hrs deletion is deleting older data more than 48hrs .
> LCS was used as its more of an write once and read many application.
>
> On Friday, September 17, 2021, Bowen Song  wrote:
>
>> Congratulations! You've just found the cause of it. Does all data get
>> deleted 48 hours after it is inserted? If so, are you sure LCS is the
>> right compaction strategy for this table? TWCS sounds like a much better
>> fit for this purpose.
>> On 17/09/2021 19:16, Abdul Patel wrote:
>>
>> Thanks.
>> Application deletes data every 48hrs of older data.
>> Auto compaction works but as space is full ..errorlog only says not
>> enough space to run compaction.
>>
>>
>> On Friday, September 17, 2021, Bowen Song  wrote:
>>
>>> If major compaction is failing due to disk space constraint, you could
>>> copy the files to another server and run a major compaction there instead
>>> (i.e.: start cassandra on new server but not joining the existing cluster).
>>> If you must replace the node, at least use the
>>> '-Dcassandra.replace_address=...' parameter instead of 'nodetool
>>> decommission' and then re-add, because the latter changes the token ranges
>>> on the node, and that makes troubleshooting harder.
>>>
>>> 22GB of data amplifying to nearly 300GB sounds implausible to me,
>>> there must be something else going on. Have you turned off auto compaction?
>>> Did you change the default parameters (namely, the 'fanout_size') for LCS?
>>> If this doesn't give you a clue, have a look at the SSTable data files, do
>>> you notice anything unusual? For example, too many small files, or some
>>> files are extraordinarily large. Also have a look at the logs, is there
>>> anything unusual? Also, do you know the application logic? Does it do a
>>> lot of deletes or updates (including 'upsert')? Writes with TTL? Does the
>>> table have a default TTL?
>>> On 17/09/2021 13:45, Abdul Patel wrote:
>>>
>>> Close 300 gb data. Nodetool decommission/removenode and added back one
>>> node ans it came back to 22Gb.
>>> Cant run major compaction as no space much left.
>>>
>>> On Friday, September 17, 2021, Bowen Song  wrote:
>>>
>>>> Okay, so how big exactly is the data on disk? You said removing and
>>>> adding a new node gives you 20GB on disk, was that done via the
>>>> '-Dcassandra.replace_address=...' parameter? If not, the new node will
>>>> almost certainly have a different token range and not directly comparable
>>>> to the existing node if you have uneven partitions or small number of
>>>> partitions in the table. Also, try major compaction, it's a lot easier than
>>>> replacing a node.
>>>>
>>>>
>>>> On 17/09/2021 12:28, Abdul Patel wrote:
>>>>
>>>> Yes i checked and cleared all snapshots and also i had incremental
>>>> backups in backup folder ..i removed the same .. its purely data..
>>>>
>>>>
>>>> On Friday, September 17, 2021, Bowen Song  wrote:
>>>>
>>>>> Assuming your total disk space is a lot bigger than 50GB in size
>>>>> (accounting for disk space amplification, commit log, logs, OS data, etc.),
>>>>> I would suspect the disk space is being used by something else. Have you
>>>>> checked that the disk space is actually being used by the cassandra data
>>>>> directory? If so, have a look at 'nodetool listsnapshots' command output as
>>>>> well.
>>>>>
>>>>>
>>>>> On 17/09/2021 05:48, Abdul Patel wrote:
>>>>>
>>>>>> Hello
>>>>>>
>>>>>> We have cassandra with leveledcompaction strategy, recently found
>>>>>> filesystem almost 90% full but the data was only 10m records.
>>>>>> Manual compaction will work? As not sure its recommended and space is
>>>>>> also constraint ..tried removing and adding one node and now data is at
>>>>>> 20GB which looks appropropiate.
>>>>>> So is only solution to reclaim space is remove/add node?
>>>>>>
>>>>>
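The replacement procedure recommended above (replace_address rather than decommission and re-add) can be sketched as a cassandra-env.sh fragment; the IP below is a placeholder for the dead node's address:

```shell
# Sketch: make the new node claim the dead node's token ranges at first start.
# 192.0.2.15 stands in for the address of the node being replaced.
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=192.0.2.15"
```

After the node finishes bootstrapping, the option should be removed so later restarts don't re-trigger replacement.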


Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Abdul Patel
The 48-hour deletion removes data older than 48 hours.
LCS was used as it's more of a write-once, read-many application.

On Friday, September 17, 2021, Bowen Song  wrote:

> Congratulations! You've just found the cause of it. Does all data get
> deleted 48 hours after it is inserted? If so, are you sure LCS is the
> right compaction strategy for this table? TWCS sounds like a much better
> fit for this purpose.
> On 17/09/2021 19:16, Abdul Patel wrote:
>
> Thanks.
> Application deletes data every 48hrs of older data.
> Auto compaction works but as space is full ..errorlog only says not enough
> space to run compaction.
>
>
> On Friday, September 17, 2021, Bowen Song  wrote:
>
>> If major compaction is failing due to disk space constraint, you could
>> copy the files to another server and run a major compaction there instead
>> (i.e.: start cassandra on new server but not joining the existing cluster).
>> If you must replace the node, at least use the
>> '-Dcassandra.replace_address=...' parameter instead of 'nodetool
>> decommission' and then re-add, because the latter changes the token ranges
>> on the node, and that makes troubleshooting harder.
>>
>> 22GB of data amplifying to nearly 300GB sounds implausible to me,
>> there must be something else going on. Have you turned off auto compaction?
>> Did you change the default parameters (namely, the 'fanout_size') for LCS?
>> If this doesn't give you a clue, have a look at the SSTable data files, do
>> you notice anything unusual? For example, too many small files, or some
>> files are extraordinarily large. Also have a look at the logs, is there
>> anything unusual? Also, do you know the application logic? Does it do a
>> lot of deletes or updates (including 'upsert')? Writes with TTL? Does the
>> table have a default TTL?
>> On 17/09/2021 13:45, Abdul Patel wrote:
>>
>> Close 300 gb data. Nodetool decommission/removenode and added back one
>> node ans it came back to 22Gb.
>> Cant run major compaction as no space much left.
>>
>> On Friday, September 17, 2021, Bowen Song  wrote:
>>
>>> Okay, so how big exactly is the data on disk? You said removing and
>>> adding a new node gives you 20GB on disk, was that done via the
>>> '-Dcassandra.replace_address=...' parameter? If not, the new node will
>>> almost certainly have a different token range and not directly comparable
>>> to the existing node if you have uneven partitions or small number of
>>> partitions in the table. Also, try major compaction, it's a lot easier than
>>> replacing a node.
>>>
>>>
>>> On 17/09/2021 12:28, Abdul Patel wrote:
>>>
>>> Yes i checked and cleared all snapshots and also i had incremental
>>> backups in backup folder ..i removed the same .. its purely data..
>>>
>>>
>>> On Friday, September 17, 2021, Bowen Song  wrote:
>>>
>>>> Assuming your total disk space is a lot bigger than 50GB in size
>>>> (accounting for disk space amplification, commit log, logs, OS data, etc.),
>>>> I would suspect the disk space is being used by something else. Have you
>>>> checked that the disk space is actually being used by the cassandra data
>>>> directory? If so, have a look at 'nodetool listsnapshots' command output as
>>>> well.
>>>>
>>>>
>>>> On 17/09/2021 05:48, Abdul Patel wrote:
>>>>
>>>>> Hello
>>>>>
>>>>> We have cassandra with leveledcompaction strategy, recently found
>>>>> filesystem almost 90% full but the data was only 10m records.
>>>>> Manual compaction will work? As not sure its recommended and space is
>>>>> also constraint ..tried removing and adding one node and now data is at
>>>>> 20GB which looks appropropiate.
>>>>> So is only solution to reclaim space is remove/add node?
>>>>>
>>>>
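The LCS `fanout_size` option mentioned in the replies can be checked with `DESCRIBE TABLE` and set back to its default with a statement like the sketch below (keyspace and table names are placeholders; 10 is the LCS default, in versions that expose the option):

```sql
-- Hypothetical names; reverting fanout_size to the LCS default of 10.
ALTER TABLE my_ks.my_table
  WITH compaction = {
      'class': 'LeveledCompactionStrategy',
      'fanout_size': '10'
  };
```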


Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Abdul Patel
Thanks.
The application deletes data older than 48 hours.
Auto compaction runs, but since the disk is full, the error log only says there
is not enough space to run compaction.


On Friday, September 17, 2021, Bowen Song  wrote:

> If major compaction is failing due to disk space constraint, you could
> copy the files to another server and run a major compaction there instead
> (i.e.: start cassandra on new server but not joining the existing cluster).
> If you must replace the node, at least use the
> '-Dcassandra.replace_address=...' parameter instead of 'nodetool
> decommission' and then re-add, because the latter changes the token ranges
> on the node, and that makes troubleshooting harder.
>
> 22GB of data amplifying to nearly 300GB sounds implausible to me; there
> must be something else going on. Have you turned off auto compaction? Did
> you change the default parameters (namely, the 'fanout_size') for LCS? If
> this doesn't give you a clue, have a look at the SSTable data files, do you
> notice anything unusual? For example, too many small files, or some files
> are extraordinarily large. Also have a look at the logs, is there anything
> unusual? Also, do you know the application logic? Does it do a lot of
> deletes or updates (including 'upsert')? Writes with TTL? Does the table
> have a default TTL?
>
> On 17/09/2021 13:45, Abdul Patel wrote:
>
> Close 300 gb data. Nodetool decommission/removenode and added back one
> node ans it came back to 22Gb.
> Cant run major compaction as no space much left.
>
> On Friday, September 17, 2021, Bowen Song  wrote:
>
>> Okay, so how big exactly is the data on disk? You said removing and
>> adding a new node gives you 20GB on disk, was that done via the
>> '-Dcassandra.replace_address=...' parameter? If not, the new node will
>> almost certainly have a different token range and not directly comparable
>> to the existing node if you have uneven partitions or small number of
>> partitions in the table. Also, try major compaction, it's a lot easier than
>> replacing a node.
>>
>>
>> On 17/09/2021 12:28, Abdul Patel wrote:
>>
>> Yes i checked and cleared all snapshots and also i had incremental
>> backups in backup folder ..i removed the same .. its purely data..
>>
>>
>> On Friday, September 17, 2021, Bowen Song  wrote:
>>
>>> Assuming your total disk space is a lot bigger than 50GB in size
>>> (accounting for disk space amplification, commit log, logs, OS data, etc.),
>>> I would suspect the disk space is being used by something else. Have you
>>> checked that the disk space is actually being used by the cassandra data
>>> directory? If so, have a look at 'nodetool listsnapshots' command output as
>>> well.
>>>
>>>
>>> On 17/09/2021 05:48, Abdul Patel wrote:
>>>
>>>> Hello
>>>>
>>>> We have cassandra with leveledcompaction strategy, recently found
>>>> filesystem almost 90% full but the data was only 10m records.
>>>> Manual compaction will work? As not sure its recommended and space is
>>>> also constraint ..tried removing and adding one node and now data is at
>>>> 20GB which looks appropropiate.
>>>> So is only solution to reclaim space is remove/add node?
>>>>
>>>


Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Abdul Patel
Close to 300 GB of data. I ran nodetool decommission/removenode and added back
one node, and it came back to 22 GB.
Can't run major compaction as not much space is left.

On Friday, September 17, 2021, Bowen Song  wrote:

> Okay, so how big exactly is the data on disk? You said removing and adding
> a new node gives you 20GB on disk, was that done via the
> '-Dcassandra.replace_address=...' parameter? If not, the new node will
> almost certainly have a different token range and not directly comparable
> to the existing node if you have uneven partitions or small number of
> partitions in the table. Also, try major compaction, it's a lot easier than
> replacing a node.
>
>
> On 17/09/2021 12:28, Abdul Patel wrote:
>
> Yes i checked and cleared all snapshots and also i had incremental backups
> in backup folder ..i removed the same .. its purely data..
>
>
> On Friday, September 17, 2021, Bowen Song  wrote:
>
>> Assuming your total disk space is a lot bigger than 50GB in size
>> (accounting for disk space amplification, commit log, logs, OS data, etc.),
>> I would suspect the disk space is being used by something else. Have you
>> checked that the disk space is actually being used by the cassandra data
>> directory? If so, have a look at 'nodetool listsnapshots' command output as
>> well.
>>
>>
>> On 17/09/2021 05:48, Abdul Patel wrote:
>>
>>> Hello
>>>
>>> We have cassandra with leveledcompaction strategy, recently found
>>> filesystem almost 90% full but the data was only 10m records.
>>> Manual compaction will work? As not sure its recommended and space is
>>> also constraint ..tried removing and adding one node and now data is at
>>> 20GB which looks appropropiate.
>>> So is only solution to reclaim space is remove/add node?
>>>
>>


Re: High disk usage Cassandra 3.11.7

2021-09-17 Thread Abdul Patel
Yes, I checked and cleared all snapshots, and I also had incremental backups
in the backup folder, which I removed. It's purely data.


On Friday, September 17, 2021, Bowen Song  wrote:

> Assuming your total disk space is a lot bigger than 50GB in size
> (accounting for disk space amplification, commit log, logs, OS data, etc.),
> I would suspect the disk space is being used by something else. Have you
> checked that the disk space is actually being used by the cassandra data
> directory? If so, have a look at 'nodetool listsnapshots' command output as
> well.
>
>
> On 17/09/2021 05:48, Abdul Patel wrote:
>
>> Hello
>>
>> We have cassandra with leveledcompaction strategy, recently found
>> filesystem almost 90% full but the data was only 10m records.
>> Manual compaction will work? As not sure its recommended and space is
>> also constraint ..tried removing and adding one node and now data is at
>> 20GB which looks appropropiate.
>> So is only solution to reclaim space is remove/add node?
>>
>


High disk usage Cassandra 3.11.7

2021-09-16 Thread Abdul Patel
Hello

We have Cassandra with LeveledCompactionStrategy and recently found the
filesystem almost 90% full, but the data was only 10M records.
Will manual compaction work? I'm not sure it's recommended, and space is also a
constraint. I tried removing and adding one node, and now the data is at 20GB,
which looks appropriate.
So is removing and re-adding a node the only solution to reclaim space?
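A quick way to confirm where the space is actually going is to measure the data directory directly and compare with `df` and `nodetool listsnapshots` (as suggested elsewhere in the thread). A minimal sketch; the path in the comment is an assumption about a typical install:

```python
import os

def dir_size_bytes(path: str) -> int:
    """Sum the sizes of regular files under `path`, skipping symlinks."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp) and not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# e.g. dir_size_bytes("/var/lib/cassandra/data") / 1e9 gives GB used by data files
```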


Re: [RELEASE] Apache Cassandra 4.0-rc2 released

2021-06-30 Thread Abdul Patel
Thanks Patrick!!

On Wed, Jun 30, 2021, 7:54 PM Patrick McFadin  wrote:

> Hi Abdul,
>
> This is a release candidate so what the project is proposing as production
> ready. We're asking the community to test however possible and provide any
> feedback you have. If there are no issues found, this will turn into the GA
> release.
>
> Patrick
>
> On Wed, Jun 30, 2021 at 1:09 PM Abdul Patel  wrote:
>
>> Great news!!
>> Is this production ready?
>>
>> On Wed, Jun 30, 2021, 3:56 PM Patrick McFadin  wrote:
>>
>>> Congrats to everyone that worked on this iteration. If you haven't
>>> looked at the CHANGES.txt there were some great catches in RC1. Just like
>>> it should happen!
>>>
>>> On Wed, Jun 30, 2021 at 12:29 PM Mick Semb Wever  wrote:
>>>
>>>>
>>>> The Cassandra team is pleased to announce the release of Apache
>>>> Cassandra version 4.0-rc2.
>>>>
>>>> Apache Cassandra is a fully distributed database. It is the right
>>>> choice when you need scalability and high availability without compromising
>>>> performance.
>>>>  http://cassandra.apache.org/
>>>>
>>>> Downloads of source and binary distributions are listed in our download
>>>> section:
>>>>  http://cassandra.apache.org/download/
>>>>
>>>> This version is a release candidate[1] on the 4.0 series. As always,
>>>> please pay attention to the release notes[2] and let us know[3] if you were
>>>> to encounter any problem.
>>>>
>>>> Please note, the bintray location is now replaced with the ASF's JFrog
>>>> Artifactory location: https://apache.jfrog.io/artifactory/cassandra/
>>>>
>>>> Enjoy!
>>>>
>>>> [1]: CHANGES.txt
>>>> https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-4.0-rc2
>>>> [2]: NEWS.txt
>>>> https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-4.0-rc2
>>>> [3]: https://issues.apache.org/jira/browse/CASSANDRA
>>>>
>>>


Re: [RELEASE] Apache Cassandra 4.0-rc2 released

2021-06-30 Thread Abdul Patel
Great news!!
Is this production ready?

On Wed, Jun 30, 2021, 3:56 PM Patrick McFadin  wrote:

> Congrats to everyone that worked on this iteration. If you haven't looked
> at the CHANGES.txt there were some great catches in RC1. Just like it
> should happen!
>
> On Wed, Jun 30, 2021 at 12:29 PM Mick Semb Wever  wrote:
>
>>
>> The Cassandra team is pleased to announce the release of Apache Cassandra
>> version 4.0-rc2.
>>
>> Apache Cassandra is a fully distributed database. It is the right choice
>> when you need scalability and high availability without compromising
>> performance.
>>  http://cassandra.apache.org/
>>
>> Downloads of source and binary distributions are listed in our download
>> section:
>>  http://cassandra.apache.org/download/
>>
>> This version is a release candidate[1] on the 4.0 series. As always,
>> please pay attention to the release notes[2] and let us know[3] if you were
>> to encounter any problem.
>>
>> Please note, the bintray location is now replaced with the ASF's JFrog
>> Artifactory location: https://apache.jfrog.io/artifactory/cassandra/
>>
>> Enjoy!
>>
>> [1]: CHANGES.txt
>> https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-4.0-rc2
>> [2]: NEWS.txt
>> https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-4.0-rc2
>> [3]: https://issues.apache.org/jira/browse/CASSANDRA
>>
>


Encryption at rest

2020-06-24 Thread Abdul Patel
Team,

Do we have an option in open-source Cassandra to do encryption at rest?


Re: Running select against cassandra

2020-02-06 Thread Abdul Patel
Thanks, all, for the valuable inputs.
I agree we need to have the query defined and then plan the table schema, but
the server has been live in production for 2 years now and this is a new
requirement, so changing the schema is not an option, and a secondary index is
also a bad idea.

I was thinking of going with a materialized view, or seeing how the select
performs in non-prod, and seeing which fares better.
So I wanted to see if we can do anything in the existing schema other than that.
The COPY option was also discussed, but COPY doesn't support a WHERE clause.


On Thursday, February 6, 2020, Reid Pinchback 
wrote:

> I defer to Sean’s comment on materialized views.  I’m more familiar with
> DynamoDB on that front, where you do this pretty routinely.  I was curious
> so I went looking. This appears to be the C* Jira that points to many of
> the problem points:
>
>
>
> https://issues.apache.org/jira/browse/CASSANDRA-13826
>
>
>
> Abdul, you’d probably want to refer to that or similar info.  Could be
> that the more practical resolution is to just have the client write the
> data twice, if there are two very different query patterns to support.
> Writes usually have quite low latency in C*, so double-writing may be less
> of a performance hit, and later drag on memory on I/O, than a query model
> that makes you browse through more data than necessary.
>
>
>
> *From: *"Durity, Sean R" 
> *Reply-To: *"user@cassandra.apache.org" 
> *Date: *Thursday, February 6, 2020 at 4:24 PM
> *To: *"user@cassandra.apache.org" 
> *Subject: *RE: [EXTERNAL] Re: Running select against cassandra
>
>
>
> *Message from External Sender*
>
> Reid is right. You build the tables to easily answer the queries you want.
> So, start with the query! I inferred a query for you based on what you
> mentioned. If my inference is wrong, the table structure is likely wrong,
> too.
>
>
>
> So, what kind of query do you want to run?
>
>
>
> (NOTE: a select count(*) that is not restricted to within a single
> partition is a very bad option. Don’t do that)
>
>
>
> The query for my table below is simply:
>
> select user_count [, other columns] from users_by_day where date = ? and
> hour = ? and minute = ?
>
>
>
>
>
> Sean Durity
>
>
>
> *From:* Reid Pinchback 
> *Sent:* Thursday, February 6, 2020 4:10 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: [EXTERNAL] Re: Running select against cassandra
>
>
>
> Abdul,
>
>
>
> When in doubt, have a query model that immediately feeds you exactly what
> you are looking for. That’s kind of the data model philosophy that you want
> to shoot for as much as feasible with C*.
>
>
>
> The point of Sean’s table isn’t the similarity to yours, it is how he has
> it keyed because it suits a partition structure much better aligned with
> what you want to request.  So I’d say yes, if a materialized view is how
> you want to achieve a denormalized state where the query model directly
> supports giving you want you want to query for, that sounds like an
> appropriate option to consider.  You might want a composite partition key
> for having an efficient selection of narrow time ranges.
>
>
>
> *From: *Abdul Patel 
> *Reply-To: *"user@cassandra.apache.org" 
> *Date: *Thursday, February 6, 2020 at 2:42 PM
> *To: *"user@cassandra.apache.org" 
> *Subject: *Re: [EXTERNAL] Re: Running select against cassandra
>
>
>
> *Message from External Sender*
>
> this is the schema similar to what we have , they want to get user
> connected  - concurrent count for every say 1-5 minutes.
>
> i am thinking will simple select will have performance issue or we can go
> for materialized views ?
>
>
>
> CREATE TABLE  usr_session (
>
> userid bigint,
>
> session_usr text,
>
> last_access_time timestamp,
>
> login_time timestamp,
>
> status int,
>
> PRIMARY KEY (userid, session_usr)
>
> ) WITH CLUSTERING ORDER BY (session_usr ASC)
>
>
>
>
>
> On Thu, Feb 6, 2020 at 2:09 PM Durity, Sean R 
> wrote:
>
> Do you only need the current count or do you want to keep the historical
> counts also? By active users, does that mean some kind of user that the
> application tracks (as opposed to the Cassandra user connected to the
> cluster)?
>
>
>
> I would consider a table like this for tracking active users through time:
>
>
>
> Create table users_by_day (
>
> app_date date,
>
> hour integer,
>
> minute integer,
>
> user_count integer,
>
> longest_login_user text,
>
> longest_login_seconds integer,
>
> last_login datetime,
>
> last_login_user text )
>
> primary key (app_date,

Re: [EXTERNAL] Re: Running select against cassandra

2020-02-06 Thread Abdul Patel
This is a schema similar to what we have; they want to get the number of users
connected (a concurrent count) every, say, 1-5 minutes.
I am wondering whether a simple select will have performance issues, or whether
we should go with materialized views.

CREATE TABLE usr_session (
    userid bigint,
    session_usr text,
    last_access_time timestamp,
    login_time timestamp,
    status int,
    PRIMARY KEY (userid, session_usr)
) WITH CLUSTERING ORDER BY (session_usr ASC);



On Thu, Feb 6, 2020 at 2:09 PM Durity, Sean R 
wrote:

> Do you only need the current count or do you want to keep the historical
> counts also? By active users, does that mean some kind of user that the
> application tracks (as opposed to the Cassandra user connected to the
> cluster)?
>
>
>
> I would consider a table like this for tracking active users through time:
>
>
>
> Create table users_by_day (
>
> app_date date,
>
> hour integer,
>
> minute integer,
>
> user_count integer,
>
> longest_login_user text,
>
> longest_login_seconds integer,
>
> last_login datetime,
>
> last_login_user text )
>
> primary key (app_date, hour, minute);
>
>
>
> Then, your reporting can easily select full days or a specific, one-minute
> slice. Of course, the app would need to have a timer and write out the
> data. I would also suggest a TTL on the data so that you only keep what you
> need (a week, a year, whatever). Of course, if your reporting requires
> different granularities, you could consider a different time bucket for the
> table (by hour, by week, etc.)
>
>
>
>
>
> Sean Durity – Staff Systems Engineer, Cassandra
>
>
>
> *From:* Abdul Patel 
> *Sent:* Thursday, February 6, 2020 1:54 PM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: Running select against cassandra
>
>
>
> Its sort of user connected, app team needa number of active users
> connected say  every 1 to 5 mins.
>
> The timeout at app end is 120ms.
>
>
>
>
>
> On Thursday, February 6, 2020, Michael Shuler 
> wrote:
>
> You'll have to be more specific. What is your table schema and what is the
> SELECT query? What is the normal response time?
>
> As a basic guide for your general question, if the query is something sort
> of irrelevant that should be stored some other way, like a total row count,
> or most any SELECT that requires ALLOW FILTERING, you're doing it wrong and
> should re-evaluate your data model.
>
> 1 query per minute is a minuscule fraction of the basic capacity of
> queries per minute that a Cassandra cluster should be able to handle with
> good data modeling and table-relevant query. All depends on the data model
> and query.
>
> Michael
>
> On 2/6/20 12:20 PM, Abdul Patel wrote:
>
> Hi,
>
> Is it advisable to run select query to fetch every minute to grab data
> from cassandra for reporting purpose, if no then whats the alternative?
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>
>
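The per-minute bucketing described above could be driven app-side by a small helper (hypothetical, not from the thread) that maps a timestamp to the (app_date, hour, minute) key of the proposed users_by_day table:

```python
from datetime import datetime, timezone

def minute_bucket(ts: datetime) -> tuple:
    """Map a timestamp to the (app_date, hour, minute) key of the
    hypothetical users_by_day table."""
    ts = ts.astimezone(timezone.utc)  # normalize so bucket boundaries are unambiguous
    return ts.date().isoformat(), ts.hour, ts.minute

# A timer in the app would call this once a minute and INSERT the current
# user_count under the returned key, with a TTL matching the retention period.
bucket = minute_bucket(datetime(2020, 2, 6, 18, 30, 45, tzinfo=timezone.utc))
```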


Re: Running select against cassandra

2020-02-06 Thread Abdul Patel
Also, is a materialized view good for production use?
We are on 3.11.4.

On Thursday, February 6, 2020, Abdul Patel  wrote:

> Its sort of user connected, app team needa number of active users
> connected say  every 1 to 5 mins.
> The timeout at app end is 120ms.
>
>
>
> On Thursday, February 6, 2020, Michael Shuler 
> wrote:
>
>> You'll have to be more specific. What is your table schema and what is
>> the SELECT query? What is the normal response time?
>>
>> As a basic guide for your general question, if the query is something
>> sort of irrelevant that should be stored some other way, like a total row
>> count, or most any SELECT that requires ALLOW FILTERING, you're doing it
>> wrong and should re-evaluate your data model.
>>
>> 1 query per minute is a minuscule fraction of the basic capacity of
>> queries per minute that a Cassandra cluster should be able to handle with
>> good data modeling and table-relevant query. All depends on the data model
>> and query.
>>
>> Michael
>>
>> On 2/6/20 12:20 PM, Abdul Patel wrote:
>>
>>> Hi,
>>>
>>> Is it advisable to run select query to fetch every minute to grab data
>>> from cassandra for reporting purpose, if no then whats the alternative?
>>>
>>>
>>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>>


Re: Running select against cassandra

2020-02-06 Thread Abdul Patel
It's a sort of users-connected metric; the app team needs the number of active
users connected, say, every 1 to 5 minutes.
The timeout at the app end is 120ms.



On Thursday, February 6, 2020, Michael Shuler 
wrote:

> You'll have to be more specific. What is your table schema and what is the
> SELECT query? What is the normal response time?
>
> As a basic guide for your general question, if the query is something sort
> of irrelevant that should be stored some other way, like a total row count,
> or most any SELECT that requires ALLOW FILTERING, you're doing it wrong and
> should re-evaluate your data model.
>
> 1 query per minute is a minuscule fraction of the basic capacity of
> queries per minute that a Cassandra cluster should be able to handle with
> good data modeling and table-relevant query. All depends on the data model
> and query.
>
> Michael
>
> On 2/6/20 12:20 PM, Abdul Patel wrote:
>
>> Hi,
>>
>> Is it advisable to run select query to fetch every minute to grab data
>> from cassandra for reporting purpose, if no then whats the alternative?
>>
>>
>>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Running select against cassandra

2020-02-06 Thread Abdul Patel
Hi,

Is it advisable to run a select query every minute to grab data from Cassandra
for reporting purposes? If not, what's the alternative?


Re: Cassandra 4 alpha/alpha2

2019-10-31 Thread Abdul Patel
Looks like I am messing up or missing something; I will revisit again.

On Thursday, October 31, 2019, Stefan Miklosovic <
stefan.mikloso...@instaclustr.com> wrote:

> Hi,
>
> I have tested both alpha and alpha2 and 3.11.5 on Centos 7.7.1908 and
> all went fine (I have some custom images for my own purposes).
>
> Update between alpha and alpha2 was just about mere version bump.
>
> Cheers
>
> On Thu, 31 Oct 2019 at 20:40, Abdul Patel  wrote:
> >
> > Hey Everyone
> >
> > Did anyone was successfull to install either alpha or alpha2 version for
> cassandra 4.0?
> > Found 2 issues :
> > 1> cassandra-env.sh:
> > JAVA_VERSION varianle is not defined.
> > Jvm-server.options file is not defined.
> >
> > This is fixable and after adding those , the error for cassandra-env.sh
> errora went away.
> >
> > 2> second and major issue the cassandea binary when i try to start says
> syntax error.
> >
> > /bin/cassandea: line 198:exec: : not found.
> >
> > Anyone has any idea on second issue?
> >
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Re: Cassandra 4 alpha/alpha2

2019-10-31 Thread Abdul Patel
Centos 7.6


On Thursday, October 31, 2019, Jon Haddad  wrote:

> What artifact did you use and what OS are you on?
>
> On Thu, Oct 31, 2019 at 12:40 PM Abdul Patel  wrote:
>
>> Hey Everyone
>>
>> Did anyone was successfull to install either alpha or alpha2 version for
>> cassandra 4.0?
>> Found 2 issues :
>> 1> cassandra-env.sh:
>> JAVA_VERSION varianle is not defined.
>> Jvm-server.options file is not defined.
>>
>> This is fixable and after adding those , the error for cassandra-env.sh
>> errora went away.
>>
>> 2> second and major issue the cassandea binary when i try to start says
>> syntax error.
>>
>> /bin/cassandea: line 198:exec: : not found.
>>
>> Anyone has any idea on second issue?
>>
>>


Cassandra 4 alpha/alpha2

2019-10-31 Thread Abdul Patel
Hey Everyone

Was anyone able to successfully install either the alpha or alpha2 version
of Cassandra 4.0?
Found 2 issues:
1> cassandra-env.sh:
The JAVA_VERSION variable is not defined.
The jvm-server.options file is not defined.

This is fixable, and after adding those, the cassandra-env.sh errors went
away.

2> Second and major issue: the cassandra binary, when I try to start it,
reports a syntax error:

/bin/cassandra: line 198: exec: : not found.

Anyone has any idea on second issue?


Re: [RELEASE] Apache Cassandra 4.0-alpha1 released

2019-10-24 Thread Abdul Patel
Has anyone used this yet?
Any initial thoughts?
When will the full and final production-ready version be coming out?

On Sunday, September 8, 2019, Michael Shuler  wrote:

> The Cassandra team is pleased to announce the release of Apache Cassandra
> version 4.0-alpha1.
>
> Apache Cassandra is a fully distributed database. It is the right choice
> when you need scalability and high availability without compromising
> performance.
>
>  http://cassandra.apache.org/
>
> Downloads of source and binary distributions for 4.0-alpha1:
>
>
> http://www.apache.org/dyn/closer.lua/cassandra/4.0-alpha1/
> apache-cassandra-4.0-alpha1-bin.tar.gz
>
> http://www.apache.org/dyn/closer.lua/cassandra/4.0-alpha1/
> apache-cassandra-4.0-alpha1-src.tar.gz
>
> Debian and Redhat configurations
>
>  sources.list:
>  deb http://www.apache.org/dist/cassandra/debian 40x main
>
>  yum config:
>  baseurl=https://www.apache.org/dist/cassandra/redhat/40x/
>
> See http://cassandra.apache.org/download/ for full install instructions.
> Since this is the first alpha release, it will not be present on the
> download page.
>
> This is an ALPHA version! It is not intended for production use, however
> the project would appreciate your testing and feedback to make the final
> release better. As always, please pay attention to the release notes[2] and
> let us know[3] if you encounter any problems.
>
> Enjoy!
>
> [1]: CHANGES.txt: https://gitbox.apache.org/repo
> s/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/
> tags/cassandra-4.0-alpha1
> [2]: NEWS.txt: https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_
> plain;f=NEWS.txt;hb=refs/tags/cassandra-4.0-alpha1
> [3]: JIRA: https://issues.apache.org/jira/browse/CASSANDRA
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Re: Nodetool snapshot

2019-09-19 Thread Abdul Patel
Thanks, I guess I have both.
So can we have either one?
If I keep auto_snapshot, can I stop taking manual nodetool snapshots?
Worst case scenario, if I wish to restore a snapshot, which one will be the
best option?
Also, if we restore a snapshot, do we need to have the snapshot on all nodes?


On Thursday, September 19, 2019, Jeff Jirsa  wrote:

> You probably have auto_snapshot enabled, which takes snapshots when you do
> certain things. You can disable that if you dont need it, but it protects
> you against things like accidentally dropping / truncating a table.
>
> You may also be doing snapshots manually - if you do this, you can
> 'nodetool clearsnapshot' to free up space.
>
>
> On Thu, Sep 19, 2019 at 10:54 AM Abdul Patel  wrote:
>
>> Hey All,
>>
>> I found recently that the nodetool snapshot folder is holding almost
>> 120GB of files when my actual keyspace folder has only 20GB.
>> Do we need to change any parameter to avoid this?
>> Is this normal?
>> I am on version 3.11.4.
>>
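A note on why the snapshot directory can look so much bigger than the live keyspace: a snapshot is a set of hardlinks to the SSTable files that existed at snapshot time, so it is nearly free when taken, but it keeps old SSTables on disk after compaction replaces them. A minimal sketch of that hardlink behaviour (file names are made up):

```python
import os
import tempfile

# Sketch: a Cassandra snapshot hardlinks the live SSTables. A hardlink
# shares the original file's inode and data blocks, so taking the snapshot
# costs almost nothing -- but once compaction deletes the original SSTable,
# the snapshot link still pins the data, which is how a snapshots/ directory
# can hold far more than the live keyspace directory.
with tempfile.TemporaryDirectory() as d:
    sstable = os.path.join(d, "mc-1-big-Data.db")  # stand-in SSTable
    with open(sstable, "wb") as f:
        f.write(b"x" * 1024)

    snap = os.path.join(d, "snapshot-mc-1-big-Data.db")
    os.link(sstable, snap)          # essentially what 'nodetool snapshot' does

    print(os.stat(sstable).st_nlink)  # 2: same data blocks, two names

    os.remove(sstable)              # "compaction" removes the original
    print(os.stat(snap).st_size)    # 1024: the snapshot still pins the data
```

On the real system, `nodetool listsnapshots` shows what is being held, and `nodetool clearsnapshot` releases it.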
>


Nodetool snapshot

2019-09-19 Thread Abdul Patel
Hey All,

I found recently that the nodetool snapshot folder is holding almost 120GB
of files when my actual keyspace folder has only 20GB.
Do we need to change any parameter to avoid this?
Is this normal?
I am on version 3.11.4.


Logon trigger

2019-08-01 Thread Abdul Patel
Hi All,

Regular databases have a logon trigger; do we have a similar option in
Cassandra?
I would like to implement IP restrictions in Cassandra, through this or any
other method.


Re: All time blocked in nodetool tpstats

2019-04-10 Thread Abdul Patel
Do we have any recommendations on concurrent_reads and concurrent_writes
settings?
Mine is an 18-node, 3-DC cluster with 20-core CPUs.
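For a starting point, the comments in cassandra.yaml suggest roughly 16 × the number of data drives for concurrent_reads and 8 × the number of CPU cores for concurrent_writes. A tiny sketch of that arithmetic (the core count is from this thread; the drive count is a placeholder):

```python
# Rule-of-thumb sizing from the cassandra.yaml comments: reads tend to block
# on disk, writes on CPU/commitlog. Treat the output as a starting point to
# benchmark against, not a hard recommendation.
def suggested_concurrency(cpu_cores, data_drives):
    return {
        "concurrent_reads": 16 * data_drives,
        "concurrent_writes": 8 * cpu_cores,
    }

print(suggested_concurrency(cpu_cores=20, data_drives=4))
# {'concurrent_reads': 64, 'concurrent_writes': 160}
```

As the replies below note, values much above 128 rarely help and can saturate the disk or IO scheduler.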

On Wednesday, April 10, 2019, Anthony Grasso 
wrote:

> Hi Abdul,
>
> Usually we get no noticeable improvement from tuning concurrent_reads and
> concurrent_writes above 128. I generally try to keep concurrent_reads no
> higher than 64 and concurrent_writes no higher than 128. Increasing
> the values beyond that, you might start running into issues where the kernel
> IO scheduler and/or the disk become saturated. As Paul mentioned, it will
> depend on the size of your nodes though.
>
> If the client is timing out, it is likely that the node that is selected
> as the coordinator for the read has a resource contention somewhere. The
> root cause is usually due to a number of things going on though. As Paul
> mentioned, one of the issues could be the query design. It is worth
> investigating if a particular read query is timing out.
>
> I would also inspect the Cassandra logs and garbage collection logs on the
> node where you are seeing the timeouts. The things to look out for is high
> garbage collection frequency, long garbage collection pauses, and high
> tombstone read warnings.
>
> Regards,
> Anthony
>
> On Thu, 11 Apr 2019 at 06:01, Abdul Patel  wrote:
>
>> Yes, the queries are all select queries, as it is more of a read-intensive
>> app.
>> Last night I rebooted the cluster and today they are fine (I know it's
>> temporary), as I still see "All time blocked" values.
>> I am thinking of increasing concurrent
>>
>> On Wednesday, April 10, 2019, Paul Chandler  wrote:
>>
>>> Hi Abdul,
>>>
>>> When I have seen dropped messages, I normally double check to ensure the
>>> node not CPU bound.
>>>
>>> If you have a high CPU idle value, then it is likely that tuning the
>>> thread counts will help.
>>>
>>> I normally start with concurrent_reads and concurrent_writes, so in your
>>> case as reads are being dropped then increase concurrent_reads, I normally
>>> change it to 96 to start with, but it will depend on size of your nodes.
>>>
>>> Otherwise it might be badly designed queries, have you investigated
>>> which queries are producing the client timeouts?
>>>
>>> Regards
>>>
>>> Paul Chandler
>>>
>>>
>>>
>>> > On 9 Apr 2019, at 18:58, Abdul Patel  wrote:
>>> >
>>> > Hi,
>>> >
>>> > My nodetool tpstats is showing high "All time blocked" numbers and also
>>> > dropped read messages at 400.
>>> > Clients are experiencing high timeouts.
>>> > A few online forums I checked recommend increasing
>>> > native_transport_max_threads.
>>> > As of now it is commented out with a value of 128.
>>> > Is it advisable to increase this, and can this also fix the timeout issue?
>>> >
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>>
>>>


All time blocked in nodetool tpstats

2019-04-10 Thread Abdul Patel
Yes, the queries are all select queries, as it is more of a read-intensive
app.
Last night I rebooted the cluster and today they are fine (I know it's
temporary), as I still see "All time blocked" values.
I am thinking of increasing concurrent

On Wednesday, April 10, 2019, Paul Chandler  wrote:

> Hi Abdul,
>
> When I have seen dropped messages, I normally double check to ensure the
> node not CPU bound.
>
> If you have a high CPU idle value, then it is likely that tuning the
> thread counts will help.
>
> I normally start with concurrent_reads and concurrent_writes, so in your
> case as reads are being dropped then increase concurrent_reads, I normally
> change it to 96 to start with, but it will depend on size of your nodes.
>
> Otherwise it might be badly designed queries, have you investigated which
> queries are producing the client timeouts?
>
> Regards
>
> Paul Chandler
>
>
>
> > On 9 Apr 2019, at 18:58, Abdul Patel  wrote:
> >
> > Hi,
> >
> > My nodetool tpstats is showing high "All time blocked" numbers and also
> > dropped read messages at 400.
> > Clients are experiencing high timeouts.
> > A few online forums I checked recommend increasing
> > native_transport_max_threads.
> > As of now it is commented out with a value of 128.
> > Is it advisable to increase this, and can this also fix the timeout issue?
> >
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Re: All time blocked in nodetool tpstats

2019-04-10 Thread Abdul Patel
Yes, the queries are all select queries, as it is more of a read-intensive
app.
Last night I rebooted the cluster and today they are fine (I know it's
temporary), as I still see "All time blocked" values.
I am thinking of increasing concurrent reads and writes to 256, and native
transport threads to 256, and seeing how it performs.

On Wednesday, April 10, 2019, Paul Chandler  wrote:

> Hi Abdul,
>
> When I have seen dropped messages, I normally double check to ensure the
> node not CPU bound.
>
> If you have a high CPU idle value, then it is likely that tuning the
> thread counts will help.
>
> I normally start with concurrent_reads and concurrent_writes, so in your
> case as reads are being dropped then increase concurrent_reads, I normally
> change it to 96 to start with, but it will depend on size of your nodes.
>
> Otherwise it might be badly designed queries, have you investigated which
> queries are producing the client timeouts?
>
> Regards
>
> Paul Chandler
>
>
>
> > On 9 Apr 2019, at 18:58, Abdul Patel  wrote:
> >
> > Hi,
> >
> > My nodetool tpstats is showing high "All time blocked" numbers and also
> > dropped read messages at 400.
> > Clients are experiencing high timeouts.
> > A few online forums I checked recommend increasing
> > native_transport_max_threads.
> > As of now it is commented out with a value of 128.
> > Is it advisable to increase this, and can this also fix the timeout issue?
> >
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


All time blocked in nodetool tpstats

2019-04-09 Thread Abdul Patel
Hi,

My nodetool tpstats is showing high "All time blocked" numbers and also
dropped read messages at 400.
Clients are experiencing high timeouts.
A few online forums I checked recommend increasing
native_transport_max_threads.
As of now it is commented out with a value of 128.
Is it advisable to increase this, and can this also fix the timeout issue?


Re: Cassandra config in table

2019-02-25 Thread Abdul Patel
Thanks!

On Monday, February 25, 2019, Jeff Jirsa  wrote:

> Not in any released version, but something similar to that is coming in 4.0
>
> --
> Jeff Jirsa
>
>
> > On Feb 25, 2019, at 7:22 AM, Abdul Patel  wrote:
> >
> > Do we have any system table which stores all the config details we have
> > in the yaml or cassandra-env.sh?
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Cassandra config in table

2019-02-25 Thread Abdul Patel
Do we have any system table which stores all the config details we have in
the yaml or cassandra-env.sh?


Re: Ansible scripts for Cassandra to help with automation needs

2019-02-14 Thread Abdul Patel
One idea would be a rolling restart of the complete cluster; that script
would be a huge help.
I also just read a blog saying The Last Pickle group has come up with a
tool called 'cstar', which can help with rolling restarts.


On Thursday, February 14, 2019, Jeff Jirsa  wrote:

>
>
>
> On Feb 13, 2019, at 9:51 PM, Kenneth Brotman 
> wrote:
>
> I want to generate a variety of Ansible scripts to share with the Apache
> Cassandra community.  I’ll put them in a Github repository.  Just email me
> offline what scripts would help the most.
>
>
>
> Does this exist already?  I can’t find it.  Let me know if it does.
>
>
> Not aware of any repo that does this, but it’s a good idea
>
>
>
> If not, let’s put it together for the community.  Maybe we’ll end up with
> a download right on the Apache Cassandra web site or packaged with future
> releases of Cassandra.
>
>
>
> Kenneth Brotman
>
>
>
> P.S.  Terraform is next!
>
>


Read and write trasanction per sec

2019-02-10 Thread Abdul Patel
Hi

Is there a way to calculate or measure reads/sec and writes/sec in
Cassandra?
I have the Prometheus tool to capture metrics; it has a read count and a
write count, but I am not sure whether those are relevant for measuring the
cluster's read and write processing per second.
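The read count and write count metrics are monotonically increasing counters, so reads/sec is just the difference between two samples divided by the scrape interval, which is what Prometheus's rate() computes. A sketch with made-up sample values:

```python
# Turn two samples of a monotonic counter (e.g. a table's ReadCount metric)
# into an average operations/sec over the sampling window. The counter
# values below are fabricated for illustration.
def ops_per_second(previous_count, current_count, interval_seconds):
    return (current_count - previous_count) / interval_seconds

# Hypothetical ReadCount values scraped 60 seconds apart:
print(ops_per_second(1_204_000, 1_234_000, 60))  # 500.0 reads/sec
```

In PromQL the equivalent is rate() over the counter metric, which also handles counter resets across node restarts.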


Cassandra 4.0

2018-10-23 Thread Abdul Patel
Hi all,

Any idea when 4.0 is planned to release?


Nodetool info for heap usage

2018-10-22 Thread Abdul Patel
Hi All,

Is the nodetool info output accurate for monitoring memory usage? Initially,
with 3.1.0, we monitored nodetool info for heap usage and it never
reported it as high; after upgrading to 3.11.2 we started seeing high usage
in nodetool info, and after a later upgrade to 3.11.3, the same behaviour.
I just wanted to make sure whether monitoring heap memory usage via nodetool
info is correct, or whether it is actually a memory leak issue in 3.11.2 and
3.11.3.
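nodetool info reports the heap as used/total in MB on its "Heap Memory" line. A sketch of turning that line into a usage fraction for monitoring (the sample line is illustrative; check the exact formatting your version prints):

```python
# Parse the "Heap Memory (MB) : used / total" line from `nodetool info`
# into a 0..1 usage fraction. This is a point-in-time value: right before
# a GC cycle it is normal for it to look high, so trend it over time
# rather than alerting on single samples.
def heap_fraction(info_line):
    used, total = (float(part) for part in info_line.split(":")[1].split("/"))
    return used / total

sample = "Heap Memory (MB)       : 1536.00 / 3970.00"  # illustrative output
print(round(heap_fraction(sample), 3))
```

A single high sample therefore does not distinguish normal GC behaviour from a leak; a steadily rising floor across GC cycles does.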


Re: Tracing in cassandra

2018-10-12 Thread Abdul Patel
Yes, with range queries it is timing out. One question: the where condition
is on the primary key rather than the clustering key.

On Friday, October 12, 2018, Nitan Kainth  wrote:

> Did it still timeout?
>
> Sent from my iPhone
>
> On Oct 12, 2018, at 1:11 PM, Abdul Patel  wrote:
>
> With limit 11 this is the query:
> Select * from table where status=0 and token(user_id) >= token(126838) and
> token(user_id) <= token
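A full scan like the one quoted above is usually broken into many small token-range queries rather than one statement with a large LIMIT. A sketch of computing the subrange bounds, assuming the default Murmur3Partitioner (tokens span -2^63 to 2^63 - 1):

```python
# Split the full Murmur3 token ring into contiguous (start, end] subranges.
# Each subrange then becomes its own small query:
#   SELECT ... WHERE token(user_id) > ? AND token(user_id) <= ?
MIN_TOKEN, MAX_TOKEN = -2**63, 2**63 - 1

def token_subranges(splits):
    step = (MAX_TOKEN - MIN_TOKEN) // splits
    bounds = [MIN_TOKEN + i * step for i in range(splits)] + [MAX_TOKEN]
    return list(zip(bounds[:-1], bounds[1:]))

ranges = token_subranges(4)
print(len(ranges))                                            # 4
print(ranges[0][0] == MIN_TOKEN, ranges[-1][1] == MAX_TOKEN)  # True True
```

Aligning the splits with the token ranges each node owns keeps every subquery local to one replica set, which is what tools like Spark's Cassandra connector do.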
> On Friday, October 12, 2018, Abdul Patel  wrote:
>
>> Let me try with limit 11. We have an 18-node cluster; no nodes are down.
>>
>> On Friday, October 12, 2018, Nitan Kainth  wrote:
>>
>>> Try query with partition key selection in where clause. But time for
>>> limit 11 shouldn’t fail. Are all nodes up? Do you see any corruption in ay
>>> sstable?
>>>
>>> Sent from my iPhone
>>>
>>> On Oct 12, 2018, at 11:40 AM, Abdul Patel  wrote:
>>>
>>> Sean,
>>>
>>> here it is :
>>> CREATE TABLE Keyspace.tblname (
>>>     user_id bigint,
>>>     session_id text,
>>>     application_guid text,
>>>     last_access_time timestamp,
>>>     login_time timestamp,
>>>     status int,
>>>     terminated_by text,
>>>     update_time timestamp,
>>>     PRIMARY KEY (user_id, session_id)
>>> ) WITH CLUSTERING ORDER BY (session_id ASC)
>>>
>>> They also see timeouts with limit 11 as well, so is it better to remove
>>> the limit option? Or what is the best way to query such a schema?
>>>
>>> On Fri, Oct 12, 2018 at 11:05 AM Durity, Sean R <
>>> sean_r_dur...@homedepot.com> wrote:
>>>
>>>> Cross-partition = multiple partitions
>>>>
>>>>
>>>>
>>>> Simple example:
>>>>
>>>> Create table customer (
>>>>
>>>> Customerid int,
>>>>
>>>> Name text,
>>>>
>>>> Lastvisit date,
>>>>
>>>> Phone text,
>>>>
>>>> Primary key (customerid) );
>>>>
>>>>
>>>>
>>>> Query
>>>>
>>>> Select customerid from customer limit 5000;
>>>>
>>>>
>>>>
>>>> The query is asking for 5000 different partitions to be selected across
>>>> the cluster. This is a very EXPENSIVE query for Cassandra, especially as
>>>> the number of nodes goes up. Typically, you want to query a single
>>>> partition. Read timeouts are usually caused by queries that are selecting
>>>> many partitions or a very large partition. That is why a schema for the
>>>> involved table could help.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Sean Durity
>>>>
>>>>
>>>>
>>>> *From:* Abdul Patel 
>>>> *Sent:* Friday, October 12, 2018 10:04 AM
>>>> *To:* user@cassandra.apache.org
>>>> *Subject:* [EXTERNAL] Re: Tracing in cassandra
>>>>
>>>>
>>>>
>>>> Could you elaborate on cross-partition queries?
>>>>
>>>> On Friday, October 12, 2018, Durity, Sean R <
>>>> sean_r_dur...@homedepot.com> wrote:
>>>>
>>>> I suspect you are doing a cross-partition query, which will not scale
>>>> well (as you can see). What is the schema for the table involved?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Sean Durity
>>>>
>>>>
>>>>
>>>> *From:* Abdul Patel 
>>>> *Sent:* Thursday, October 11, 2018 5:54 PM
>>>> *To:* a...@instaclustr.com
>>>> *Cc:* user@cassandra.apache.org
>>>> *Subject:* [EXTERNAL] Re: Tracing in cassandra
>>>>
>>>>
>>>>
>>>> Query:
>>>>
>>>> SELECT * FROM keyspace.tablename WHERE user_id = 390797583 LIMIT 5000;
>>>>
>>>> Error: ReadTimeout: Error from server: code=1200 [Coordinator node
>>>> timed out waiting for replica nodes' responses] message="Operation timed
>>>> out - received only 0 responses." info={'received_responses': 0,
>>>> 'required_responses': 1, 'consistency': 'ONE'}
>>>>
>>>> Trace for session e70ac650-cd9e-11e8-8e99-15807bff4dfd
>>>> (activity | source | source_elapsed in us | thread):
>>>>
>>>> Parsing SELECT * FROM keyspace.tablename WHERE user_id = 390797583
>>>>   LIMIT 5000; | 10.54.145 [trace output truncated]

Re: Tracing in cassandra

2018-10-12 Thread Abdul Patel
With limit 11 this is the query:
Select * from table where status=0 and token(user_id) >= token(126838) and
token(user_id) <= token

On Friday, October 12, 2018, Abdul Patel  wrote:

> Let me try with limit 11. We have an 18-node cluster; no nodes are down.
>
> On Friday, October 12, 2018, Nitan Kainth  wrote:
>
>> Try query with partition key selection in where clause. But time for
>> limit 11 shouldn’t fail. Are all nodes up? Do you see any corruption in ay
>> sstable?
>>
>> Sent from my iPhone
>>
>> On Oct 12, 2018, at 11:40 AM, Abdul Patel  wrote:
>>
>> Sean,
>>
>> here it is :
>> CREATE TABLE Keyspace.tblname (
>>     user_id bigint,
>>     session_id text,
>>     application_guid text,
>>     last_access_time timestamp,
>>     login_time timestamp,
>>     status int,
>>     terminated_by text,
>>     update_time timestamp,
>>     PRIMARY KEY (user_id, session_id)
>> ) WITH CLUSTERING ORDER BY (session_id ASC)
>>
>> They also see timeouts with limit 11 as well, so is it better to remove
>> the limit option? Or what is the best way to query such a schema?
>>
>> On Fri, Oct 12, 2018 at 11:05 AM Durity, Sean R <
>> sean_r_dur...@homedepot.com> wrote:
>>
>>> Cross-partition = multiple partitions
>>>
>>>
>>>
>>> Simple example:
>>>
>>> Create table customer (
>>>
>>> Customerid int,
>>>
>>> Name text,
>>>
>>> Lastvisit date,
>>>
>>> Phone text,
>>>
>>> Primary key (customerid) );
>>>
>>>
>>>
>>> Query
>>>
>>> Select customerid from customer limit 5000;
>>>
>>>
>>>
>>> The query is asking for 5000 different partitions to be selected across
>>> the cluster. This is a very EXPENSIVE query for Cassandra, especially as
>>> the number of nodes goes up. Typically, you want to query a single
>>> partition. Read timeouts are usually caused by queries that are selecting
>>> many partitions or a very large partition. That is why a schema for the
>>> involved table could help.
>>>
>>>
>>>
>>>
>>>
>>> Sean Durity
>>>
>>>
>>>
>>> *From:* Abdul Patel 
>>> *Sent:* Friday, October 12, 2018 10:04 AM
>>> *To:* user@cassandra.apache.org
>>> *Subject:* [EXTERNAL] Re: Tracing in cassandra
>>>
>>>
>>>
>>> Could you elaborate on cross-partition queries?
>>>
>>> On Friday, October 12, 2018, Durity, Sean R 
>>> wrote:
>>>
>>> I suspect you are doing a cross-partition query, which will not scale
>>> well (as you can see). What is the schema for the table involved?
>>>
>>>
>>>
>>>
>>>
>>> Sean Durity
>>>
>>>
>>>
>>> *From:* Abdul Patel 
>>> *Sent:* Thursday, October 11, 2018 5:54 PM
>>> *To:* a...@instaclustr.com
>>> *Cc:* user@cassandra.apache.org
>>> *Subject:* [EXTERNAL] Re: Tracing in cassandra
>>>
>>>
>>>
>>> Query:
>>>
>>> SELECT * FROM keyspace.tablename WHERE user_id = 390797583 LIMIT 5000;
>>>
>>> Error: ReadTimeout: Error from server: code=1200 [Coordinator node
>>> timed out waiting for replica nodes' responses] message="Operation timed
>>> out - received only 0 responses." info={'received_responses': 0,
>>> 'required_responses': 1, 'consistency': 'ONE'}
>>>
>>> Trace for session e70ac650-cd9e-11e8-8e99-15807bff4dfd
>>> (activity | source | source_elapsed in us | thread):
>>>
>>> Parsing SELECT * FROM keyspace.tablename WHERE user_id = 390797583
>>>   LIMIT 5000; | 10.54.145.32 | 4020 | Native-Transport-Requests-3
>>> Preparing statement | 10.54.145.32 | 5065 | Native-Transport-Requests-3
>>> Executing single-partition query on roles | 10.54.145.32 | 6171 | ReadStage-2
>>> Acquiring sstable references | 10.54.145.32 | [trace output truncated]

Re: Tracing in cassandra

2018-10-12 Thread Abdul Patel
Let me try with limit 11. We have an 18-node cluster; no nodes are down.

On Friday, October 12, 2018, Nitan Kainth  wrote:

> Try query with partition key selection in where clause. But time for limit
> 11 shouldn’t fail. Are all nodes up? Do you see any corruption in ay
> sstable?
>
> Sent from my iPhone
>
> On Oct 12, 2018, at 11:40 AM, Abdul Patel  wrote:
>
> Sean,
>
> here it is :
> CREATE TABLE Keyspace.tblname (
>     user_id bigint,
>     session_id text,
>     application_guid text,
>     last_access_time timestamp,
>     login_time timestamp,
>     status int,
>     terminated_by text,
>     update_time timestamp,
>     PRIMARY KEY (user_id, session_id)
> ) WITH CLUSTERING ORDER BY (session_id ASC)
>
> They also see timeouts with limit 11 as well, so is it better to remove
> the limit option? Or what is the best way to query such a schema?
>
> On Fri, Oct 12, 2018 at 11:05 AM Durity, Sean R <
> sean_r_dur...@homedepot.com> wrote:
>
>> Cross-partition = multiple partitions
>>
>>
>>
>> Simple example:
>>
>> Create table customer (
>>
>> Customerid int,
>>
>> Name text,
>>
>> Lastvisit date,
>>
>> Phone text,
>>
>> Primary key (customerid) );
>>
>>
>>
>> Query
>>
>> Select customerid from customer limit 5000;
>>
>>
>>
>> The query is asking for 5000 different partitions to be selected across
>> the cluster. This is a very EXPENSIVE query for Cassandra, especially as
>> the number of nodes goes up. Typically, you want to query a single
>> partition. Read timeouts are usually caused by queries that are selecting
>> many partitions or a very large partition. That is why a schema for the
>> involved table could help.
>>
>>
>>
>>
>>
>> Sean Durity
>>
>>
>>
>> *From:* Abdul Patel 
>> *Sent:* Friday, October 12, 2018 10:04 AM
>> *To:* user@cassandra.apache.org
>> *Subject:* [EXTERNAL] Re: Tracing in cassandra
>>
>>
>>
>> Could you elaborate on cross-partition queries?
>>
>> On Friday, October 12, 2018, Durity, Sean R 
>> wrote:
>>
>> I suspect you are doing a cross-partition query, which will not scale
>> well (as you can see). What is the schema for the table involved?
>>
>>
>>
>>
>>
>> Sean Durity
>>
>>
>>
>> *From:* Abdul Patel 
>> *Sent:* Thursday, October 11, 2018 5:54 PM
>> *To:* a...@instaclustr.com
>> *Cc:* user@cassandra.apache.org
>> *Subject:* [EXTERNAL] Re: Tracing in cassandra
>>
>>
>>
>> Query:
>>
>> SELECT * FROM keyspace.tablename WHERE user_id = 390797583 LIMIT 5000;
>>
>> Error: ReadTimeout: Error from server: code=1200 [Coordinator node
>> timed out waiting for replica nodes' responses] message="Operation timed
>> out - received only 0 responses." info={'received_responses': 0,
>> 'required_responses': 1, 'consistency': 'ONE'}
>>
>> Trace for session e70ac650-cd9e-11e8-8e99-15807bff4dfd
>> (activity | source | source_elapsed in us | thread):
>>
>> Parsing SELECT * FROM keyspace.tablename WHERE user_id = 390797583
>>   LIMIT 5000; | 10.54.145.32 | 4020 | Native-Transport-Requests-3
>> Preparing statement | 10.54.145.32 | 5065 | Native-Transport-Requests-3
>> Executing single-partition query on roles | 10.54.145.32 | 6171 | ReadStage-2
>> Acquiring sstable references | 10.54.145.32 | 6362 | ReadStage-2
>> Skipped 0/2 non-slice-intersecting sstables, included 0 due to tombstones
>>   | 10.54.145.32 | 6641 | ReadStage-2
>> Key cache hit for sstable 346 | 10.54.145.32 | 6955 | ReadStage-2
>> [trace output truncated]

Re: [EXTERNAL] Re: Tracing in cassandra

2018-10-12 Thread Abdul Patel
Sean,

here it is :
CREATE TABLE Keyspace.tblname (
    user_id bigint,
    session_id text,
    application_guid text,
    last_access_time timestamp,
    login_time timestamp,
    status int,
    terminated_by text,
    update_time timestamp,
    PRIMARY KEY (user_id, session_id)
) WITH CLUSTERING ORDER BY (session_id ASC)

They also see timeouts with limit 11 as well, so is it better to remove
the limit option? Or what is the best way to query such a schema?
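Sean's point in the quoted message can be made concrete: every distinct partition key hashes to its own position on the ring, so a "LIMIT 5000" scan over the whole table touches almost every node, while a single-user_id read touches one replica set. A toy sketch (the md5 hash and modulo stand in for the real token/replica mapping; the node count matches this thread's cluster):

```python
import hashlib

NODES = 18  # cluster size mentioned earlier in this thread

def owner(partition_key):
    # Stand-in for partitioner token -> replica placement (illustrative only).
    digest = hashlib.md5(str(partition_key).encode()).hexdigest()
    return int(digest, 16) % NODES

single_partition = {owner(390797583)}              # one partition key
cross_partition = {owner(k) for k in range(5000)}  # 5000 partition keys

print(len(single_partition))  # 1 node involved
print(len(cross_partition))   # nearly all 18 nodes involved
```

This is why the single-partition form (WHERE user_id = ?) stays fast as the cluster grows, while the scan form degrades.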

On Fri, Oct 12, 2018 at 11:05 AM Durity, Sean R 
wrote:

> Cross-partition = multiple partitions
>
>
>
> Simple example:
>
> Create table customer (
>
> Customerid int,
>
> Name text,
>
> Lastvisit date,
>
> Phone text,
>
> Primary key (customerid) );
>
>
>
> Query
>
> Select customerid from customer limit 5000;
>
>
>
> The query is asking for 5000 different partitions to be selected across
> the cluster. This is a very EXPENSIVE query for Cassandra, especially as
> the number of nodes goes up. Typically, you want to query a single
> partition. Read timeouts are usually caused by queries that are selecting
> many partitions or a very large partition. That is why a schema for the
> involved table could help.
>
>
>
>
>
> Sean Durity
>
>
>
> *From:* Abdul Patel 
> *Sent:* Friday, October 12, 2018 10:04 AM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: Tracing in cassandra
>
>
>
> Could you elaborate on cross-partition queries?
>
> On Friday, October 12, 2018, Durity, Sean R 
> wrote:
>
> I suspect you are doing a cross-partition query, which will not scale well
> (as you can see). What is the schema for the table involved?
>
>
>
>
>
> Sean Durity
>
>
>
> *From:* Abdul Patel 
> *Sent:* Thursday, October 11, 2018 5:54 PM
> *To:* a...@instaclustr.com
> *Cc:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: Tracing in cassandra
>
>
>
> Query:
>
> SELECT * FROM keyspace.tablename WHERE user_id = 390797583 LIMIT 5000;
>
> Error: ReadTimeout: Error from server: code=1200 [Coordinator node timed
> out waiting for replica nodes' responses] message="Operation timed out -
> received only 0 responses." info={'received_responses': 0,
> 'required_responses': 1, 'consistency': 'ONE'}
>
> Trace for session e70ac650-cd9e-11e8-8e99-15807bff4dfd
> (activity | source | source_elapsed in us | thread):
>
> Parsing SELECT * FROM keyspace.tablename WHERE user_id = 390797583
>   LIMIT 5000; | 10.54.145.32 | 4020 | Native-Transport-Requests-3
> Preparing statement | 10.54.145.32 | 5065 | Native-Transport-Requests-3
> Executing single-partition query on roles | 10.54.145.32 | 6171 | ReadStage-2
> Acquiring sstable references | 10.54.145.32 | 6362 | ReadStage-2
> Skipped 0/2 non-slice-intersecting sstables, included 0 due to tombstones
>   | 10.54.145.32 | 6641 | ReadStage-2
> Key cache hit for sstable 346 | 10.54.145.32 | 6955 | ReadStage-2
> Bloom filter allows skipping sstable 347 | 10.54.145.32 | 7202 | ReadStage-2
> Merged data from memtables and 2 sstables | 10.54.145.32 | 7386 | ReadStage-2
> Read 1 live and 0 tombstone cells | 10.54.145.32 | 7519 | ReadStage-2
> Executing single-partition query on roles | 10.54.145.32 | 7826 | ReadStage-4
> [trace output truncated]

Re: Tracing in cassandra

2018-10-12 Thread Abdul Patel
Could you elaborate on cross-partition queries?

On Friday, October 12, 2018, Durity, Sean R 
wrote:

> I suspect you are doing a cross-partition query, which will not scale well
> (as you can see). What is the schema for the table involved?
>
>
>
>
>
> Sean Durity
>
>
>
> *From:* Abdul Patel 
> *Sent:* Thursday, October 11, 2018 5:54 PM
> *To:* a...@instaclustr.com
> *Cc:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: Tracing in cassandra
>
>
>
> Query:
>
> SELECT * FROM keyspace.tablename WHERE user_id = 390797583 LIMIT 5000;
>
> Error: ReadTimeout: Error from server: code=1200 [Coordinator node timed
> out waiting for replica nodes' responses] message="Operation timed out -
> received only 0 responses." info={'received_responses': 0,
> 'required_responses': 1, 'consistency': 'ONE'}
>
> Trace for session e70ac650-cd9e-11e8-8e99-15807bff4dfd
> (activity | source | source_elapsed in us | thread):
>
> Parsing SELECT * FROM keyspace.tablename WHERE user_id = 390797583
>   LIMIT 5000; | 10.54.145.32 | 4020 | Native-Transport-Requests-3
> Preparing statement | 10.54.145.32 | 5065 | Native-Transport-Requests-3
> Executing single-partition query on roles | 10.54.145.32 | 6171 | ReadStage-2
> Acquiring sstable references | 10.54.145.32 | 6362 | ReadStage-2
> Skipped 0/2 non-slice-intersecting sstables, included 0 due to tombstones
>   | 10.54.145.32 | 6641 | ReadStage-2
> Key cache hit for sstable 346 | 10.54.145.32 | 6955 | ReadStage-2
> Bloom filter allows skipping sstable 347 | 10.54.145.32 | 7202 | ReadStage-2
> Merged data from memtables and 2 sstables | 10.54.145.32 | 7386 | ReadStage-2
> Read 1 live and 0 tombstone cells | 10.54.145.32 | 7519 | ReadStage-2
> Executing single-partition query on roles | 10.54.145.32 | 7826 | ReadStage-4
> Acquiring sstable references | 10.54.145.32 | 7924 | ReadStage-4
> Skipped 0/2 non-slice-intersecting sstables, included 0 due to tombstones
>   | 10.54.145.32 | 8060 | ReadStage-4
> Key cache hit for sstable 346 | 10.54.145.32 | 8137 | ReadStage-4
> Bloom filter allows skipping sstable 347 | 10.54.145.32 | 8187 | ReadStage-4
> Merged data from memtables and 2 sstables | 10.54.145.32 | 8318 | ReadStage-4
> Read 1 live and 0 tombstone cells | 10.54.145.32 | 8941 | [truncated]

Tracing in cassandra

2018-10-11 Thread Abdul Patel
Hi,

We have multiple timeouts with select queries.
I ran a trace to see why they are timing out. Do we have a good document for
interpreting what is causing the queries to time out?


Re: Connections info

2018-10-05 Thread Abdul Patel
Thanks will try both options

On Friday, October 5, 2018, Alain RODRIGUEZ  wrote:

> Hello Abdul,
>
> I was caught by a different topic while answering, sending the message
> over, even though it's similar to Romain's solution.
>
> There is the metric mentioned above, or to have more details such as the
> app node IP, you can use:
>
> $ sudo netstat -tupawn | grep 9042 | grep ESTABLISHED
>
> tcp0  0 ::::*9042*   :::*
>   LISTEN  -
>
> tcp0  0 ::::*9042*   ::::51486
> ESTABLISHED -
>
> tcp0  0 ::::*9042*   ::::37624
> ESTABLISHED -
> [...]
>
> tcp0  0 ::::*9042*   ::::49108
> ESTABLISHED -
>
> or to count them:
>
> $ sudo netstat -tupawn | grep 9042 | grep ESTABLISHED | wc -l
>
> 113
>
> I'm not sure about the '-tupawn' options, it gives me the format I need
> and I never wondered much about it I must say. Maybe some of the options
> are useless.
>
> Sending this command through ssh would allow you to gather the information
> in one place. You can also run similar commands on the clients (Apps) toI
> hope that helps.
>
> C*heers
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France / Spain
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> Le ven. 5 oct. 2018 à 06:28, Max C.  a écrit :
>
>> Looks like the number of connections is available in JMX as:
>>
>> org.apache.cassandra.metrics:type=Client,name=connectedNativeClients
>>
>> http://cassandra.apache.org/doc/4.0/operating/metrics.html
>>
>> "Number of clients connected to this nodes native protocol server”
>>
>> As for where they’re coming from — I’m not sure how to get that from
>> JMX.  Maybe you’ll have to use “lsof” or something to get that.
>>
>> - Max
>>
>> On Oct 4, 2018, at 8:57 pm, Abdul Patel  wrote:
>>
>> Hi All,
>>
>> Can we get the number of users connected to each node in Cassandra?
>> Also can we get which app node they are connecting from?
>>
>>
>>
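To go one step beyond a total count, the netstat output above can be grouped
by client address to see how many connections each app node holds. A sketch
under the assumption that the remote (client) address is the fifth netstat
column; the sample lines are made up, and on a live node you would pipe real
netstat output in instead:

```shell
# Group ESTABLISHED native-protocol (port 9042) connections by client IP.
# The sample stands in for: sudo netstat -tn | grep ':9042' | grep ESTABLISHED
sample='tcp 0 0 10.0.0.5:9042 10.0.0.21:51486 ESTABLISHED
tcp 0 0 10.0.0.5:9042 10.0.0.21:37624 ESTABLISHED
tcp 0 0 10.0.0.5:9042 10.0.0.22:49108 ESTABLISHED'
# Field 5 is the remote address; strip the port, then count per IP.
per_client=$(echo "$sample" | awk '{split($5, a, ":"); print a[1]}' \
  | sort | uniq -c | sort -rn)
echo "$per_client"
```

This shows at a glance which app node is holding the most connections.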


Connections info

2018-10-04 Thread Abdul Patel
Hi All,

Can we get the number of users connected to each node in Cassandra?
Also can we get which app node they are connecting from?


Re: Multi dc reaper

2018-09-29 Thread Abdul Patel
Is the multi-DC Reaper for load balancing (if one instance goes down, another
node can take care of scheduled repairs), or can we actually schedule repairs
at the DC level with separate Reaper instances?
I am planning to have 3 Reaper instances in 3 DCs.


On Friday, September 28, 2018, Abdul Patel  wrote:

> Hi
>
> I have an 18-node, 3-DC cluster and am trying to use the Reaper multi-DC
> concept with datacenterAvailability=EACH.
> But are there different steps? As I start the first instance and add the
> cluster, it repairs the full cluster rather than one DC.
> Am I missing any steps?
> Also, should the contact points in this scenario be only those relevant to
> that DC?
>
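For reference, the per-DC mode is driven by one setting in Reaper's yaml. A
hedged sketch of the relevant fragment (setting name per the Reaper docs;
verify the behavior against your Reaper version):

```yaml
# Hypothetical cassandra-reaper.yaml excerpt for one Reaper instance per DC.
datacenterAvailability: EACH
# With EACH, every datacenter runs its own Reaper instance, and each
# instance only needs JMX access to the nodes in its local DC, so its
# contact points should list nodes from that DC only.
```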


Re: Cassandra system table diagram

2018-09-21 Thread Abdul Patel
Thanks

On Friday, September 21, 2018, Rahul Singh 
wrote:

> I think his question was related specifically to the system tables. KDM is
> a good tool for designing the tables but not necessarily for viewing the
> system tables.
>
> Abdul, try out a tool called DB Schema Visualizer. It supports Cassandra
>
> Rahul Singh
> Chief Executive Officer
> m 202.905.2818
>
> Anant Corporation
> 1010 Wisconsin Ave NW, Suite 250
> Washington, D.C. 20007
>
> We build and manage digital business technology platforms.
> On Sep 19, 2018, 11:49 PM +0200, Joseph Arriola ,
> wrote:
>
> Hi, I recomend this
>
> http://kdm.dataview.org/
>
> It's free and it implements the good design practices that you
> can find in the DataStax courses.
>
>
>
>
> 2018-09-19 11:26 GMT-06:00 Abdul Patel :
>
>> Hi,
>>
>> Do we have a Cassandra system tables relation diagram somewhere?
>> Or just a system table diagram?
>>
>
>


Cassandra system table diagram

2018-09-19 Thread Abdul Patel
Hi,

Do we have a Cassandra system tables relation diagram somewhere?
Or just a system table diagram?


Re: Improve data load performance

2018-08-15 Thread Abdul Patel
Used in a dev env with a 4-node cluster; 50k records with inserts of 70k
characters (JSON in text).
This will happen daily at intervals not yet defined, on a single table.
It's within 1 data center.

On Wednesday, August 15, 2018, Durity, Sean R 
wrote:

> Might also help to know:
>
> Size of cluster
>
> How much data is being loaded (# of inserts/actual data size)
>
> Single table or multiple tables?
>
> Is this a one-time or occasional load or more frequently?
>
> Is the data located in the same physical data center as the cluster? (any
> network latency?)
>
>
>
> On the client side, prepared statements and ExecuteAsync can really speed
> things up.
>
>
>
>
>
> Sean Durity
>
>
>
> *From:* Elliott Sims 
> *Sent:* Wednesday, August 15, 2018 1:13 PM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Re: Improve data load performance
>
>
>
> Step one is always to measure your bottlenecks.  Are you spending a lot of
> time compacting?  Garbage collecting?  Are you saturating CPU?  Or just a
> few cores?  Or I/O?  Are repairs using all your I/O?  Are you just running
> out of write threads?
>
>
>
> On Wed, Aug 15, 2018 at 5:48 AM, Abdul Patel  wrote:
>
> Application team is trying to load data with leveled compaction and its
> taking 1hr to load , what are  best options to load data faster ?
>
>
>
> On Tuesday, August 14, 2018, @Nandan@ 
> wrote:
>
> Bro, Please explain your question as much as possible.
> This is not a single line Q session where we will able to understand
> your in-depth queries in a single line.
> For better and suitable reply, Please ask a question and elaborate what
> steps you took for your question and what issue are you getting and all..
>
> I hope I am making it clear. Don't take it personally.
>
>
>
> Thanks
>
>
>
> On Wed, Aug 15, 2018 at 8:25 AM Abdul Patel  wrote:
>
> How can we improve data load performance?
>
>
>
> --
>
> The information in this Internet Email is confidential and may be legally
> privileged. It is intended solely for the addressee. Access to this Email
> by anyone else is unauthorized. If you are not the intended recipient, any
> disclosure, copying, distribution or any action taken or omitted to be
> taken in reliance on it, is prohibited and may be unlawful. When addressed
> to our clients any opinions or advice contained in this Email are subject
> to the terms and conditions expressed in any applicable governing The Home
> Depot terms of business or client engagement letter. The Home Depot
> disclaims all responsibility and liability for the accuracy and content of
> this attachment and for any damages or losses arising from any
> inaccuracies, errors, viruses, e.g., worms, trojan horses, etc., or other
> items of a destructive nature, which may be contained in this attachment
> and shall not be liable for direct, indirect, consequential or special
> damages in connection with this e-mail message or its attachment.
>


Re: Improve data load performance

2018-08-15 Thread Abdul Patel
I didn't see any such bottlenecks; they are testing writing a JSON file as
text in Cassandra, which is slow. The rest of the performance looks good.
Regarding write threads, where can I check how many are configured and
whether there is a bottleneck?

On Wednesday, August 15, 2018, Elliott Sims  wrote:

> Step one is always to measure your bottlenecks.  Are you spending a lot of
> time compacting?  Garbage collecting?  Are you saturating CPU?  Or just a
> few cores?  Or I/O?  Are repairs using all your I/O?  Are you just running
> out of write threads?
>
> On Wed, Aug 15, 2018 at 5:48 AM, Abdul Patel  wrote:
>
>> Application team is trying to load data with leveled compaction and its
>> taking 1hr to load , what are  best options to load data faster ?
>>
>>
>> On Tuesday, August 14, 2018, @Nandan@ 
>> wrote:
>>
>>> Bro, Please explain your question as much as possible.
>>> This is not a single line Q session where we will able to understand
>>> your in-depth queries in a single line.
>>> For better and suitable reply, Please ask a question and elaborate what
>>> steps you took for your question and what issue are you getting and all..
>>>
>>> I hope I am making it clear. Don't take it personally.
>>>
>>> Thanks
>>>
>>> On Wed, Aug 15, 2018 at 8:25 AM Abdul Patel  wrote:
>>>
>>>> How can we improve data load performance?
>>>
>>>
>


Re: Improve data load performance

2018-08-15 Thread Abdul Patel
The application team is trying to load data with leveled compaction and it's
taking 1 hr to load. What are the best options to load data faster?


On Tuesday, August 14, 2018, @Nandan@ 
wrote:

> Bro, Please explain your question as much as possible.
> This is not a single line Q session where we will able to understand
> your in-depth queries in a single line.
> For better and suitable reply, Please ask a question and elaborate what
> steps you took for your question and what issue are you getting and all..
>
> I hope I am making it clear. Don't take it personally.
>
> Thanks
>
> On Wed, Aug 15, 2018 at 8:25 AM Abdul Patel  wrote:
>
>> How can we improve data load performance?
>
>


Improve data load performance

2018-08-14 Thread Abdul Patel
How can we improve data load performance?


90million reads

2018-08-14 Thread Abdul Patel
Currently our Cassandra prod is an 18-node, 3-DC cluster and the application
does 55 million reads per day; we want to add load and make it 90 million
reads per day. They need a guesstimate of the resources we need to bump,
without testing. Off the top of my head we can increase the heap and the
native transport value. Any other parameters I should be concerned about?


Re: Reaper 1.2 released

2018-07-25 Thread Abdul Patel
Was able to start it, but unable to start any repair manually; it says
POST /repair_run unit conflicts with existing in clustername.

On Wednesday, July 25, 2018, Abdul Patel  wrote:

> Ignore; ALTER and CREATE permissions were missing. Will msg if I actually
> see a showstopper.
>
> On Wednesday, July 25, 2018, Abdul Patel  wrote:
>
>> I am trying to uograde to 1.2.2 version of reaper the instance isnt
>> starting and giving error that unable to create table snapshot ..do we need
>> to create it under reaper-db?
>>
>> On Wednesday, July 25, 2018, Steinmaurer, Thomas <
>> thomas.steinmau...@dynatrace.com> wrote:
>>
>>> Jon,
>>>
>>>
>>>
>>> eager trying it out.  Just FYI. Followed the installation
>>> instructions on http://cassandra-reaper.io/docs/download/install/
>>> Debian-based.
>>>
>>>
>>>
>>> 1) Importing the key results in:
>>>
>>>
>>>
>>> XXX:~$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys
>>> 2895100917357435
>>>
>>> Executing: /tmp/tmp.tP0KAKG6iT/gpg.1.sh --keyserver
>>>
>>> keyserver.ubuntu.com
>>>
>>> --recv-keys
>>>
>>> 2895100917357435
>>>
>>> gpg: requesting key 17357435 from hkp server keyserver.ubuntu.com
>>>
>>> ?: [fd 4]: read error: Connection reset by peer
>>>
>>> gpgkeys: HTTP fetch error 7: couldn't connect: eof
>>>
>>> gpg: no valid OpenPGP data found.
>>>
>>> gpg: Total number processed: 0
>>>
>>> gpg: keyserver communications error: keyserver unreachable
>>>
>>> gpg: keyserver communications error: public key not found
>>>
>>> gpg: keyserver receive failed: public key not found
>>>
>>>
>>>
>>> I had to change the keyserver URL then the import worked:
>>>
>>>
>>>
>>> XXX:~$ sudo apt-key adv --keyserver *hkp://keyserver.ubuntu.com:80
>>> <http://keyserver.ubuntu.com:80>* --recv-keys 2895100917357435
>>>
>>> Executing: /tmp/tmp.JwPNeUkm6x/gpg.1.sh --keyserver
>>>
>>> hkp://keyserver.ubuntu.com:80
>>>
>>> --recv-keys
>>>
>>> 2895100917357435
>>>
>>> gpg: requesting key 17357435 from hkp server keyserver.ubuntu.com
>>>
>>> gpg: key 17357435: public key "TLP Reaper packages <
>>> rea...@thelastpickle.com>" imported
>>>
>>> gpg: Total number processed: 1
>>>
>>> gpg:   imported: 1  (RSA: 1)
>>>
>>>
>>>
>>>
>>>
>>> 2) Running apt-get update fails with:
>>>
>>>
>>>
>>> XXX:~$ sudo apt-get update
>>>
>>> Ign:1 https://dl.bintray.com/thelastpickle/reaper-deb wheezy InRelease
>>>
>>> Ign:2 https://dl.bintray.com/thelastpickle/reaper-deb wheezy Release
>>>
>>> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
>>> Packages
>>>
>>> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
>>> Packages
>>>
>>> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
>>> Packages
>>>
>>> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>>> Translation-en_US
>>>
>>> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>>> Translation-en
>>>
>>> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
>>> Packages
>>>
>>> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
>>> Packages
>>>
>>> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
>>> Packages
>>>
>>> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>>> Translation-en_US
>>>
>>> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>>> Translation-en
>>>
>>> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
>>> Packages
>>>
>>> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
>>> Packages
>>>
>>> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
>>> Packages
>>>
>>> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>>> Translation-en_US
>>>
>>> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wh

Re: Reaper 1.2 released

2018-07-25 Thread Abdul Patel
Ignore; ALTER and CREATE permissions were missing. Will msg if I actually
see a showstopper.

On Wednesday, July 25, 2018, Abdul Patel  wrote:

> I am trying to uograde to 1.2.2 version of reaper the instance isnt
> starting and giving error that unable to create table snapshot ..do we need
> to create it under reaper-db?
>
> On Wednesday, July 25, 2018, Steinmaurer, Thomas <
> thomas.steinmau...@dynatrace.com> wrote:
>
>> Jon,
>>
>>
>>
>> eager trying it out.  Just FYI. Followed the installation instructions
>> on http://cassandra-reaper.io/docs/download/install/ Debian-based.
>>
>>
>>
>> 1) Importing the key results in:
>>
>>
>>
>> XXX:~$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys
>> 2895100917357435
>>
>> Executing: /tmp/tmp.tP0KAKG6iT/gpg.1.sh --keyserver
>>
>> keyserver.ubuntu.com
>>
>> --recv-keys
>>
>> 2895100917357435
>>
>> gpg: requesting key 17357435 from hkp server keyserver.ubuntu.com
>>
>> ?: [fd 4]: read error: Connection reset by peer
>>
>> gpgkeys: HTTP fetch error 7: couldn't connect: eof
>>
>> gpg: no valid OpenPGP data found.
>>
>> gpg: Total number processed: 0
>>
>> gpg: keyserver communications error: keyserver unreachable
>>
>> gpg: keyserver communications error: public key not found
>>
>> gpg: keyserver receive failed: public key not found
>>
>>
>>
>> I had to change the keyserver URL then the import worked:
>>
>>
>>
>> XXX:~$ sudo apt-key adv --keyserver *hkp://keyserver.ubuntu.com:80
>> <http://keyserver.ubuntu.com:80>* --recv-keys 2895100917357435
>>
>> Executing: /tmp/tmp.JwPNeUkm6x/gpg.1.sh --keyserver
>>
>> hkp://keyserver.ubuntu.com:80
>>
>> --recv-keys
>>
>> 2895100917357435
>>
>> gpg: requesting key 17357435 from hkp server keyserver.ubuntu.com
>>
>> gpg: key 17357435: public key "TLP Reaper packages <
>> rea...@thelastpickle.com>" imported
>>
>> gpg: Total number processed: 1
>>
>> gpg:   imported: 1  (RSA: 1)
>>
>>
>>
>>
>>
>> 2) Running apt-get update fails with:
>>
>>
>>
>> XXX:~$ sudo apt-get update
>>
>> Ign:1 https://dl.bintray.com/thelastpickle/reaper-deb wheezy InRelease
>>
>> Ign:2 https://dl.bintray.com/thelastpickle/reaper-deb wheezy Release
>>
>> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
>> Packages
>>
>> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
>> Packages
>>
>> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
>> Packages
>>
>> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>> Translation-en_US
>>
>> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>> Translation-en
>>
>> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
>> Packages
>>
>> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
>> Packages
>>
>> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
>> Packages
>>
>> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>> Translation-en_US
>>
>> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>> Translation-en
>>
>> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
>> Packages
>>
>> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
>> Packages
>>
>> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
>> Packages
>>
>> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>> Translation-en_US
>>
>> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>> Translation-en
>>
>> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
>> Packages
>>
>> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
>> Packages
>>
>> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
>> Packages
>>
>> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>> Translation-en_US
>>
>> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
>> Translation-en
>>
>> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
>> Packages
>>
>> Ign:4 https://dl.bintray.com

Re: Reaper 1.2 released

2018-07-25 Thread Abdul Patel
I am trying to upgrade to the 1.2.2 version of Reaper; the instance isn't
starting and gives an error that it is unable to create the table snapshot.
Do we need to create it under reaper-db?

On Wednesday, July 25, 2018, Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:

> Jon,
>
>
>
> eager trying it out.  Just FYI. Followed the installation instructions
> on http://cassandra-reaper.io/docs/download/install/ Debian-based.
>
>
>
> 1) Importing the key results in:
>
>
>
> XXX:~$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys
> 2895100917357435
>
> Executing: /tmp/tmp.tP0KAKG6iT/gpg.1.sh --keyserver
>
> keyserver.ubuntu.com
>
> --recv-keys
>
> 2895100917357435
>
> gpg: requesting key 17357435 from hkp server keyserver.ubuntu.com
>
> ?: [fd 4]: read error: Connection reset by peer
>
> gpgkeys: HTTP fetch error 7: couldn't connect: eof
>
> gpg: no valid OpenPGP data found.
>
> gpg: Total number processed: 0
>
> gpg: keyserver communications error: keyserver unreachable
>
> gpg: keyserver communications error: public key not found
>
> gpg: keyserver receive failed: public key not found
>
>
>
> I had to change the keyserver URL then the import worked:
>
>
>
> XXX:~$ sudo apt-key adv --keyserver *hkp://keyserver.ubuntu.com:80
> * --recv-keys 2895100917357435
>
> Executing: /tmp/tmp.JwPNeUkm6x/gpg.1.sh --keyserver
>
> hkp://keyserver.ubuntu.com:80
>
> --recv-keys
>
> 2895100917357435
>
> gpg: requesting key 17357435 from hkp server keyserver.ubuntu.com
>
> gpg: key 17357435: public key "TLP Reaper packages <
> rea...@thelastpickle.com>" imported
>
> gpg: Total number processed: 1
>
> gpg:   imported: 1  (RSA: 1)
>
>
>
>
>
> 2) Running apt-get update fails with:
>
>
>
> XXX:~$ sudo apt-get update
>
> Ign:1 https://dl.bintray.com/thelastpickle/reaper-deb wheezy InRelease
>
> Ign:2 https://dl.bintray.com/thelastpickle/reaper-deb wheezy Release
>
> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
> Packages
>
> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
> Packages
>
> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
> Packages
>
> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en_US
>
> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en
>
> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
> Packages
>
> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
> Packages
>
> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
> Packages
>
> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en_US
>
> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en
>
> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
> Packages
>
> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
> Packages
>
> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
> Packages
>
> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en_US
>
> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en
>
> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
> Packages
>
> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
> Packages
>
> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
> Packages
>
> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en_US
>
> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en
>
> Ign:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
> Packages
>
> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
> Packages
>
> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
> Packages
>
> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en_US
>
> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en
>
> Err:3 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main amd64
> Packages
>
>   Received HTTP code 403 from proxy after CONNECT
>
> Ign:4 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main i386
> Packages
>
> Ign:5 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main all
> Packages
>
> Ign:6 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en_US
>
> Ign:7 https://dl.bintray.com/thelastpickle/reaper-deb wheezy/main
> Translation-en
>
> Ign:8 http://lnz-apt-cacher.dynatrace.vmta/apt xenial InRelease
>
> Hit:9 http://pl.archive.ubuntu.com/ubuntu xenial InRelease
>
> Hit:10 http://lnz-apt-cacher.dynatrace.vmta/apt xenial Release
>
> Hit:11 http://pl.archive.ubuntu.com/ubuntu xenial-backports InRelease
>
> Hit:13 http://pl.archive.ubuntu.com/ubuntu xenial-security InRelease
>
> Hit:14 http://pl.archive.ubuntu.com/ubuntu xenial-updates 

Re: 3.11.2 memory leak

2018-07-22 Thread Abdul Patel
Any idea when 3.11.3 is coming in?

On Tuesday, June 19, 2018, kurt greaves  wrote:

> At this point I'd wait for 3.11.3. If you can't, you can get away with
> backporting a few repair fixes or just doing sub range repairs on 3.11.2
>
> On Wed., 20 Jun. 2018, 01:10 Abdul Patel,  wrote:
>
>> Hi All,
>>
>> Do we kmow whats the stable version for now if u wish to upgrade ?
>>
>> On Tuesday, June 5, 2018, Steinmaurer, Thomas <
>> thomas.steinmau...@dynatrace.com> wrote:
>>
>>> Jeff,
>>>
>>>
>>>
>>> FWIW, when talking about https://issues.apache.org/
>>> jira/browse/CASSANDRA-13929, there is a patch available since March
>>> without getting further attention.
>>>
>>>
>>>
>>> Regards,
>>>
>>> Thomas
>>>
>>>
>>>
>>> *From:* Jeff Jirsa [mailto:jji...@gmail.com]
>>> *Sent:* Dienstag, 05. Juni 2018 00:51
>>> *To:* cassandra 
>>> *Subject:* Re: 3.11.2 memory leak
>>>
>>>
>>>
>>> There have been a few people who have reported it, but nobody (yet) has
>>> offered a patch to fix it. It would be good to have a reliable way to
>>> repro, and/or an analysis of a heap dump demonstrating the problem (what's
>>> actually retained at the time you're OOM'ing).
>>>
>>>
>>>
>>> On Mon, Jun 4, 2018 at 6:52 AM, Abdul Patel  wrote:
>>>
>>> Hi All,
>>>
>>>
>>>
>>> I recently upgraded my non prod cluster from 3.10 to 3.11.2.
>>>
>>> It was working fine for a 1.5 weeks then suddenly nodetool info startee
>>> reporting 80% and more memory consumption.
>>>
>>> Intially it was 16gb configured, then i bumped to 20gb and rebooted all
>>> 4 nodes of cluster-single DC.
>>>
>>> Now after 8 days i again see 80% + usage and its 16gb and above ..which
>>> we never saw before .
>>>
>>> Seems like memory leak bug?
>>>
>>> Does anyone has any idea ? Our 3.11.2 release rollout has been halted
>>> because of this.
>>>
>>> If not 3.11.2 whats the next best stable release we have now?
>>>
>>>
>>> The contents of this e-mail are intended for the named addressee only.
>>> It contains information that may be confidential. Unless you are the named
>>> addressee or an authorized designee, you may not copy or use it, or
>>> disclose it to anyone else. If you received it in error please notify us
>>> immediately and then destroy it. Dynatrace Austria GmbH (registration
>>> number FN 91482h) is a company registered in Linz whose registered office
>>> is at 4040 Linz, Austria, Freistädterstraße 313
>>>
>>


Re: Bind keyspace to specific data directory

2018-07-17 Thread Abdul Patel
The requirement is that we plan to have a new keyspace which will host PII
data, and we wanted to see whether, if we have an encrypted SSD, the new
keyspace with the PII data can be bound to it, with no changes required to
the current keyspaces.

On Tuesday, July 17, 2018, Rahul Singh  wrote:

> What’s the goal, Abdul? Is it for security reasons or for organizational
> reasons. You could try prefixing / suffixing the keyspace names if its for
> organizational reasons (For now) if you don’t want to do the manual
> management of mounts as Anthony suggested .
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
> On Jul 16, 2018, 11:00 PM -0400, Anthony Grasso ,
> wrote:
>
> Hi Abdul,
>
> There is no mechanism offered in Cassandra to bind a keyspace (when
> created) to specific filesystem or directory. If multiple filesystems or
> directories are specified in the data_file_directories property in the
> *cassandra.yaml* then Cassandra will attempt to evenly distribute data
> from all keyspaces across them.
>
> Cassandra places table directories for each keyspace in a folder under the
> path(s) specified in the data_file_directories property. That is, if the
> data_file_directories property was set to */var/lib/cassandra/data* and
> keyspace "foo" was created, Cassandra would create the directory
> */var/lib/cassandra/data/foo*.
>
> One possible way bind a keyspace to a particular file system is create a
> custom mount point that has the same path as the keyspace. For example if
> you had a particular volume that you wanted to use for keyspace "foo", you
> could do something like:
>
> sudo mount / /var/lib/cassandra/data/foo
>
> Note that you would probably need to do this after the keyspace is created
> and before the tables are created. This setup would mean that all
> reads/writes for tables in keyspace "foo" would touch that volume.
>
> Regards,
> Anthony
>
> On Tue, 3 Jul 2018 at 07:02, Abdul Patel  wrote:
>
>> Hi
>>
>> Can we bind or specify while creating keyspace to bind to specific
>> filesystem or directory for writing?
>> I see we can split data on multiple filesystems but can we decide while
>> fileystem a particular keyspace can read and write?
>>
>
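The mount-over-the-keyspace-directory idea above can be sketched as a short
sequence of steps. Everything here is an assumption for illustration: the
keyspace name `foo`, a temp directory standing in for the real data dir so
the layout steps run unprivileged, and a hypothetical encrypted device path
shown only in a comment:

```shell
# Assumed goal: keyspace "foo" should live on an encrypted volume.
# A temp directory stands in for /var/lib/cassandra/data so this runs anywhere.
DATA_DIR="${TMPDIR:-/tmp}/cassandra-demo-data"
mkdir -p "$DATA_DIR/foo"   # the keyspace directory must exist before mounting
# On a real node (as root), after CREATE KEYSPACE foo but before creating
# any tables, mount the encrypted volume over the keyspace directory, e.g.:
#   mount /dev/mapper/pii_enc /var/lib/cassandra/data/foo
# Every table created afterwards gets its SSTable directories on that volume.
ls -d "$DATA_DIR/foo"
```

The ordering matters: mount after the keyspace directory exists but before
table directories are created, so all PII SSTables land on the encrypted
volume.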


Bind keyspace to specific data directory

2018-07-02 Thread Abdul Patel
Hi

Can we bind or specify, while creating a keyspace, a specific filesystem or
directory for writing?
I see we can split data across multiple filesystems, but can we decide which
filesystem a particular keyspace reads from and writes to?




Cassandra read/sec and write/sec

2018-06-28 Thread Abdul Patel
Hi all

We use Prometheus to monitor Cassandra and then put it on Grafana for
dashboards.
What's the parameter to measure the throughput of Cassandra?
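Whatever the metric source, throughput is just the rate of a monotonically
increasing request counter: two samples and a division. A trivial sketch
with made-up counter values (on a live node the counts would come from the
exporter's metrics endpoint):

```shell
# Two samples of a cumulative read counter (e.g. a client-request read
# count exposed over JMX/Prometheus), taken 60 s apart. Values are made up.
t0_reads=5500000
t1_reads=5503000
interval_s=60
# Rate = delta / interval.
echo "reads/sec: $(( (t1_reads - t0_reads) / interval_s ))"
```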


Re: Is it ok to add more than one node to a exist cluster

2018-06-27 Thread Abdul Patel
There's always a 2-minute rule: after adding one node, wait for 2 minutes
before adding the second node.

On Wednesday, June 27, 2018, dayu  wrote:

> Thanks for your reply, kurt.
>
> another question, Can I bootstrap a new node when some node is in Joining
> state ? Or I should wait until Joining node becoming Normal ?
>
> Dayu
>
>
>
> At 2018-06-27 17:50:34, "kurt greaves"  wrote:
>
> Don't bootstrap nodes simultaneously unless you really know what you're
> doing, and you're using single tokens. It's not straightforward and will
> likely lead to data loss/inconsistencies. This applies for all current
> versions.
>
> On 27 June 2018 at 10:21, dayu  wrote:
>
>> Hi,
>> I have read a warning of not Simultaneously bootstrapping more than
>> one new node from the same rack in version 2.1  link
>> 
>> My cassandra cluster version is 3.0.10.
>> So I wonder to know Is it ok to add more than one node to a exist
>> cluster in 3.0.10.
>>
>> Thanks!
>> Dayu
>>
>>
>>
>>
>
>
>
>
>


Re: 3.11.2 memory leak

2018-06-19 Thread Abdul Patel
Hi All,

Do we know what's the stable version for now, if you wish to upgrade?

On Tuesday, June 5, 2018, Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:

> Jeff,
>
>
>
> FWIW, when talking about https://issues.apache.org/
> jira/browse/CASSANDRA-13929, there is a patch available since March
> without getting further attention.
>
>
>
> Regards,
>
> Thomas
>
>
>
> *From:* Jeff Jirsa [mailto:jji...@gmail.com]
> *Sent:* Dienstag, 05. Juni 2018 00:51
> *To:* cassandra 
> *Subject:* Re: 3.11.2 memory leak
>
>
>
> There have been a few people who have reported it, but nobody (yet) has
> offered a patch to fix it. It would be good to have a reliable way to
> repro, and/or an analysis of a heap dump demonstrating the problem (what's
> actually retained at the time you're OOM'ing).
>
>
>
> On Mon, Jun 4, 2018 at 6:52 AM, Abdul Patel  wrote:
>
> Hi All,
>
>
>
> I recently upgraded my non prod cluster from 3.10 to 3.11.2.
>
> It was working fine for a 1.5 weeks then suddenly nodetool info startee
> reporting 80% and more memory consumption.
>
> Intially it was 16gb configured, then i bumped to 20gb and rebooted all 4
> nodes of cluster-single DC.
>
> Now after 8 days i again see 80% + usage and its 16gb and above ..which we
> never saw before .
>
> Seems like memory leak bug?
>
> Does anyone has any idea ? Our 3.11.2 release rollout has been halted
> because of this.
>
> If not 3.11.2 whats the next best stable release we have now?
>
>
> The contents of this e-mail are intended for the named addressee only. It
> contains information that may be confidential. Unless you are the named
> addressee or an authorized designee, you may not copy or use it, or
> disclose it to anyone else. If you received it in error please notify us
> immediately and then destroy it. Dynatrace Austria GmbH (registration
> number FN 91482h) is a company registered in Linz whose registered office
> is at 4040 Linz, Austria, Freistädterstraße 313
>


3.11.2 memory leak

2018-06-04 Thread Abdul Patel
Hi All,

I recently upgraded my non prod cluster from 3.10 to 3.11.2.
It was working fine for 1.5 weeks, then suddenly nodetool info started
reporting 80% and more memory consumption.
Initially it was configured with 16 GB; then I bumped it to 20 GB and
rebooted all 4 nodes of the cluster (single DC).
Now after 8 days I again see 80%+ usage, and it's at 16 GB and above, which
we never saw before.
Seems like a memory-leak bug?
Does anyone have any idea? Our 3.11.2 release rollout has been halted
because of this.
If not 3.11.2, what's the next best stable release we have now?


Re: Cassandra few nodes having high mem consumption

2018-05-21 Thread Abdul Patel
Additionally, cqlsh was taking a little time to log in, and immediately a
message popped up in the log like
PERIODIC-COMMIT-LOG-SYNCER.
Seems the commitlog isn't able to commit to disk. Any ideas?
I have run nodetool flush and restarted the nodes,
but wanted to know the root cause.

On Monday, May 21, 2018, Abdul Patel <abd786...@gmail.com> wrote:

> Hi
>
> I have few cassandra nodes throwing suddwnly 80% memory usage , this
> happened 1 week after upgrading from 3.1.0 to 3.11.2 , no errors in log .
> Is their a way i can find high cpu or memory consuming process in
> cassnadra?
>


Cassandra few nodes having high mem consumption

2018-05-21 Thread Abdul Patel
Hi

I have a few Cassandra nodes suddenly throwing 80% memory usage. This
happened 1 week after upgrading from 3.1.0 to 3.11.2; no errors in the log.
Is there a way I can find high CPU or memory consuming processes in Cassandra?


Re: Question About Reaper

2018-05-21 Thread Abdul Patel
We have a parameter in the Reaper yaml file called
repairManagerSchedulingIntervalSeconds; the default is 10 seconds. I tested
with 8, 6, and 5 seconds and found 5 seconds optimal for my environment. You
can go lower, but it will have cascading effects on CPU and memory
consumption.
So test well.

On Monday, May 21, 2018, Surbhi Gupta <surbhi.gupt...@gmail.com> wrote:

> Thanks a lot for your inputs,
> Abdul, how did u tune reaper?
>
> On Sun, May 20, 2018 at 10:10 AM Jonathan Haddad <j...@jonhaddad.com>
> wrote:
>
>> FWIW the largest deployment I know about is a single reaper instance
>> managing 50 clusters and over 2000 nodes.
>>
>> There might be bigger, but I either don’t know about it or can’t
>> remember.
>>
>> On Sun, May 20, 2018 at 10:04 AM Abdul Patel <abd786...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I recently tested Reaper and it actually helped us a lot. Even with our
>>> small footprint of 18 nodes, Reaper takes close to 6 hrs (I was able to
>>> tune it by 50%). But it really depends on the number of nodes. For
>>> example, if you have 4 nodes then it runs on 4*256 = 1024 segments,
>>> so for your environment it will be 256*144, close to 36k segments.
>>> Better test on a POC box how much time it takes and then proceed
>>> further. I have tested so far in 1 DC only; we can actually have a
>>> separate Reaper instance handling each DC but haven't tested it yet.
>>>
>>>
>>> On Sunday, May 20, 2018, Surbhi Gupta <surbhi.gupt...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> We have a cluster with 144 nodes( 3 datacenter) with 256 Vnodes .
>>>> When we tried to start repairs from opscenter then it showed 1.9Million
>>>> ranges to repair .
>>>> And even after doing compaction and strekamthroughput to 0 , opscenter
>>>> is not able to help us much to finish repair in 9 days timeframe .
>>>>
>>>> What is your thought on Reaper ?
>>>> Do you think , Reaper might be able to help us in this scenario ?
>>>>
>>>> Thanks
>>>> Surbhi
>>>>
>>>>
>>>> --
>> Jon Haddad
>> http://www.rustyrazorblade.com
>> twitter: rustyrazorblade
>>
>>
>>


Re: Question About Reaper

2018-05-20 Thread Abdul Patel
Hi,

I recently tested Reaper and it actually helped us a lot. Even with our
small footprint of 18 nodes, Reaper takes close to 6 hrs (I was able to tune
it by 50%). But it really depends on the number of nodes. For
example, if you have 4 nodes then it runs on 4*256 = 1024 segments,
so for your environment it will be 256*144, close to 36k segments.
Better test on a POC box how much time it takes and then proceed further. I
have tested so far in 1 DC only; we can actually have a separate Reaper
instance handling each DC but haven't tested it yet.
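The segment arithmetic above (segments ≈ vnodes per node × node count) can be
checked with a quick calculation. This is only the rough estimate used in the
message, not an exact Reaper formula:

```shell
# Rough Reaper segment estimate: segments ≈ vnodes_per_node * node_count.
# 256 vnodes per node, as in the examples above.
vnodes=256
echo "4-node cluster:   $((vnodes * 4)) segments"
echo "144-node cluster: $((vnodes * 144)) segments"
```

With 256 vnodes, the 144-node cluster works out to 36,864 segments, which is
the "close to 36k" figure quoted above.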

On Sunday, May 20, 2018, Surbhi Gupta  wrote:

> Hi,
>
> We have a cluster with 144 nodes( 3 datacenter) with 256 Vnodes .
> When we tried to start repairs from opscenter then it showed 1.9Million
> ranges to repair .
> And even after doing compaction and strekamthroughput to 0 , opscenter is
> not able to help us much to finish repair in 9 days timeframe .
>
> What is your thought on Reaper ?
> Do you think , Reaper might be able to help us in this scenario ?
>
> Thanks
> Surbhi
>


Re: Invalid metadata has been detected for role

2018-05-18 Thread Abdul Patel
Hey

Thanks for the response. I accidentally decommissioned the seed node, which
was causing this.
I promoted another node as a seed node and restarted all nodes with the new
seed nodes, followed by a full nodetool repair and cleanup, and it fixed the
issue.

On Thursday, May 17, 2018, kurt greaves <k...@instaclustr.com> wrote:

> Can you post the stack trace and you're version of Cassandra?
>
> On Fri., 18 May 2018, 09:48 Abdul Patel, <abd786...@gmail.com> wrote:
>
>> Hi
>>
>> I had to decommission one dc , now while adding bacl the same nodes ( i
>> used nodetool decommission) they both get added fine and i also see them im
>> nodetool status but i am unable to login in them .gives invalid mwtadat
>> error, i ran repair and later cleanup as well.
>>
>> Any ideas?
>>
>>


Invalid metadata has been detected for role

2018-05-17 Thread Abdul Patel
Hi

I had to decommission one DC. Now, while adding back the same nodes (I used
nodetool decommission), they both get added fine and I also see them in
nodetool status, but I am unable to log in to them; it gives an invalid
metadata error. I ran repair and later cleanup as well.

Any ideas?


Re: Error after 3.1.0 to 3.11.2 upgrade

2018-05-12 Thread Abdul Patel
Yeah, found that all keyspaces had replication factor 3 and system_auth had
1; changed it to 3 now. So was this issue due to the system_auth replication
factor mismatch?
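For reference, a minimal sketch of the fix described above. This assumes a
running cluster, and the datacenter name "datacenter1" is a placeholder
(check your own with `nodetool status`), so it is illustrative rather than
copy-paste ready:

```shell
# Raise system_auth replication to match the data keyspaces (RF 3 here),
# then repair so every node actually holds the auth data.
# Requires a live cluster; "datacenter1" is a placeholder DC name.
cqlsh -e "ALTER KEYSPACE system_auth
          WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': 3};"
nodetool repair system_auth
```

Without the repair step, the new replicas exist in the topology but may not
yet hold the role/permission rows, so logins can still fail.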

On Saturday, May 12, 2018, Hannu Kröger <hkro...@gmail.com> wrote:

> Hi,
>
> Did you check replication strategy and amounts of replicas of system_auth
> keyspace?
>
> Hannu
>
> Abdul Patel <abd786...@gmail.com> kirjoitti 12.5.2018 kello 5.21:
>
> No applicatiom isnt impacted ..no complains ..
> Also its an 4 node cluster in lower non production and all are on same
> version.
>
> On Friday, May 11, 2018, Jeff Jirsa <jji...@gmail.com> wrote:
>
>> The read is timing out - is the cluster healthy? Is it fully upgraded or
>> mixed versions? Repeated isn’t great, but is the application impacted?
>>
>> --
>> Jeff Jirsa
>>
>>
>> On May 12, 2018, at 6:17 AM, Abdul Patel <abd786...@gmail.com> wrote:
>>
>> Seems its coming from 3.10, got bunch of them today for 3.11.2, so if
>> this is repeatedly coming , whats solution for this?
>>
>> WARN  [Native-Transport-Requests-24] 2018-05-11 16:46:20,938
>> CassandraAuthorizer.java:96 - CassandraAuthorizer failed to authorize
>> # for 
>> ERROR [Native-Transport-Requests-24] 2018-05-11 16:46:20,940
>> ErrorMessage.java:384 - Unexpected exception during request
>> com.google.common.util.concurrent.UncheckedExecutionException:
>> java.lang.RuntimeException: 
>> org.apache.cassandra.exceptions.ReadTimeoutException:
>> Operation timed out - received only 0 responses.
>> at 
>> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
>> ~[guava-18.0.jar:na]
>> at com.google.common.cache.LocalCache.get(LocalCache.java:3937)
>> ~[guava-18.0.jar:na]
>> at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941)
>> ~[guava-18.0.jar:na]
>> at 
>> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
>> ~[guava-18.0.jar:na]
>> at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:108)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.service.ClientState.authorize(ClientState.java:439)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at org.apache.cassandra.service.ClientState.checkPermissionOnRe
>> sourceChain(ClientState.java:368) ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:345)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:332)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:310)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:260)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:221)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:530)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:507)
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>>
>> On Fri, May 11, 2018 at 8:30 PM, Jeff Jirsa <jji...@gmail.com> wrote:
>>
>>> That looks like Cassandra 3.10 not 3.11.2
>>>
>>> It’s also just the auth cache failing to refresh - if it’s transient
>>> it’s probably not a big deal. If it continues then there may be an issue
>>> with the cache refresher.
>>>
>>> --
>>> Jeff Jirsa
>>>
>>>
>>> On May 12, 2018, at 5:55 AM, Abdul Patel <abd786...@gmail.com> wrote:
>>>
>>> HI All,
>>>
>>> Seen below stack trace messages , in errorlog  one day after upgrade.
>>> one of the blogs said this might be due to old drivers, but not sure on
>>> it.
>>>
>>> FYI :
>>>
>>> INFO  [HANDSHAKE-/10.152.205.150] 2018-05-09 10:22:27,160
>>> OutboundTcpConnection.java:510 - Handshak

Re: Error after 3.1.0 to 3.11.2 upgrade

2018-05-11 Thread Abdul Patel
No, the application isn't impacted; no complaints.
Also, it's a 4-node cluster in a lower non-production environment and all
nodes are on the same version.

On Friday, May 11, 2018, Jeff Jirsa <jji...@gmail.com> wrote:

> The read is timing out - is the cluster healthy? Is it fully upgraded or
> mixed versions? Repeated isn’t great, but is the application impacted?
>
> --
> Jeff Jirsa
>
>
> On May 12, 2018, at 6:17 AM, Abdul Patel <abd786...@gmail.com> wrote:
>
> Seems its coming from 3.10, got bunch of them today for 3.11.2, so if this
> is repeatedly coming , whats solution for this?
>
> WARN  [Native-Transport-Requests-24] 2018-05-11 16:46:20,938
> CassandraAuthorizer.java:96 - CassandraAuthorizer failed to authorize
> # for 
> ERROR [Native-Transport-Requests-24] 2018-05-11 16:46:20,940
> ErrorMessage.java:384 - Unexpected exception during request
> com.google.common.util.concurrent.UncheckedExecutionException:
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ReadTimeoutException:
> Operation timed out - received only 0 responses.
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
> ~[guava-18.0.jar:na]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3937)
> ~[guava-18.0.jar:na]
> at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941)
> ~[guava-18.0.jar:na]
> at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
> ~[guava-18.0.jar:na]
> at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:108)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.service.ClientState.authorize(ClientState.java:439)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at org.apache.cassandra.service.ClientState.
> checkPermissionOnResourceChain(ClientState.java:368)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:345)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:332)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:310)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at org.apache.cassandra.cql3.statements.SelectStatement.
> checkAccess(SelectStatement.java:260) ~[apache-cassandra-3.11.2.jar:
> 3.11.2]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:221)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:530)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:507)
> ~[apache-cassandra-3.11.2.jar:3.11.2]
>
> On Fri, May 11, 2018 at 8:30 PM, Jeff Jirsa <jji...@gmail.com> wrote:
>
>> That looks like Cassandra 3.10 not 3.11.2
>>
>> It’s also just the auth cache failing to refresh - if it’s transient it’s
>> probably not a big deal. If it continues then there may be an issue with
>> the cache refresher.
>>
>> --
>> Jeff Jirsa
>>
>>
>> On May 12, 2018, at 5:55 AM, Abdul Patel <abd786...@gmail.com> wrote:
>>
>> HI All,
>>
>> Seen below stack trace messages , in errorlog  one day after upgrade.
>> one of the blogs said this might be due to old drivers, but not sure on
>> it.
>>
>> FYI :
>>
>> INFO  [HANDSHAKE-/10.152.205.150] 2018-05-09 10:22:27,160
>> OutboundTcpConnection.java:510 - Handshaking version with /10.152.205.150
>> DEBUG [MessagingService-Outgoing-/10.152.205.150-Gossip] 2018-05-09
>> 10:22:27,160 OutboundTcpConnection.java:482 - Done connecting to /
>> 10.152.205.150
>> ERROR [Native-Transport-Requests-1] 2018-05-09 10:22:29,971
>> ErrorMessage.java:384 - Unexpected exception during request
>> com.google.common.util.concurrent.UncheckedExecutionException:
>> com.google.common.util.concurrent.UncheckedExecutionException:
>> java.lang.RuntimeException: 
>> org.apache.cassandra.exceptions.UnavailableException:
>> Cannot achieve consistency level LOCAL_ONE
>> at 
>> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
>> ~[guava-18.0.j

Re: Error after 3.1.0 to 3.11.2 upgrade

2018-05-11 Thread Abdul Patel
Seems it's coming from 3.10; I got a bunch of them today on 3.11.2. So if
this keeps coming repeatedly, what's the solution for it?

WARN  [Native-Transport-Requests-24] 2018-05-11 16:46:20,938
CassandraAuthorizer.java:96 - CassandraAuthorizer failed to authorize
# for 
ERROR [Native-Transport-Requests-24] 2018-05-11 16:46:20,940
ErrorMessage.java:384 - Unexpected exception during request
com.google.common.util.concurrent.UncheckedExecutionException:
java.lang.RuntimeException:
org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out -
received only 0 responses.
at
com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
~[guava-18.0.jar:na]
at com.google.common.cache.LocalCache.get(LocalCache.java:3937)
~[guava-18.0.jar:na]
at
com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941)
~[guava-18.0.jar:na]
at
com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
~[guava-18.0.jar:na]
at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:108)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.service.ClientState.authorize(ClientState.java:439)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:368)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:345)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:332)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:310)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:260)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:221)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:530)
~[apache-cassandra-3.11.2.jar:3.11.2]
at
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:507)
~[apache-cassandra-3.11.2.jar:3.11.2]

On Fri, May 11, 2018 at 8:30 PM, Jeff Jirsa <jji...@gmail.com> wrote:

> That looks like Cassandra 3.10 not 3.11.2
>
> It’s also just the auth cache failing to refresh - if it’s transient it’s
> probably not a big deal. If it continues then there may be an issue with
> the cache refresher.
>
> --
> Jeff Jirsa
>
>
> On May 12, 2018, at 5:55 AM, Abdul Patel <abd786...@gmail.com> wrote:
>
> HI All,
>
> Seen below stack trace messages , in errorlog  one day after upgrade.
> one of the blogs said this might be due to old drivers, but not sure on it.
>
> FYI :
>
> INFO  [HANDSHAKE-/10.152.205.150] 2018-05-09 10:22:27,160
> OutboundTcpConnection.java:510 - Handshaking version with /10.152.205.150
> DEBUG [MessagingService-Outgoing-/10.152.205.150-Gossip] 2018-05-09
> 10:22:27,160 OutboundTcpConnection.java:482 - Done connecting to /
> 10.152.205.150
> ERROR [Native-Transport-Requests-1] 2018-05-09 10:22:29,971
> ErrorMessage.java:384 - Unexpected exception during request
> com.google.common.util.concurrent.UncheckedExecutionException:
> com.google.common.util.concurrent.UncheckedExecutionException:
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.UnavailableException:
> Cannot achieve consistency level LOCAL_ONE
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
> ~[guava-18.0.jar:na]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3937)
> ~[guava-18.0.jar:na]
> at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941)
> ~[guava-18.0.jar:na]
> at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
> ~[guava-18.0.jar:na]
> at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:108)
> ~[apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
> ~[apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
> ~[apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.service.ClientState.authorize(ClientState.java:419)
> ~[apache-cassandra-3.10.jar:3.10]
> at org.apache.cassandra.service.ClientState.
> checkPermissionOnResourceChain(ClientState.

Error after 3.1.0 to 3.11.2 upgrade

2018-05-11 Thread Abdul Patel
HI All,

Seen the below stack trace messages in the error log one day after the
upgrade. One of the blogs said this might be due to old drivers, but I'm not
sure about it.

FYI :

INFO  [HANDSHAKE-/10.152.205.150] 2018-05-09 10:22:27,160
OutboundTcpConnection.java:510 - Handshaking version with /10.152.205.150
DEBUG [MessagingService-Outgoing-/10.152.205.150-Gossip] 2018-05-09
10:22:27,160 OutboundTcpConnection.java:482 - Done connecting to /
10.152.205.150
ERROR [Native-Transport-Requests-1] 2018-05-09 10:22:29,971
ErrorMessage.java:384 - Unexpected exception during request
com.google.common.util.concurrent.UncheckedExecutionException:
com.google.common.util.concurrent.UncheckedExecutionException:
java.lang.RuntimeException:
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve
consistency level LOCAL_ONE
at
com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
~[guava-18.0.jar:na]
at com.google.common.cache.LocalCache.get(LocalCache.java:3937)
~[guava-18.0.jar:na]
at
com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941)
~[guava-18.0.jar:na]
at
com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
~[guava-18.0.jar:na]
at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:108)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.service.ClientState.authorize(ClientState.java:419)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:352)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:329)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:316)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:300)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.cql3.statements.SelectStatement.checkAccess(SelectStatement.java:221)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:214)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:523)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:500)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:146)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
[apache-cassandra-3.10.jar:3.10]
at
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
[netty-all-4.0.39.Final.jar:4.0.39.Final]
at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
[netty-all-4.0.39.Final.jar:4.0.39.Final]
at
io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
[netty-all-4.0.39.Final.jar:4.0.39.Final]
at
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
[netty-all-4.0.39.Final.jar:4.0.39.Final]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[na:1.8.0_51]
at
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
[apache-cassandra-3.10.jar:3.10]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
Caused by: com.google.common.util.concurrent.UncheckedExecutionException:
java.lang.RuntimeException:
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve
consistency level LOCAL_ONE
at
com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
~[guava-18.0.jar:na]
at com.google.common.cache.LocalCache.get(LocalCache.java:3937)
~[guava-18.0.jar:na]
at
com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941)
~[guava-18.0.jar:na]
at
com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
~[guava-18.0.jar:na]
at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:108)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.auth.RolesCache.getRoles(RolesCache.java:44)
~[apache-cassandra-3.10.jar:3.10]
at
org.apache.cassandra.auth.Roles.hasSuperuserStatus(Roles.java:51)
~[apache-cassandra-3.10.jar:3.10]
at

Re: Cassandra client drivers

2018-05-07 Thread Abdul Patel
Thanks Andy

On Monday, May 7, 2018, Andy Tolbert <andrew.tolb...@datastax.com> wrote:

> Hi Abdul,
>
> If you are already at C* 3.1.0 and the driver you are using works, it's
> almost certain to also work with 3.11.2 as well as there are no protocol
> changes between these versions.  I would advise testing your application
> out against a 3.11.2 test environment first though, just to be safe ;).
>
> Thanks,
> Andy
>
> On Mon, May 7, 2018 at 5:47 PM, Abdul Patel <abd786...@gmail.com> wrote:
>
>> Hi
>>
>> I am.planning for upgrade from 3.1.0 to 3.11.2 , just wanted to confirm
>> if cleint drivers need to upgraded? Or it will work with 3.1.0 drivers?
>>
>>
>>
>


Cassandra client drivers

2018-05-07 Thread Abdul Patel
Hi

I am planning an upgrade from 3.1.0 to 3.11.2. Just wanted to confirm whether
client drivers need to be upgraded, or will they work with the 3.1.0 drivers?


Re: Cassandra limitations

2018-05-04 Thread Abdul Patel
Thanks.
So what's the ideal number at which we should stop? Say, 100?

On Friday, May 4, 2018, Jeff Jirsa <jji...@gmail.com> wrote:

> Cluster. The overhead is per cluster.
>
> There are two places you'll run into scaling pain here.
>
> 1) Size of the schema (which we have to serialize to send around) - too
> many tables, or too many columns in tables, can cause serializing schema to
> get really expensive and cause problems
> 2) Too many memtables - assume that all of them will have some tiny
> trivial amount of data in them, maybe 1MB. 200 * 1MB = 200MB of heap just
> for empty memtables. If you have a thousand tables, that's a gigabyte of
> heap, just for EMPTY memtables.
>
>
>
>
>
> On Fri, May 4, 2018 at 11:17 AM, Abdul Patel <abd786...@gmail.com> wrote:
>
>> I have 3 projects in pipeline adding 3 different cluster  across all
>> environwments would too costly option :)
>>
>> So 200 tables per keyspace or per cluster?
>>
>>
>> On Friday, May 4, 2018, Durity, Sean R <sean_r_dur...@homedepot.com>
>> wrote:
>>
>>> The issue is more with the number of tables, not the number of
>>> keyspaces. Because each table has a memTable, there is a practical limit to
>>> the number of memtables that a node can hold in its memory. (And scaling
>>> out doesn’t help, because every node still has a memTable for every table.)
>>> The practical table limit I have heard is in the low hundreds – maybe 200
>>> as a rough estimate.
>>>
>>>
>>>
>>> In general, we create a new cluster (instead of a new keyspace) for each
>>> application.
>>>
>>>
>>>
>>>
>>>
>>> Sean Durity
>>>
>>> *From:* Abdul Patel <abd786...@gmail.com>
>>> *Sent:* Thursday, May 03, 2018 5:56 PM
>>> *To:* User@cassandra.apache.org
>>> *Subject:* [EXTERNAL] Cassandra limitations
>>>
>>>
>>>
>>> Hi ,
>>>
>>>
>>>
>>> In my environment, we are coming up with 3 to 4 new projects , hence new
>>> keyspaces will be coming into picture.
>>>
>>> Do we have any limitations or performance issues when we hit to a number
>>> of keyspaces or number of nodes vs keyspaces?
>>>
>>> Also connections limitations if any?
>>>
>>>
>>>
>>> I know as data grows we can add more nodes and memory but nor sure about
>>> somethinh else which need to take into consideration.
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>> The information in this Internet Email is confidential and may be
>>> legally privileged. It is intended solely for the addressee. Access to this
>>> Email by anyone else is unauthorized. If you are not the intended
>>> recipient, any disclosure, copying, distribution or any action taken or
>>> omitted to be taken in reliance on it, is prohibited and may be unlawful.
>>> When addressed to our clients any opinions or advice contained in this
>>> Email are subject to the terms and conditions expressed in any applicable
>>> governing The Home Depot terms of business or client engagement letter. The
>>> Home Depot disclaims all responsibility and liability for the accuracy and
>>> content of this attachment and for any damages or losses arising from any
>>> inaccuracies, errors, viruses, e.g., worms, trojan horses, etc., or other
>>> items of a destructive nature, which may be contained in this attachment
>>> and shall not be liable for direct, indirect, consequential or special
>>> damages in connection with this e-mail message or its attachment.
>>>
>>
>


Re: Cassandra limitations

2018-05-04 Thread Abdul Patel
I have 3 projects in the pipeline; adding 3 different clusters across all
environments would be too costly an option :)

So is the 200-table limit per keyspace or per cluster?


On Friday, May 4, 2018, Durity, Sean R <sean_r_dur...@homedepot.com> wrote:

> The issue is more with the number of tables, not the number of keyspaces.
> Because each table has a memTable, there is a practical limit to the number
> of memtables that a node can hold in its memory. (And scaling out doesn’t
> help, because every node still has a memTable for every table.) The
> practical table limit I have heard is in the low hundreds – maybe 200 as a
> rough estimate.
>
>
>
> In general, we create a new cluster (instead of a new keyspace) for each
> application.
>
>
>
>
>
> Sean Durity
>
> *From:* Abdul Patel <abd786...@gmail.com>
> *Sent:* Thursday, May 03, 2018 5:56 PM
> *To:* User@cassandra.apache.org
> *Subject:* [EXTERNAL] Cassandra limitations
>
>
>
> Hi ,
>
>
>
> In my environment, we are coming up with 3 to 4 new projects , hence new
> keyspaces will be coming into picture.
>
> Do we have any limitations or performance issues when we hit to a number
> of keyspaces or number of nodes vs keyspaces?
>
> Also connections limitations if any?
>
>
>
> I know as data grows we can add more nodes and memory but nor sure about
> somethinh else which need to take into consideration.
>
>
>
>
>
> --
>
>


Cassandra limitations

2018-05-03 Thread Abdul Patel
Hi ,

In my environment, we are coming up with 3 to 4 new projects, hence new
keyspaces will be coming into the picture.
Do we have any limitations or performance issues when we hit a certain
number of keyspaces, or number of nodes vs. keyspaces?
Also, are there any connection limitations?

I know that as data grows we can add more nodes and memory, but I'm not sure
about anything else that needs to be taken into consideration.
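The per-table memtable heap overhead discussed in the replies above can be
sketched with simple arithmetic. The ~1 MB-per-empty-memtable figure is the
rough assumption used in that discussion, not a measured constant:

```shell
# Heap consumed by (mostly empty) memtables, assuming ~1 MB per table.
# Every node pays this cost, so scaling out does not reduce it.
mb_per_memtable=1
for tables in 200 1000; do
  echo "$tables tables -> ~$((tables * mb_per_memtable)) MB of heap per node"
done
```

This is why the practical guidance in the thread is about total tables per
cluster, not tables per keyspace: each node holds a memtable for every table
regardless of which keyspace it lives in.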


Cassandra reaper metrics

2018-04-25 Thread Abdul Patel
Hi

Do the metrics work with Prometheus?
I did find a Graphite example; not sure what the Prometheus example would be.


Re: Cassandra reaper

2018-04-24 Thread Abdul Patel
Thanks Joaquin,

Yes, I used the same and it worked fine. The only thing is I had to add the
userid and password, which is somewhat annoying to keep in the config file;
can I get rid of it and still store state in the reaper_db keyspace?
Also, how do I clean reaper_db by deleting completed repair information from
the GUI? Or is any other cleanup required?

On Tuesday, April 24, 2018, Joaquin Casares <joaq...@thelastpickle.com>
wrote:

> Hello Abdul,
>
> Depending on what you want your backend to be stored on, you'll want to
> use a different file.
>
> So if you want your Reaper state to be stored within a Cassandra cluster,
> which I would recommend, use this file as your base file:
>
> https://github.com/thelastpickle/cassandra-reaper/blob/master/src/
> packaging/resource/cassandra-reaper-cassandra.yaml
>
> Make a copy of the yaml and include your system-specific settings. Then
> symlink it to the following location:
>
> /etc/cassandra-reaper/cassandra-reaper.yaml
>
> For completeness, this file is an example of how to use a Postgres server
> to store the Reaper state:
>
> https://github.com/thelastpickle/cassandra-reaper/blob/master/src/
> packaging/resource/cassandra-reaper.yaml
>
>
> Hope that helped!
>
> Joaquin Casares
> Consultant
> Austin, TX
>
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> On Tue, Apr 24, 2018 at 7:07 PM, Abdul Patel <abd786...@gmail.com> wrote:
>
>> Thanks
>>
>> But the difference here is that cassandra-reaper-cassandra.yaml has more
>> parameters than cassandra-reaper.yaml.
>> Can I just use the one file with all the details, or does it look for one
>> specific file?
>>
>>
>> On Tuesday, April 24, 2018, Joaquin Casares <joaq...@thelastpickle.com>
>> wrote:
>>
>>> Hello Abdul,
>>>
>>> You'll only want one:
>>>
>>> The yaml file used by the service is located at
>>> /etc/cassandra-reaper/cassandra-reaper.yaml and alternate config
>>> templates can be found under /etc/cassandra-reaper/configs. It is
>>> recommended to create a new file with your specific configuration and
>>> symlink it as /etc/cassandra-reaper/cassandra-reaper.yaml to avoid your
>>> configuration from being overwritten during upgrades.
>>>
>>> Adapt the config file to suit your setup and then run `sudo service
>>> cassandra-reaper start`.
>>>
>>>
>>> Source: http://cassandra-reaper.io/docs/download/install/#service-configuration
>>>
>>>
>>> Hope that helps!
>>>
>>> Joaquin Casares
>>> Consultant
>>> Austin, TX
>>>
>>> Apache Cassandra Consulting
>>> http://www.thelastpickle.com
>>>
>>> On Tue, Apr 24, 2018 at 6:51 PM, Abdul Patel <abd786...@gmail.com>
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>> For reaper do we need both file or only one?
>>>>
>>>> cassandra-reaper.yaml
>>>> cassandra-reaper-cassandra.yaml
>>>>
>>>
>>>
>
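The copy-and-symlink setup Joaquin describes above can be sketched in a few shell commands. This is only an illustration: the paths follow the package defaults he quotes, but `CONF_DIR` is made overridable and the template file is faked with `touch` purely so the sketch is self-contained.

```shell
#!/bin/sh
# Sketch of the recommended Reaper config layout: keep a site-specific copy
# of the Cassandra-backend template and symlink it to the path the service
# reads, so package upgrades cannot overwrite it. On a real install
# CONF_DIR is /etc/cassandra-reaper; the touch below only fakes the
# template that the package would normally ship.
CONF_DIR="${CONF_DIR:-$(mktemp -d)}"
mkdir -p "$CONF_DIR/configs"
touch "$CONF_DIR/configs/cassandra-reaper-cassandra.yaml"  # shipped template

# 1. copy the template and put your cluster-specific settings in the copy
cp "$CONF_DIR/configs/cassandra-reaper-cassandra.yaml" \
   "$CONF_DIR/configs/site.yaml"

# 2. symlink the copy to the filename the service actually loads
ln -sf "$CONF_DIR/configs/site.yaml" "$CONF_DIR/cassandra-reaper.yaml"

# then: sudo service cassandra-reaper start
```

Because the service only ever reads the symlink target, a package upgrade that replaces the shipped templates leaves the site-specific copy untouched.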


Re: Cassandra reaper

2018-04-24 Thread Abdul Patel
Thanks

But the difference here is that cassandra-reaper-cassandra.yaml has more
parameters than cassandra-reaper.yaml.
Can I just use the one file with all the details, or does it look for one
specific file?

On Tuesday, April 24, 2018, Joaquin Casares <joaq...@thelastpickle.com>
wrote:

> Hello Abdul,
>
> You'll only want one:
>
> The yaml file used by the service is located at 
> /etc/cassandra-reaper/cassandra-reaper.yaml
> and alternate config templates can be found under
> /etc/cassandra-reaper/configs. It is recommended to create a new file with
> your specific configuration and symlink it as 
> /etc/cassandra-reaper/cassandra-reaper.yaml
> to avoid your configuration from being overwritten during upgrades.
>
> Adapt the config file to suit your setup and then run `sudo service
> cassandra-reaper start`.
>
>
> Source: http://cassandra-reaper.io/docs/download/install/#service-configuration
>
>
> Hope that helps!
>
> Joaquin Casares
> Consultant
> Austin, TX
>
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> On Tue, Apr 24, 2018 at 6:51 PM, Abdul Patel <abd786...@gmail.com> wrote:
>
>> Hi All,
>>
>> For reaper do we need both file or only one?
>>
>> cassandra-reaper.yaml
>> cassandra-reaper-cassandra.yaml
>>
>
>


Cassandra reaper

2018-04-24 Thread Abdul Patel
Hi All,

For reaper do we need both file or only one?

cassandra-reaper.yaml
cassandra-reaper-cassandra.yaml


Re: Nodetool repair multiple dc

2018-04-20 Thread Abdul Patel
One quick question on Reaper: what data is stored in the reaper_db keyspace,
and how much does it grow?
Do we have to clean it up frequently, or does Reaper have a mechanism to
self-clean?

On Friday, April 13, 2018, Alexander Dejanovski <a...@thelastpickle.com>
wrote:

> Hi Abdul,
>
> Reaper has been used in production for several years now, by many
> companies.
> I've seen it handling 100s of clusters and 1000s of nodes with a single
> Reaper process.
> Check the docs on cassandra-reaper.io to see which architecture matches
> your cluster : http://cassandra-reaper.io/docs/usage/multi_dc/
>
> Cheers,
>
> On Fri, Apr 13, 2018 at 4:38 PM Rahul Singh <rahul.xavier.si...@gmail.com>
> wrote:
>
>> Makes sense it takes a long time since it has to reconcile against
>> replicas in all DCs. I leverage commercial tools for production clusters,
>> but I’m pretty sure Reaper is the best open source option. Otherwise you’ll
>> waste a lot of time trying to figure it out on your own. No need to
>> reinvent the wheel.
>>
>> On Apr 12, 2018, 11:02 PM -0400, Abdul Patel <abd786...@gmail.com>,
>> wrote:
>>
>> Hi All,
>>
>> I have an 18-node cluster across 3 DCs. If I try to run an incremental
>> repair on a single node, it takes forever, sometimes 45 min to 1 hr, and
>> sometimes it times out. So I started running "nodetool repair -dc dc1" for
>> each DC one by one, which works fine. Do we have a better way to handle
>> this?
>> I am thinking about exploring Cassandra Reaper. Has anyone used it in prod?
>>
>> --
> -
> Alexander Dejanovski
> France
> @alexanderdeja
>
> Consultant
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>


Logback-tools.xml

2018-04-18 Thread Abdul Patel
Hey All,

I have installed 3.11.2 and I see 2 new files: logback-tools.xml and
-jaas.config.

What are they used for?


Cassandra downgrade version

2018-04-16 Thread Abdul Patel
Hi All,

I am planning to upgrade my Cassandra cluster from 3.1.0 to 3.11.2. Just in
case something goes wrong, do we have any rollback or downgrade option in
Cassandra to go back to the previous version?

Thanks


Cassandra DataStax certification

2018-04-14 Thread Abdul Patel
Hi All,

I am preparing for the Cassandra certification (DBA). O'Reilly has stopped
its Cassandra certification, so the best bet is DataStax now. As per my
knowledge, DS201 and DS220 should be enough for the certification, and I am
also reading the definitive guide on Cassandra. Is any other material
required? Any practice-test websites? The certification is costly and I want
to clear it in one go.


Nodetool repair multiple dc

2018-04-12 Thread Abdul Patel
Hi All,

I have an 18-node cluster across 3 DCs. If I try to run an incremental
repair on a single node, it takes forever, sometimes 45 min to 1 hr, and
sometimes it times out. So I started running "nodetool repair -dc dc1" for
each DC one by one, which works fine. Do we have a better way to handle this?
I am thinking about exploring Cassandra Reaper. Has anyone used it in prod?
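The per-DC workaround described above can be sketched as a small loop. The DC names are placeholders, and the commands are echoed rather than executed so the sequence can be inspected without a live cluster:

```shell
#!/bin/sh
# Run a full repair one datacenter at a time instead of a single
# cluster-wide (or incremental) repair. DC names below are placeholders;
# on a live node, run "$cmd" directly instead of echoing it.
for dc in dc1 dc2 dc3; do
    cmd="nodetool repair -dc $dc"
    echo "$cmd"
    LAST_CMD="$cmd"   # remember the last command built, for inspection
done
```

Running the DCs sequentially keeps the repair load bounded, which is the same effect Reaper achieves in a more controlled, segmented way.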


Cassandra ACID compliance?

2018-04-12 Thread Abdul Patel
As MongoDB is coming up with ACID compliance, do we see Cassandra bringing
this to the table? If it does, how will it compensate on performance and
other factors? Also, is there any chance of bringing in relational
constraints and foreign keys, like CockroachDB?


Re: Latest version and Features

2018-04-12 Thread Abdul Patel
Will a driver upgrade be required if I move to 3.11.2 from 3.1.0? 3.x
drivers should work, right?

On Thursday, April 12, 2018, Carlos Rolo  wrote:

> Thanks for the heads-up. I will update the post.
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
> linkedin.com/in/carlosjuzarterolo
> Mobile: +351 918 918 100
> www.pythian.com
>
> On Thu, Apr 12, 2018 at 5:02 AM, Michael Shuler 
> wrote:
>
>> On 04/11/2018 06:12 PM, Carlos Rolo wrote:
>> >
>> > I blogged about this decision recently
>> > here: https://blog.pythian.com/what-cassandra-version-should-i-use-2018/
>>
>> s/it the fact/is the fact/ typo, and possibly not 100% correct on the
>> statement in that sentence.
>>
>> There are commits since the last 2.1 & 2.2 releases. Generally, we'll do
>> a last release with any fixes on the branch, before shuttering
>> development on older branches. 1.0 and 1.1 had a few commits after the
>> last releases, but 1.2 and 2.0 both had final releases with any bug
>> fixes we had in-tree. I expect we'll do the same with 2.1 and 2.2 to
>> wrap things up nicely.
>>
>> --
>> Warm regards,
>> Michael
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>>
>
> --


Re: Latest version and Features

2018-04-11 Thread Abdul Patel
Nicolas,
I do see all the new features, but the upgrade instructions are mentioned in
the next section; not sure if I missed it. Can you share that section?

On Wednesday, April 11, 2018, Abdul Patel <abd786...@gmail.com> wrote:

> Thanks, this is perfect.
>
> On Wednesday, April 11, 2018, Nicolas Guyomar <nicolas.guyo...@gmail.com>
> wrote:
>
>> Sorry, I should have given you this link instead:
>> https://github.com/apache/cassandra/blob/trunk/NEWS.txt
>>
>> You'll find everything you need IMHO
>>
>> On 11 April 2018 at 17:05, Abdul Patel <abd786...@gmail.com> wrote:
>>
>>> Thanks.
>>>
>>> Is the upgrade process straightforward? Do we have any documentation for
>>> the upgrade?
>>>
>>>
>>> On Wednesday, April 11, 2018, Jonathan Haddad <j...@jonhaddad.com> wrote:
>>>
>>>> Move to the latest 3.0, or if you're feeling a little more adventurous,
>>>> 3.11.2.
>>>>
>>>> 4.0 discussion is happening now, nothing is decided.
>>>>
>>>> On Wed, Apr 11, 2018 at 7:35 AM Abdul Patel <abd786...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> Our company is planning to upgrade Cassandra to maintain the audit
>>>>> guidelines for the patch cycle.
>>>>> We are currently on 3.1.0; what's the latest stable version, and what
>>>>> are the new features?
>>>>> Will it be better to wait for 4.0? Any news on what the new features
>>>>> in 4.0 will be?
>>>>>
>>>>
>>


Re: Latest version and Features

2018-04-11 Thread Abdul Patel
Thanks, this is perfect.

On Wednesday, April 11, 2018, Nicolas Guyomar <nicolas.guyo...@gmail.com>
wrote:

> Sorry, I should have given you this link instead:
> https://github.com/apache/cassandra/blob/trunk/NEWS.txt
>
> You'll find everything you need IMHO
>
> On 11 April 2018 at 17:05, Abdul Patel <abd786...@gmail.com> wrote:
>
>> Thanks.
>>
>> Is the upgrade process straightforward? Do we have any documentation for
>> the upgrade?
>>
>>
>> On Wednesday, April 11, 2018, Jonathan Haddad <j...@jonhaddad.com> wrote:
>>
>>> Move to the latest 3.0, or if you're feeling a little more adventurous,
>>> 3.11.2.
>>>
>>> 4.0 discussion is happening now, nothing is decided.
>>>
>>> On Wed, Apr 11, 2018 at 7:35 AM Abdul Patel <abd786...@gmail.com> wrote:
>>>
>>>> Hi All,
>>>>
>>>> Our company is planning to upgrade Cassandra to maintain the audit
>>>> guidelines for the patch cycle.
>>>> We are currently on 3.1.0; what's the latest stable version, and what
>>>> are the new features?
>>>> Will it be better to wait for 4.0? Any news on what the new features in
>>>> 4.0 will be?
>>>>
>>>
>


Re: Latest version and Features

2018-04-11 Thread Abdul Patel
Thanks.

Is the upgrade process straightforward? Do we have any documentation for the
upgrade?

On Wednesday, April 11, 2018, Jonathan Haddad <j...@jonhaddad.com> wrote:

> Move to the latest 3.0, or if you're feeling a little more adventurous,
> 3.11.2.
>
> 4.0 discussion is happening now, nothing is decided.
>
> On Wed, Apr 11, 2018 at 7:35 AM Abdul Patel <abd786...@gmail.com> wrote:
>
>> Hi All,
>>
>> Our company is planning to upgrade Cassandra to maintain the audit
>> guidelines for the patch cycle.
>> We are currently on 3.1.0; what's the latest stable version, and what are
>> the new features?
>> Will it be better to wait for 4.0? Any news on what the new features in
>> 4.0 will be?
>>
>
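For context on the upgrade question in this thread, a 3.x rolling upgrade is usually done one node at a time. The sketch below is illustrative only: host names are placeholders, a yum-based install is assumed, and every step is echoed rather than executed; always check NEWS.txt for version-specific caveats before running anything like it.

```shell
#!/bin/sh
# Hedged sketch of a per-node rolling upgrade: drain, stop, upgrade the
# package, start, then rewrite SSTables into the new on-disk format.
# Host names are placeholders and each step is echoed, not run; on a real
# cluster, execute on one node at a time and let it rejoin before moving on.
UPGRADED=0
for host in node1 node2 node3; do
    echo "ssh $host nodetool drain"              # flush memtables, stop serving
    echo "ssh $host sudo service cassandra stop"
    echo "ssh $host sudo yum update cassandra"   # or apt-get / your packaging
    echo "ssh $host sudo service cassandra start"
    echo "ssh $host nodetool upgradesstables"    # rewrite SSTables in new format
    UPGRADED=$((UPGRADED + 1))
done
echo "planned upgrade steps for $UPGRADED nodes"
```

Because replicas on the other nodes keep serving reads and writes while one node is down, the cluster stays available throughout, which is why the one-node-at-a-time order matters.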


Latest version and Features

2018-04-11 Thread Abdul Patel
Hi All,

Our company is planning to upgrade Cassandra to maintain the audit
guidelines for the patch cycle.
We are currently on 3.1.0; what's the latest stable version, and what are
the new features?
Will it be better to wait for 4.0? Any news on what the new features in 4.0
will be?