Re: Requesting Feedback for Cassandra as a backup solution.

2024-02-17 Thread CPC
Hi, we implemented the same strategy at one of our customers. Since 2016 we have had one downtime in one DC, because of high temperature (the whole physical DC shut down). With that approach I assume you will use Cassandra as a queue. You have to be careful about modeling and should use multiple partitions, may b…
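A minimal sketch of what "multiple partitions" can look like for a queue-style table (the keyspace, table, and column names here are hypothetical, not from the thread):

    CREATE TABLE mq.events (
        bucket  int,       -- e.g. hash(producer id) % N; spreads writes over N partitions
        ts      timeuuid,  -- time-ordered, unique within a bucket
        payload text,
        PRIMARY KEY (bucket, ts)
    ) WITH CLUSTERING ORDER BY (ts ASC);

Consumers read each bucket in clustering order; many small buckets avoid a single hot partition, though deleting consumed rows still creates tombstones, which is why queue-style modeling in Cassandra needs care.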

cassandra 4.0 java 11 support

2021-07-27 Thread CPC
Hi, at the Cassandra site (https://cassandra.apache.org/doc/latest/cassandra/new/java11.html) it says Java 11 support is experimental and not recommended for production. What is the reason for that? I mean, performance or bugs? Thank you...
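For context, a sketch of running Cassandra 4.0 under Java 11 (paths are assumptions; the 4.0 startup script picks the JVM options file matching the detected Java version):

    # assuming a tarball install of Cassandra 4.0
    export JAVA_HOME=/usr/lib/jvm/java-11   # hypothetical path
    bin/cassandra -f                        # loads conf/jvm-server.options plus conf/jvm11-server.options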

Re: Cassandra and Vertica

2020-11-15 Thread CPC
Data distribution (Vertica segments, Cassandra partition keys) is similar. Both DBMSs hold data as immutable data files, but that is it; aside from that, nothing is similar. Cassandra was designed for OLTP loads, but Vertica was designed for analytical loads. On Sun, 15 Nov 2020 at 23:37, Manu Chadha wrote…

Re: high write latency on a single table

2019-07-26 Thread CPC
…mn by using the metrics below: > org:apache:cassandra:metrics:columnfamily:.* (reads from the table metrics in Cassandra, https://cassandra.apache.org/doc/latest/operating/metrics.html#table-metrics ). On Wednesday, Jul…
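For reference, a hedged example of the JMX object name behind those table metrics, assuming the tims.MESSAGE_HISTORY table from this thread and the post-3.0 naming:

    org.apache.cassandra.metrics:type=Table,keyspace=tims,scope=MESSAGE_HISTORY,name=WriteLatency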

Re: high write latency on a single table

2019-07-24 Thread CPC
Hi Mehmet, yes, Prometheus and OpsCenter. On Wed, 24 Jul 2019 at 17:09, mehmet bursali wrote: > Hi, do you use any performance monitoring tool like Prometheus? On Monday, July 22, 2019, 1:16:58 PM GMT+3, CPC wrote: > Hi everybody…

Re: high write latency on a single table

2019-07-22 Thread CPC

Re: high write latency on a single table

2019-07-21 Thread CPC
Hi guys, any idea? I thought it might be a bug but could not find anything related in Jira. On Fri, Jul 19, 2019, 12:45 PM CPC wrote: > Hi Rajsekhar, here are the details: 1) [cassadm@bipcas00 ~]$ nodetool tablestats tims.MESSAGE_HISTORY Tot…

Re: high write latency on a single table

2019-07-19 Thread CPC
…l cfhistograms for both the tables. 3. Replication factor of the tables. 4. Consistency with which write requests are sent. 5. Also, the type of write queries for the table, if handy, would also help (lightweight transactions, batch writes, or prepared statements); see the sketch below. Thanks…
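A short sketch of gathering items 2 and 3 of that checklist, assuming the keyspace and table discussed in this thread:

    nodetool cfhistograms tims MESSAGE_HISTORY   # per-table latency and partition-size percentiles
    cqlsh -e "DESCRIBE KEYSPACE tims;"           # shows the replication factor per DC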

high write latency on a single table

2019-07-18 Thread CPC
Hi all, our Cassandra cluster consists of two DCs, and in every DC we have 10 nodes. We are using DSE 5.1.12 (Cassandra 3.11). We have high local write latency on a single table. All other tables in our keyspace have normal latencies, like 0.02 msec, even tables that have more write TPS and more data. B…
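A hedged sketch of how one might compare that table against the healthy ones, using the table name that appears later in the thread:

    nodetool tablestats tims.MESSAGE_HISTORY        # per-table local write latency, memtable/SSTable stats
    nodetool tablehistograms tims MESSAGE_HISTORY   # write latency distribution percentiles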

range repairs multiple dc

2019-02-07 Thread CPC
Hi all, I searched through the documentation but could not find enough reference regarding the -pr option. In some places the documentation says you have to cover the whole ring; in other places it says you have to run it on every node regardless of whether you have multiple DCs. In our case we have three DCs (DC1, DC2, DC3) with ever…
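For what it's worth, the usual reading of -pr is that it repairs only each node's primary token ranges, so the whole ring is covered only if it runs on every node in every DC; a hedged sketch (keyspace name hypothetical):

    # run on EVERY node in DC1, DC2 and DC3, one node at a time
    nodetool repair -pr -full myks

    # restricting a -pr repair to a single DC would skip token ranges whose
    # primary replica lives in another DC, so it would not cover the full ring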

Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-04 Thread CPC
Could you decrease chunk_length_in_kb to 16 or 8 and repeat the test? On Wed, Sep 5, 2018, 5:51 AM wxn...@zjqunshuo.com wrote: > How large is your row? You may be hitting the wide-row read problem. -Simon. From: Laxmikant Upadhyay, Date: 2018-09-05 01:01, To: user, Subject: High IO…
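A sketch of what that change might look like (keyspace and table names hypothetical; the default chunk_length_in_kb is 64):

    ALTER TABLE myks.mytable
      WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 16};

    -- existing SSTables keep the old chunk size until rewritten, e.g.:
    -- nodetool upgradesstables -a myks mytable

Smaller chunks reduce the read amplification of point reads, at some cost in compression ratio.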

Re: cassandra getendpoints do not match with tracing

2018-06-25 Thread CPC
…:20.612342 | 172.16.5.242 | 21342 | 10.201.165.77. I understand that .234 is an endpoint, so it should communicate with it. But I don't understand why the tracing includes the .243 and .236 IPs? They are not endpoints, and we are submitting this query over 172.16.5.242. On Fri, 22 Jun 2018 at 21:43, CPC wrote…

cassandra getendpoints do not match with tracing

2018-06-22 Thread CPC
Hi all, recently we added some nodes to our cluster. After adding the nodes, we noticed that when we run nodetool getendpoints tims "MESSAGE_HISTORY" partitionkey1, it reports three nodes per DC, six nodes in total, which is expected since RF is 3. But when we run a query with LOCAL_ONE and tracing on,…
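A minimal sketch of the comparison being described (the partition key column name pkey is hypothetical):

    nodetool getendpoints tims MESSAGE_HISTORY partitionkey1

    cqlsh> CONSISTENCY LOCAL_ONE;
    cqlsh> TRACING ON;
    cqlsh> SELECT * FROM tims."MESSAGE_HISTORY" WHERE pkey = 'partitionkey1';

Note that a trace can legitimately show non-replica IPs, e.g. the coordinator that cqlsh happens to be connected to, so the two lists are not expected to match exactly.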

Re: DC aware failover

2017-11-16 Thread CPC
…ave issues using it. Cheers. On Thu, 16 Nov 2017 at 08:32, CPC wrote: > Hi, we want to implement a DC-aware failover policy. For example, if the application could not reach some part of the ring, or if we lose 50% of the local DC, then we want our application to automatically switch…

DC aware failover

2017-11-15 Thread CPC
Hi, we want to implement a DC-aware failover policy. For example, if the application could not reach some part of the ring, or if we lose 50% of the local DC, then we want our application to automatically switch to the other DC. We found this project on GitHub: https://github.com/adejanovski/cassandra-dcaware-failov…

Ip restriction for username

2017-10-06 Thread CPC
Hi, is there some method to restrict a user to a specific IP range/mask (MySQL and PostgreSQL have this kind of functionality)? I know DSE has more advanced authentication like Kerberos and LDAP, but I don't know whether those can provide this functionality. Thanks.

Re: Do not use Cassandra 3.11.0+ or Cassandra 3.0.12+

2017-09-11 Thread CPC
Hi, is this bug fixed in DSE 5.1.3? As I understand it, calling the JMX getTombStoneRatio triggers that bug. We are using OpsCenter as well; do you have any idea whether OpsCenter uses/calls this method? Thanks. On Aug 29, 2017 6:35 AM, "Jeff Jirsa" wrote: > I shouldn't actually say I don't think…

Re: Maximum and recommended storage per node

2017-07-28 Thread CPC
Hi Kiran, is this the raw storage size per node or the allowed data per node? Can you provide links or articles about those recommendations? On 28 July 2017 at 12:45, Kiran mk wrote: > Recommended is 4 TB per node. Best regards, Kiran.M.K. On 28-Jul-2017 1:57 PM, "C…

Maximum and recommended storage per node

2017-07-28 Thread CPC
Hi all, is there any recommended and maximum storage per node? In old articles 1 TB per node was the maximum, but does that still apply? Or does it just depend on our latency requirements? Can you share your production experiences? Thank you...

Re: C* data modeling for time series

2017-07-26 Thread CPC
…evices are doing compared to each other. On Wed, Jul 26, 2017 at 5:32 PM, CPC wrote: > Hi Junaid, given a time range, do you want to take all devices or a specific device? On Jul 26, 2017 3:15 PM, "Junaid Nasir" wrote: > I have a C* cluster (3 node…

Re: C* data modeling for time series

2017-07-26 Thread CPC
Hi Junaid, given a time range, do you want to take all devices or a specific device? On Jul 26, 2017 3:15 PM, "Junaid Nasir" wrote: I have a C* cluster (3 nodes) with some 60 GB of data (replication factor 2). When I started using C*, coming from a SQL background, I didn't give much thought to modeling…
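Not from the thread, but a minimal sketch of the bucketed time-series model this question usually leads to, with one table per access path (all names hypothetical):

    -- known device: partition by device plus a time bucket
    CREATE TABLE metrics.by_device (
        device_id text,
        day       date,       -- bucket keeps partitions bounded
        ts        timestamp,
        value     double,
        PRIMARY KEY ((device_id, day), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC);

    -- all devices in a range: partition by the bucket alone
    CREATE TABLE metrics.by_day (
        day       date,
        ts        timestamp,
        device_id text,
        value     double,
        PRIMARY KEY (day, ts, device_id)
    ) WITH CLUSTERING ORDER BY (ts DESC, device_id ASC);

Writes go to both tables; which one serves a read depends on whether the device is known up front, which is exactly why the question above matters.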

Re: private interface for interdc messaging

2017-07-10 Thread CPC
…2017 at 21:51, CPC wrote: > Thank you, Nitan. On Jul 7, 2017 8:59 PM, "Nitan Kainth" wrote: > Yes, because that's the IP used for internode communication. On Jul 7, 2017, at 10:52 AM, CPC wrote: > Hi Nitan…

Re: private interface for interdc messaging

2017-07-07 Thread CPC
Thank you, Nitan. On Jul 7, 2017 8:59 PM, "Nitan Kainth" wrote: Yes, because that's the IP used for internode communication. On Jul 7, 2017, at 10:52 AM, CPC wrote: Hi Nitan, do you mean setting broadcast_address to the private network would suffice? On 7 Jul…

Re: private interface for interdc messaging

2017-07-07 Thread CPC
…get. > We had a similar setup done in one of my previous projects, where we segregated the network between application and C* node communication. On Jul 7, 2017, at 10:28 AM, CPC wrote: > Hi, we are building 2 datacenters with each machine have…

private interface for interdc messaging

2017-07-07 Thread CPC
Hi, we are building 2 datacenters where each machine has one public interface (for native client connections) and one private interface (for internode communication). What we noticed is that nodes in one datacenter are trying to communicate with nodes in the other DC over their public interfaces. I mean: DC1 Node1 public i…
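A hedged sketch of the cassandra.yaml settings this usually comes down to (addresses made up): internode traffic, including inter-DC traffic, follows listen_address/broadcast_address, while native clients use rpc_address:

    # cassandra.yaml on DC1 Node1
    listen_address: 10.0.1.11         # private NIC: internode traffic
    broadcast_address: 10.0.1.11      # address other nodes use to reach this node
    rpc_address: 203.0.113.11         # public NIC: native client connections

If broadcast_address is left at (or resolves to) the public IP, other nodes, including those in the remote DC, will connect over the public interface, which matches the behavior described above.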