RE: Check out new features in K8ssandra and Mission Control

2024-02-28 Thread Durity, Sean R via user
The K8ssandra requirement is a major blocker. Sean R. Durity INTERNAL USE From: Christopher Bradford Sent: Tuesday, February 27, 2024 9:49 PM To: user@cassandra.apache.org Cc: Christopher Bradford Subject: [EXTERNAL] Re: Check out new features in K8ssandra and Mission Control Hey Jon,

RE: Big Data Question

2023-08-18 Thread Durity, Sean R via user
Cost of availability is a fair question at some level of the discussion. In my experience, high availability is one of the top 2 or 3 reasons why Cassandra is chosen as the data solution. So, if I am given a Cassandra use case to build out, I would normally assume high availability is needed,

RE: Big Data Question

2023-08-17 Thread Durity, Sean R via user
For a variety of reasons, we have clusters with 5 TB of disk per host as a “standard.” In our larger data clusters, it does take longer to add/remove nodes or do things like upgradesstables after an upgrade. These nodes have 3+TB of actual data on the drive. But, we were able to shrink the node

RE: Cassandra p95 latencies

2023-08-11 Thread Durity, Sean R via user
I would expect single digit ms latency on reads and writes. However, we have not done any performance testing on Apache Cassandra 4.x. Sean R. Durity From: Shaurya Gupta Sent: Friday, August 11, 2023 1:16 AM To: user@cassandra.apache.org Subject: [EXTERNAL] Re: Cassandra p95

RE: Survey about the parsing of the tooling's output

2023-07-10 Thread Durity, Sean R via user
We also parse the output from nodetool info and nodetool status and (to a lesser degree) nodetool netstats. We have basically made info and status more operator-friendly in a multi-cluster environment. (And we added a useable return value to our info command that we can use to evaluate the
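The operator-friendly wrapper described above can be sketched in Python. This is a hypothetical example: the sample text below is hand-written, and real `nodetool status` output varies by version, so the regex is an approximation.

```python
import re

# Hand-written approximation of `nodetool status` output (not captured
# from a real cluster).
SAMPLE = """\
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load      Tokens  Owns   Host ID                               Rack
UN  10.0.0.1   1.2 TiB   16      33.3%  11111111-1111-1111-1111-111111111111  rack1
DN  10.0.0.2   1.1 TiB   16      33.3%  22222222-2222-2222-2222-222222222222  rack2
"""

# Status letter (U/D), state letter (N/L/J/M), address, load, token count.
NODE_RE = re.compile(r"^([UD])([NLJM])\s+(\S+)\s+(\S+ \S+)\s+(\d+)")

def parse_status(text):
    nodes = []
    for line in text.splitlines():
        m = NODE_RE.match(line)
        if m:
            status, state, addr, load, tokens = m.groups()
            nodes.append({"up": status == "U", "state": state,
                          "address": addr, "load": load, "tokens": int(tokens)})
    return nodes

down = [n["address"] for n in parse_status(SAMPLE) if not n["up"]]
print(down)  # ['10.0.0.2']
```

A useful return value, as mentioned in the thread, falls out naturally: exit non-zero when `down` is non-empty.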

RE: Is cleanup is required if cluster topology changes

2023-05-05 Thread Durity, Sean R via user
I run clean-up in parallel, not serially, since it is a node-only kind of operation. And I only run in the impacted DC. With only 300 GB on a node, clean-up should not take very long. Check your compactionthroughput. I ran clean-up in parallel on 53 nodes with over 3 TB of data each. It took
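A rough sanity check on the "should not take very long" claim: cleanup time is lower-bounded by the compaction throttle, since cleanup rewrites the node's sstables at that rate. A sketch; the 16 MB/s figure is only an illustrative throttle value, not from the thread.

```python
def cleanup_hours(data_gb, throughput_mb_per_s):
    """Lower-bound estimate: cleanup rewrites every sstable on the node,
    throttled by the compaction throughput setting."""
    return (data_gb * 1024) / (throughput_mb_per_s * 3600)

# 300 GB on a node at an assumed 16 MB/s throttle:
print(round(cleanup_hours(300, 16), 1))  # 5.3
```

Raising the throttle with `nodetool setcompactionthroughput` shrinks the bound proportionally.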

RE: Cleanup

2023-02-17 Thread Durity, Sean R via user
, Feb 16, 2023 at 9:43 AM Marc Hoppins mailto:marc.hopp...@eset.com>> wrote: compaction_throughput_mb_per_sec is 0 in cassandra.yaml. Is setting it in nodetool going to provide any increase? From: Durity, Sean R via user mailto:user@cassandra.apache.org>> Sent: Thursday, February 16, 2023

RE: Cleanup

2023-02-16 Thread Durity, Sean R via user
Clean-up is constrained/throttled by compactionthroughput. If your system can handle it, you can increase that throughput (nodetool setcompactionthroughput) for the clean-up in order to reduce the total time. It is a node-isolated operation, not cluster-involved. I often run clean up on all
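Because cleanup is node-isolated, fanning it out across a DC is safe, as described above. A hypothetical sketch of that fan-out; the hostnames are invented, and the dry-run path substitutes `echo` for the real `ssh ... nodetool cleanup` so the example is self-contained.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical host list; in practice this would come from inventory.
HOSTS = ["cass-node-1", "cass-node-2", "cass-node-3"]

def run_cleanup(host, dry_run=True):
    # Real invocation would be: ssh <host> nodetool cleanup
    cmd = (["echo", f"ssh {host} nodetool cleanup"] if dry_run
           else ["ssh", host, "nodetool", "cleanup"])
    out = subprocess.run(cmd, capture_output=True, text=True)
    return host, out.returncode

# Cleanup is node-local, so it is safe to run across the whole DC at once.
with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
    results = dict(pool.map(run_cleanup, HOSTS))
print(results)
```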

RE: Startup fails - 4.1.0

2023-02-03 Thread Durity, Sean R via user
In most cases, I would delete the corrupt commit log file and restart. Then run repairs on that node. I have seen cases where multiple files are corrupted and it is easier to remove all commit log files to get the node restarted. Sean R. Durity From: Joe Obernberger Sent: Friday, February 3,
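Moving the corrupt segment aside rather than deleting it outright is a slightly safer variant of the same recovery. A sketch; the segment file name is only an approximation of Cassandra's naming, and the demo runs against a throwaway directory.

```python
import tempfile
from pathlib import Path

def quarantine_commitlog(commitlog_dir, corrupt_name):
    """Move a corrupt commit log segment aside so the node can restart.
    Whatever was in the segment is lost on this node; repair afterwards."""
    src = Path(commitlog_dir) / corrupt_name
    dst = src.with_name(src.name + ".corrupt")
    src.rename(dst)
    return dst

# Demo with a fake segment file in a temporary directory.
with tempfile.TemporaryDirectory() as d:
    fake = Path(d) / "CommitLog-7-1675400000000.log"
    fake.write_bytes(b"\x00" * 16)
    moved_name = quarantine_commitlog(d, fake.name).name
print(moved_name)  # CommitLog-7-1675400000000.log.corrupt
```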

RE: Failed disks - correct procedure

2023-01-17 Thread Durity, Sean R via user
For physical hardware when disks fail, I do a removenode, wait for the drive to be replaced, reinstall Cassandra, and then bootstrap the node back in (and run clean-up across the DC). All of our disks are presented as one file system for data, which is not what the original question was

RE: Best compaction strategy for rarely used data

2022-12-30 Thread Durity, Sean R via user
: [EXTERNAL] Re: Best compaction strategy for rarely used data On 2022-12-29 21: 54, Durity, Sean R via user wrote: > At some point you will end up with large sstables (like 1 TB) that won’t > compact because there are not 4 similar-sized ones able to be compacted Yes, that's exactly

RE: Best compaction strategy for rarely used data

2022-12-29 Thread Durity, Sean R via user
If there isn’t a TTL and timestamp on the data, I’m not sure of the benefits of TWCS for this use case. I would stick with size-tiered. At some point you will end up with large sstables (like 1 TB) that won’t compact because there are not 4 similar-sized ones able to be compacted (assuming default

RE: Cassandra 4.0.7 - issue - service not starting

2022-12-08 Thread Durity, Sean R via user
I have seen this when there is a tab character in the yaml file. Yaml is (too) picky on these things. Sean R. Durity DB Solutions Staff Systems Engineer – Cassandra From: Amit Patel via user Sent: Thursday, December 8, 2022 11:38 AM To: Arvydas Jonusonis ; user@cassandra.apache.org Subject:

RE: Cassandra Summit CFP update

2022-11-30 Thread Durity, Sean R via user
Does it need to be strictly Apache Cassandra? Or is something built on/working with DataStax Enterprise allowed? I would think if it doesn’t depend on DSE-only technology, it could still apply to a general Cassandra audience. Sean R. Durity From: Patrick McFadin Sent: Tuesday, November 29,

RE: Query drivertimeout PT2S

2022-11-09 Thread Durity, Sean R via user
From the subject, this looks like a client-side timeout (thrown by the driver). I have seen situations where the client/driver timeout of 2 seconds is a shorter timeout than on the server side (10 seconds). So, the server doesn’t really note any problem. Unless this is a very remote client

RE: Questions on the count and multiple index behaviour in cassandra

2022-09-29 Thread Durity, Sean R via user
Aggregate queries (like count(*) ) are fine *within* a reasonably sized partition (under 100 MB in size). However, Cassandra is not the right tool if you want to do aggregate queries *across* partitions (unless you break up the work with something like Spark). Choosing the right partition key

RE: Adding nodes

2022-07-12 Thread Durity, Sean R via user
In my experience C* is not cheaper storage than HDFS. If that is the goal, it may be painful. Each Cassandra DC has at least one full copy of the data set. For production data that I care about (that my app teams care about), we use RF=3 in each Cassandra DC. And I only use 1 Cassandra rack

RE: Guardrails in Cassandra 4.1 Alpha

2022-06-23 Thread Durity, Sean R
I'm not afraid to admit that I LOVE this feature. Exactly what a data engine should be able to do - stop bad behavior. Sean R. Durity From: Aaron Ploetz Sent: Thursday, June 23, 2022 3:22 PM To: user@cassandra.apache.org Subject: [EXTERNAL] Re: Guardrails in Cassandra 4.1 Alpha Ahh...yes, my

RE: Seed List

2022-06-23 Thread Durity, Sean R
It can work to use host names. We have done it for temporary clusters where there is at least a theoretical possibility of an ip address change. I don't know all the trade-offs of using host names, since we don't do that for production. Sean R. Durity -Original

RE: Configuration for new(expanding) cluster and new admins.

2022-06-16 Thread Durity, Sean R
I have run clusters with different disk size nodes by using different number of num_tokens. I used the basic math of just increasing the num_tokens by the same percentage as change in disk size. (So, if my "normal" node was 8 tokens, one with double the disk space would be 16.) One thing to
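The proportional-token math described above is simple enough to sketch. The base values are illustrative, not from the thread.

```python
BASE_TOKENS = 8      # num_tokens on the "standard" node (illustrative)
BASE_DISK_TB = 2.0   # its data disk size (illustrative)

def tokens_for(disk_tb):
    """Scale num_tokens with disk so larger boxes own proportionally more data."""
    return max(1, round(BASE_TOKENS * disk_tb / BASE_DISK_TB))

print(tokens_for(4.0))  # 16 -- double the disk, double the tokens
print(tokens_for(2.0))  # 8
```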

RE: Topology vs RackDC

2022-06-02 Thread Durity, Sean R
46, Durity, Sean R wrote: I agree with Marc. We use the cassandra-topology.properties file (and PropertyFileSnitch) for our deployments. Having a file different on every node has never made sense to me. There would still have to be some master file somewhere from which to generate that individual

RE: Topology vs RackDC

2022-06-02 Thread Durity, Sean R
I agree with Marc. We use the cassandra-topology.properties file (and PropertyFileSnitch) for our deployments. Having a file different on every node has never made sense to me. There would still have to be some master file somewhere from which to generate that individual node file. There is the

RE: Fetch all data from Cassandra 3.4.4

2022-05-31 Thread Durity, Sean R
A select with no where clause is not a good access pattern for Cassandra, regardless of driver version. It will not scale for large data sets or a large number of nodes. Ideally you want to select from a single partition for each query. So, depending on the size of the rows, one answer may be

RE: about the performance of select * from tbl

2022-04-26 Thread Durity, Sean R
If the number of rows is known and bounded and would be under 100 MB in size, I would suggest adding an artificial partition key so that all rows are in one partition. I recommend this technique for something like an application settings table that is retrieved infrequently (like on app

RE: Cassandra Management tools?

2022-02-28 Thread Durity, Sean R
I have used my own bash scripts with ssh connections to the nodes to automate everything from upgrades, node down monitoring, metrics or log collection, and rolling restarts. We are moving toward ansible (our infrastructure team is standardizing on its use). Rolling restart isn’t too bad in

Migration between Apache 4.x and DSE 6+?

2022-01-18 Thread Durity, Sean R
Has anyone been able to add Apache Cassandra 4.x nodes to a new DC within a DSE 6+ cluster (or vice versa) in order to migrate from one to the other with no downtime? I was able to do this prior to DSE 6/Cassandra 4.0, but that was before the internals rewrite (and different sstable format?) of

RE: about memory problem in write heavy system..

2022-01-11 Thread Durity, Sean R
In my experience, the 50% overhead for compaction/upgrade is for the worst case scenario systems – where the data is primarily one table and uses size-tiered compaction. (I have one of those.) What I really look at is if there is enough space to execute upgradesstables on the largest sstable.

RE: Separating storage and processing

2021-11-15 Thread Durity, Sean R
We have apps like this, also. For straight Cassandra, I think it is just the nature of how it works. DataStax provides some interesting solutions in different directions: BigNode (for handling 10-20 TB nodes) or Astra (cloud-based/container-driven solution that DOES separate read, write, and

RE: One big giant cluster or several smaller ones?

2021-11-15 Thread Durity, Sean R
For memory-sake, you do not want “too many” tables in a single cluster (~200 is a reasonable rule of thumb). But I don’t see a major concern with a few very large tables in the same cluster. The client side, at least in Java, could get large (memory-wise) holding a Cluster object for multiple

RE: Storing user activity logs

2021-07-20 Thread Durity, Sean R
Yes, use the time-bucketing approach and choose a bucket-size (included in the partition key) that is granular enough to keep partitions to about 100 MB in size. (Unbounded partitions WILL destroy your cluster.) If your queries *need* to retrieve all user activity over a certain period, then,
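A sketch of the bucketing and sizing arithmetic described above. All names, the day granularity, and the 500-byte average row size are illustrative assumptions.

```python
from datetime import datetime, timezone

# Illustrative partition-key shape: ((user_id, bucket), event_time)
def day_bucket(ts):
    """Bucket component of the partition key, at day granularity."""
    return ts.strftime("%Y-%m-%d")

def max_rows_per_partition(avg_row_bytes, cap_mb=100):
    """How many rows fit before a bucket blows past the ~100 MB guideline."""
    return (cap_mb * 1024 * 1024) // avg_row_bytes

evt = datetime(2021, 7, 20, 14, 30, tzinfo=timezone.utc)
print(day_bucket(evt))              # 2021-07-20
print(max_rows_per_partition(500))  # 209715
```

If a user's daily activity exceeds that row count, pick a smaller bucket (hour, minute) instead.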

4.0 best feature/fix?

2021-05-07 Thread Durity, Sean R
There is not enough 4.0 chatter here. What feature or fix of the 4.0 release is most important for your use case(s)/environment? What is working well so far? What needs more work? Is there anything that needs more explanation? Sean Durity Staff Systems

RE: Cassandra 3.11 cqlsh doesn't work with latest JDK

2021-04-30 Thread Durity, Sean R
Try adding this into the SSL section of your cqlshrc file: version = SSLv23 Sean Durity From: Maxim Parkachov Sent: Friday, April 30, 2021 8:57 AM To: user@cassandra.apache.org; d...@cassandra.apache.org Subject: [EXTERNAL] Cassandra 3.11 cqlsh doesn't work with latest JDK Hi everyone, I

RE: Huge single-node DCs (?)

2021-04-09 Thread Durity, Sean R
DataStax Enterprise has a new-ish feature set called Big Node that is supposed to help with using much denser nodes. We are going to be doing some testing with that for a similar use case with ever-growing disk needs, but no real increase in read or write volume. At some point it may become

RE: Changing num_tokens and migrating to 4.0

2021-03-22 Thread Durity, Sean R
I have a cluster (almost 200 nodes) with a variety of disk sizes and use different numbers of tokens so that the machines can use the disk they have. It is a very handy feature! While I agree that a node with larger disk may handle more requests, that may not be enough to impact CPU or memory.

RE: Cassandra video tutorials for administrators.

2021-03-18 Thread Durity, Sean R
+1 for data modeling. If an admin can spend the day helping app teams get the model right BEFORE hitting production, those are the best days (and prevent the bad days of trying to engineer around a bad model/migrate data to new tables/etc) I also find good value in understanding the

RE: No node was available to execute query error

2021-03-16 Thread Durity, Sean R
Sometimes time bucketing can be used to create manageable partition sizes. How much data is attached to a day, week, or minute? Could you use a partition and clustering key like: ((source, time_bucket), timestamp)? Then your application logic can iterate through time buckets to pull out the
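The iteration through time buckets that the application logic would perform can be sketched as follows, assuming day-granularity buckets (an illustrative choice).

```python
from datetime import date, timedelta

def day_buckets(start, end):
    """Every bucket the application must query to cover [start, end]."""
    d = start
    while d <= end:
        yield d.isoformat()
        d += timedelta(days=1)

buckets = list(day_buckets(date(2021, 3, 14), date(2021, 3, 16)))
print(buckets)  # ['2021-03-14', '2021-03-15', '2021-03-16']
```

Each bucket value becomes one single-partition query: `... WHERE source = ? AND time_bucket = ?`.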

RE: underutilized servers

2021-03-05 Thread Durity, Sean R
Are there specific queries that are slow? Partition-key queries should have read latencies in the single digits of ms (or faster). If that is not what you are seeing, I would first review the data model and queries to make sure that the data is modeled properly for Cassandra. Without metrics, I

RE: Cassandra timeouts 3.11.6

2021-01-26 Thread Durity, Sean R (US)
I would be looking at the queries in the application to see if there are any cross-partition queries (ALLOW FILTERING or IN clauses across partitions). This looks like queries that work fine with small scale, but are hitting timeouts when the data size has increased. Also see if anyone has

RE: Node Size

2021-01-20 Thread Durity, Sean R
uration? I know there are a lot of - it depends - on that question, but say it was a write heavy, light read setup. Thank you! -Joe On 1/20/2021 10:06 AM, Durity, Sean R wrote: Yakir is correct. While it is feasible to have large disk nodes, the practical aspect of managing them is an issue.

RE: Node Size

2021-01-20 Thread Durity, Sean R
Yakir is correct. While it is feasible to have large disk nodes, the practical aspect of managing them is an issue. With the current technology, I do not build nodes with more than about 3.5 TB of disk available. I prefer 1-2 TB, but costs/number of nodes can change the considerations. Putting

RE: unable to restore data from copied data directory

2021-01-04 Thread Durity, Sean R
This may not answer all your questions, but maybe it will help move you further along: - you could copy the data (not system) folders *IF* the clusters match in topology. This would include the clusters having the same token range assignment(s). And you would have to copy the folders from one

RE: how to choose tombstone_failure_threshold value if I want to delete billions of entries?

2020-11-20 Thread Durity, Sean R
Tombstone_failure_threshold is only for reads. If the tombstones are in different partitions, and you aren’t doing cross-partition reads, you shouldn’t need to adjust that value. If disk space recovery is the goal, it depends on how available you need the data to be. The faster way is probably

RE: local read from coordinator

2020-11-11 Thread Durity, Sean R
Doesn’t the read get sent to all nodes that own the data in parallel (from the coordinator)? And the first one that is able to respond wins (for LOCAL_ONE). That was my understanding. Sean Durity From: Jeff Jirsa Sent: Wednesday, November 11, 2020 9:24 AM To: user@cassandra.apache.org

RE: Last stored value metadata table

2020-11-10 Thread Durity, Sean R
Auth Sent: Tuesday, November 10, 2020 11:50 AM To: user@cassandra.apache.org Subject: [EXTERNAL] Re: Last stored value metadata table Hi, On Tue, Nov 10, 2020 at 5:29 PM Durity, Sean R mailto:sean_r_dur...@homedepot.com>> wrote: Updates do not create tombstones. Deletes create tomb

RE: Last stored value metadata table

2020-11-10 Thread Durity, Sean R
Hi, On Tue, Nov 10, 2020 at 3:18 PM Durity, Sean R mailto:sean_r_dur...@homedepot.com>> wrote: My answer would depend on how many “names” you expect. If it is a relatively small and constrained list (under a few hundred thousand), I would start with something like: At the moment, the

RE: Last stored value metadata table

2020-11-10 Thread Durity, Sean R
My answer would depend on how many “names” you expect. If it is a relatively small and constrained list (under a few hundred thousand), I would start with something like: Create table last_values ( arbitrary_partition text, -- use an app name or something static to define the partition name

RE: data modeling qu: use a Map datatype, or just simple rows... ?

2020-10-01 Thread Durity, Sean R
I’m a little late on this one, but I would choose approach 1. It is much more comprehensible to anyone who comes afterwards. And it should easily scale in Cassandra to whatever volume you have. I think I would call the table recent_users to make it very clear the purpose of the table. It is

RE: Restore a table with dropped columns to a new cluster fails

2020-07-24 Thread Durity, Sean R
I would use dsbulk to unload and load. Then the schemas don’t really matter. You define which fields in the resulting file are loaded into which columns. You also won’t have the limitations and slowness of COPY TO/FROM. Sean Durity From: Mitch Gitman Sent: Friday, July 24, 2020 2:22 PM To:

RE: Running Large Clusters in Production

2020-07-13 Thread Durity, Sean R
I’m curious – is the scaling needed for the amount of data, the amount of user connections, throughput or what? I have a 200ish cluster, but it is primarily a disk space issue. When I can have (and administer) nodes with large disks, the cluster size will shrink. Sean Durity From: Isaac

RE: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-24 Thread Durity, Sean R
ion the nodes with 3.11.3. On Wednesday, June 24, 2020, Durity, Sean R mailto:sean_r_dur...@homedepot.com>> wrote: Streaming operations (repair/bootstrap) with different file versions is usually a problem. Running a mixed version cluster is fine – for the time you are doing the upgrade. I would

RE: Cassandra upgrade from 3.11.3 -> 3.11.6

2020-06-24 Thread Durity, Sean R
Streaming operations (repair/bootstrap) with different file versions is usually a problem. Running a mixed version cluster is fine – for the time you are doing the upgrade. I would not stay on mixed versions for any longer than that. It takes more time, but I separate out the admin tasks so

RE: Cassandra Bootstrap Sequence

2020-06-02 Thread Durity, Sean R
Sent: Tuesday, June 2, 2020 10:48 AM To: user@cassandra.apache.org Subject: [EXTERNAL] Re: Cassandra Bootstrap Sequence 3000 tables On Tuesday, June 2, 2020, Durity, Sean R mailto:sean_r_dur...@homedepot.com>> wrote: How many total tables in the cluster? Sean Durity From: Jai Bheems

RE: Impact of enabling authentication on performance

2020-06-02 Thread Durity, Sean R
To flesh this out a bit, I set roles_validity_in_ms and permissions_validity_in_ms to 360 (10 minutes). The default of 2000 is far too often for my use cases. Usually I set the RF for system_auth to 3 per DC. On a larger, busier cluster I have set it to 6 per DC. NOTE: if you set the

RE: Cassandra Bootstrap Sequence

2020-06-02 Thread Durity, Sean R
How many total tables in the cluster? Sean Durity From: Jai Bheemsen Rao Dhanwada Sent: Monday, June 1, 2020 8:36 PM To: user@cassandra.apache.org Subject: [EXTERNAL] Re: Cassandra Bootstrap Sequence Thanks Erick, I see below tasks are being run mostly. I didn't quite understand what exactly

RE: Issues, understanding how CQL works

2020-04-22 Thread Durity, Sean R
I thought this might be a single-time use case request. I think my first approach would be to use something like dsbulk to unload the data and then reload it into a table designed for the query you want to do (as long as you have adequate disk space). I think like a DBA/admin first. Dsbulk

RE: Multi DC replication between different Cassandra versions

2020-04-16 Thread Durity, Sean R
I agree – do not aim for a mixed version as normal. Mixed versions are fine during an upgrade process, but the goal is to complete the upgrade as soon as possible. As for other parts of your plan, the Kafka Connector is a “sink-only,” which means that it can only insert into Cassandra. It

RE: Table not updating

2020-03-24 Thread Durity, Sean R
Oh, I see it was clock drift in this case. Glad you found that out. Sean Durity From: Durity, Sean R Sent: Tuesday, March 24, 2020 2:10 PM To: user@cassandra.apache.org Subject: [EXTERNAL] RE: Table not updating I’m wondering about nulls. They are written as tombstones. So

RE: Table not updating

2020-03-24 Thread Durity, Sean R
I’m wondering about nulls. They are written as tombstones. So, it is an interesting question for a prepared statement where you are not binding all the variables. The driver or framework might be doing something you don’t expect. Sean Durity From: Sebastian Estevez Sent: Monday, March 23,

RE: [EXTERNAL] Re: Performance of Data Types used for Primary keys

2020-03-06 Thread Durity, Sean R
I agree. Cassandra already hashes the partition key to a numeric token. Sean Durity From: Jon Haddad Sent: Friday, March 6, 2020 9:29 AM To: user@cassandra.apache.org Subject: [EXTERNAL] Re: Performance of Data Types used for Primary keys It's not going to matter at all. On Fri, Mar 6, 2020,

RE: [EXTERNAL] Cassandra 3.11.X upgrades

2020-03-04 Thread Durity, Sean R
I agree – a back out becomes practically very challenging after the second node is upgraded, because the new data is written in the new disk format. To satisfy the “you must have a backout” rules, I just say that after node 1, I could stop that node, wipe the data, downgrade the binaries, and

RE: [EXTERNAL] Re: IN OPERATOR VS BATCH QUERY

2020-02-21 Thread Durity, Sean R
Batches are for atomicity, not performance. I would do single deletes with a prepared statement. An IN clause causes extra work for the coordinator because multiple partitions are being impacted. So, the coordinator has to coordinate all nodes involved in those writes (up to the whole

RE: [EXTERNAL] Re: Null values in sasi indexed column

2020-02-21 Thread Durity, Sean R
I would consider building a lookup table instead. Something like: CREATE TABLE new_lookup ( new_lookup_partition text, existing_key text, PRIMARY KEY (new_lookup_partition) ) For me, these are easier to understand and reason through for Cassandra performance and availability. I would use

RE: Mechanism to Bulk Export from Cassandra on daily Basis

2020-02-21 Thread Durity, Sean R
I would also push for something besides a full refresh, if at all possible. It feels like a waste of resources to me – and not predictably scalable. Suggestions: use a queue to send writes to both systems. If the downstream system doesn’t handle TTL, perhaps set an expiration date and a purge

RE: [EXTERNAL] Cassandra 3.11.X upgrades

2020-02-13 Thread Durity, Sean R
+1 on nodetool drain. I added that to our upgrade automation and it really helps with post-upgrade start-up time. Sean Durity From: Erick Ramirez Sent: Wednesday, February 12, 2020 10:29 PM To: user@cassandra.apache.org Subject: Re: [EXTERNAL] Cassandra 3.11.X upgrades Yes to the steps. The

RE: [EXTERNAL] Re: Cassandra Encyrption between DC

2020-02-13 Thread Durity, Sean R
I will just add-on that I usually reserve security changes as the primary exception where app downtime may be necessary with Cassandra. (DSE has some Transitional tools that are useful, though.) Sometimes a short outage is preferred over a longer, more-complicated attempt to keep the app up.

RE: [EXTERNAL] Cassandra 3.11.X upgrades

2020-02-12 Thread Durity, Sean R
Ah - I should have looked it up! Thank you for fixing my mistake. Sean Durity -Original Message- From: Michael Shuler Sent: Wednesday, February 12, 2020 3:17 PM To: user@cassandra.apache.org Subject: Re: [EXTERNAL] Cassandra 3.11.X upgrades On 2/12/20 12:58 PM, Durity, Sean R wrote

RE: [EXTERNAL] Cassandra 3.11.X upgrades

2020-02-12 Thread Durity, Sean R
est, Sergio On Wed, Feb 12, 2020, 10:58 AM Durity, Sean R mailto:sean_r_dur...@homedepot.com>> wrote: Check the readme.txt for any upgrade notes, but the basic procedure is to: * Verify that nodetool upgradesstables has completed successfully on all nodes from any previous upgrade *

RE: [EXTERNAL] Cassandra 3.11.X upgrades

2020-02-12 Thread Durity, Sean R
Check the readme.txt for any upgrade notes, but the basic procedure is to: * Verify that nodetool upgradesstables has completed successfully on all nodes from any previous upgrade * Turn off repairs and any other streaming operations (add/remove nodes) * Stop an un-upgraded node

RE: Connection reset by peer

2020-02-12 Thread Durity, Sean R
This looks like an error between your client and the cluster. Is the other ip address your client app? I have typically seen this when there are network issues between the client and the cluster. Cassandra driver connections are typically very long-lived. If something like a switch or firewall

RE: [EXTERNAL] Re: Running select against cassandra

2020-02-06 Thread Durity, Sean R
_session ( userid bigint, session_usr text, last_access_time timestamp, login_time timestamp, status int, PRIMARY KEY (userid, session_usr) ) WITH CLUSTERING ORDER BY (session_usr ASC) On Thu, Feb 6, 2020 at 2:09 PM Durity, Sean R mailto:sean_r_dur...@homedepot.com>> wrote: D

RE: [EXTERNAL] Re: Running select against cassandra

2020-02-06 Thread Durity, Sean R
LUSTERING ORDER BY (session_usr ASC) On Thu, Feb 6, 2020 at 2:09 PM Durity, Sean R mailto:sean_r_dur...@homedepot.com>> wrote: Do you only need the current count or do you want to keep the historical counts also? By active users, does that mean some kind of user that the applica

RE: [EXTERNAL] Re: Running select against cassandra

2020-02-06 Thread Durity, Sean R
Do you only need the current count or do you want to keep the historical counts also? By active users, does that mean some kind of user that the application tracks (as opposed to the Cassandra user connected to the cluster)? I would consider a table like this for tracking active users through

RE: [EXTERNAL] How to reduce vnodes without downtime

2020-01-31 Thread Durity, Sean R
nts) * saved_caches (usually located in /var/lib/cassandra/saved_caches) Cheers, Anthony On Fri, 31 Jan 2020 at 03:05, Durity, Sean R mailto:sean_r_dur...@homedepot.com>> wrote: Your procedure won’t work very well. On the first node, if you switched to 4, you would end up with only a tiny

RE: [EXTERNAL] How to reduce vnodes without downtime

2020-01-30 Thread Durity, Sean R
Your procedure won’t work very well. On the first node, if you switched to 4, you would end up with only a tiny fraction of the data (because the other nodes would still be at 256). I updated a large cluster (over 150 nodes – 2 DCs) to smaller number of vnodes. The basic outline was this: *

RE: [EXTERNAL] Re: sstableloader & num_tokens change

2020-01-27 Thread Durity, Sean R
I would suggest to be aware of potential data size expansion. If you load (for example) three copies of the data into a new cluster (because the RF of the origin cluster is 3), it will also get written to the RF of the new cluster (3 more times). So, you could see data expansion of 9x the

RE: [EXTERNAL] Re: COPY command with where condition

2020-01-17 Thread Durity, Sean R
sstablekeys (in the tools directory?) can extract the actual keys from your sstables. You have to run it on each node and then combine and de-dupe the final results, but I have used this technique with a query generator to extract data more efficiently. Sean Durity From: Chris Splinter
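The combine-and-de-dupe step described can be sketched as below. The per-node outputs are toy stand-ins for real `sstablekeys` dumps, which duplicate keys across nodes because of replication.

```python
def merge_keys(per_node_outputs):
    """Union the keys dumped on each node; replication (RF > 1) makes
    duplicates across nodes inevitable."""
    keys = set()
    for output in per_node_outputs:
        keys.update(line.strip() for line in output.splitlines() if line.strip())
    return sorted(keys)

# Toy stand-ins for one sstablekeys dump per node.
node1 = "alice\nbob\n"
node2 = "bob\ncarol\n"
print(merge_keys([node1, node2]))  # ['alice', 'bob', 'carol']
```

The de-duped key list can then feed a query generator that reads each partition individually.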

RE: [EXTERNAL] Re: *URGENT* Migration across different Cassandra cluster few having same keyspace/table names

2020-01-17 Thread Durity, Sean R
would be an option? 2. If merge is an issue - I am guessing without app code change - this wont be possible ,right? Thanks & Regards, Ankit Gadhiya On Fri, Jan 17, 2020 at 9:40 AM Durity, Sean R mailto:sean_r_dur...@homedepot.com>> wrote: A couple things to consider: * A separati

RE: [EXTERNAL] Re: *URGENT* Migration across different Cassandra cluster few having same keyspace/table names

2020-01-17 Thread Durity, Sean R
A couple things to consider: * A separation of apps into their own clusters is typically a better model to avoid later entanglements * Dsbulk (1.4.1) is now available for open source clusters as well. It is a great tool for unloading/loading * What data problem are you trying to solve

RE: [EXTERNAL] Re: Log output when Cassandra is "up"?

2020-01-08 Thread Durity, Sean R
I use a script that calls nodetool info. If nodetool info returns an error (instance isn’t up, on the way up, etc.) then I return that same error code (and I know the node is NOT OK). If nodetool info succeeds, I then parse the output for each protocol to be up. A node can be up, but have
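A minimal version of such a health check, assuming the protocol labels shown below; the sample `nodetool info` output is hand-written and the exact labels may differ across Cassandra versions.

```python
import re

# Hand-written approximation of `nodetool info` output.
SAMPLE_INFO = """\
ID                     : 11111111-1111-1111-1111-111111111111
Gossip active          : true
Native Transport active: true
Load                   : 1.2 TiB
"""

def node_healthy(info_text):
    """A node counts as up only if every protocol we care about reports true;
    nodetool info can succeed while a protocol is still down."""
    protocols = ("Gossip active", "Native Transport active")
    for proto in protocols:
        m = re.search(rf"^{re.escape(proto)}\s*:\s*(\w+)", info_text, re.M)
        if not m or m.group(1).lower() != "true":
            return False
    return True

print(node_healthy(SAMPLE_INFO))  # True
```

In a shell wrapper, a failed `nodetool info` call short-circuits to "down" before any parsing, matching the behavior described above.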

RE: [EXTERNAL] Re: How bottom of cassandra save data efficiently?

2020-01-02 Thread Durity, Sean R
100,000 rows is pretty small. Import your data to your cluster, do a nodetool flush on each node, then you can see how much disk space is actually used. There are different compression tools available to you when you create the table. It also matters if the rows are in separate partitions or

RE: [EXTERNAL] Re: Facing issues while starting Cassandra

2020-01-02 Thread Durity, Sean R
Any read-only file systems? Have you tried to start from the command line (instead of a service)? Sometimes that will give a more helpful error when start-up can’t complete. If your error is literally what you included, it looks like the executable can’t find the cassandra.yaml file. I will

RE: [EXTERNAL] Migration a Keyspace from 3.0.X to 3.11.2 Cluster which already have keyspaces

2019-12-02 Thread Durity, Sean R
The size of the data matters here. Copy to/from is ok if the data is a few million rows per table, but not billions. It is also relatively slow (but with small data or a decent outage window, it could be fine). If the data is large and the outage time matters, you may need custom code to read

RE: [EXTERNAL] Re: Upgrade strategy for high number of nodes

2019-12-02 Thread Durity, Sean R
All my upgrades are without downtime for the application. Yes, do the binary upgrade one node at a time. Then run upgradesstables on as many nodes as your app load can handle (maybe you can point the app to a different DC, while another DC is doing upgradesstables). Upgradesstables doesn’t

RE: [EXTERNAL] performance

2019-12-02 Thread Durity, Sean R
I’m not sure this is the fully correct question to ask. The size of the data will matter. The importance of high availability matters. Performance can be tuned by taking advantage of Cassandra’s design strengths. In general, you should not be doing queries with a where clause on non-key

RE: [EXTERNAL] Re: Cassandra 3.11.4 Node the load starts to increase after few minutes to 40 on 4 CPU machine

2019-10-31 Thread Durity, Sean R
There is definitely a resource risk to having thousands of open connections to each node. Some of the drivers have (had?) less than optimal default settings, like acquiring 50 connections per Cassandra node. This is usually overkill. I think 5-10/node is much more reasonable. It depends on your

RE: [EXTERNAL] n00b q re UPDATE v. INSERT in CQL

2019-10-25 Thread Durity, Sean R
Everything in Cassandra is an insert. So, an update and an insert are functionally equivalent. An update doesn't go update the existing data on disk; it is a new write of the columns involved. So, the difference in your scenario is that with the "targeted" update, you are writing less of the

RE: Cassandra Rack - Datacenter Load Balancing relations

2019-10-25 Thread Durity, Sean R
+1 for removing complexity to be able to create (and maintain!) “reasoned” systems! Sean Durity – Staff Systems Engineer, Cassandra From: Reid Pinchback Sent: Thursday, October 24, 2019 10:28 AM To: user@cassandra.apache.org Subject: [EXTERNAL] Re: Cassandra Rack - Datacenter Load Balancing

RE: merge two cluster

2019-10-23 Thread Durity, Sean R
Beneficial to whom? The apps, the admins, the developers? I suggest that app teams have separate clusters per application. This prevents the noisy neighbor problem, isolates any security issues, and helps when it is time for maintenance, upgrade, performance testing, etc. to not have to

RE: [EXTERNAL] Re: GC Tuning https://thelastpickle.com/blog/2018/04/11/gc-tuning.html

2019-10-21 Thread Durity, Sean R
I don’t disagree with Jon, who has all kinds of performance tuning experience. But for ease of operation, we only use G1GC (on Java 8), because the tuning of ParNew+CMS requires a high degree of knowledge and very repeatable testing harnesses. It isn’t worth our time. As a previous writer
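For reference, the "ease of operation" choice above corresponds to a small set of G1 flags similar to the commented-out G1 section that Cassandra ships in its own `jvm.options`; the exact values below are typical starting points, not a tuned recommendation:

```
## G1 settings (jvm.options-style fragment; values are starting points)
-XX:+UseG1GC
-XX:G1RSetUpdatingPauseTimePercent=5
-XX:MaxGCPauseMillis=500
## Fixed, equal heap bounds avoid resize pauses
-Xms8G
-Xmx8G
```

By contrast, ParNew+CMS involves sizing the young generation, survivor ratios, and CMS initiating occupancy, which is the "high degree of knowledge" the message refers to.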

RE: [EXTERNAL] Cassandra Export error in COPY command

2019-09-22 Thread Durity, Sean R
The COPY command tries to export all rows in the table, not just the ones on the node. It will eventually time out if the table is large. It is really built for something under 5 million rows or so. DSBulk (from DataStax) is great for this, if you are a customer. Otherwise, you will probably need to

RE: [EXTERNAL] Re: loading big amount of data to Cassandra

2019-08-05 Thread Durity, Sean R
DataStax has a very fast bulk load tool - dsbulk. Not sure if it is available for open source or not. In my experience so far, I am very impressed with it. Sean Durity – Staff Systems Engineer, Cassandra -Original Message- From: p...@xvalheru.org Sent: Saturday, August 3, 2019 6:06

RE: [EXTERNAL] Apache Cassandra upgrade path

2019-07-26 Thread Durity, Sean R
On Fri, Jul 26, 2019 at 11:42 AM Durity, Sean R wrote: What you have seen is totally expected. You can’t stream between different major versions of Cassandra. Get the upgrade done, then worry about any down hardware. If you are using

RE: [EXTERNAL] Apache Cassandra upgrade path

2019-07-26 Thread Durity, Sean R
What you have seen is totally expected. You can’t stream between different major versions of Cassandra. Get the upgrade done, then worry about any down hardware. If you are using DCs, upgrade one DC at a time, so that there is an available environment in case of any disasters. My advice,

RE: [EXTERNAL] Re: Bursts of Thrift threads make cluster unresponsive

2019-06-28 Thread Durity, Sean R
This sounds like a bad query or large partition. If a large partition is requested on multiple nodes (because of consistency level), it will pressure all those replica nodes. Then, as the cluster tries to adjust the rest of the load, the other nodes can get overwhelmed, too. Look at cfstats to

RE: [EXTERNAL] Re: Cassandra migration from 1.25 to 3.x

2019-06-17 Thread Durity, Sean R
The advice so far is exactly correct for an in-place kind of upgrade. The blog post you mentioned is different. They decided to jump versions in Cassandra by standing up a new cluster and using a dual-write/dual-read process for their app. They also wrote code to read and interpret sstables in

RE: Recover lost node from backup or evict/re-add?

2019-06-12 Thread Durity, Sean R
I’m not sure it is correct to say, “you cannot.” However, that is a more complicated restore and more likely to lead to inconsistent data and take longer to do. You are basically trying to start from a backup point and roll everything forward and catch up to current. Replacing/re-streaming is

RE: [EXTERNAL] Re: Select in allow filtering stalls whole cluster. How to prevent such behavior?

2019-05-28 Thread Durity, Sean R
This may sound a bit harsh, but I teach my developers that if they are trying to use ALLOW FILTERING – they are doing it wrong! We often choose Cassandra for its high availability and scalability characteristics. We love no downtime. ALLOW FILTERING is breaking the rules of availability and

RE: [EXTERNAL] Re: Python driver concistency problem

2019-05-28 Thread Durity, Sean R
This is a stretch, but are you using authentication and/or authorization? In my understanding the queries executed for you to do the authentication and/or authorization are usually done at LOCAL_ONE (or QUORUM for cassandra user), but maybe there is something that is changed in the security

RE: [EXTERNAL] Two separate rows for the same partition !!

2019-05-15 Thread Durity, Sean R
Uniqueness is determined by the partition key PLUS the clustering columns. Hard to tell from your data below, but is it possible that one of the clustering columns (perhaps g) has different values? That would easily explain the 2 rows returned – because they ARE different rows in the same
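The uniqueness rule above (partition key plus clustering columns) can be sketched in a few lines: two rows that share a partition key but differ in a clustering column are distinct rows and both survive. Column names (`id`, `g`, `val`) are illustrative, chosen to echo the thread.

```python
def primary_key(row, partition_cols, clustering_cols):
    """Row identity in Cassandra is the partition key PLUS the
    clustering columns, taken together."""
    return (
        tuple(row[c] for c in partition_cols),
        tuple(row[c] for c in clustering_cols),
    )

table = {}
for row in (
    {"id": 1, "g": "x", "val": 10},
    {"id": 1, "g": "y", "val": 10},  # same partition, different clustering
):
    table[primary_key(row, ["id"], ["g"])] = row
```

Both rows land in the same partition (`id = 1`) yet remain separate rows, which is exactly the "two rows for the same partition" the original poster observed.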

RE: [EXTERNAL] Re: Using Cassandra as an object store

2019-04-19 Thread Durity, Sean R
Object stores are some of our largest and oldest use cases. Cassandra has been a good choice for us. We do chunk the objects into 64k chunks (I think), so that partitions are not too large and it scales predictably. For us, the choice was more about high availability and scalability, which
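The chunking scheme described above is straightforward to sketch: split each blob into fixed-size pieces and store one piece per row, keyed by `(object_id, chunk_index)`, so no single partition grows unbounded. The 64 KiB size and the function names below are illustrative (the message itself hedges on the exact chunk size).

```python
CHUNK_SIZE = 64 * 1024  # 64 KiB, per the sizing mentioned in the message

def chunk_object(blob: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a blob into fixed-size chunks; each chunk would be stored
    as one clustered row under the object's partition."""
    return [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]

def reassemble(chunks):
    """Reading the object back is a single-partition scan in
    clustering order, then concatenation."""
    return b"".join(chunks)
```

A corresponding table might look like `PRIMARY KEY ((object_id), chunk_index)`, so a whole object is one ordered partition scan.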
