Re: Cassandra HEAP Suggestion.. Need a help

2018-05-24 Thread Elliott Sims
JVM GC tuning can be pretty complex, but the simplest solution to OOM is probably switching to G1GC and feeding it a rather large heap. Theoretically a smaller heap and carefully-tuned CMS collector is more efficient, but CMS is kind of fragile and tuning it is more of a black art, where you can
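For reference, a minimal sketch of what that change looks like in practice, assuming a 3.11-style conf/jvm.options (older versions keep these flags in cassandra-env.sh); the heap size and pause target below are illustrative, not recommendations for any particular hardware:

    # conf/jvm.options -- comment out the CMS section, then enable G1 with a larger heap
    -Xms16G
    -Xmx16G
    -XX:+UseG1GC
    -XX:MaxGCPauseMillis=500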

Re: Snapshot SSTable modified??

2018-05-25 Thread Elliott Sims
I've run across this problem before - it seems like GNU tar interprets changes in the link count as changes to the file, so if the file gets compacted mid-backup it freaks out even if the file contents are unchanged. I worked around it by just using bsdtar instead. On Thu, May 24, 2018 at 6:08
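A hedged sketch of that workaround, assuming the usual snapshot layout under /var/lib/cassandra/data (the keyspace, table, and snapshot names are placeholders):

    # GNU tar can abort with "file changed as we read it" when a hard link's
    # link count changes mid-read; bsdtar (libarchive) doesn't treat that as a change.
    bsdtar -czf my_snapshot.tar.gz \
        /var/lib/cassandra/data/my_ks/my_table-*/snapshots/my_snapshot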

Re: saving distinct data in cassandra result in many tombstones

2018-06-12 Thread Elliott Sims
If this is data that expires after a certain amount of time, you probably want to look into using TWCS and TTLs to minimize the number of tombstones. Decreasing gc_grace_seconds then compacting will reduce the number of tombstones, but at the cost of potentially resurrecting deleted data if the
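As a rough sketch of what that looks like, run here through cqlsh (the keyspace, table, window size, and TTL are placeholders to adapt):

    cqlsh -e "ALTER TABLE my_ks.events
      WITH compaction = {'class': 'TimeWindowCompactionStrategy',
                         'compaction_window_unit': 'DAYS',
                         'compaction_window_size': 1}
      AND default_time_to_live = 604800;"   # 7-day TTL on newly written rows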

Re: High load, low IO wait, moderate CPU usage

2018-06-15 Thread Elliott Sims
Do you have an actual performance issue anywhere at the application level? If not, I wouldn't spend too much time on it - load avg is a sort of odd indirect metric that may or may not mean anything depending on the situation. On Fri, Jun 15, 2018 at 6:49 AM, Igor Leão wrote: > Hi there, > > I

Re: Restoring snapshot

2018-06-11 Thread Elliott Sims
It's possible that it's something more subtle, but keep in mind that sstables don't include the schema. If you've made schema changes, you need to apply/revert those first or C* probably doesn't know what to do with those columns in the sstable. On Sun, Jun 10, 2018 at 11:38 PM, wrote: > Dear
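One hedged way to handle that before restoring, assuming a 3.x snapshot (which writes a schema.cql file into the snapshot directory; paths and names below are placeholders):

    # Recreate the table exactly as it was when the snapshot was taken
    cqlsh -f /var/lib/cassandra/data/my_ks/my_table-<id>/snapshots/my_snapshot/schema.cql
    # Copy the snapshot sstables back into the live table directory, then pick them up:
    nodetool refresh my_ks my_table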

Re: Snapshot SSTable modified??

2018-05-28 Thread Elliott Sims
:-/ > > Thanks Jeff & others for your responses. > > - Max > > On May 25, 2018, at 5:05pm, Elliott Sims wrote: > > I've run across this problem before - it seems like GNU tar interprets > changes in the link count as changes to the file, so if the file gets > compacted mid

Re: 3.11.2 memory leak

2018-06-04 Thread Elliott Sims
Are you seeing significant issues in terms of performance? Increased garbage collection, long pauses, or even OutOfMemory? Which garbage collector are you using and with what settings/thresholds? Since the JVM's garbage-collected, a bigger heap can mean a problem or it can just mean "hasn't

Re: Mongo DB vs Cassandra

2018-06-01 Thread Elliott Sims
I'd say for a large write-heavy workload like this, Cassandra is a pretty clear winner over MongoDB. I agree with the commenters about understanding your query patterns a bit better before choosing, though. Cassandra's queries are a bit limited, and if you're loading all new data every day and

Re: Too many Cassandra threads waiting!!!

2018-08-01 Thread Elliott Sims
You might have more luck trying to analyze at the Java level, either via a (Java) stack dump and the "ttop" tool from Swiss Java Knife, or Cassandra tools like "nodetool tpstats" On Wed, Aug 1, 2018 at 2:08 AM, nokia ceph wrote: > Hi, > > i'm having a 5 node cluster with cassandra 3.0.13. > > i
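A quick sketch of those checks, assuming sjk.jar has been downloaded locally and <pid> is the Cassandra process id:

    nodetool tpstats                                 # per-thread-pool active/pending/blocked counts
    java -jar sjk.jar ttop -p <pid> -o CPU -n 30     # top 30 Java threads by CPU (Swiss Java Knife)
    jstack <pid> > cassandra-threads.txt             # raw stack dump for offline inspection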

Re: about cassandra..

2018-08-09 Thread Elliott Sims
Deflate instead of LZ4 will probably give you somewhat better compression at the cost of a lot of CPU. Larger chunk length might also help, but in most cases you probably won't see much benefit above 64K (and it will increase I/O load). On Wed, Aug 8, 2018 at 11:18 PM, Eunsu Kim wrote: > Hi
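Roughly what that change looks like via cqlsh (keyspace and table are placeholders; chunk_length_in_kb is the 3.x option name):

    cqlsh -e "ALTER TABLE my_ks.my_table
      WITH compression = {'class': 'DeflateCompressor',
                          'chunk_length_in_kb': 64};"
    # Existing sstables keep their old settings until rewritten:
    nodetool upgradesstables -a my_ks my_table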

Re: Improve data load performance

2018-08-15 Thread Elliott Sims
Step one is always to measure your bottlenecks. Are you spending a lot of time compacting? Garbage collecting? Are you saturating CPU? Or just a few cores? Or I/O? Are repairs using all your I/O? Are you just running out of write threads? On Wed, Aug 15, 2018 at 5:48 AM, Abdul Patel
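A few quick places to look, sketched as shell commands (tool availability varies by distro):

    nodetool tpstats           # blocked/pending write or request threads
    nodetool compactionstats   # pending compactions and throughput
    nodetool gcstats           # GC pause time since the last call
    iostat -xm 5               # disk utilization and IOPS
    mpstat -P ALL 5            # all cores saturated, or just a few?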

Re: "minimum backup" in vnodes

2018-08-15 Thread Elliott Sims
Assuming this isn't an existing cluster, the easiest method is probably to use logical "racks" to explicitly control which hosts have a full replica of the data. With RF=3 and 3 "racks", each "rack" has one complete replica. If you're not using the logical racks, I think the replicas are spread
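A hedged sketch of that rack setup, assuming GossipingPropertyFileSnitch and NetworkTopologyStrategy (the datacenter, rack, and keyspace names are placeholders):

    # cassandra-rackdc.properties on each node; assign a third of the hosts to each rack
    dc=dc1
    rack=rack1    # rack2 / rack3 on the other two groups of hosts

    # Keyspace replication so each logical rack ends up holding one full replica
    cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
      {'class': 'NetworkTopologyStrategy', 'dc1': 3};"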

Re: Improve data load performance

2018-08-15 Thread Elliott Sims
>> Is this a one-time or occasional load or more frequently? >> >> Is the data located in the same physical data center as the cluster? (any >> network latency?) >> >> >> >> On the client side, prepared statements and ExecuteAsync can really speed >

Re: Huge daily outbound network traffic

2018-08-16 Thread Elliott Sims
> >> On Thursday, August 16, 2018, 12:02:55 AM PDT, Behnam B.Marandi < >> behnam.b.mara...@gmail.com> wrote: >> >> >> Actually I did. It seems this is a cross node traffic from one node to >> port 7000 (storage_port) of the other node. >> >> On Sun, Aug 12,

Re: upgrade 2.1 to 3.0

2018-08-11 Thread Elliott Sims
Might be a silly question, but did you run "nodetool upgradesstables" and convert to the 3.0 format? Also, which 3.0? Newest, or an earlier 3.0.x? On Fri, Aug 10, 2018 at 3:05 PM, kooljava2 wrote: > Hello, > > We recently upgrade C* from 2.1 to 3.0. After the upgrade we are seeing > increase
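For reference, a minimal sketch of the check and the rewrite (paths are placeholders; exact file-name prefixes vary by minor version):

    # Old-format sstables carry version strings like ka/la in the file name; 3.0-format files start with ma/mb
    ls /var/lib/cassandra/data/my_ks/my_table-*/ | head
    # Rewrite any remaining old-format sstables on this node
    nodetool upgradesstables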

Re: Huge daily outbound network traffic

2018-08-11 Thread Elliott Sims
Since it's at a consistent time, maybe just look at it with iftop to see where the traffic's going and what port it's coming from? On Fri, Aug 10, 2018 at 1:48 AM, Behnam B.Marandi < behnam.b.mara...@gmail.com> wrote: > I don't have any external process or planed repair in that time period. > In
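A sketch of that kind of spot check, assuming the node's interface is eth0 (adjust as needed); 7000/7001 are the inter-node ports and 9042 the client port by default:

    iftop -i eth0 -P -n                                            # live per-flow bandwidth with ports, no DNS lookups
    ss -tn | awk '{print $5}' | sort | uniq -c | sort -rn | head   # connection counts by peer:port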

Re: benefits oh HBase over Cassandra

2018-08-24 Thread Elliott Sims
At the time that Facebook chose HBase, Cassandra was drastically less mature than it is now, and I think the original creators had already left. There were already various Hadoop variants running for data analytics etc., so there was lots of operational and engineering experience around it available. So,

Re: Cluster CPU usage limit

2018-09-06 Thread Elliott Sims
It's interesting and a bit surprising that 256 write threads isn't enough. Even with a lot of cores, I'd expect you to be able to saturate CPU with that many threads. I'd make sure you don't have other bottlenecks, like GC, IOPs, network, or "microbursts" where your load is actually fluctuating

Re: Network throughput requirements

2018-07-10 Thread Elliott Sims
Among the hosts in a cluster? It depends on how much data you're trying to read and write. In general, you're going to want a lot more bandwidth among hosts in the cluster than you have external-facing. Otherwise things like repairs and bootstrapping new nodes can get slow/difficult. To put it

Cassandra downgrade version

2018-04-25 Thread Elliott Sims
Looks like no major table version changes since 3.0, and a couple of minor changes in 3.0.7/3.7 and 3.0.8/3.8: https://github.com/apache/cassandra/blob/48a539142e9e318f9177ad8cec47819d1adc3df9/doc/source/architecture/storage_engine.rst So, I suppose whether a revert is safe or not depends on

Re: JVM Heap erratic

2018-06-28 Thread Elliott Sims
It depends a bit on which collector you're using, but that's fairly normal. Heap grows for a while, then the JVM decides via a variety of metrics that it's time to run a collection. G1GC is usually a bit steadier and less sawtooth than the Parallel Mark Sweep, but if your heap's a lot bigger than

Re: JVM Heap erratic

2018-06-28 Thread Elliott Sims
> All queries use cluster key, so I'm not accidentally reading a whole > partition. > The last place I'm looking - which maybe should be the first - is > tombstones. > > sorry for the afternoon rant! thanks for your eyes! > > On Thu, Jun 28, 2018 at 5:54 PM, Elliott Sims >

Re: High IO and poor read performance on 3.11.2 cassandra cluster

2018-09-11 Thread Elliott Sims
A few reasons I can think of offhand why your test setup might not see problems from large readahead (a quick way to check and adjust it is sketched below):
- Your sstables are <4MB, or your reads are typically <4MB from the end of the file
- Your queries tend to use the 4MB of data anyways
- Your dataset is small enough that most of it fits in the VM cache,
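If readahead does turn out to be the culprit, a hedged example of checking and lowering it (the device name is a placeholder; values are in 512-byte sectors, so 8192 = 4MB and 128 = 64KB):

    blockdev --getra /dev/sdX          # current readahead
    blockdev --setra 128 /dev/sdX      # drop it to ~64KB, closer to a compression chunk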