[
https://issues.apache.org/jira/browse/CASSANDRA-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15114212#comment-15114212
]
Peter Kovgan commented on CASSANDRA-10937:
------------------------------------------
Thank you, Jack.
I'll answer inline:
I still don't see any reason to believe that there is a bug here and that the
primary issue is that you are overloading the cluster.
Peter: Agreed, and I hope this is the reason.
Sure, Cassandra should do a better job of shedding/failing excessive incoming
requests, and there is an open Jira ticket to add just such a feature, but even
with that new feature, the net effect will be the same - it will still be up to
the application and operations to properly size the cluster and throttle
application load before it gets to Cassandra.
Peter: No problem, I understand the driving force for that. I only claim that a
friendly warning would be appropriate when there is an estimated danger of
approaching OOM. It is hard to do that, I understand; some situations are not
easy to analyze and draw conclusions from. But see below…
OOM is not typically an indication of a software bug. Sure, sometimes code has
memory leaks, but with a highly dynamic system such as Cassandra, it typically
means either a misconfigured JVM or just very heavy load. Sometimes OOM simply
means that there is a lot of background processing going on (like compactions
or hinted handoff) that is having trouble keeping up with incoming requests.
Sometimes OOM occurs because you have too large a heap which defers GC but then
GC takes too long and further incoming requests simply generate more pressure
on the heap faster than that massive GC can deal with it.
Peter: Regarding compactions, I could imagine that. We notice progressive
growth in IO demand.
So I would take progressive growth in IO wait as a warning trigger for a
possibly approaching OOM. E.g., if the normal IO wait is configured as 0.3%, and
the system progressively crosses configured thresholds of 0.7%, 1.0%, 1.5%, I
would like to see that in a warning log. This way I can judge earlier that I
need to grow the ring, or else wait for an OOM.
Now, in the latest test, I see pending compactions gradually increasing, very
slowly. Two days ago it was 40, now it is 135. I wonder, is that a sign of a
pending problem?
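In the meantime, a check like this can be scripted on our side. Below is a
minimal sketch that polls the pending-compactions gauge over JMX and prints a
warning when it keeps rising; it assumes the metrics MBean name
org.apache.cassandra.metrics:type=Compaction,name=PendingTasks and the default
JMX port 7199, both of which may differ by version and configuration (the same
number is also visible via nodetool compactionstats).

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Polls the pending-compactions gauge and warns when it grows between samples.
public class PendingCompactionsWatcher {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed MBean name; may vary between Cassandra versions.
            ObjectName pending = new ObjectName(
                    "org.apache.cassandra.metrics:type=Compaction,name=PendingTasks");
            long previous = -1;
            while (true) {
                Number value = (Number) mbs.getAttribute(pending, "Value");
                if (previous >= 0 && value.longValue() > previous) {
                    System.out.println("WARN: pending compactions rising: "
                            + previous + " -> " + value);
                }
                previous = value.longValue();
                Thread.sleep(60_000);   // sample once a minute
            }
        }
    }
}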
It is indeed tricky to make sure the JVM has enough heap but not too much.
Peter: I am aware of that. I deal with GC issues in general more frequently than
others in my company. Previous DSE tests were done with G1, providing a multiple
of 2048 MB (a G1 recommendation); concretely I gave it 73728M. Here I assume
effective GC with G1 is more a function of available CPU, because there are a
lot of "young" and "old" regions and things are more complicated than with the
Concurrent collector. CPU was fine when the OOM happened, with a lot of idle
time, which is another sign that IO is the bottleneck.
We are now testing 2 single-node installations, one with a 36 GB heap and one
with 73 GB, and I want to see which one does better. We also reduced the load to
5 MB/sec, instead of 25-30.
DSE typically runs with a larger heap by default. You can try increasing your
heap to 10 or 12G. But if you make the heap too big, the big GC can bite you as
described above. In that case, the heap needs to be reduced. Typically you
don't need a heap smaller than 8 GB. If OOM occurs with an 8 GB heap it
typically means the load on that node is simply too heavy.
Be sure to review the recommendations in this blog post:
http://www.datastax.com/dev/blog/how-not-to-benchmark-cassandra
Peter: Done. All is by the book, except that we use a custom producer and a
custom data model.
We keep changing the data model, trying to make it more effective; the last
change was adding the day to the partition key, because we want to avoid overly
wide rows. Our producer is multi-threaded and configurable.
A few questions that will help us better understand what you are really trying
to do:
1. How much reading are you doing and when relative to writes?
Peter: In the tests that ended in OOM (all previous tests) we did only writes.
Just recently, with the lower load, I started doing reads as well.
So far it is OK (4 days have passed).
2. Are you doing any updates or deletes? (These cause compaction, which can fall
behind your write/update load.)
Peter: No, no updates, and we will not do any. Our TTL will be set to 4 weeks in
production. For now I use no TTL, to test reads against a larger data set.
3. How much data is on the cluster (rows)?
Peter:
This info is currently unavailable (for the OOM-ended tests and the previous
data model). I cannot check, because Cassandra fails with OOM during restart and
I have no other environment to look at.
But for today's test (we added the day to the partition key; other parameters
are the same) the estimated numbers from nodetool cfstats are:
Number of keys (estimate): 2000
Number of keys (estimate): 10142095
Number of keys (estimate): 350
Number of keys (estimate): 2000
Number of keys (estimate): 350
Number of keys (estimate): 12491
I assume we now have more rows than in previous tests (because we added a
column to the partition key).
4. How many tables?
Peter: 4, plus 2 secondary indexes; we will probably remove the indexes later.
They are there just to test that approach.
5. What RF? RF=3 would be the recommendation, but if you have a heavy read load
you may need RF=5, although heavy load usually means you just need a lot more
nodes so that the fraction of incoming requests going to a particular node is
dramatically reduced. RF>3 is only needed if there is high load for each
particular row or partition.
Peter: We tested (when we tested on VMware) with RF=2 and consistency level
ONE, but later we started testing a single node (without VMware). We plan RF=2
in production, to use fewer nodes…
6. Have you tested using cassandra-stress? That's the gold standard around here.
Peter: No, we need to stay close to our business reality to choose what is OK
for us. Besides, our data model is very straightforward, so we are not far from
the standard case.
Below I list our data model.
All fields in each table are 4-8 bytes, except one (a BLOB), which is 256 bytes.
I will probably remove the secondary indexes later and create tables instead;
I've noticed that their efficiency is worse than that of a table.
7. Are your clients using token-aware routing? (Otherwise a write must be
bounced from the coordinating node to the node owning the token for the
partition key.)
Peter: No, we use the default.
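For reference, switching the producer to token-aware routing should be a small
change if we use the DataStax Java driver. A minimal sketch, assuming driver
2.1+/3.x (the contact point is a placeholder; the keyspace is the one from the
schema below). Note that recent driver versions already wrap the default policy
in TokenAwarePolicy, so "default" may in fact be token-aware depending on the
driver version.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class TokenAwareClient {
    public static void main(String[] args) {
        // Wrap the DC-aware round-robin policy in TokenAwarePolicy so each write
        // is sent directly to a replica owning the partition key, instead of
        // being bounced through an arbitrary coordinator node.
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")   // placeholder contact point
                .withLoadBalancingPolicy(
                        new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()))
                .build();
        Session session = cluster.connect("oblrepository_ny");
        // ... prepare statements and run the same inserts as the existing producer ...
        cluster.close();
    }
}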
8. Are you using batches for your writes? If so, do all the writes in one
batch have the same partition key? (If not, the batch adds more network hops.)
Peter: No batches. I read about their controversial efficiency, so no batches
for now.
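Just to illustrate the point of question 8: if we did batch at some point, the
only cheap form would be an UNLOGGED batch in which every statement targets the
same partition, so the whole batch lands on one replica set. A minimal sketch
against obl_by_security, assuming the DataStax Java driver and a prepared
insert whose bind order matches the comment (all values are placeholders):

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import java.nio.ByteBuffer;
import java.util.Date;
import java.util.UUID;

public class SamePartitionBatch {
    // Every bound statement shares the same (security_id, day) partition key,
    // so no statement in the batch needs an extra hop to another node.
    static void writeBatch(Session session, PreparedStatement insert,
                           String securityId, String day, ByteBuffer[] messages) {
        BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
        for (ByteBuffer message : messages) {
            // assumed bind order: security_id, day, id, publish_time, message
            batch.add(insert.bind(securityId, day, UUID.randomUUID(), new Date(), message));
        }
        session.execute(batch);
    }
}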
9. What expectations did you have as to how many writes/reads a given number of
nodes should be able to handle?
Peter: We have no firm expectations yet, but we started from 30 MB/sec (120000
writes/sec) and failed with OOM; today we started at 5 MB/sec (20000
messages/sec). If that works for 1 week with no OOM (and we have acceptable read
latency), then we can use that. We are even ready to test 2 MB/sec; we are ready
for anything that is stable and gives us hope :). In future production we expect
a load (not from the beginning) of about 50-100 MB/sec, with occasional spikes
to 150 MB/sec.
We have a dilemma about whether to put Kafka in front of Cassandra or not, to
amortize such periodic spikes...
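On the throttling point above: even without Kafka in front, a simple shared
limiter in the producer threads would cap the ingest rate and absorb much of
the spikiness. A minimal sketch, assuming Guava's RateLimiter is available on
the producer classpath (20000 permits/sec roughly matches the current 5 MB/sec
test with 256-byte payloads):

import com.google.common.util.concurrent.RateLimiter;

public class ThrottledProducer {
    // One process-wide limiter shared by all producer threads.
    private static final RateLimiter LIMITER = RateLimiter.create(20_000.0);

    static void send(Runnable cassandraWrite) {
        LIMITER.acquire();      // blocks until a permit is available
        cassandraWrite.run();   // perform the actual insert
    }
}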
A few more questions:
1. When nodes do crash, what happens when you restart them? Do they immediately
crash again or run for many hours?
Peter: They crash during startup, even without VMware. It crashes before I
apply any load; it is just unable to start.
2. Is it just a single node crashing or do like all the nodes fail around the
same time, like falling dominoes?
Peter: There was a test (in the VMware environment) where 8 nodes formed a
cluster with RF=2. 3-4 of them crashed, within a narrow window of time.
Just to be clear, the fact that the cluster seemed fine for 48 hours does not
tell us whether it might have been near the edge of failing for quite some time
and maybe the precise pattern of load just statistically became the straw that
broke the camel's back at that moment. That's why it's important to know what
happened after you restarted and resumed the test after the crash at 48 hours.
Peter: Got it.
If it really was a resource leak, then reducing the heap would make the failure
occur sooner. Determine what the minimal heap size is to run the test at all -
set it low enough so the test won't run even for a minute, then increase the
heap so it does run, then decrease it by less than you increased it - a binary
search for the exact heap size that is needed for the test to run even for a
few minutes or an hour. At least then you would have an easy to reproduce test
case. So if you can tune the heap so that the test can run successfully for say
10 minutes before reliably hitting the OOM, then you can see how much you need
to reduce the load (throttling the app) to be able to run without hitting OOM.
Peter: I agree with that logic, but it seems to me there is another factor
(something with IO) playing a role, and this one only accumulates within
particularly long runs. But I will try this approach. Anyway, we also need to
test long runs, to eliminate the possibility that something accumulates only in
a long run.
I'm not saying that there is absolutely no chance that there is a resource
leak, just simply that there are still a lot of open questions to answer about
usage before we can leap to that conclusion. Ultimately, we do have to have a
reliable repro test case before anything can be done.
In any case, at least at this stage it seems clear that you probably do need a
much larger cluster (more nodes with less load on each node.) Yes, it's
unfortunate that Cassandra won't give you a nice clean message that says that,
but that ultimate requirement remains unchanged - pending answers to all of the
open questions.
Peter: Agreed.
Peter:
Here is our old data model (all previous tests, which ended in OOM, were done
with it). The load was around 20-30 MB/sec.
create keyspace OBLREPOSITORY_NY with replication = {'class':'SimpleStrategy',
'replication_factor' : 1};
create table obl_by_security(
day text,
security_id text,
producer_id text,
deal_code text,
message blob,
publish_time timestamp,
id uuid,
region text,
primary key ((security_id, day), publish_time))
with CLUSTERING ORDER BY (publish_time DESC)
;
CREATE INDEX ON obl_by_security (deal_code);
CREATE INDEX ON obl_by_security (producer_id);
create table obl_by_producer(
security_id text,
producer_id text,
deal_code text,
message blob,
publish_time timestamp,
id uuid,
region text,
primary key (producer_id, publish_time))
with CLUSTERING ORDER BY (publish_time DESC)
;
create table obl_by_deal_code(
security_id text,
producer_id text,
deal_code text,
message blob,
publish_time timestamp,
id uuid,
region text,
primary key (deal_code, publish_time))
with CLUSTERING ORDER BY (publish_time DESC)
;
create table obl_by_deal_code_per_security(
security_id text,
producer_id text,
deal_code text,
message blob,
publish_time timestamp,
id uuid,
region text,
primary key ((deal_code, security_id), publish_time))
with CLUSTERING ORDER BY (publish_time DESC)
;
Here is today's data model (very new); the day was added to the partition key
of the 3 additional tables to reduce row "wideness". The test we started runs
at 5.5 MB/sec and has already been running for 4 days.
create table obl_by_security(
day text,
security_id text,
producer_id text,
deal_code text,
message blob,
publish_time timestamp,
id uuid,
region text,
primary key ((security_id, day), publish_time))
with CLUSTERING ORDER BY (publish_time DESC)
;
CREATE INDEX ON obl_by_security (deal_code);
CREATE INDEX ON obl_by_security (producer_id);
create table obl_by_producer(
day text,
security_id text,
producer_id text,
deal_code text,
message blob,
publish_time timestamp,
id uuid,
region text,
primary key ((producer_id, day), publish_time))
with CLUSTERING ORDER BY (publish_time DESC)
;
create table obl_by_deal_code(
day text,
security_id text,
producer_id text,
deal_code text,
message blob,
publish_time timestamp,
id uuid,
region text,
primary key ((deal_code, day), publish_time))
with CLUSTERING ORDER BY (publish_time DESC)
;
create table obl_by_deal_code_per_security(
day text,
security_id text,
producer_id text,
deal_code text,
message blob,
publish_time timestamp,
id uuid,
region text,
primary key ((deal_code, security_id, day), publish_time))
with CLUSTERING ORDER BY (publish_time DESC)
;
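One consequence of the day bucket worth noting: every read now has to supply
the day as part of the partition key, and a query spanning several days becomes
one query per day bucket. A minimal sketch of a read against obl_by_security,
assuming the DataStax Java driver 3.x (column values are placeholders):

import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class DayBucketedRead {
    // With ((security_id, day), publish_time) as the key and DESC clustering,
    // this returns the latest 100 messages for one security on one day.
    static void readLatest(Session session, String securityId, String day) {
        PreparedStatement select = session.prepare(
                "SELECT publish_time, deal_code, message FROM obl_by_security "
                + "WHERE security_id = ? AND day = ? LIMIT 100");
        ResultSet rs = session.execute(select.bind(securityId, day));
        for (Row row : rs) {
            System.out.println(row.getTimestamp("publish_time") + " "
                    + row.getString("deal_code"));
        }
    }
}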
Thank you!
Peter.
> OOM on multiple nodes on write load (v. 3.0.0), problem also present on
> DSE-4.8.3, but there it survives more time
> ------------------------------------------------------------------------------------------------------------------
>
> Key: CASSANDRA-10937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10937
> Project: Cassandra
> Issue Type: Bug
> Environment: Cassandra : 3.0.0
> Installed as open archive, no connection to any OS specific installer.
> Java:
> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
> OS :
> Linux version 2.6.32-431.el6.x86_64
> ([email protected]) (gcc version 4.4.7 20120313 (Red
> Hat 4.4.7-4) (GCC) ) #1 SMP Sun Nov 10 22:19:54 EST 2013
> We have:
> 8 guests ( Linux OS as above) on 2 (VMWare managed) physical hosts. Each
> physical host keeps 4 guests.
> Physical host parameters(shared by all 4 guests):
> Model: HP ProLiant DL380 Gen9
> Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
> 46 logical processors.
> Hyperthreading - enabled
> Each guest assigned to have:
> 1 disk 300 Gb for seq. log (NOT SSD)
> 1 disk 4T for data (NOT SSD)
> 11 CPU cores
> Disks are local, not shared.
> Memory on each host - 24 Gb total.
> 8 (or 6, tested both) Gb - cassandra heap
> (lshw and cpuinfo attached in file test2.rar)
> Reporter: Peter Kovgan
> Priority: Critical
> Attachments: cassandra-to-jack-krupansky.docx, gc-stat.txt,
> more-logs.rar, some-heap-stats.rar, test2.rar, test3.rar, test4.rar,
> test5.rar, test_2.1.rar, test_2.1_logs_older.rar,
> test_2.1_restart_attempt_log.rar
>
>
> 8 cassandra nodes.
> Load test started with 4 clients (different, non-identical machines), each
> running 1000 threads.
> Each thread was assigned in a round-robin way to run one of 4 different inserts.
> Consistency->ONE.
> I attach the full CQL schema of tables and the query of insert.
> Replication factor - 2:
> create keyspace OBLREPOSITORY_NY with replication =
> {'class':'NetworkTopologyStrategy','NY':2};
> Initial throughput is:
> 215,000 inserts/sec
> or
> 54 MB/sec, considering a single insert size a bit larger than 256 bytes.
> Data:
> all fields (5-6) are short strings, except one, which is a BLOB of 256 bytes.
> After about 2-3 hours of work, I was forced to increase the timeout from 2000
> to 5000ms, because some requests failed due to the short timeout.
> Later on (after approx. 12 hours of work) OOM happened on multiple nodes.
> (all failed nodes logs attached)
> I also attach the Java load client and instructions on how to set it up and
> use it (test2.rar).
> Update:
> Later the test was repeated with a lower load (100000 msg/sec), more relaxed
> CPU (25% idle), and only 2 test clients, but the test failed anyway.
> Update:
> DSE-4.8.3 also failed with OOM (3 of 8 nodes), but there it survived 48 hours,
> not 10-12.
> Attachments:
> test2.rar -contains most of material
> more-logs.rar - contains additional nodes logs