#1 The cause of this problem is a CREATE TABLE statement collision. Do
*not* generate tables dynamically from multiple clients, even with IF NOT
EXISTS. The first thing you need to do is fix your code so that this does
not happen. Just create your tables manually from cqlsh, allowing time for
the schema to propagate.
#1
> There is one table - daily_challenges - which shows compacted partition
> max bytes as ~460M and another one - daily_guest_logins - which shows
> compacted partition max bytes as ~36M.
460 MB is high; I like to keep my partitions under 100 MB when possible.
I've seen worse, though. The fix is to
And here is my cassandra-env.sh
https://gist.github.com/kunalg/2c092cb2450c62be9a20
Kunal
On 11 July 2015 at 00:04, Kunal Gangakhedkar
wrote:
> From jhat output, top 10 entries for "Instance Count for All Classes
> (excluding platform)" shows:
>
> 2088223 instances of class org.apache.cassandra
>From jhat output, top 10 entries for "Instance Count for All Classes
(excluding platform)" shows:
2088223 instances of class org.apache.cassandra.db.BufferCell
1983245 instances of class
org.apache.cassandra.db.composites.CompoundSparseCellName
1885974 instances of class
org.apache.cassandra.db.c
Any pointers on this?
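Those jhat "Instance Count for All Classes" lines can be ranked with a few lines of script. A minimal sketch (the parser is my own convenience, not Cassandra tooling; only the two complete sample lines from the thread are used):

```python
# Sketch: rank classes from jhat's "Instance Count for All Classes" output.
# The sample lines are from the thread; truncated entries are omitted.
import re

jhat_output = """\
2088223 instances of class org.apache.cassandra.db.BufferCell
1983245 instances of class org.apache.cassandra.db.composites.CompoundSparseCellName
"""

def top_classes(text):
    """Return (count, class_name) pairs sorted by instance count, descending."""
    pairs = [(int(m.group(1)), m.group(2))
             for m in re.finditer(r"(\d+) instances of class (\S+)", text)]
    return sorted(pairs, reverse=True)

for count, cls in top_classes(jhat_output):
    print(f"{count:>9}  {cls}")
```

Millions of BufferCell/CompoundSparseCellName instances on the heap usually point at very wide partitions being materialized, which ties back to the partition-size advice elsewhere in the thread.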
In 2.1, updating a counter with an UNLOGGED batch using a timestamp isn't
as safe as other column updates with a consistency level (can a counter
update with a timestamp be idempotent?).
Thanks
Ajay
On 09-Jul-2015 11:47 am, "Ajay" wrote:
>
> Hi,
>
> What is the accuracy improvem
Thanks for quick reply.
1. I don't know what thresholds I should look for. So, to save this
back-and-forth, I'm attaching the cfstats output for the keyspace.
There is one table - daily_challenges - which shows compacted partition max
bytes as ~460M and another one - daily_guest_logins - which shows compacted
partition max bytes as ~36M.
1. You want to look at # of sstables in cfhistograms or in cfstats look at:
Compacted partition maximum bytes
Maximum live cells per slice
2. No, here's the cassandra-env.sh from 3.0, which should work with some tweaks:
https://github.com/tobert/cassandra/blob/0f70469985d62aeadc20b41dc9cdc9d72a035c64/conf/ca
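Checking those two cfstats numbers across many tables can be scripted. A sketch that pulls "Compacted partition maximum bytes" out of `nodetool cfstats` text and flags oversized partitions (the 100 MB threshold echoes the rule of thumb from earlier in the thread; the sample text is abridged, not real output):

```python
# Sketch: flag tables whose "Compacted partition maximum bytes" in
# `nodetool cfstats` output exceeds a comfort threshold. Sample abridged.

SAMPLE = """\
Table: daily_challenges
Compacted partition maximum bytes: 460000000
Maximum live cells per slice (last five minutes): 1000
Table: daily_guest_logins
Compacted partition maximum bytes: 36000000
Maximum live cells per slice (last five minutes): 100
"""

THRESHOLD = 100 * 1024 * 1024  # ~100 MB rule of thumb from the thread

def oversized_tables(cfstats_text, threshold=THRESHOLD):
    flagged, table = [], None
    for line in cfstats_text.splitlines():
        line = line.strip()
        if line.startswith("Table:"):
            table = line.split(":", 1)[1].strip()
        elif line.startswith("Compacted partition maximum bytes:"):
            size = int(line.split(":", 1)[1])
            if size > threshold:
                flagged.append((table, size))
    return flagged

print(oversized_tables(SAMPLE))  # only daily_challenges is over the line
```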
On Sun, Jul 5, 2015 at 1:40 PM, Roman Tkachenko wrote:
> Hey guys,
>
> I have a table with RF=3 and LCS. Data model makes use of "wide rows". A
> certain query run against this table times out and tracing reveals the
> following error on two out of three nodes:
>
> *Scanned over 10 tombstones
Thanks, Sebastian.
A couple of questions (I'm really new to Cassandra):
1. How do I interpret the output of 'nodetool cfstats' to figure out the
issues? Any documentation pointer on that would be helpful.
2. I'm primarily a python/c developer - so, totally clueless about JVM
environment. So, please
#1 You need more information.
a) Take a look at your .hprof file (memory heap from the OOM) with an
introspection tool like jhat or visualvm or java flight recorder and see
what is using up your RAM.
b) How big are your large rows (use nodetool cfstats on each node)? If your
data model is bad, yo
I upgraded my instance from 8GB to a 14GB one.
Allocated 8GB to jvm heap in cassandra-env.sh.
And now, it crashes even faster with an OOM.
Earlier, with a 4GB heap, I could get up to ~90% replication completion (as
reported by nodetool netstats); now, with an 8GB heap, I cannot even get
there. I've alr
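For context on those heap numbers: if memory serves, the stock 2.x cassandra-env.sh (calculate_heap_sizes) picks roughly max(min(RAM/2, 1 GB), min(RAM/4, 8 GB)). A rough Python restatement as a sanity check — treat it as an approximation of the script, not a substitute for reading your own copy:

```python
# Approximation of the heap auto-sizing heuristic in 2.x's cassandra-env.sh:
# half the RAM capped at 1 GB, or a quarter of the RAM capped at 8 GB,
# whichever is larger. Verify against your own cassandra-env.sh.

def default_max_heap_mb(system_ram_mb):
    return max(min(system_ram_mb // 2, 1024), min(system_ram_mb // 4, 8192))

for ram_gb in (8, 14):
    print(ram_gb, "GB RAM ->", default_max_heap_mb(ram_gb * 1024), "MB heap")
```

By that heuristic the 8 GB and 14 GB boxes would get roughly 2 GB and 3.5 GB heaps; manually pinning the heap to 8 GB on a 14 GB machine leaves little headroom for off-heap structures and the OS page cache, which is consistent with the OOM arriving faster, not slower.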
My understanding is that the Cassandra file structure follows the naming
convention below:
/cassandra/data/
Whereas our file structure is as below: each table has multiple directories,
and when we drop and recreate tables these directories remain. Also, when we
dropped the table one node was down, whe
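One likely explanation for "multiple names": from 2.1 on, each table's data directory is named `<table>-<32-hex table id>`, so dropping and recreating a table assigns a new id and a new directory, leaving the old one behind. A sketch for spotting such leftovers (directory names below are made up for illustration, not from the thread):

```python
# Sketch: group Cassandra table data directories ("<table>-<32-hex id>")
# by table name to spot leftovers from drop-and-recreate cycles.
# Directory names here are invented examples.
import re

dirs = [
    "daily_challenges-1a2b3c4d5e6f47089a0b1c2d3e4f5a6b",
    "daily_challenges-9f8e7d6c5b4a43210fedcba987654321",
    "daily_guest_logins-0123456789abcdef0123456789abcdef",
]

def group_by_table(names):
    groups = {}
    for name in names:
        m = re.fullmatch(r"(.+)-([0-9a-f]{32})", name)
        if m:
            groups.setdefault(m.group(1), []).append(m.group(2))
    return groups

for table, ids in group_by_table(dirs).items():
    if len(ids) > 1:
        print(f"{table}: {len(ids)} directories - all but the live id are leftovers")
```

The live id for each table can be cross-checked against the schema before deleting anything; a node that was down during the DROP will also still carry the old directory until it learns the schema change.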
You, and only you, are responsible for knowing your data and data model.
If columns per row or rows per partition can be large, then an 8GB system
is probably too small. But the real issue is that you need to keep your
partition size from getting too large.
Generally, an 8GB system is okay, but o
I'm new to Cassandra.
How do I find those out? Mainly the partition params that you asked for;
others, I think I can figure out.
We don't have any large objects/blobs in the column values - it's all
textual, date-time, numeric and uuid data.
We use cassandra to primarily store segmentation data
What does your data and data model look like - partition size, rows per
partition, number of columns per row, any large values/blobs in column
values?
You could run fine on an 8GB system, but only if your rows and partitions
are reasonably small. Any large partitions could blow you away.
-- Jack
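Jack's questions lend themselves to a back-of-the-envelope estimate: rows per partition times columns per row times (value size plus per-cell overhead). A sketch with placeholder numbers (the inputs and the 50-byte overhead are assumptions for illustration, not measurements from the thread):

```python
# Back-of-the-envelope partition sizing for Jack's questions: rows per
# partition, columns per row, average value size. Inputs and the per-cell
# overhead are placeholders, not measurements.

def estimate_partition_bytes(rows, cols_per_row, avg_value_bytes,
                             overhead_per_cell=50):
    """Crude estimate: every cell costs its value plus fixed overhead."""
    return rows * cols_per_row * (avg_value_bytes + overhead_per_cell)

size = estimate_partition_bytes(rows=500_000, cols_per_row=10,
                                avg_value_bytes=40)
print(f"{size / 1024 / 1024:.0f} MB")  # well past the ~100 MB comfort zone
```

Even small textual values add up quickly once a partition accumulates hundreds of thousands of rows, which is how an "it's barely 10GB in all" dataset can still produce a ~460 MB partition.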
Attaching the stack dump captured from the last OOM.
Kunal
On 10 July 2015 at 13:32, Kunal Gangakhedkar
wrote:
> Forgot to mention: the data size is not that big - it's barely 10GB in all.
>
> Kunal
>
> On 10 July 2015 at 13:29, Kunal Gangakhedkar
> wrote:
>
>> Hi,
>>
>> I have a 2 node setup
Forgot to mention: the data size is not that big - it's barely 10GB in all.
Kunal
On 10 July 2015 at 13:29, Kunal Gangakhedkar
wrote:
> Hi,
>
> I have a 2 node setup on Azure (east us region) running Ubuntu server
> 14.04LTS.
> Both nodes have 8GB RAM.
>
> One of the nodes (seed node) died with
Hi,
I have a 2 node setup on Azure (east us region) running Ubuntu server
14.04LTS.
Both nodes have 8GB RAM.
One of the nodes (seed node) died with OOM - so, I am trying to add a
replacement node with same configuration.
The problem is this new node also keeps dying with OOM - I've restarted the