Re: Attached profiled data but need help understanding it

2017-02-28 Thread Kant Kodali
Hi Romain,

I am using Cassandra version 3.0.9 and here is the generated report

(Graphical view) of my thread dump as well. Just sending this over in case
it helps.

Thanks,
kant

On Tue, Feb 28, 2017 at 7:51 PM, Kant Kodali  wrote:

> Hi Romain,
>
> Thanks again. My responses are inline.
>
> kant
>
> On Tue, Feb 28, 2017 at 10:04 AM, Romain Hardouin 
> wrote:
>
>> > we are currently using 3.0.9.  should we use 3.8 or 3.10
>>
>> No, don't use 3.X in production unless you really need a major feature.
>> I would advise sticking to 3.0.X (i.e. 3.0.11 now).
>> You can backport CASSANDRA-11966 easily but of course you have to deploy
>> from source as a prerequisite.
>>
>
>   * By backporting, you mean I should cherry-pick the CASSANDRA-11966 commit
> and compile from source?*
>
>>
>> > I haven't done any tuning yet.
>>
>> So that's good news, because maybe there is room for improvement.
>>
>> > Can I change this on a running instance? If so, how? or does it require
>> a downtime?
>>
>> You can throttle compaction at runtime with "nodetool
>> setcompactionthroughput". Be sure to read all nodetool commands; some of
>> them are really useful for day-to-day tuning/management.
>>
>> If GC is fine, then check other things -> "[...] different pool sizes for
>> NTR, concurrent reads and writes, compaction executors, etc. Also check if
>> you can improve network latency (e.g. VF or ENA on AWS)."
>>
>> Regarding thread pools, some of them can be resized at runtime via JMX.
>>
>> > 5000 is the target.
>>
>> Right now you reached 1500. Is it per node or for the cluster?
>> We don't know your setup so it's hard to say it's doable. Can you provide
>> more details? VM, physical nodes, #nodes, etc.
>> Generally speaking LWT should be seldom used. AFAIK you won't achieve
>> 10,000 writes/s per node.
>>
>> Maybe someone on the list already made some tuning for heavy LWT workload?
>>
>
> *1500 total for the cluster.*
>
> *I have an 8-node Cassandra cluster. Each node is an AWS m4.xlarge
> instance (so 4 vCPU, 16 GB RAM, 1 Gbit network = 125 MB/s).*
>
>
>
> *I have 1 node (m4.xlarge) for my application, which just inserts a
> bunch of data, and each insert is an LWT. I tested the network throughput
> of the node; I can get up to 98 MB/s.*
>
> *Now, when I start my application, I see that the Cassandra nodes' receive
> rate/throughput is about 4 MB/s (yes, that is in megabytes; I checked this by
> running sudo iftop -B). The disk I/O is about the same, and the Cassandra
> process CPU usage is about 360% (the max is 400% since it is a 4-core
> machine). The application node's transmit throughput is about 6 MB/s. So even
> with 4 MB/s of receive throughput at the Cassandra nodes, the CPU is almost
> maxed out. I am not sure what this says about Cassandra, but what I can tell
> is that the network is way underutilized and that 8 nodes are unnecessary, so
> we plan to bring it down to 4 nodes, except each node this time will have 8
> cores. All said, I am still not sure how to scale up from 1500 writes/sec.*
>
>
>>
>> Best,
>>
>> Romain
>>
>>
>


Re: Attached profiled data but need help understanding it

2017-02-28 Thread Kant Kodali
Hi Romain,

Thanks again. My responses are inline.

kant

On Tue, Feb 28, 2017 at 10:04 AM, Romain Hardouin 
wrote:

> > we are currently using 3.0.9.  should we use 3.8 or 3.10
>
> No, don't use 3.X in production unless you really need a major feature.
> I would advise sticking to 3.0.X (i.e. 3.0.11 now).
> You can backport CASSANDRA-11966 easily but of course you have to deploy
> from source as a prerequisite.
>

  * By backporting, you mean I should cherry-pick the CASSANDRA-11966 commit and
compile from source?*

>
> > I haven't done any tuning yet.
>
> > So that's good news, because maybe there is room for improvement.
>
> > Can I change this on a running instance? If so, how? or does it require
> a downtime?
>
> You can throttle compaction at runtime with "nodetool
> setcompactionthroughput". Be sure to read all nodetool commands; some of
> them are really useful for day-to-day tuning/management.
>
> If GC is fine, then check other things -> "[...] different pool sizes for
> NTR, concurrent reads and writes, compaction executors, etc. Also check if
> you can improve network latency (e.g. VF or ENA on AWS)."
>
> Regarding thread pools, some of them can be resized at runtime via JMX.
>
> > 5000 is the target.
>
> Right now you reached 1500. Is it per node or for the cluster?
> We don't know your setup so it's hard to say it's doable. Can you provide
> more details? VM, physical nodes, #nodes, etc.
> Generally speaking LWT should be seldom used. AFAIK you won't achieve
> 10,000 writes/s per node.
>
> Maybe someone on the list already made some tuning for heavy LWT workload?
>

*1500 total for the cluster.*

*I have an 8-node Cassandra cluster. Each node is an AWS m4.xlarge instance
(so 4 vCPU, 16 GB RAM, 1 Gbit network = 125 MB/s).*



*I have 1 node (m4.xlarge) for my application, which just inserts a
bunch of data, and each insert is an LWT. I tested the network throughput
of the node; I can get up to 98 MB/s.*

*Now, when I start my application, I see that the Cassandra nodes' receive
rate/throughput is about 4 MB/s (yes, that is in megabytes; I checked this by
running sudo iftop -B). The disk I/O is about the same, and the Cassandra
process CPU usage is about 360% (the max is 400% since it is a 4-core machine).
The application node's transmit throughput is about 6 MB/s. So even with 4 MB/s
of receive throughput at the Cassandra nodes, the CPU is almost maxed out. I am
not sure what this says about Cassandra, but what I can tell is that the network
is way underutilized and that 8 nodes are unnecessary, so we plan to bring it
down to 4 nodes, except each node this time will have 8 cores. All said, I am
still not sure how to scale up from 1500 writes/sec.*
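For context, a minimal sketch of the kind of write involved (keyspace and table
names are hypothetical): the IF NOT EXISTS clause is what turns an ordinary
insert into a lightweight transaction, adding Paxos round trips on top of the
normal write path, which is why LWTs cost far more CPU than plain inserts.

    # plain insert vs. its LWT variant (hypothetical table)
    cqlsh -e "INSERT INTO my_ks.events (id, payload) VALUES (uuid(), 'x');"
    cqlsh -e "INSERT INTO my_ks.events (id, payload) VALUES (uuid(), 'x') IF NOT EXISTS;"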


>
> Best,
>
> Romain
>
>


Re: Attached profiled data but need help understanding it

2017-02-28 Thread Romain Hardouin
> we are currently using 3.0.9.  should we use 3.8 or 3.10
No, don't use 3.X in production unless you really need a major feature. I would
advise sticking to 3.0.X (i.e. 3.0.11 now). You can backport CASSANDRA-11966
easily, but of course you have to deploy from source as a prerequisite.
> I haven't done any tuning yet.
So that's good news, because maybe there is room for improvement.
> Can I change this on a running instance? If so, how? or does it require a 
> downtime?
You can throttle compaction at runtime with "nodetool setcompactionthroughput".
Be sure to read all nodetool commands; some of them are really useful for
day-to-day tuning/management.
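For example (a minimal sketch; the 16 MB/s value below is just an illustration,
not a recommendation):

    # check the current compaction throughput cap (in MB/s), then lower it at runtime
    nodetool getcompactionthroughput
    nodetool setcompactionthroughput 16
    # a value of 0 disables compaction throttling entirely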
If GC is fine, then check other things -> "[...] different pool sizes for NTR, 
concurrent reads and writes, compaction executors, etc. Also check if you can 
improve network latency (e.g. VF or ENA on AWS)."
Regarding thread pools, some of them can be resized at runtime via JMX.
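As a rough sketch only (the MBean and attribute names below are assumptions and
vary by Cassandra version, so inspect them on your node first, e.g. with a JMX
client such as jmxterm):

    # connect with jmxterm, list the request-stage pools, inspect one, then resize it
    java -jar jmxterm-1.0.2-uber.jar -l localhost:7199 -n <<'EOF'
    domain org.apache.cassandra.request
    beans
    bean org.apache.cassandra.request:type=MutationStage
    info
    set CoreThreads 64
    EOF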
> 5000 is the target.
Right now you reached 1500. Is it per node or for the cluster? We don't know
your setup, so it's hard to say whether it's doable. Can you provide more
details? VM, physical nodes, #nodes, etc. Generally speaking, LWT should be
seldom used. AFAIK you won't achieve 10,000 writes/s per node.
Maybe someone on the list already made some tuning for heavy LWT workload?
Best,
Romain


question of keyspace that just disappeared

2017-02-28 Thread George Webster
Hey Cassandra Users,

We recently encountered an issue where a keyspace just disappeared. I was
curious if anyone has had this occur before and can provide some insight.

We are using Cassandra 3.10, with 2 DCs and 3 nodes each.
The data is still located in the storage folder but is no longer visible inside
Cassandra.

I searched the logs for any hints of errors or commands being executed that
could have caused the loss of a keyspace. Unfortunately I found nothing. In
the logs the only unusual issue I saw was a series of read timeouts that
occurred right around when the keyspace went away. Since then I see
numerous entries in the debug log like the following:

DEBUG [GossipStage:1] 2017-02-28 18:14:12,580 FailureDetector.java:457 -
Ignoring interval time of 2155674599 for /x.x.x..12
DEBUG [GossipStage:1] 2017-02-28 18:14:16,580 FailureDetector.java:457 -
Ignoring interval time of 2945213745 for /x.x.x.81
DEBUG [GossipStage:1] 2017-02-28 18:14:19,590 FailureDetector.java:457 -
Ignoring interval time of 2006530862 for /x.x.x..69
DEBUG [GossipStage:1] 2017-02-28 18:14:27,434 FailureDetector.java:457 -
Ignoring interval time of 3441841231 for /x.x.x.82
DEBUG [GossipStage:1] 2017-02-28 18:14:29,588 FailureDetector.java:457 -
Ignoring interval time of 2153964846 for /x.x.x.82
DEBUG [GossipStage:1] 2017-02-28 18:14:33,582 FailureDetector.java:457 -
Ignoring interval time of 2588593281 for /x.x.x.82
DEBUG [GossipStage:1] 2017-02-28 18:14:37,588 FailureDetector.java:457 -
Ignoring interval time of 2005305693 for /x.x.x.69
DEBUG [GossipStage:1] 2017-02-28 18:14:38,592 FailureDetector.java:457 -
Ignoring interval time of 2009244850 for /x.x.x.82
DEBUG [GossipStage:1] 2017-02-28 18:14:43,584 FailureDetector.java:457 -
Ignoring interval time of 2149192677 for /x.x.x.69
DEBUG [GossipStage:1] 2017-02-28 18:14:45,605 FailureDetector.java:457 -
Ignoring interval time of 2021180918 for /x.x.x.85
DEBUG [GossipStage:1] 2017-02-28 18:14:46,432 FailureDetector.java:457 -
Ignoring interval time of 2436026101 for /x.x.x.81
DEBUG [GossipStage:1] 2017-02-28 18:14:46,432 FailureDetector.java:457 -
Ignoring interval time of 2436187894 for /x.x.x.82

During the time of the disappearing keyspace we had two concurrent
activities:
1) Running a Spark job (via HDP 2.5.3 on YARN) that was performing a
countByKey. It was using the keyspace that disappeared. The operation
crashed.
2) We created a new keyspace to test out a schema. The only "fancy" thing in that
keyspace is a few materialized view tables. Data was being loaded into that
keyspace during the crash. The load process was extracting information and
then just writing to Cassandra.

Any ideas? Anyone seen this before?

Thanks,
George


Re: Is periodic manual repair necessary?

2017-02-28 Thread benjamin roth
Hi Jayesh,

Your statements are mostly right, except:
Yes, compactions do purge tombstones but that *does not avoid resurrection*.
A resurrection takes place in this situation:

Node A:
Key A is written
Key A is deleted

Node B:
Key A is written
- Deletion never happens, for example because of a dropped mutation -

Then after gc_grace_seconds:
Node A:
Compaction removes both write and tombstone, so data is completely gone

Node B:
Still contains Key A

Then you do a repair
Node A:
Receives Key A from Node B

Got it?

But I was thinking a bit about your situation. If you NEVER do deletes and
have ONLY TTLs, this could change the game. The difference? If you have only
TTLs, the delete information and the write information always reside on
the same node and never exist alone, so the write-delete pair should
always be consistent. As far as I can see there will be no resurrections
then.
BUT: Please don't nail me down on it. *I have neither tested it nor read
the source code to prove it in theory.*
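To make the TTL-only pattern concrete, a sketch with hypothetical names and
values: a table-level default TTL means every write carries its own expiration,
so no explicit DELETE is ever issued:

    cqlsh -e "CREATE TABLE my_ks.events (
        id uuid PRIMARY KEY,
        payload text
    ) WITH default_time_to_live = 86400
      AND gc_grace_seconds = 864000;"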

Maybe some other guys have some more thoughts or information on this.

By the way:
CS itself is not fragile. Distributed systems are. It's like the old
saying: Things that can go wrong will go wrong. Network fails, hardware
fails, software fails. You can have timeouts, dropped messages (timeouts
help a cluster/node to survive high pressure situations), a crashed daemon.
Yes, things go wrong. All the time. Even on a 1-node system (like MySQL),
ensuring absolute consistency is not so easy and requires many safety nets
like unbuffered I/O and battery-backed disk controllers, which can harm
performance a lot.

You could also create a perfectly consistent distributed system like CS, but
it would be slow and either not partition tolerant or not highly available.

2017-02-28 16:06 GMT+01:00 Thakrar, Jayesh :

> Thanks - getting a better picture of things.
>
>
>
> So "entropy" is tendency of a C* datastore to be inconsistent due to
> writes/updates not taking place across ALL nodes that carry replica of a
> row (can happen if nodes are down for maintenance)
>
> It can also happen due to node crashes/restarts that can result in loss of
> uncommitted data.
>
> This can result in either stale data or ghost data (column/row
> re-appearing after a delete).
>
> So there are the "anti-entropy" processes in place to help with this
>
> - hinted handoff
>
> - read repair (can happen while performing a consistent read OR also async
> as driven/configured by *_read_repair_chance AFTER consistent read)
>
> - commit logs
>
> - explicit/manual repair via command
>
> - compaction (compaction is an indirect mechanism to purge tombstones, thereby
> ensuring that stale data will NOT resurrect)
>
>
>
> So for an application where you have only time-series data or where data is
> only ever inserted, I would like to understand the need for manual repair.
>
>
>
> I see/hear advice that there should always be a periodic (mostly weekly)
> manual/explicit repair in a C* system - and that's what I am trying to
> understand.
>
> Repair is a really expensive process, and I would like to justify the need to
> expend resources (when and how much) for it.
>
>
>
> Among other things, this advice also gives an impression to people not
> familiar with C* (e.g. me) that it is too fragile and needs substantial
> manual intervention.
>
>
>
> Appreciate all the feedback and details that you have been sharing.
>
>
>
> *From: *Edward Capriolo 
> *Date: *Monday, February 27, 2017 at 8:00 PM
> *To: *"user@cassandra.apache.org" 
> *Cc: *Benjamin Roth 
> *Subject: *Re: Is periodic manual repair necessary?
>
>
>
> There are 4 anti entropy systems in cassandra.
>
>
>
> Hinted handoff
>
> Read repair
>
> Commit logs
>
> Repair command
>
>
>
> All are basically best effort.
>
>
>
> Commit logs get corrupt and only flush periodically.
>
>
>
> Bits rot on disk and while crossing networks.
>
>
>
> Read repair is async and only happens randomly
>
>
>
> Hinted handoff stops after some time and is not guaranteed.
> On Monday, February 27, 2017, Thakrar, Jayesh <
> jthak...@conversantmedia.com> wrote:
>
> Thanks Roth and Oskar for your quick responses.
>
>
>
> This is a single datacenter, multi-rack setup.
>
>
>
> > A TTL is technically similar to a delete - in the end both create
> tombstones.
>
> >If you want to eliminate the possibility of resurrected deleted data, you
> should run repairs.
>
> So why do I need to worry about data resurrection?
>
> Because the TTL for the data is specified at the row level (at least in
> this case), i.e. across ALL columns across ALL replicas.
>
> So they all will have the same data or won't have the data at all (i.e. it
> would have been tombstoned).
>
>
>
>
>
> > If you can guarantee 100% that data is read-repaired before
> gc_grace_seconds after the data has been TTL'ed, you won't need an extra
> repair.
>
> Why read-repaired before "gc_grace_seconds"?
>
> Isn't 

Re: Is periodic manual repair necessary?

2017-02-28 Thread Thakrar, Jayesh
Thanks - getting a better picture of things.

So "entropy" is tendency of a C* datastore to be inconsistent due to 
writes/updates not taking place across ALL nodes that carry replica of a row 
(can happen if nodes are down for maintenance)
It can also happen due to node crashes/restarts that can result in loss of 
uncommitted data.
This can result in either stale data or ghost data (column/row re-appearing 
after a delete).
So there are the "anti-entropy" processes in place to help with this
- hinted handoff
- read repair (can happen while performing a consistent read OR also async as 
driven/configured by *_read_repair_chance AFTER consistent read)
- commit logs
- explicit/manual repair via command
- compaction (compaction is an indirect mechanism to purge tombstones, thereby
ensuring that stale data will NOT resurrect)

So for an application where you have only time-series data or where data is
only ever inserted, I would like to understand the need for manual repair.

I see/hear advice that there should always be a periodic (mostly weekly) 
manual/explicit repair in a C* system - and that's what I am trying to 
understand.
Repair is a really expensive process, and I would like to justify the need to expend
resources (when and how much) for it.
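If periodic repair does turn out to be necessary, one common way to limit its
cost (a sketch; the keyspace name is a placeholder) is to repair only each
node's primary ranges and rotate through the nodes one at a time:

    # run on each node in turn, e.g. from a weekly cron job
    nodetool repair -pr my_keyspace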

Among other things, this advice also gives an impression to people not familiar 
with C* (e.g. me) that it is too fragile and needs substantial manual 
intervention.

Appreciate all the feedback and details that you have been sharing.

From: Edward Capriolo 
Date: Monday, February 27, 2017 at 8:00 PM
To: "user@cassandra.apache.org" 
Cc: Benjamin Roth 
Subject: Re: Is periodic manual repair necessary?

There are 4 anti entropy systems in cassandra.

Hinted handoff
Read repair
Commit logs
Repair command

All are basically best effort.

Commit logs get corrupt and only flush periodically.

Bits rot on disk and while crossing networks.

Read repair is async and only happens randomly

Hinted handoff stops after some time and is not guaranteed.
On Monday, February 27, 2017, Thakrar, Jayesh 
> wrote:
Thanks Roth and Oskar for your quick responses.

This is a single datacenter, multi-rack setup.

> A TTL is technically similar to a delete - in the end both create tombstones.
>If you want to eliminate the possibility of resurrected deleted data, you 
>should run repairs.
So why do I need to worry about data resurrection?
Because the TTL for the data is specified at the row level (at least in this
case), i.e. across ALL columns across ALL replicas.
So they all will have the same data or won't have the data at all (i.e. it would
have been tombstoned).


> If you can guarantee 100% that data is read-repaired before
> gc_grace_seconds after the data has been TTL'ed, you won't need an extra
> repair.
Why read-repaired before "gc_grace_seconds"?
Isn't gc_grace_seconds the grace period for compaction to occur?
So if the data was not consistent and read-repair happens before that, then 
well and good.
Does read-repair not happen after gc/compaction?
If this table has data being constantly/periodically inserted, then compaction 
will also happen accordingly, right?

Thanks,
Jayesh


From: Benjamin Roth 
>
Date: Monday, February 27, 2017 at 11:53 AM
To: 
>
Subject: Re: Is periodic manual repair necessary?

A TTL is technically similar to a delete - in the end both create tombstones.
If you want to eliminate the possibility of resurrected deleted data, you 
should run repairs.

If you can guarantee 100% that data is read-repaired before gc_grace_seconds
after the data has been TTL'ed, you won't need an extra repair.

2017-02-27 18:29 GMT+01:00 Oskar Kjellin 
>:
Are you running multi dc?

Sent from my iPad

27 feb. 2017 kl. 16:08 skrev Thakrar, Jayesh 
>:
Suppose I have an application, where there are no deletes, only 5-10% of rows 
being occasionally updated (and that too only once) and a lot of reads.

Furthermore, I have replication = 3 and both read and write are configured for 
local_quorum.

Occasionally, servers do go into maintenance.

I understand when the maintenance is longer than the period for hinted_handoffs 
to be preserved, they are lost and servers may have stale data.
But I do expect it to be rectified on reads. If the stale data is not read 
again, I don’t care for it to be corrected as then the data will be 
automatically purged because of TTL.

In such a situation, do I need to have a periodic (weekly?) manual/batch 
read_repair process?

Thanks,
Jayesh Thakrar



--
Benjamin Roth

Re: How to find total data size of a keyspace.

2017-02-28 Thread Surbhi Gupta
nodetool status keyspace_name .
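Alternatively, a rough sketch for summing the per-table on-disk sizes of one
keyspace on a node (the keyspace name and data path are placeholders):

    # sum the live space of all tables in the keyspace
    nodetool cfstats my_keyspace | awk '/Space used \(live\)/ {sum += $NF} END {print sum " bytes"}'
    # or simply measure the keyspace's data directory
    du -sh /var/lib/cassandra/data/my_keyspace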
On Tue, Feb 28, 2017 at 4:53 AM anuja jain  wrote:

> Hi,
> Using nodetool cfstats gives me the data size of each table/column family and
> nodetool ring gives me the load of all keyspaces in the cluster, but I need the
> total data size of one keyspace in the cluster. How can I get that?
>
>
>


How to find total data size of a keyspace.

2017-02-28 Thread anuja jain
Hi,
Using nodetool cfstats gives me the data size of each table/column family and
nodetool ring gives me the load of all keyspaces in the cluster, but I need the
total data size of one keyspace in the cluster. How can I get that?


Very long delay on "Writing Memtable-local@xyz.."

2017-02-28 Thread Bastian Schnorbus
Hi all,
does anyone know what exactly happens when I see
*Writing Memtable-local@57746066(0.472KiB serialized bytes, 15 ops, 0%/0%
of on/off-heap limit)*
during startup of C* processes in the cassandra-log?
When updating our 10-node cluster I see this for 10+ minutes as the last log entry.
There's some disk I/O during that time, but not too much...

Can I see the progress somewhere, or maybe speed things up?

Thanks,
Bastian


Fwd: Node failure due to Incremental repair

2017-02-28 Thread Karthick V
Hi,
Recently I enabled incremental repair in one of my test cluster setups,
which consists of 8 nodes (DC1 - 4, DC2 - 4) with C* version 2.1.13.
Currently, I am facing a node failure scenario in this cluster, with the
following exception during the incremental repair process:

exception occurred during clean-up.  java.lang.reflect.UndeclaredThrowableException
error: JMX connection closed. You should check server log for repair status of keyspace VERTICALCRM (Subsequent keyspaces are not going to be repaired).
-- StackTrace --
java.io.IOException: JMX connection closed. You should check server log for repair status of keyspace VERTICAL (Subsequent keyspaces are not going to be repaired).
at org.apache.cassandra.tools.RepairRunner.handleNotification(NodeProbe.java:1496)
at javax.management.NotificationBroadcasterSupport.handleNotification(NotificationBroadcasterSupport.java:275)
at javax.management.NotificationBroadcasterSupport$SendNotifJob.run(NotificationBroadcasterSupport.java:352)
at javax.management.NotificationBroadcasterSupport$1.execute(NotificationBroadcasterSupport.java:337)
at javax.management.NotificationBroadcasterSupport.sendNotification(NotificationBroadcasterSupport.java:248)
at javax.management.remote.rmi.RMIConnector.sendNotification(RMIConnector.java:441)
at javax.management.remote.rmi.RMIConnector.access$1200(RMIConnector.java:121)
at javax.management.remote.rmi.RMIConnector$RMIClientCommunicatorAdmin.gotIOException(RMIConnector.java:1531)
at javax.management.remote.rmi.RMIConnector$RMINotifClient.fetchNotifs(RMIConnector.java:1352)
at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.fetchOneNotif(ClientNotifForwarder.java:655)
at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.fetchNotifs(ClientNotifForwarder.java:607)
at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:471)
at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
at com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)

And the node was brought down by this exception.

When I tried to restart the same node, I got the following exception:

java.lang.OutOfMemoryError: Java heap space
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:73)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:48)
at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.createPooledReader(CompressedPoolingSegmentedFile.java:95)
at org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:62)
at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1902)
at org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:57)
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42)
at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:246)
at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:270)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:2001)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1844)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:353)
at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:85)
at org.apache.cassandra.cql3.statements.SelectStatement.readLocally(SelectStatement.java:309)
at org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:328)
at org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:67)
at org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:317)
at org.apache.cassandra.db.SystemKeyspace.getSSTableReadMeter(SystemKeyspace.java:972)
at org.apache.cassandra.io.sstable.SSTableReader$GlobalTidy.ensureReadMeter(SSTableReader.java:2388)
at org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier.setup(SSTableReader.java:2204)
at org.apache.cassandra.io.sstable.SSTableReader.setup(SSTableReader.java:2145)
at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491)
at 

unsubscribe

2017-02-28 Thread Benjamin Roth
-- 
Benjamin Roth
Prokurist

Jaumo GmbH · www.jaumo.com
Wehrstraße 46 · 73035 Göppingen · Germany
Phone +49 7161 304880-6 · Fax +49 7161 304880-1
AG Ulm · HRB 731058 · Managing Director: Jens Kammerer