Re: Switching Snitch

2018-08-26 Thread Joshua Galbraith
Pradeep,

Here are some related tickets that may also be helpful in understanding the
current behavior of these options.

* https://issues.apache.org/jira/browse/CASSANDRA-5897
* https://issues.apache.org/jira/browse/CASSANDRA-9474
* https://issues.apache.org/jira/browse/CASSANDRA-10243
* https://issues.apache.org/jira/browse/CASSANDRA-10242

On Sun, Aug 26, 2018 at 1:20 PM, Joshua Galbraith 
wrote:

> Pradeep,
>
> That being said, I haven't experimented with -Dcassandra.ignore_dc=true
> -Dcassandra.ignore_rack=true before.
>
> The description here may be helpful:
> https://github.com/apache/cassandra/blob/trunk/NEWS.txt#L685-L693
>
> I would spin up a small test cluster with data you don't care about and
> verify that your above assumptions are correct there first.
>
> On Sun, Aug 26, 2018 at 1:09 PM, Joshua Galbraith  > wrote:
>
>> Pradeep,
>>
>> Right, so from that documentation it sounds like you actually have to
>> stop all nodes in the cluster at once and bring them back up one at a time.
>> A rolling restart won't work here.
>>
>> On Sun, Aug 26, 2018 at 11:46 AM, Pradeep Chhetri 
>> wrote:
>>
>>> Hi Joshua,
>>>
>>> Thank you for the reply. Sorry, I forgot to mention that I already went
>>> through that documentation. There are a few things missing there, about
>>> which I have a few questions:
>>>
>>> 1) One thing which isn't mentioned there is that Cassandra fails to
>>> restart when we change the datacenter name *or* rack name of a node. So
>>> should I first do a rolling restart of Cassandra with the flags
>>> "-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true", then run a
>>> sequential repair, then a cleanup, and then do a rolling restart of
>>> Cassandra without those flags?
>>>
>>> 2) Should I disallow read/write operations from applications while the
>>> sequential repair is running?
>>>
>>> Regards,
>>> Pradeep
>>>
>>> On Mon, Aug 27, 2018 at 12:19 AM, Joshua Galbraith <
>>> jgalbra...@newrelic.com.invalid> wrote:
>>>
>>>> Pradeep, it sounds like what you're proposing counts as a topology
>>>> change because you are changing the datacenter name and rack name.
>>>>
>>>> Please refer to the documentation here about what to do in that
>>>> situation:
>>>> https://docs.datastax.com/en/cassandra/3.0/cassandra/operati
>>>> ons/opsSwitchSnitch.html
>>>>
>>>> In particular:
>>>>
>>>> Simply altering the snitch and replication to move some nodes to a new
>>>>> datacenter will result in data being replicated incorrectly.
>>>>
>>>>
>>>> Topology changes may occur when the replicas are placed in different
>>>>> places by the new snitch. Specifically, the replication strategy places 
>>>>> the
>>>>> replicas based on the information provided by the new snitch.
>>>>
>>>>
>>>> If the topology of the network has changed, but no datacenters are
>>>>> added:
>>>>> a. Shut down all the nodes, then restart them.
>>>>> b. Run a sequential repair and nodetool cleanup on each node.
>>>>
>>>>
>>>> On Sun, Aug 26, 2018 at 11:14 AM, Pradeep Chhetri <
>>>> prad...@stashaway.com> wrote:
>>>>
>>>>> Hello everyone,
>>>>>
>>>>> Since I didn't hear from anyone, I just want to describe my question
>>>>> again:
>>>>>
>>>>> Am I correct in understanding that I need to do the following steps to
>>>>> migrate data from SimpleSnitch to GPFS, changing the datacenter name and
>>>>> rack name to the AWS region and availability zone respectively?
>>>>>
>>>>> 1) Update the rack and datacenter fields in
>>>>> cassandra-rackdc.properties file and rolling restart cassandra with this
>>>>> flag "-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true"
>>>>>
>>>>> 2) Run nodetool repair --sequential and nodetool cleanup.
>>>>>
>>>>> 3) Rolling restart cassandra removing the flag
>>>>> "-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true"
>>>>>
>>>>> Regards,
>>>>> Pradeep
>>>>>
>>>>> On Thu, Aug 23, 2018 at 10:53 PM, Pradeep Chhetri <
>>>>> prad...@stashaway.com> wrote:
>>>>>
>>>>>

Re: Switching Snitch

2018-08-26 Thread Joshua Galbraith
Pradeep,

That being said, I haven't experimented with -Dcassandra.ignore_dc=true
-Dcassandra.ignore_rack=true before.

The description here may be helpful:
https://github.com/apache/cassandra/blob/trunk/NEWS.txt#L685-L693

I would spin up a small test cluster with data you don't care about and
verify that your above assumptions are correct there first.
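For anyone trying that experiment, one way to pass those properties is via
JVM_OPTS in conf/cassandra-env.sh (the file location and service commands
below are assumptions for a typical package install, so adjust for your
environment):

    # temporarily added to conf/cassandra-env.sh on the test node
    JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"
    JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_rack=true"

    sudo service cassandra restart
    nodetool status   # confirm the node comes back up with the new dc/rack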

On Sun, Aug 26, 2018 at 1:09 PM, Joshua Galbraith 
wrote:

> Pradeep,
>
> Right, so from that documentation it sounds like you actually have to stop
> all nodes in the cluster at once and bring them back up one at a time. A
> rolling restart won't work here.
>
> On Sun, Aug 26, 2018 at 11:46 AM, Pradeep Chhetri 
> wrote:
>
>> Hi Joshua,
>>
>> Thank you for the reply. Sorry, I forgot to mention that I already went
>> through that documentation. There are a few things missing there, about
>> which I have a few questions:
>>
>> 1) One thing which isn't mentioned there is that Cassandra fails to
>> restart when we change the datacenter name *or* rack name of a node. So
>> should I first do a rolling restart of Cassandra with the flags
>> "-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true", then run a
>> sequential repair, then a cleanup, and then do a rolling restart of
>> Cassandra without those flags?
>>
>> 2) Should I disallow read/write operations from applications while the
>> sequential repair is running?
>>
>> Regards,
>> Pradeep
>>
>> On Mon, Aug 27, 2018 at 12:19 AM, Joshua Galbraith <
>> jgalbra...@newrelic.com.invalid> wrote:
>>
>>> Pradeep, it sounds like what you're proposing counts as a topology
>>> change because you are changing the datacenter name and rack name.
>>>
>>> Please refer to the documentation here about what to do in that
>>> situation:
>>> https://docs.datastax.com/en/cassandra/3.0/cassandra/operati
>>> ons/opsSwitchSnitch.html
>>>
>>> In particular:
>>>
>>> Simply altering the snitch and replication to move some nodes to a new
>>>> datacenter will result in data being replicated incorrectly.
>>>
>>>
>>> Topology changes may occur when the replicas are placed in different
>>>> places by the new snitch. Specifically, the replication strategy places the
>>>> replicas based on the information provided by the new snitch.
>>>
>>>
>>> If the topology of the network has changed, but no datacenters are added:
>>>> a. Shut down all the nodes, then restart them.
>>>> b. Run a sequential repair and nodetool cleanup on each node.
>>>
>>>
>>> On Sun, Aug 26, 2018 at 11:14 AM, Pradeep Chhetri >> > wrote:
>>>
>>>> Hello everyone,
>>>>
>>>> Since I didn't hear from anyone, I just want to describe my question
>>>> again:
>>>>
>>>> Am I correct in understanding that I need to do the following steps to
>>>> migrate data from SimpleSnitch to GPFS, changing the datacenter name and
>>>> rack name to the AWS region and availability zone respectively?
>>>>
>>>> 1) Update the rack and datacenter fields in cassandra-rackdc.properties
>>>> file and rolling restart cassandra with this flag
>>>> "-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true"
>>>>
>>>> 2) Run nodetool repair --sequential and nodetool cleanup.
>>>>
>>>> 3) Rolling restart cassandra removing the flag
>>>> "-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true"
>>>>
>>>> Regards,
>>>> Pradeep
>>>>
>>>> On Thu, Aug 23, 2018 at 10:53 PM, Pradeep Chhetri <
>>>> prad...@stashaway.com> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I am currently running a 3.11.2 cluster with SimpleSnitch, hence the
>>>>> datacenter is datacenter1 and the rack is rack1 for all nodes on AWS. I
>>>>> want to switch to GPFS by changing the rack name to the availability-zone
>>>>> name and the datacenter name to the region name.
>>>>>
>>>>> When I tried to restart individual nodes after changing those values,
>>>>> they failed to start, throwing an error about a dc and rack name mismatch,
>>>>> but the error gives me the option to set ignore_dc and ignore_rack to true
>>>>> to bypass it.
>>>>>
>>>>> I am not sure whether it is safe to set those two flags to true and
>>>>> whether there is any drawback now or in the future when I add a new
>>>>> datacenter to the cluster. I went through the documentation on Switching
>>>>> Snitches but didn't get much explanation.
>>>>>
>>>>> Regards,
>>>>> Pradeep
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *Joshua Galbraith *| Lead Software Engineer | New Relic
>>>
>>
>>
>
>
> --
> *Joshua Galbraith *| Lead Software Engineer | New Relic
>



-- 
*Joshua Galbraith *| Lead Software Engineer | New Relic


Re: Switching Snitch

2018-08-26 Thread Joshua Galbraith
Pradeep,

Right, so from that documentation it sounds like you actually have to stop
all nodes in the cluster at once and bring them back up one at a time. A
rolling restart won't work here.
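As a rough per-node sketch of that full-cluster procedure (the service
commands are placeholders for however you manage Cassandra):

    # on every node, before the cluster-wide stop
    nodetool drain
    sudo service cassandra stop

    # edit cassandra-rackdc.properties while the whole cluster is down,
    # then bring the nodes back up one at a time
    sudo service cassandra start

    # once all nodes are up again, on each node in turn
    nodetool repair -seq
    nodetool cleanup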

On Sun, Aug 26, 2018 at 11:46 AM, Pradeep Chhetri 
wrote:

> Hi Joshua,
>
> Thank you for the reply. Sorry, I forgot to mention that I already went
> through that documentation. There are a few things missing there, about
> which I have a few questions:
>
> 1) One thing which isn't mentioned there is that Cassandra fails to
> restart when we change the datacenter name *or* rack name of a node. So
> should I first do a rolling restart of Cassandra with the flags
> "-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true", then run a
> sequential repair, then a cleanup, and then do a rolling restart of
> Cassandra without those flags?
>
> 2) Should I disallow read/write operations from applications while the
> sequential repair is running?
>
> Regards,
> Pradeep
>
> On Mon, Aug 27, 2018 at 12:19 AM, Joshua Galbraith <
> jgalbra...@newrelic.com.invalid> wrote:
>
>> Pradeep, it sounds like what you're proposing counts as a topology change
>> because you are changing the datacenter name and rack name.
>>
>> Please refer to the documentation here about what to do in that situation:
>> https://docs.datastax.com/en/cassandra/3.0/cassandra/operati
>> ons/opsSwitchSnitch.html
>>
>> In particular:
>>
>> Simply altering the snitch and replication to move some nodes to a new
>>> datacenter will result in data being replicated incorrectly.
>>
>>
>> Topology changes may occur when the replicas are placed in different
>>> places by the new snitch. Specifically, the replication strategy places the
>>> replicas based on the information provided by the new snitch.
>>
>>
>> If the topology of the network has changed, but no datacenters are added:
>>> a. Shut down all the nodes, then restart them.
>>> b. Run a sequential repair and nodetool cleanup on each node.
>>
>>
>> On Sun, Aug 26, 2018 at 11:14 AM, Pradeep Chhetri 
>> wrote:
>>
>>> Hello everyone,
>>>
>>> Since I didn't hear from anyone, I just want to describe my question again:
>>>
>>> Am I correct in understanding that I need to do the following steps to
>>> migrate data from SimpleSnitch to GPFS, changing the datacenter name and
>>> rack name to the AWS region and availability zone respectively?
>>>
>>> 1) Update the rack and datacenter fields in cassandra-rackdc.properties
>>> file and rolling restart cassandra with this flag
>>> "-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true"
>>>
>>> 2) Run nodetool repair --sequential and nodetool cleanup.
>>>
>>> 3) Rolling restart cassandra removing the flag  "-Dcassandra.ignore_dc=true
>>> -Dcassandra.ignore_rack=true"
>>>
>>> Regards,
>>> Pradeep
>>>
>>> On Thu, Aug 23, 2018 at 10:53 PM, Pradeep Chhetri >> > wrote:
>>>
>>>> Hello,
>>>>
>>>> I am currently running a 3.11.2 cluster with SimpleSnitch, hence the
>>>> datacenter is datacenter1 and the rack is rack1 for all nodes on AWS. I
>>>> want to switch to GPFS by changing the rack name to the availability-zone
>>>> name and the datacenter name to the region name.
>>>>
>>>> When I tried to restart individual nodes after changing those values,
>>>> they failed to start, throwing an error about a dc and rack name mismatch,
>>>> but the error gives me the option to set ignore_dc and ignore_rack to true
>>>> to bypass it.
>>>>
>>>> I am not sure whether it is safe to set those two flags to true and
>>>> whether there is any drawback now or in the future when I add a new
>>>> datacenter to the cluster. I went through the documentation on Switching
>>>> Snitches but didn't get much explanation.
>>>>
>>>> Regards,
>>>> Pradeep
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> *Joshua Galbraith *| Lead Software Engineer | New Relic
>>
>
>


-- 
*Joshua Galbraith *| Lead Software Engineer | New Relic


Re: Switching Snitch

2018-08-26 Thread Joshua Galbraith
Pradeep, it sounds like what you're proposing counts as a topology change
because you are changing the datacenter name and rack name.

Please refer to the documentation here about what to do in that situation:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsSwitchSnitch.html

In particular:

Simply altering the snitch and replication to move some nodes to a new
> datacenter will result in data being replicated incorrectly.


Topology changes may occur when the replicas are placed in different places
> by the new snitch. Specifically, the replication strategy places the
> replicas based on the information provided by the new snitch.


If the topology of the network has changed, but no datacenters are added:
> a. Shut down all the nodes, then restart them.
> b. Run a sequential repair and nodetool cleanup on each node.


On Sun, Aug 26, 2018 at 11:14 AM, Pradeep Chhetri 
wrote:

> Hello everyone,
>
> Since I didn't hear from anyone, I just want to describe my question again:
>
> Am I correct in understanding that I need to do the following steps to
> migrate data from SimpleSnitch to GPFS, changing the datacenter name and
> rack name to the AWS region and availability zone respectively?
>
> 1) Update the rack and datacenter fields in cassandra-rackdc.properties
> file and rolling restart cassandra with this flag
> "-Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true"
>
> 2) Run nodetool repair --sequential and nodetool cleanup.
>
> 3) Rolling restart cassandra removing the flag  "-Dcassandra.ignore_dc=true
> -Dcassandra.ignore_rack=true"
>
> Regards,
> Pradeep
>
> On Thu, Aug 23, 2018 at 10:53 PM, Pradeep Chhetri 
> wrote:
>
>> Hello,
>>
>> I am currently running a 3.11.2 cluster with SimpleSnitch, hence the
>> datacenter is datacenter1 and the rack is rack1 for all nodes on AWS. I
>> want to switch to GPFS by changing the rack name to the availability-zone
>> name and the datacenter name to the region name.
>>
>> When I tried to restart individual nodes after changing those values,
>> they failed to start, throwing an error about a dc and rack name mismatch,
>> but the error gives me the option to set ignore_dc and ignore_rack to true
>> to bypass it.
>>
>> I am not sure whether it is safe to set those two flags to true and
>> whether there is any drawback now or in the future when I add a new
>> datacenter to the cluster. I went through the documentation on Switching
>> Snitches but didn't get much explanation.
>>
>> Regards,
>> Pradeep
>>
>>
>>
>>
>>
>>
>>
>>
>


-- 
*Joshua Galbraith *| Lead Software Engineer | New Relic


Re: Too many Cassandra threads waiting!!!

2018-08-03 Thread Joshua Galbraith
Renoy,

Thanks. That kernel is new enough to have the patch for the infamous Linux
kernel futex bug detailed here:
https://groups.google.com/d/topic/mechanical-sympathy/QbmpZxp6C64

To answer your questions above:

What you're seeing is likely just normal behavior for Cassandra and is an
artifact of its staged event driven architecture (SEDA). You can read more
about that if you follow the links in the post above. There is work to move
from SEDA to a thread-per-core (TPC) architecture, which you can read about
in https://issues.apache.org/jira/browse/CASSANDRA-10989.

There are a number of parameters you can tune to adjust the number of
threads working on a few of the various stages within Cassandra (e.g.
memtable_flush_writers, native_transport_max_threads, and
max_hints_delivery_threads).

There will of course be performance impacts from tuning these parameters,
and the right values will depend on your data model, hardware, and workload
(among other things).
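For example, in cassandra.yaml those settings look roughly like this (the
values shown are illustrative only, not recommendations):

    memtable_flush_writers: 2
    native_transport_max_threads: 128
    max_hints_delivery_threads: 2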

On Thu, Aug 2, 2018 at 10:44 PM, nokia ceph 
wrote:

> Hi Joshua,
>
> # uname -a
> Linux cn6.chn6us1c1.cdn 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27
> UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> # cat /etc/redhat-release
> CentOS Linux release 7.4.1708 (Core)
>
> On Fri, Aug 3, 2018 at 8:27 AM, Joshua Galbraith  invalid> wrote:
>
>> Renoy,
>>
>> Out of curiosity, which kernel version are your nodes running?
>>
>> You may find this old message on the mailing list helpful:
>> http://mail-archives.apache.org/mod_mbox/cassandra-user/2016
>> 02.mbox/%3CCAA=6J0-0VabfAn3DJfatOxyJwwEHpdiE67v2wm_
>> u5kaqoro...@mail.gmail.com%3E
>>
>> On Wed, Aug 1, 2018 at 5:38 PM, Elliott Sims 
>> wrote:
>>
>>> You might have more luck trying to analyze at the Java level, either via
>>> a (Java) stack dump and the "ttop" tool from Swiss Java Knife, or Cassandra
>>> tools like "nodetool tpstats"
>>>
>>> On Wed, Aug 1, 2018 at 2:08 AM, nokia ceph 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm running a 5-node cluster with Cassandra 3.0.13.
>>>>
>>>> I can see that the Cassandra process has too many threads.
>>>>
>>>> *# pstree -p `pgrep java` | wc -l*
>>>> *453*
>>>>
>>>> And almost all of those threads are in *sleeping* state and wait at
>>>> *# cat  /proc/166022/task/1698913/wchan*
>>>> *futex_wait_queue_me*
>>>>
>>>> Some more info:
>>>> *# strace -e trace=all -p 166022*
>>>> *strace: Process 166022 attached*
>>>> *futex(0x7efc24aeb9d0, FUTEX_WAIT, 166023, NULL*
>>>>
>>>> # cat /proc/166022/stack
>>>> [] futex_wait_queue_me+0xc6/0x130
>>>> [] futex_wait+0x17b/0x280
>>>> [] do_futex+0x106/0x5a0
>>>> [] SyS_futex+0x80/0x180
>>>> [] system_call_fastpath+0x16/0x1b
>>>> [] 0x
>>>>
>>>>
>>>> Why does Cassandra have this many threads? Is this normal behavior for
>>>> Cassandra? Is there a way to reduce the thread count? Will there be any
>>>> performance impact because of this (our platform experts suspect so)?
>>>>
>>>> Regards,
>>>> Renoy  Paulose
>>>>
>>>>
>>>
>>
>>
>> --
>> *Joshua Galbraith *| Lead Software Engineer | New Relic
>>
>
>


-- 
*Joshua Galbraith *| Lead Software Engineer | New Relic


Re: Too many Cassandra threads waiting!!!

2018-08-02 Thread Joshua Galbraith
Renoy,

Out of curiosity, which kernel version are your nodes running?

You may find this old message on the mailing list helpful:
http://mail-archives.apache.org/mod_mbox/cassandra-user/201602.mbox/%3CCAA=6j0-0vabfan3djfatoxyjwwehpdie67v2wm_u5kaqoro...@mail.gmail.com%3E

On Wed, Aug 1, 2018 at 5:38 PM, Elliott Sims  wrote:

> You might have more luck trying to analyze at the Java level, either via a
> (Java) stack dump and the "ttop" tool from Swiss Java Knife, or Cassandra
> tools like "nodetool tpstats"
>
> On Wed, Aug 1, 2018 at 2:08 AM, nokia ceph 
> wrote:
>
>> Hi,
>>
>> I'm running a 5-node cluster with Cassandra 3.0.13.
>>
>> I can see that the Cassandra process has too many threads.
>>
>> *# pstree -p `pgrep java` | wc -l*
>> *453*
>>
>> And almost all of those threads are in *sleeping* state and wait at
>> *# cat  /proc/166022/task/1698913/wchan*
>> *futex_wait_queue_me*
>>
>> Some more info:
>> *# strace -e trace=all -p 166022*
>> *strace: Process 166022 attached*
>> *futex(0x7efc24aeb9d0, FUTEX_WAIT, 166023, NULL*
>>
>> # cat /proc/166022/stack
>> [] futex_wait_queue_me+0xc6/0x130
>> [] futex_wait+0x17b/0x280
>> [] do_futex+0x106/0x5a0
>> [] SyS_futex+0x80/0x180
>> [] system_call_fastpath+0x16/0x1b
>> [] 0x
>>
>>
>> Why does Cassandra have this many threads? Is this normal behavior for
>> Cassandra? Is there a way to reduce the thread count? Will there be any
>> performance impact because of this (our platform experts suspect so)?
>>
>> Regards,
>> Renoy  Paulose
>>
>>
>


-- 
*Joshua Galbraith *| Lead Software Engineer | New Relic


Re: nodetool status

2018-07-03 Thread Joshua Galbraith
https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/tools/nodetool/Status.java

On Tue, Jul 3, 2018 at 7:57 AM, Thouraya TH  wrote:

> Hi all,
> Please, can you give me a link to the source code behind the command
> "nodetool status" ?
> Thank you so much.
> KInd regards.
>



-- 
*Joshua Galbraith *| Lead Software Engineer | New Relic


Re: Is there a plan for Feature like this in C* ?

2018-07-03 Thread Joshua Galbraith
There is more info and background context on CDC here:
https://issues.apache.org/jira/browse/CASSANDRA-8844
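If you want to experiment with CDC, enabling it is roughly a two-step change
(the keyspace/table names below are placeholders):

    # cassandra.yaml
    cdc_enabled: true

    -- then per table, via cqlsh
    ALTER TABLE my_keyspace.my_table WITH cdc = true;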

On Mon, Jul 2, 2018 at 9:26 PM, Justin Cameron 
wrote:

> Sorry - you'd need a source connector, not the sink.
>
> On Tue, 3 Jul 2018 at 04:24 Justin Cameron  wrote:
>
>> Yeah, if you're using Kafka Connect you could use the Cassandra sink
>> connector
>>
>> On Tue, 3 Jul 2018 at 02:37 Jeff Jirsa  wrote:
>>
>>> Its a stable API - the project doesn’t ship a Kafka connector but
>>> certainly people have written them
>>>
>>>
>>> --
>>> Jeff Jirsa
>>>
>>>
>>> On Jul 2, 2018, at 6:50 PM, Kant Kodali  wrote:
>>>
>>> Hi Justin,
>>>
>>> Thanks, Looks like a very early stage feature and no integration with
>>> Kafka yet I suppose.
>>>
>>> Thanks!
>>>
>>> On Mon, Jul 2, 2018 at 6:24 PM, Justin Cameron 
>>> wrote:
>>>
>>>> yes, take a look at http://cassandra.apache.org/
>>>> doc/latest/operating/cdc.html
>>>>
>>>> On Tue, 3 Jul 2018 at 01:20 Kant Kodali  wrote:
>>>>
>>>>> https://www.cockroachlabs.com/docs/v2.1/change-data-capture.html
>>>>>
>>>> --
>>>>
>>>>
>>>> *Justin Cameron*Senior Software Engineer
>>>>
>>>>
>>>> <https://www.instaclustr.com/>
>>>>
>>>>
>>>> This email has been sent on behalf of Instaclustr Pty. Limited
>>>> (Australia) and Instaclustr Inc (USA).
>>>>
>>>> This email and any attachments may contain confidential and legally
>>>> privileged information.  If you are not the intended recipient, do not copy
>>>> or disclose its content, but please reply to this email immediately and
>>>> highlight the error to the sender and then immediately delete the message.
>>>>
>>>
>>> --
>>
>>
>> *Justin Cameron*Senior Software Engineer
>>
>>
>> <https://www.instaclustr.com/>
>>
>>
>> This email has been sent on behalf of Instaclustr Pty. Limited
>> (Australia) and Instaclustr Inc (USA).
>>
>> This email and any attachments may contain confidential and legally
>> privileged information.  If you are not the intended recipient, do not copy
>> or disclose its content, but please reply to this email immediately and
>> highlight the error to the sender and then immediately delete the message.
>>
> --
>
>
> *Justin Cameron*Senior Software Engineer
>
>
> <https://www.instaclustr.com/>
>
>
> This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
> and Instaclustr Inc (USA).
>
> This email and any attachments may contain confidential and legally
> privileged information.  If you are not the intended recipient, do not copy
> or disclose its content, but please reply to this email immediately and
> highlight the error to the sender and then immediately delete the message.
>



-- 
*Joshua Galbraith *| Lead Software Engineer | New Relic


Re: Problem with dropped mutations

2018-06-26 Thread Joshua Galbraith
Hannu,

Dropped mutations are often a sign of load-shedding due to an overloaded
node or cluster. Are you seeing resource saturation like high CPU usage
(because the write path is usually CPU-bound) on any of the nodes in your
cluster?

Some potential contributing factors that might be causing you to drop
mutations are long garbage collection (GC) pauses or large partitions. Do
the drops coincide with an increase in requests, a code change, or
compaction activity?
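A few quick places to look first (the log path is an assumption for a
typical package install):

    nodetool tpstats      # dropped message counters at the bottom include MUTATION
    nodetool tablehistograms <keyspace> <table>   # look for very large partitions
    grep -i GCInspector /var/log/cassandra/system.log   # long GC pauses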

On Tue, Jun 26, 2018 at 7:48 AM, Hannu Kröger  wrote:

> Hello,
>
> We have a cluster with somewhat heavy load and we are seeing dropped
> mutations (variable amount and not all nodes have those).
>
> Are there some clear trigger which cause those? What would be the best
> pragmatic approach to start debugging those? We have already added more
> memory which seemed to help somewhat but not completely.
>
> Cheers,
> Hannu
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


-- 
*Joshua Galbraith *| Senior Software Engineer | New Relic


Re: RE: RE: [EXTERNAL] Cluster is unbalanced

2018-06-20 Thread Joshua Galbraith
>Also, our partition keys are not distributed evenly as I had pasted output
earlier.

Thanks, I see that now. Can you share the full output of nodetool tablestats
and nodetool tablehistograms?

Out of curiosity, are you running repairs on this cluster? If so, what type
of repairs are you running and how often?

One way you might differentiate between a server-side/configuration issue
or a client/data model issue is to write a script that populates a test
keyspace with uniformly distributed partitions and see if that keyspace
also exhibits a similar imbalance of partitions per node. You might be able
to use a heavily-throttled cassandra-stress invocation to handle this.
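Something along these lines could serve as a starting point (the keyspace
name is a placeholder, you would want its replication settings to mirror
your real keyspace, and you should check cassandra-stress help -rate for the
exact throttle syntax on your version):

    cassandra-stress write n=1000000 \
        -schema keyspace=stress_uniform_test \
        -rate threads=8 throttle=1000/s

    # then compare per-node partition estimates for that keyspace
    nodetool tablestats stress_uniform_test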

On Wed, Jun 20, 2018 at 12:32 PM, learner dba <
cassandra...@yahoo.com.invalid> wrote:

>
> Hi Joshua,
>
> Okay, that string appears to be a base64-encoded version 4 UUID.
> Why not use Cassandra's UUID data type to store that directly rather than
> storing the longer base64 string as text?  --> It's an old application and
> the person who coded it, has left the company.
> What does the UUID represent? --> Unique account id.
> Is it identifying a unique product, an image, or some other type of
> object? --> yes
> When and how is the underlying UUID being generated by the application?
> --> Not sure about it.
>
> I assume you're using the default partitioner, but just in case, can you
> confirm which partitioner you're using in your cassandra.yaml file (e.g.
> Murmer3, Random, ByteOrdered)? --> partitioner: org.apache.cassandra.dht.
> Murmur3Partitioner
>
>
> Mentioned Jiras are from much older version than ours "3.11.2"; Also, our
> partition keys are not distributed evenly as I had pasted output earlier.
> Which means none of the Jiras apply in our case :(
>
>
> On Wednesday, June 20, 2018, 12:18:28 PM EDT, Joshua Galbraith <
> jgalbra...@newrelic.com.INVALID> wrote:
>
>
> Okay, that string appears to be a base64-encoded version 4 UUID. Why not
> use Cassandra's UUID data type to store that directly rather than storing
> the longer base64 string as text? What does the UUID represent? Is it
> identifying a unique product, an image, or some other type of object? When
> and how is the underlying UUID being generated by the application?
>
> I assume you're using the default partitioner, but just in case, can you
> confirm which partitioner you're using in your cassandra.yaml file (e.g.
> Murmur3, Random, ByteOrdered)?
>
> Also, please have a look at these two issues and verify you're not
> experiencing either:
>
> * https://issues.apache.org/jira/browse/CASSANDRA-7032
> * https://issues.apache.org/jira/browse/CASSANDRA-10430
>
>
> On Wed, Jun 20, 2018 at 9:59 AM, learner dba  invalid> wrote:
>
> Partition key has value as:
>
> MWY4MmI0MTQtYTk2YS00YmRjLTkxNDMtOWU0MjM1OWU2NzUy; the other column is a blob.
>
> On Tuesday, June 19, 2018, 6:07:59 PM EDT, Joshua Galbraith <
> jgalbra...@newrelic.com. INVALID> wrote:
>
>
> > id text PRIMARY KEY
>
> What values are written to this id field? Can you give us some examples or
> explain the general use case?
>
> On Tue, Jun 19, 2018 at 1:18 PM, learner dba  invalid > wrote:
>
> Hi Sean,
>
> Here is create table:
>
> CREATE TABLE ks.cf (
>
> id text PRIMARY KEY,
>
> accessdata blob
>
> ) WITH bloom_filter_fp_chance = 0.01
>
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
>
> AND comment = ''
>
> AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
> 'max_threshold': '32', 'min_threshold': '4'}
>
> AND compression = {'chunk_length_in_kb': '64', 'class':
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
>
> AND crc_check_chance = 1.0
>
> AND dclocal_read_repair_chance = 0.1
>
> AND default_time_to_live = 0
>
> AND gc_grace_seconds = 864000
>
> AND max_index_interval = 2048
>
> AND memtable_flush_period_in_ms = 0
>
> AND min_index_interval = 128
>
> AND read_repair_chance = 0.0
>
> AND speculative_retry = '99PERCENTILE';
> Nodetool status:
>
> Datacenter: dc1
>
> ===
>
> Status=Up/Down
>
> |/ State=Normal/Leaving/Joining/Moving
>
> --  Address Load   Tokens   Owns (effective)  Host ID
>   Rack
>
> UN  x   20.66 GiB  256  61.4% f4f54949-83c9-419b-9a43-
> cb630b36d8c2  RAC1
>
> UN  x  65.77 GiB  256  59.3% 3db430ae-45ef-4746-a273-
> bc1f66ac8981  RAC1
>
> UN  xx  60.58 GiB  256  58.4% 1f23e869-1823-4b75-8d3e-
> f9b32acba9a6  RAC1
>
> UN  x  47.0

Re: RE: RE: [EXTERNAL] Cluster is unbalanced

2018-06-20 Thread Joshua Galbraith
Okay, that string appears to be a base64-encoded version 4 UUID. Why not
use Cassandra's UUID data type to store that directly rather than storing
the longer base64 string as text? What does the UUID represent? Is it
identifying a unique product, an image, or some other type of object? When
and how is the underlying UUID being generated by the application?
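For illustration only, a table keyed on the native uuid type would look
something like this (a sketch, not a drop-in change: an existing key
column's type can't be altered in place, so this would mean a new table and
a data migration, and the table name here is made up):

    CREATE TABLE ks.cf_by_uuid (
        id uuid PRIMARY KEY,
        accessdata blob
    );

For what it's worth, the sample value you pasted base64-decodes to the UUID
text 1f82b414-a96a-4bdc-9143-9e42359e6752, which is why it looks like an
encoded version 4 UUID.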

I assume you're using the default partitioner, but just in case, can you
confirm which partitioner you're using in your cassandra.yaml file (e.g.
Murmur3, Random, ByteOrdered)?

Also, please have a look at these two issues and verify you're not
experiencing either:

* https://issues.apache.org/jira/browse/CASSANDRA-7032
* https://issues.apache.org/jira/browse/CASSANDRA-10430


On Wed, Jun 20, 2018 at 9:59 AM, learner dba  wrote:

> Partition key has value as:
>
> MWY4MmI0MTQtYTk2YS00YmRjLTkxNDMtOWU0MjM1OWU2NzUy; the other column is a blob.
>
> On Tuesday, June 19, 2018, 6:07:59 PM EDT, Joshua Galbraith <
> jgalbra...@newrelic.com.INVALID> wrote:
>
>
> > id text PRIMARY KEY
>
> What values are written to this id field? Can you give us some examples or
> explain the general use case?
>
> On Tue, Jun 19, 2018 at 1:18 PM, learner dba  invalid> wrote:
>
> Hi Sean,
>
> Here is create table:
>
> CREATE TABLE ks.cf (
>
> id text PRIMARY KEY,
>
> accessdata blob
>
> ) WITH bloom_filter_fp_chance = 0.01
>
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
>
> AND comment = ''
>
> AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
> 'max_threshold': '32', 'min_threshold': '4'}
>
> AND compression = {'chunk_length_in_kb': '64', 'class':
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
>
> AND crc_check_chance = 1.0
>
> AND dclocal_read_repair_chance = 0.1
>
> AND default_time_to_live = 0
>
> AND gc_grace_seconds = 864000
>
> AND max_index_interval = 2048
>
> AND memtable_flush_period_in_ms = 0
>
> AND min_index_interval = 128
>
> AND read_repair_chance = 0.0
>
> AND speculative_retry = '99PERCENTILE';
> Nodetool status:
>
> Datacenter: dc1
>
> ===
>
> Status=Up/Down
>
> |/ State=Normal/Leaving/Joining/Moving
>
> --  Address Load   Tokens   Owns (effective)  Host ID
>   Rack
>
> UN  x   20.66 GiB  256  61.4% f4f54949-83c9-419b-9a43-
> cb630b36d8c2  RAC1
>
> UN  x  65.77 GiB  256  59.3% 3db430ae-45ef-4746-a273-
> bc1f66ac8981  RAC1
>
> UN  xx  60.58 GiB  256  58.4% 1f23e869-1823-4b75-8d3e-
> f9b32acba9a6  RAC1
>
> UN  x  47.08 GiB  256  57.5% 7aca9a36-823f-4185-be44-
> c1464a799084  RAC1
>
> UN  x  51.47 GiB  256  63.4% 18cff010-9b83-4cf8-9dc2-
> f05ac63df402  RAC1
>
> Datacenter: dc2
>
> 
>
> Status=Up/Down
>
> |/ State=Normal/Leaving/Joining/Moving
>
> --  Address Load   Tokens   Owns (effective)  Host ID
>   Rack
>
> UN     24.37 GiB  256  59.5% 1b694180-210a-4b75-8f2a-
> 748f4a5b6a3d  RAC1
>
> UN  x 30.76 GiB  256  56.7% 597bac04-c57a-4487-8924-
> 72e171e45514  RAC1
>
> UN    10.73 GiB  256  63.9% 6e7e474e-e292-4433-afd4-
> 372d30e0f3e1  RAC1
>
> UN  xx 19.77 GiB  256  61.5% 58751418-7b76-40f7-8b8f-
> a5bf8fe7d9a2  RAC1
>
> UN  x  10.33 GiB  256  58.4% 6d58d006-2095-449c-8c67-
> 50e8cbdfe7a7  RAC1
>
>
> cassandra-rackdc.properties:
>
> dc=dc1
> rack=RAC1 --> same in all nodes
>
> cassandra.yaml:
> num_tokens: 256
>
> endpoint_snitch: GossipingPropertyFileSnitch
> I can see cassandra-topology.properties, I believe it shouldn't be there
> with GossipPropertyFileSnitch. Can this file be causing any trouble in data
> distribution.
>
> cat /opt/cassandra/conf/cassandra-topology.properties
>
> # Licensed to the Apache Software Foundation (ASF) under one
>
> # or more contributor license agreements.  See the NOTICE file
>
> # distributed with this work for additional information
>
> # regarding copyright ownership.  The ASF licenses this file
>
> # to you under the Apache License, Version 2.0 (the
>
> # "License"); you may not use this file except in compliance
>
> # with the License.  You may obtain a copy of the License at
>
> #
>
> # http://www.apache.org/licenses/LICENSE-2.0
>
> #
>
> # Unles

Re: RE: RE: [EXTERNAL] Cluster is unbalanced

2018-06-19 Thread Joshua Galbraith
@homedepot.com> wrote:
>
>
> You are correct that the cluster decides where data goes (based on the
> hash of the partition key). However, if you choose a “bad” partition key,
> you may not get good distribution of the data, because the hash is
> deterministic (it always goes to the same nodes/replicas). For example, if
> you have a partition key of a datetime, it is possible that there is more
> data written for a certain time period – thus a larger partition and an
> imbalance across the cluster. Choosing a “good” partition key is one of the
> most important decisions for a Cassandra table.
>
>
>
> Also, I have seen the use of racks in the topology cause an imbalance in
> the “first” node of the rack.
>
>
>
> To help you more, we would need the create table statement(s) for your
> keyspace and the topology of the cluster (like with nodetool status).
>
>
>
>
>
> Sean Durity
>
> *From:* learner dba 
> *Sent:* Tuesday, June 19, 2018 9:50 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: RE: [EXTERNAL] Cluster is unbalanced
>
>
>
> We do not chose the node where partition will go. I thought it is snitch's
> role to chose replica nodes. Even the partition size does not vary on our
> largest column family:
>
> Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
>                       (micros)       (micros)      (bytes)
> 50%         0.00      17.08          61.21         3311            1
> 75%         0.00      20.50          88.15         3973            1
> 95%         0.00      35.43          105.78        3973            1
> 98%         0.00      42.51          126.93        3973            1
> 99%         0.00      51.01          126.93        3973            1
> Min         0.00      3.97           17.09         61
> Max         0.00      73.46          126.93        11864           1
>
>
>
> We are kind of stuck here trying to identify what could be causing this imbalance.
>
>
>
> On Tuesday, June 19, 2018, 7:15:28 AM EDT, Joshua Galbraith <
> jgalbra...@newrelic.com.INVALID> wrote:
>
>
>
>
>
> >If it was partition key issue, we would see similar number of partition
> keys across nodes. If we look closely number of keys across nodes vary a
> lot.
>
> I'm not sure about that, is it possible you're writing more new partitions
> to some nodes even though each node owns the same number of tokens?
>
>
>
>
> On Mon, Jun 18, 2018 at 6:07 PM, learner dba  invalid> wrote:
>
> Hi Sean,
>
>
>
> Are you using any rack aware topology? --> we are using gossip file
>
> Are you using any rack aware topology? --> we are using gossip file
>
>  What are your partition keys? --> Partition key is uniq
>
> Is it possible that your partition keys do not divide up as cleanly as you
> would like across the cluster because the data is not evenly distributed
> (by partition key)?  --> No, we verified it.
>
>
>
> If it was partition key issue, we would see similar number of partition
> keys across nodes. If we look closely number of keys across nodes vary a
> lot.
>
>
>
>
>
> Number of partitions (estimate): 3142552
>
> Number of partitions (estimate): 15625442
>
> Number of partitions (estimate): 15244021
>
> Number of partitions (estimate): 9592992
>
> Number of partitions (estimate): 15839280
>
>
>
>
>
>
>
>
>
>
>
> On Monday, June 18, 2018, 5:39:08 PM EDT, Durity, Sean R <
> sean_r_dur...@homedepot.com> wrote:
>
>
>
>
>
> Are you using any rack aware topology? What are your partition keys? Is it
> possible that your partition keys do not divide up as cleanly as you would
> like across the cluster because the data is not evenly distributed (by
> partition key)?
>
>
>
>
>
> Sean Durity
>
> lord of the (C*) rings (Staff Systems Engineer – Cassandra)
>
> MTC 2250
>
> #cassandra - for the latest news and updates
>
>
>
> *From:* learner dba 
> *Sent:* Monday, June 18, 2018 2:06 PM
> *To:* User cassandra.apache.org
> 
> *Subject:* [EXTERNAL] Cluster is unbalanced
>
>
>
> H

Re:

2018-06-19 Thread Joshua Galbraith
a:461)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>> ~[na:1.8.0_131]
>> at java.util.HashMap$EntrySpliterator.tryAdvance(HashMap.java:1712)
>> ~[na:1.8.0_131]
>> at 
>> java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
>> ~[na:1.8.0_131]
>> at 
>> java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498)
>> ~[na:1.8.0_131]
>> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)
>> ~[na:1.8.0_131]
>> at 
>> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>> ~[na:1.8.0_131]
>> at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230)
>> ~[na:1.8.0_131]
>> at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196)
>> ~[na:1.8.0_131]
>> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>> ~[na:1.8.0_131]
>> at java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454)
>> ~[na:1.8.0_131]
>> at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByN
>> ame.removeUnfinishedLeftovers(LogTransaction.java:456)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>> at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfin
>> ishedLeftovers(LogTransaction.java:423) ~[apache-cassandra-3.11.0.jar:
>> 3.11.0]
>> at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfin
>> ishedLeftovers(LogTransaction.java:415) ~[apache-cassandra-3.11.0.jar:
>> 3.11.0]
>> at org.apache.cassandra.db.lifecycle.LifecycleTransaction.remov
>> eUnfinishedLeftovers(LifecycleTransaction.java:544)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>> at 
>> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:636)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>> at 
>> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:275)
>> [apache-cassandra-3.11.0.jar:3.11.0]
>> at 
>> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
>> [apache-cassandra-3.11.0.jar:3.11.0]
>> at 
>> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689)
>> [apache-cassandra-3.11.0.jar:3.11.0]
>>
>> Do you have any idea what may be wrong?
>>
>> Thanks in advance,
>> Deniz
>>
>
>


-- 
*Joshua Galbraith *| Senior Software Engineer | New Relic
C: 907-209-1208 | jgalbra...@newrelic.com


Re: RE: [EXTERNAL] Cluster is unbalanced

2018-06-19 Thread Joshua Galbraith
>If it was partition key issue, we would see similar number of partition
keys across nodes. If we look closely number of keys across nodes vary a
lot.

I'm not sure about that, is it possible you're writing more new partitions
to some nodes even though each node owns the same number of tokens?
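One way to compare token ownership against actual partition counts per node
(run on each node; the angle-bracketed names are placeholders):

    nodetool status <keyspace>    # effective ownership per keyspace
    nodetool tablestats <keyspace>.<table> | grep "Number of partitions"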

On Mon, Jun 18, 2018 at 6:07 PM, learner dba  wrote:

> Hi Sean,
>
> Are you using any rack aware topology? --> we are using gossip file
> Are you using any rack aware topology? --> we are using gossip file
>  What are your partition keys? --> Partition key is uniq
> Is it possible that your partition keys do not divide up as cleanly as you
> would like across the cluster because the data is not evenly distributed
> (by partition key)?  --> No, we verified it.
>
> If it was partition key issue, we would see similar number of partition
> keys across nodes. If we look closely number of keys across nodes vary a
> lot.
>
>
> Number of partitions (estimate): 3142552
> Number of partitions (estimate): 15625442
> Number of partitions (estimate): 15244021
> Number of partitions (estimate): 9592992
> Number of partitions (estimate): 15839280
>
>
>
>
>
> On Monday, June 18, 2018, 5:39:08 PM EDT, Durity, Sean R <
> sean_r_dur...@homedepot.com> wrote:
>
>
> Are you using any rack aware topology? What are your partition keys? Is it
> possible that your partition keys do not divide up as cleanly as you would
> like across the cluster because the data is not evenly distributed (by
> partition key)?
>
>
>
>
>
> Sean Durity
>
> lord of the (C*) rings (Staff Systems Engineer – Cassandra)
>
> MTC 2250
>
> #cassandra - for the latest news and updates
>
>
>
> *From:* learner dba 
> *Sent:* Monday, June 18, 2018 2:06 PM
> *To:* User cassandra.apache.org 
> *Subject:* [EXTERNAL] Cluster is unbalanced
>
>
>
> Hi,
>
>
>
> Data volume varies a lot in our two DC cluster:
>
>  Load   Tokens   Owns
>
>  20.01 GiB  256  ?
>
>  65.32 GiB  256  ?
>
>  60.09 GiB  256  ?
>
>  46.95 GiB  256  ?
>
>  50.73 GiB  256  ?
>
> kaiprodv2
>
> =
>
> /Leaving/Joining/Moving
>
>  Load   Tokens   Owns
>
>  25.19 GiB  256  ?
>
>  30.26 GiB  256  ?
>
>  9.82 GiB   256  ?
>
>  20.54 GiB  256  ?
>
>  9.7 GiB256  ?
>
>
>
> I ran clearsnapshot, garbagecollect and cleanup, but it increased the size
> on heavier nodes instead of decreasing. Based on nodetool cfstats, I can
> see partition keys on each node varies a lot:
>
>
>
> Number of partitions (estimate): 3142552
>
> Number of partitions (estimate): 15625442
>
> Number of partitions (estimate): 15244021
>
> Number of partitions (estimate): 9592992
>
> Number of partitions (estimate): 15839280
>
>
>
> How can I diagnose this imbalance further?
>
>
>



-- 
*Joshua Galbraith *| Senior Software Engineer | New Relic
C: 907-209-1208 | jgalbra...@newrelic.com


Re: compaction_throughput: Difference between 0 (unthrottled) and large value

2018-06-13 Thread Joshua Galbraith
Thomas,

This post from Ryan Svihla has a few notes in it that may or may not be
useful to you:

>If you read the original throttling Jira you can see that there is a hurry
up and wait component to unthrottled compaction (CASSANDRA-2156- Compaction
Throttling). Ultimately you will saturate your IO in bursts, backing up
other processes and making different bottlenecks spike up along the way,
potentially causing something OTHER than compaction to get so far behind
that the server becomes unresponsive (such as GC).

via https://medium.com/@foundev/how-i-tune-cassandra-compaction-7c16fb0b1d99
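If you want to experiment without a restart, compaction throughput can also
be changed at runtime (values are in MB/s, with 0 meaning unthrottled):

    nodetool getcompactionthroughput
    nodetool setcompactionthroughput 0     # unthrottled
    nodetool setcompactionthroughput 128   # or cap it at e.g. 128 MB/s
    nodetool compactionstats               # watch pending compactions either way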

On Mon, Jun 11, 2018 at 12:05 AM, Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:

> Sorry, should have first looked at the source code. In case of 0, it is
> set to Double.MAX_VALUE.
>
>
>
> Thomas
>
>
>
> *From:* Steinmaurer, Thomas [mailto:thomas.steinmau...@dynatrace.com]
> *Sent:* Montag, 11. Juni 2018 08:53
> *To:* user@cassandra.apache.org
> *Subject:* compaction_throughput: Difference between 0 (unthrottled) and
> large value
>
>
>
> Hello,
>
>
>
> on a 3 node loadtest cluster with very capable machines (32 physical
> cores, 512G RAM, 20T storage (26 disk RAID)), I’m trying to max out
> compaction, thus currently testing with:
>
>
>
> concurrent_compactors: 16
>
> compaction_throughput_mb_per_sec: 0
>
>
>
> With our simulated incoming load + compaction etc., the Linux volume shows
> ~ 20 Mbyte/s Read IO + 50 Mbyte/s Write IO in AVG, constantly.
>
>
>
>
>
> Setting throughput to 0 should mean unthrottled, right? Is this really
> unthrottled from a throughput perspective and then is basically limited by
> disk capabilities only? Or should it be better set to a very high value
> instead of 0. Is there any semantical difference here?
>
>
>
>
>
> Thanks,
>
> Thomas
>
>
>
> The contents of this e-mail are intended for the named addressee only. It
> contains information that may be confidential. Unless you are the named
> addressee or an authorized designee, you may not copy or use it, or
> disclose it to anyone else. If you received it in error please notify us
> immediately and then destroy it. Dynatrace Austria GmbH (registration
> number FN 91482h) is a company registered in Linz whose registered office
> is at 4040 Linz, Austria, Freistädterstraße 313
> The contents of this e-mail are intended for the named addressee only. It
> contains information that may be confidential. Unless you are the named
> addressee or an authorized designee, you may not copy or use it, or
> disclose it to anyone else. If you received it in error please notify us
> immediately and then destroy it. Dynatrace Austria GmbH (registration
> number FN 91482h) is a company registered in Linz whose registered office
> is at 4040 Linz, Austria, Freistädterstraße 313
>



-- 
*Joshua Galbraith *| Senior Software Engineer | New Relic
C: 907-209-1208 | jgalbra...@newrelic.com