Over the past year we've migrated several clusters from DSE to Apache
Cassandra. We've mostly done in-place conversions node by node with no
downtime, going from DSE 4.8.x to Apache Cassandra 2.1.x.
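For reference, a rough sketch of what one node's in-place conversion can
look like (service names, package manager, and version pin are assumptions;
the key point is that the OSS node reuses DSE's data and commitlog
directories):

    # Flush memtables and stop the DSE service on this node
    nodetool drain
    sudo service dse stop

    # Install the matching open-source release (pin to the 2.1.x you target)
    sudo apt-get install cassandra

    # Point cassandra.yaml at the same data_file_directories and
    # commitlog_directory DSE was using, then start the OSS node
    sudo service cassandra start

    # Verify the node rejoined and schema agreement holds
    nodetool status
    nodetool describecluster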
On Wed, May 29, 2019 at 8:55 PM Goetz, Anthony wrote:
My team migrated from DSE to OSS a few years ago by doing a datacenter switch.
You will need to update the replication strategy of every keyspace that uses
DSE's EverywhereStrategy to NetworkTopologyStrategy before adding any OSS
nodes. As Jonathan mentioned, DSE nodes will revert this change on restart.
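For anyone following along, a minimal sketch of that alteration (keyspace
name, datacenter name, and replication factor are placeholders; repeat it
for every keyspace still on EverywhereStrategy):

    cqlsh -e "ALTER KEYSPACE my_keyspace
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};"

    # Confirm the change took effect (remember: DSE nodes revert it on restart)
    cqlsh -e "DESCRIBE KEYSPACE my_keyspace;"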
Has anyone tried a DC switch as a means to migrate from DataStax to
OSS? This would be the safest route, as the ability to revert back to
DataStax is easy. However, I'm curious how the dse_system keyspace would be
replicated to OSS given their custom EverywhereStrategy. You may have to
alter it to NetworkTopologyStrategy first.
If the Cassandra version is the same, it should work.
Regards,
Nitan
Cell: 510 449 9629
> On May 28, 2019, at 4:21 PM, Rahul Reddy wrote:
>
> Hello,
>
> Does sstableloader work between DataStax and Apache Cassandra? I'm trying to
> migrate DSE 5.0.7 to Apache 3.11.1.
Hello Simon,
> Sorry if the question has already been answered.
This was probably answered here indeed (and multiple times I'm sure), but I
do not mind taking a moment to repeat this :).
About *why?*
This difference is expected. It can be due to multiple factors such as:
- Different compaction
Hello,
I can't answer this question about sstableloader (even though I think
it should be OK). My understanding, even though I'm not really up to date
with the latest DataStax work, is that DSE uses a modified but compatible
version of Cassandra for everything that is not a 'DSE feature'.
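If you do try sstableloader, the invocation itself is straightforward; a
sketch, with host addresses and the table path as placeholders (the target
cluster must already have the table's schema created):

    # Stream one table's sstables into the target cluster
    sstableloader -d 10.0.0.1,10.0.0.2 \
        /var/lib/cassandra/data/my_keyspace/my_table-<table-id>/

The -d flag lists initial contact points in the cluster you are loading into.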
All ports are open.
We tried a rolling restart and a full cluster stop, starting one node at a time.
Changes done were:
- Storage addition
- DDL for column drop and recreate
The schema version is the same for a few nodes, while a few show as unreachable.
The network has been verified in detail; there are no severe packet drops.
If I only send ReadTotalLatency to Graphite/Grafana, can I run an average
on it and use "scale to seconds=1" ?
Will that do the trick?
Thanks!
On Wed, May 29, 2019 at 5:31 PM shalom sagges wrote:
> Hi All,
>
> I'm creating a dashboard that should collect read/write latency metrics on
> C* 3.x.
There are various attributes under
org.apache.cassandra.metrics.ClientRequest.Latency.Read; these measure the
latency in microseconds.
Thanks
Paul
www.redshots.com
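As an illustration, those attributes can be read ad hoc with any JMX client;
a sketch using the jmxterm CLI (jar name, host, and port are assumptions):

    # Print the mean and 99th percentile of coordinator read latency
    echo "get -b org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency Mean 99thPercentile" \
        | java -jar jmxterm.jar -l localhost:7199 -n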
> On 29 May 2019, at 15:31, shalom sagges wrote:
>
> Hi All,
>
> I'm creating a dashboard that should collect read/write
Hello,
This metric is available indeed.
Most of the metrics available are documented here:
http://cassandra.apache.org/doc/latest/operating/metrics.html
For client requests (coordinator perspective latency):
http://cassandra.apache.org/doc/latest/operating/metrics.html#client-request-metrics
To answer your question,
org.apache.cassandra.metrics:type=Table,name=ReadTotalLatency can give you
the total local read latency in microseconds, and you can get the count from
the read Latency metric.
If you are going to do that, be sure to do it on the delta from the previous
query (new - last) for both the total latency and the count.
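A sketch of that delta-then-divide directly in Graphite (metric paths depend
on your reporter config, so <prefix> and the keyspace/table names are
placeholders; divideSeries and nonNegativeDerivative are standard Graphite
functions):

    # Average local read latency per request, in microseconds
    curl -G 'http://graphite.example.com/render' \
        --data-urlencode 'target=divideSeries(nonNegativeDerivative(<prefix>.Table.my_ks.my_table.ReadTotalLatency.count),nonNegativeDerivative(<prefix>.Table.my_ks.my_table.ReadLatency.count))' \
        --data-urlencode 'format=json'

nonNegativeDerivative() gives you the per-interval "new - last" of a
cumulative counter, which is exactly the delta mentioned above.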
Ideas that come to mind are:
- Rolling restart of the cluster
- Use of 'nodetool resetlocalschema' --> the function name speaks for itself.
Note that this is to be run on each node you think is having schema issues
- Are all nodes showing the same schema version? (see the sketch after this
list for checking this)
- Ports not fully open
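A minimal sketch of those last two checks (the node address is a placeholder;
resetlocalschema drops the node's local schema and pulls it fresh from the
other nodes):

    # Which schema versions exist, and which nodes hold each one?
    nodetool describecluster

    # On a node stuck on a stale version, rebuild its schema from peers
    nodetool -h 10.0.0.5 resetlocalschema

    # Re-check until all live nodes report a single schema version
    nodetool describecluster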
Hi All,
I'm creating a dashboard that should collect read/write latency metrics on
C* 3.x.
In older versions (e.g. 2.0) I used to divide the total read latency in
microseconds by the read count.
Is there a metric attribute that shows read/write latency without the need
to do the math, such as
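If a ready-made view is enough, note that nodetool can print latency
percentiles directly, with no math needed (keyspace/table names below are
hypothetical):

    # Coordinator-level read/write latency percentiles
    nodetool proxyhistograms

    # Per-table local latency percentiles
    nodetool tablehistograms my_ks my_table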
Hi Garvit,
When updating counters, Cassandra does a read followed by a write, so there
is overhead when using counters. This is all explained here:
https://www.datastax.com/dev/blog/whats-new-in-cassandra-2-1-a-better-implementation-of-counters
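As a tiny illustration of where that overhead shows up (keyspace and table
names are made up, and the keyspace is assumed to exist): counter columns
live in dedicated tables and can only be changed by increments, each of
which triggers the internal read-then-write described in that post:

    cqlsh -e "
      CREATE TABLE IF NOT EXISTS my_ks.page_views (
        page  text PRIMARY KEY,
        views counter);
      UPDATE my_ks.page_views SET views = views + 1 WHERE page = '/home';"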
Hi Garvit,
I cannot answer your main question, but when I read your lines one thing
kept popping up: "why do you ask this?"
So what is the background of this question? Do you see anything smelly?
Actually:
a) I always assumed so; naturally there are of course lots of in-parallel
Hi,
Sorry if the question has already been answered.
When nodetool status is run on a 3-node cluster (replication factor: 3),
the load between the different nodes is not equal.
# nodetool status opush
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving