be prepared to fail at that point and experiment with different settings, but
the good thing about this process is that you can roll back at any stage
without affecting the original cluster.
Paul Chandler
> On 3 Apr 2019, at 10:46, Stefan Miklosovic
> wrote:
>
> On Wed
of commands in the directory
~/.cassandra, and from the stack trace you supply it looks like it is failing
to create that directory. So I would check the file system permissions there.
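A rough first check, assuming cqlsh is running as your own user and writing to
the default history location, would be something like:

  # See whether the history directory exists and who owns it
  ls -ld ~/.cassandra
  # If it is missing, or owned by root after a sudo run, recreate it:
  mkdir -p ~/.cassandra && chmod 700 ~/.cassandra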
Thanks
Paul Chandler
> On 3 Apr 2019, at 15:15, David Taylor wrote:
>
> I am running a System87 Oryx P
not nodetest.
>
> Running nodetool status (or nodetool --help) results in the same stack trace
> as before.
>
> On Wed, Apr 3, 2019 at 11:34 AM Paul Chandler wrote:
> David,
>
> When you start cassandra all the logs go to system.log normally i
the threads in steps, I normally go in steps of 32, but that is based on the
size of machines I normally work with.
But as Anthony said, if it is a high-read system, then it could easily be
tombstones or garbage collection.
Thanks
Paul Chandler
> On 11 Apr 2019, at 03:57, Abdul Patel wr
to feed into the new cluster, then this will work; however, if you want
it for anything else then it doesn’t help at all.
I can supply more details later if this method is of interest.
Thanks
Paul Chandler
> On 10 Apr 2019, at 22:52, Carl Mueller
> wrote:
>
> We have a multite
are being dropped then increase concurrent_reads, I normally change it
to 96 to start with, but it will depend on the size of your nodes.
Otherwise it might be badly designed queries; have you investigated which
queries are producing the client timeouts?
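As a rough sketch, something like this confirms whether reads are actually
being dropped before you change anything (the cassandra.yaml path is the common
package default, adjust for your install):

  # Look for dropped READ messages and a growing ReadStage pending queue
  nodetool tpstats | egrep -i 'read|dropped'
  # Then raise concurrent_reads (e.g. to 96) in cassandra.yaml and restart the node
  grep concurrent_reads /etc/cassandra/cassandra.yaml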
Regards
Paul Chandler
> On 9 Apr 2019, at
the
GossipingPropertyFileSnitch, although that comes with the one caveat that I
have never tried it on Amazon, only GCP.
Thanks
Paul Chandler
www.redshots.com
> On 16 Apr 2019, at 15:02, Shravan R wrote:
>
> Thanks Paul. Glad to know that you are speaking on the very subject soon.
>
> Even though you
" : { "tstamp" : "2019-06-03T14:56:54.926536Z", "ttl" : 300,
      "expires_at" : "2019-06-03T15:01:54Z", "expired" : true },
"cells" : [
  { "name" : "when", "deletion_info" : { "local_delete_time" : "
Hi Rahul,
OpsCenter is a DataStax product; have you raised a support request with them
(https://support.datastax.com)? They should be able to answer this sort of
question.
Regards
Paul Chandler
> On 4 Jun 2019, at 12:24, Bhardwaj, Rahul wrote:
http://www.redshots.com/cassandra-counting-without-using-counters/
I hope these links help.
Regards
Paul Chandler
> On 29 May 2019, at 10:18, Attila Wind wrote:
>
> Hi Garvit,
>
> I can not answer your main question but when I read your lines one thing was
>
There are various attributes under
org.apache.cassandra.metrics.ClientRequest.Latency.Read; these measure the
latency in milliseconds.
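If you just want a quick look from the command line, nodetool proxyhistograms
prints the coordinator read and write latency percentiles, which come from the
same ClientRequest metrics:

  # Per-node coordinator read/write/range latency percentiles
  nodetool proxyhistograms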
Thanks
Paul
www.redshots.com
> On 29 May 2019, at 15:31, shalom sagges wrote:
>
> Hi All,
>
> I'm creating a dashboard that should collect read/write
Roy,
I have seen this exception before when a column had been dropped then re-added
with the same name but a different type. In particular, we dropped a column and
re-created it as static, then had this exception from the old SSTables created
prior to the DDL change.
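For anyone wanting to reproduce it, the DDL pattern was roughly the following
(the keyspace, table and column names are made up, and newer versions may
reject the second statement outright):

  cqlsh -e "ALTER TABLE my_ks.my_table DROP status;"
  # Re-adding the same column name as a different kind is what tripped up reads
  # of the old SSTables written before the change:
  cqlsh -e "ALTER TABLE my_ks.my_table ADD status text static;"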
Not sure if this applies
n some machines the
> sstable even does not exist on the filesystem.
> On one machine I was able to dump the sstable to dump file without any issue
> . Any idea how to tackle this issue ?
>
>
> On Tue, May 7, 2019 at 12:32 AM Paul Chandler
invoices. With a small number of reads per object, you can specify machines
with smaller CPUs and memory but a large amount of storage. If there are a
large number of reads, then you need to think much more carefully about memory
and CPU, as per the Walmart article you referenced.
Thanks
Paul
Hi Mike,
It sounds like that record may have been deleted; if that is the case then it
would still be shown in this sstable, but the tombstone marking the deletion
would be in a later sstable. You can use nodetool getsstables to work out which
sstables contain the data.
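A sketch of that (the keyspace, table, key and data path are placeholders):

  # Which SSTables on this node contain the partition?
  nodetool getsstables my_keyspace my_table 'the_partition_key'
  # Then dump a candidate SSTable to see the tombstone / liveness info
  sstabledump /var/lib/cassandra/data/my_keyspace/my_table-<id>/nb-1-big-Data.db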
I recommend reading The Last
others that share a very similar schema, and only some nodes)
> seems like it will help me prevent it.
>
>
> On Thu, May 2, 2019 at 1:00 PM Paul Chandler wrote:
> Hi Mike,
>
> It sounds like that record may have been deleted, if that is the
of moving 90+ clusters: http://www.redshots.com/accelerate/
Happy to answer any more questions.
Regards
Paul Chandler
www.redshots.com
> On 26 Jun 2019, at 16:19, Voytek Jarnot wrote:
>
> I started a higher-level thread years ago about m
it was not an issue.
Thanks
Paul Chandler
www.redshots.com
PS: Gilberto Müeller and I are presenting at DataStax Accelerate on this very
subject, how we migrated 91 clusters to Google, including what problems we had
along the way. It would be worth you attending that session if you
I have always found Amy's Cassandra 2.1 tuning guide great for the Linux
performance tuning:
https://tobert.github.io/pages/als-cassandra-21-tuning-guide.html
Sent from my iPhone
> On 26 Jul 2019, at 23:49, Krish Donald wrote:
>
> Any one has Cheat Sheet for Unix based OS, Performance
Hi Voytek,
I looked into this a little while ago, and couldn’t really find a definitive
answer. We ended up keeping the GossipingPropertyFileSnitch in our GCP
datacenter; the only downside that I could see is that you have to manually
specify the rack and DC. But doing it that way does allow
Hi Shalom,
When tracking down specific queries I have used ngrep and fed the results into
Wireshark; this will allow you to find out everything about the requests coming
into the node from the client, as long as the connection is not encrypted.
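A rough sketch of the capture side (the interface name and output path are just
examples):

  # Capture native-protocol traffic on the CQL port and save it for Wireshark
  sudo ngrep -d eth0 -O /tmp/cql.pcap '' port 9042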
I wrote this up here a few months ago:
Hi Adarsh,
You will have problems if you manually delete data when using TWCS.
To fully understand why, I recommend reading this post from The Last Pickle:
https://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
And this post I wrote that dives deeper into the problems with deletes:
We had what sounds like a similar problem with a DSE cluster a little while
ago. It was not being used, and had no tables in it. The memory kept rising
until it was killed by the oom-killer.
We spent a long time trying to get to the bottom of the problem, but it suddenly
stopped when the
Hi all,
I have looked at the release notes for the upcoming 3.11.6 release and seen
the part about corruption of frozen UDT types during upgrade from 3.0.
We have a number of clusters using UDTs and have been upgrading to 3.11.4 and
haven’t noticed any problems.
In the ticket (CASSANDRA-15035
Hi Behroz,
It looks like the number of tables is the problem: with 5,000 to 10,000 tables
you are way above the recommendations.
Take a look here:
https://docs.datastax.com/en/dse-planning/doc/planning/planningAntiPatterns.html#planningAntiPatterns__AntiPatTooManyTables
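As a quick sanity check, something like this counts how many tables the cluster
has defined (it includes the system keyspaces in the total):

  cqlsh -e "SELECT COUNT(*) FROM system_schema.tables;"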
ders are fine, the scrub will be a no-op. Otherwise, it will report
> that new metadata files are being written. For more details, see
> https://support.datastax.com/hc/en-us/articles/360025955351. Cheers!
>
> Erick
Thanks Erick, it looks like I have a bit of detective work to do on Monday to
work out which of my clusters started out as 2.* or DSE 4.* and whether
they had UDTs at that time.
> On 15 Feb 2020, at 00:50, Erick Ramirez wrote:
>
> I am still having problems reproducing this, so I am
3.0/cassandra/configuration/configLoggingLevels.html>
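If it is a single noisy logger, the level can also be changed at runtime without
touching logback.xml; a sketch (the logger name here is only an illustration):

  # Silence one chatty logger without turning off debug logging globally
  nodetool setlogginglevel org.apache.cassandra.hints INFO
  # and put it back later if needed
  nodetool setlogginglevel org.apache.cassandra.hints DEBUG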
Thanks
Paul Chandler
www.redshots.com
> On 10 Mar 2020, at 08:56, Gil Ganz wrote:
>
> That's one option, I wish there was a way to disable just that and not the
> entire debug log level, there are some things there I would like to
Hi all,
Is there a way to stop a nodetool move that is currently in progress?
It is not moving the data between the nodes as expected and I would like to
stop it before it completes.
Thank you
Paul
Hi all,
Can anyone recommend a tool to perform schema DDL upgrades that follows best
practice to ensure you don’t get schema mismatches when running multiple
upgrade statements in one migration?
Thanks
Paul
>- [ ] https://github.com/Cobliteam/cassandra-migrate
>- [ ] https://github.com/patka/cassandra-migration
>- [ ] https://github.com/comeara/pillar
>
> On Thu, Oct 8, 2020 at 5:45 PM Paul
Hi Manu,
nodetool uses the JMX user and password; I think the normal default is that
they are not required, but I am not sure if that is the case for the setup you
are using. So just try nodetool flush and see if that works.
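If JMX authentication does turn out to be enabled, nodetool accepts the
credentials on the command line, e.g. (the username and password here are
placeholders):

  nodetool -u jmx_user -pw 'jmx_password' flush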
Regards
Paul
Sent from my iPhone
> On 1 Jan 2021, at 20:41,
:14, Manu Chadha wrote:
>
> Just nodetool doesn't work unfortunately
>
> Sent from my iPhone
>
>>> On 1 Jan 2021, at 21:28, Paul Chandler wrote:
>>>
>> Hi Manu,
>>
>> nodetool uses the JMX user and password, I think the normal default f
> Not sure if apt has some way to force install/ignore dependencies, however if
> you do that it may work, otherwise your only workaround would be to install
> from the tarball.
>
> raft.so - Cassandra consulting, support, and managed services
>
qlsh#L65>
All our clusters are currently on Ubuntu 16.04, which does not come with Python
3.6, so this is going to be a major pain to upgrade them to 4.0.
Does the apt packaging really need to specify 3.6?
Thanks
Paul Chandler
Hi Joe,
This could also be caused by the replication settings of the keyspace: if you
use NetworkTopologyStrategy and it doesn’t list a replication factor for the
datacenter datacenter1, then you will get this error message too.
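If that is the cause, one way to address it would be something like this
(the keyspace name and replication factor are placeholders), followed by a
repair so the new replicas get the data:

  cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
    {'class': 'NetworkTopologyStrategy', 'datacenter1': 3};"
  nodetool repair my_ks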
Paul
> On 12 Mar 2021, at 13:07, Erick Ramirez wrote:
>
> Does
querying, doing updates and observing
> the effect.
>
> raft.so - Cassandra consulting, support, managed services
>
> On Sat., 20 Feb. 2021, 02:29 Paul Chandler wrote:
> All,
>
> We have a use case where we need to change the datacente
All,
We have a use case where we need to change the datacenter name for a Cassandra
cluster; we have a script to do this that involves a short downtime. This does
the following:
1) Change the replication factor for the system keyspaces to be
{ 'OLD_DC' : '3', 'NEW_DC' : '3' }
2) Change the dc
> On 21 Feb 2021, at 22:30, Kane Wilson wrote:
>
> Make sure you test it on a practice cluster. Messing with the system tables
> is risky business!
>
> raft.so - Cassandra consulting, support, and managed services
>
>
> On Sun, Fe
Hi Michael,
I have had similar problems in the past, and found this Last Pickle post very
useful: https://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
This should help you pinpoint what is stopping the SSTables from being deleted.
Assuming you are never manually deleting records from the
I wrote a blog post describing how to do this a few years ago:
http://www.redshots.com/who-is-connecting-to-a-cassandra-cluster/
Sent from my iPhone
> On 19 Nov 2021, at 18:13, Saha, Sushanta K
> wrote:
>
>
> I need to shutdown an old Apache Cassandra server for good. Running 3.0.x.
>
Hi all,
We keep having a problem with hint files on one of our Cassandra nodes
(v3.11.6); the following error messages keep being repeated for the same file.
INFO [HintsDispatcher:25] 2021-11-02 08:55:29,830
HintsDispatchExecutor.java:289 - Finished hinted handoff of file
rg/jira/projects/CASSANDRA/issues/>
>
> It would be helpful to provide
> 1. The version of the cassandra
> 2. The options used for snapshotting
>
> - Yifan
>
> On Tue, Mar 22, 2022 at 9:41 AM Paul Chandler wrote:
> Hi all,
>
> Was
> I do not think there is a ticket already. Feel free to create one.
> https://issues.apache.org/jira/projects/CASSANDRA/issues/
>
> It would be helpful to provide
> 1. The version of the cassandra
> 2
Hi all,
Was there any further progress made on this? Did a Jira get created?
I have been debugging our backup scripts and seem to have found the same
problem.
As far as I can work out so far, it seems that this happens when a new snapshot
is created and the old snapshot is being tarred.
I
Thanks Erick and Bowen
I do find all the different parameters for repairs confusing, and even reading
up on it now I see DataStax warns against incremental repairs with -pr, but
the code here seems to negate the need for this warning.
Anyway, running it like this produces data in the
commit logs if this is the case.
>
> However, if you don't find any logs related to replaying commit logs, the
> cause may be completely different.
>
>
> On 19/01/2022 11:54, Paul Chandler wrote:
>> Hi all,
>>
>> We have upgraded a couple of clusters fr
of the nodes is hanging
again.
Does anyone have any ideas what is causing the problems?
Thanks
Paul Chandler
>> find . -name '*Data*' | while read datf; do echo $datf ; sudo -u
>> cassandra sstablemetadata $datf; done >> ~/sstablemetadata.txt
>> cqlsh -e "paging off; select * from system.repairs" >> ~/repairs.out
>>
>> $ egrep 'Repaired at: 1' sstable
t deleted during the clean-up for a number
> of reasons.
>
> On 24/01/2022 19:45, Paul Chandler wrote:
>> Hi Bowen,
>>
>> Yes, there does seem to be a lot of rows; on one of the upgraded clusters
>> there are 75,000 rows.
>>
>> I have been experimenting
Could it be that the email address is in the user group? If so, your email
could have triggered that automatic response; however, I have not received
anything after my recent emails.
There was an email on the dev list on 14/4/2020 that said the following, so
they do seem to have had an interest in
usters.
>
> On 26/01/2022 10:19, Paul Chandler wrote:
>> I changed the range repair to be full repair, reset the repairedAt for
>> all SSTables and deleted the old data out of the system.repairs table.
>>
>> This then did not create any new rows in the system.
may need to reset the "repairedAt" value in all SSTables using
> the "sstablerepairedset" tool if you decide to move on to use subrange full
> repairs.
>
>
>
> On 25/01/2022 12:39, Paul Chandler wrote:
>> Hi Bowen,
>>
>> Yes there are
, but see if that ticket applies to your
experience.
Thanks
Paul Chandler
> On 21 Jul 2022, at 15:12, pwozniak wrote:
>
> Yes, I did it. Nothing like this in my code. Consistency level is set only in
> one place (shown below).
>
>
>
> On 7/21/22 4:08 PM, manish khandelw
Hi Lapo
Take a look at TWCS, I think that could help your use case:
https://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
Regards
Paul Chandler
Sent from my iPhone
> On 29 Dec 2022, at 08:55, Lapo Luchini wrote:
>
> Hi, I have a table which gets (a lot of) data that is wri
, are
there any other issues to look out for?
Thanks
Paul Chandler
rolling upgrade, it's the number of DCs and racks that
> matter.
>
> Cheers,
> Bowen
>
> On 24/04/2024 16:16, Paul Chandler wrote:
>> Hi all,
>>
>> We have some large clusters ( 1000+ nodes ), these are across multiple
>> datacenters.
>>
>> Wh