Hi all,
We are seeing ~200 ms of latency between the API server and the DB server when
establishing a connection.
We are running Apache Cassandra 4.0.7 and OpenJDK 11.0.17. We are
using PHP on the API side and connecting with the PHP Cassandra driver (C/C++
driver version 2.7) using the string below.
$cluster =
Isn't there a very big (>40GB) sstable in /volumes/cassandra/data/data1? If
there is, you could split it or change your data model to prevent such sstables.
Forwarded message
From: Loïc CHANEL via user
To:
Date: Fri, 06
Another solution: distribute the data across more tables. For example, you could
create multiple tables based on the value or hash bucket of one of the columns;
that way the current data volume and compaction overhead would be divided across
the number of underlying tables. Although there is a limitation for
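As a sketch of the hash-bucket idea described above (the base table name, key column, and bucket count are illustrative assumptions, not from the thread):

```python
import hashlib

# Route rows across N physical tables by hashing one column, so data
# volume and compaction overhead are split across the underlying tables.

def bucket_table(base_name: str, key: str, buckets: int = 8) -> str:
    """Pick the physical table for a row from a stable hash of `key`."""
    # md5 keeps the mapping stable across processes and restarts, unlike
    # Python's built-in hash(), which is salted per process.
    digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return f"{base_name}_{digest % buckets}"
```

The same row always routes to the same table, so reads can compute the target table name from the key without a lookup.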
Hi team,
Does anyone know how to even out the data between several data disks?
Another approach could be to prevent Cassandra from writing on a 90% full
disk, but is there a way to do that ?
Thanks,
Loïc CHANEL
System Big Data engineer
SoftAtHome (Lyon, France)
On Mon, Dec 19, 2022 at 11:07, Loïc
Hi,
I'm not part of the team, I reply as a fellow user.
Columns which are part of the PRIMARY KEY are always indexed and used to
optimize the query, but it also depends on how the partition key is defined.
Details here in the docs:
Hello Team,
Here is a simple question: whenever a SELECT query is run with clustering
columns in the WHERE clause, is the entire partition read from disk into
memory and then iterated over to fetch the required result set?
Or are there indexes in place which help read only
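For intuition: rows inside a partition are stored sorted by the clustering columns, so a WHERE clause on clustering columns reads a contiguous slice rather than iterating the whole partition. A hedged sketch with a hypothetical table:

```cql
-- Hypothetical table: within each sensor_id partition, rows are kept
-- sorted by reading_time (the clustering column).
CREATE TABLE readings (
    sensor_id    text,
    reading_time timestamp,
    value        double,
    PRIMARY KEY (sensor_id, reading_time)
);

-- This restricts the clustering column, so Cassandra can seek to the
-- start of the range and read only the matching slice of the partition.
SELECT value FROM readings
 WHERE sensor_id = 's1'
   AND reading_time >= '2022-12-01' AND reading_time < '2022-12-02';
```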
Yes, clean-up will reduce the disk space on the existing nodes by re-writing
only the data that the node now owns into new sstables.
Sean R. Durity
DB Solutions
Staff Systems Engineer – Cassandra
From: Lapo Luchini
Sent: Friday, December 30, 2022 4:12 AM
To: user@cassandra.apache.org
Subject:
On 2022-12-29 21:54, Durity, Sean R via user wrote:
At some point you will end up with large sstables (like 1 TB) that won’t
compact because there are not 4 similar-sized ones able to be compacted
Yes, that's exactly what's happening.
I'll see maybe just one more compaction, since the
If there isn’t a TTL and timestamp on the data, I’m not sure of the benefits of
TWCS for this use case. I would stick with size-tiered. At some point you will
end up with large sstables (like 1 TB) that won’t compact because there are not
4 similar-sized ones able to be compacted (assuming default
Hi Lapo
Take a look at TWCS, I think that could help your use case:
https://thelastpickle.com/blog/2016/12/08/TWCS-part1.html
Regards
Paul Chandler
> On 29 Dec 2022, at 08:55, Lapo Luchini wrote:
>
> Hi, I have a table which gets (a lot of) data that is written once
Hi, I have a table which gets (a lot of) data that is written once and
very rarely read (it is used for data that is mandatory for regulatory
reasons), and almost never deleted.
I'm using the default STCS as at the time I didn't know any better, but
SSTable sizes are getting huge, which is a
Hi Deepti
I think you can reach out to
https://groups.google.com/a/lists.datastax.com/g/cpp-driver-user.
Regards
Manish
On Fri, Dec 23, 2022 at 12:52 PM Deepti Sharma S via user <
user@cassandra.apache.org> wrote:
> Hello Team,
>
> Could you please help in answering the below query.
Hello Team,
Could you please help in answering the below query.
Regards,
Deepti Sharma
PMP(r) & ITIL
From: Deepti Sharma S via user
Sent: 20 December 2022 18:39
To: user@cassandra.apache.org
Cc: Nandita Singh S
Subject: Query for Cassandra Driver
Hello Team,
We have an Application following
Hey Amit,
I’ve tried now with Cassandra built from source and running on my laptop, and
it behaves as expected (I can toggle ciphers on 1.3), so it must be something
wrong in my container setup.
Thanks for the help there! Sorry for the noise
Jackson
From: Amit Patel
Ah, that makes sense. I am not using containers; all I did was restrict TLS at
the Java level and configure the ciphers in cassandra.yaml (to support only TLS 1.3).
Here are my logs:
INFO [main] 2022-12-20 16:04:46,246 SSLFactory.java:521 - Internode messaging
enabled TLS protocols: TLSv1.2, TLSv1.3
Thanks Amit,
Ah ha, I was testing something similar before on Cassandra 3.11, which had those
JRE settings; I'd falsely assumed that those settings were present in my
Cassandra 4 environment. I've added the following to my JRE runtime security
file.
jdk.certpath.disabledAlgorithms=MD2, MD5,
Hello Team,
We have an application following the C++98 standard, compiled with gcc version
7.5.0 on SUSE Linux.
We are currently using the DataStax C/C++ Driver (version 2.6) and it's working
fine with the application (C++98).
Now we have a requirement to update the DataStax C/C++ Driver to the latest
version, 2.16.
Hi Jackson,
I have faced a similar issue: even if we configure ciphers for TLS 1.3, I
couldn't control the ciphers, and TLS 1.0 and TLS 1.1 were still appearing in
the scan. I had to restrict (secure) it at the Java security level.
There are two solutions for this:
- The first would be configuring the cipher_suites parameter of
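The first option presumably refers to the encryption options in cassandra.yaml; a hedged sketch (suite names are examples only, and exact option names can vary by version):

```yaml
client_encryption_options:
  enabled: true
  protocol: TLSv1.2
  cipher_suites:
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384   # TLS 1.2 suite (example)
    - TLS_AES_256_GCM_SHA384                  # TLS 1.3 suite (example)
```

As the rest of the thread observes, TLS 1.3 cipher selection may still need to be restricted at the Java security level.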
Hi All,
I’ve hit some trouble recently around restricting ciphers for clients on a test
Cassandra 4.0.4 cluster; we’d like to be able to control the ciphers offered
by Cassandra for both TLS 1.2 and 1.3. I was wondering if anyone has had any
luck with getting my particular use case to work.
You're right, I definitely missed that the structure was not
/volumes/cassandra/data/testkeyspace/test_table-
258a3400999211e98ee681105b53681d/.test_table/ but actually
/volumes/cassandra/data/testkeyspace/test_table-
258a3400999211e98ee681105b53681d/.test_index/
Thanks a lot Stefan !
Loïc
Hi team,
Small question about hidden folders. I understood that Cassandra data is
stored in the following directory organization:
<data_directory>/<keyspace>/<table>-<table_id>/, but I noticed hidden folders
in some of the table directories. For example:
Hi team,
I had a disk space issue on a Cassandra server, and I noticed that the data
was not evenly shared between my 15 disks.
Here is the distribution:
/dev/vde1   99G   89G   4.7G   96%   /volumes/cassandra/data/data1
/dev/vdd1   99G   51G    44G   54%   /volumes/cassandra/data/data2
If multiple things are dying under load, you'll want to check "dmesg" and
see if the oom-killer is getting triggered. Something like "atop" can be
good for figuring out what was using all of the memory when it was
triggered if the kernel logs don't have enough info.
On Thu, Dec 15, 2022 at 12:41
3.11.x versions will be maintained till May July 2023. Please refer
https://cassandra.apache.org/_/download.html
On Thu, Dec 15, 2022, 20:55 Pranav Kumar (EXT) via user <
user@cassandra.apache.org> wrote:
> Hi Team,
>
> Could you please help us to know when version 3.11.13 is going to be
Hi Team,
Could you please help us to know when version 3.11.13 is going to be EOS? Until
when will we get fixes for version 3.11.13?
Regards,
Pranav
Update: It may be that the load on these hosts is causing problems for SSSD not
the other way around. In any case, it seems that both services are off at the
same time.
From: Marc Hoppins
Sent: Wednesday, December 14, 2022 10:59 AM
To: user@cassandra.apache.org
Subject: SSSD and Cassandra
Hi all,
If SSSD stops responding to requests/listening, is this going to cause the
Cassandra service to shut down? I didn't see anything to indicate such
behaviour in the config, only for disk issues.
I had two hosts where SSSD was not accepting logins and, after restarting that
service and
The Cassandra team is pleased to announce the GA release of Apache
Cassandra version 4.1.0.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of
This looks like https://issues.apache.org/jira/browse/CASSANDRA-17273
IIRC you can merge the two files, making sure all ADD and REMOVE records are
in both files; I think you would need to add
Hi all,
Is there a config setting to log only the INFO line itself and omit the remaining
java/netty items? These are repeated every 30 seconds, which creates
unnecessary spam in the system log. Despite having logback configured at INFO
level, these extra items keep appearing.
INFO
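One option, assuming the repeated items come from library loggers (e.g. netty) rather than Cassandra's own classes, is to raise the level for just those packages in logback.xml; the logger name here is an assumption:

```xml
<!-- Hypothetical: keep Cassandra at INFO but quiet repeated library chatter -->
<logger name="io.netty" level="WARN"/>
```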
Hi, all,
We had a failed HDD on one node. The node was shut down pending repair. There
are now 4 other nodes with Cassandra not running and unable to startup due to
the following kinds of error. Is this kind of thing due to the original
stopped node?
ERROR [main] 2022-12-12 14:58:10,838
Resolved the issue:
The issue was the Java security configuration, which we had set to allow only
TLS 1.2 and above. Had to change it as below.
# cat java.security |grep TLS
#jdk.tls.disabledAlgorithms=SSLv3, RC4, DH keySize, MD5withRSA < 2048, TLSv1,
TLSv1.1
jdk.tls.disabledAlgorithms=SSLv3, DSA, RSA
I found a link to the same issue, but I'm not sure it is a Guava library
classpath issue in my case. Does anyone have suggestions?
Misleading error message in YamlConfigurationLoader.loadConfig(): "Invalid
yaml" · Issue #334 · jsevellec/cassandra-unit ·
java -version
openjdk version "1.8.0_352"
OpenJDK Runtime Environment (Temurin)(build 1.8.0_352-b08)
OpenJDK 64-Bit Server VM (Temurin)(build 25.352-b08, mixed mode)
From: Jeff Jirsa
Sent: 08 December 2022 17:42
To: user@cassandra.apache.org; Amit Patel
Subject: Re: Cassandra 4.0.7 - issue -
What version of java are you using?
On Thu, Dec 8, 2022 at 8:07 AM Amit Patel via user <
user@cassandra.apache.org> wrote:
> Hi,
>
> I have installed cassandra-4.0.7-1.noarch - repo ( baseurl=
> https://redhat.cassandra.apache.org/40x/noboolean/) on Redhat 7.9.
>
> We have configured
Even with the default installation and config files (I have not changed
anything) it's the same issue; the cassandra service does not start.
cat cassandra.log
CompilerOracle: dontinline
org/apache/cassandra/db/Columns$Serializer.deserializeLargeSubset
I have seen this when there is a tab character in the yaml file. Yaml is (too)
picky on these things.
Sean R. Durity
DB Solutions
Staff Systems Engineer – Cassandra
From: Amit Patel via user
Sent: Thursday, December 8, 2022 11:38 AM
To: Arvydas Jonusonis ; user@cassandra.apache.org
Subject:
Hi Arvydas,
CompilerOracle: dontinline
org/apache/cassandra/db/Columns$Serializer.deserializeLargeSubset (Lorg/apac
he/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/Columns;I)Lorg/apache/cassandra/db/Columns;
CompilerOracle: dontinline
Amit,
Would you be able to provide the full stacktrace?
Arvydas
On Thu, Dec 8, 2022 at 8:07 AM Amit Patel via user <
user@cassandra.apache.org> wrote:
> Hi,
>
> I have installed cassandra-4.0.7-1.noarch - repo ( baseurl=
> https://redhat.cassandra.apache.org/40x/noboolean/) on Redhat
Hi,
I have installed cassandra-4.0.7-1.noarch - repo (
baseurl=https://redhat.cassandra.apache.org/40x/noboolean/) on Redhat 7.9.
We have configured the below properties in cassandra.yaml
Basic parameters configured in /etc/cassandra/conf/cassandra.yaml:
cluster_name: 'CDBCluster'
On 2022-12-06 14:21, Gábor Auth wrote:
No! Just start it and the other nodes in the cluster will acknowledge
the new IP; they recognize the node by its id, stored in the data folder of
the node.
Thanks Gábor and Erick!
It worked flawlessly.
--
Lapo Luchini
l...@lapo.it
Hi,
On Tue, Dec 6, 2022 at 12:41 PM Lapo Luchini wrote:
> I'm trying to change IP address of an existing live node (possibly
> without deleting data and streaming terabytes all over again) following
> these steps:
https://stackoverflow.com/a/57455035/166524
> 1. echo 'auto_bootstrap: false' >>
If (a) the node is part of the cluster, and (b) is running and operational,
then (c) the cluster will recognise that the node has a new IP when you
restart the node and there's nothing to do on the C* side.
A new IP will be handled by C* automatically. Think of situations where a
node experiences
Hi all,
I'm trying to change IP address of an existing live node (possibly
without deleting data and streaming terabytes all over again) following
these steps:
https://stackoverflow.com/a/57455035/166524
1. echo 'auto_bootstrap: false' >> cassandra.yaml
2. add
Great question!
First, a Cassandra Summit without Sean Durity just wouldn't feel the same!
As for your question, all good. Apache Cassandra and its many forms are
what we are looking for. Commercial builds, aaS, and even private forks.
This year, we are expanding to bring in ecosystem tools that
Does it need to be strictly Apache Cassandra? Or is something built on/working
with DataStax Enterprise allowed? I would think if it doesn’t depend on
DSE-only technology, it could still apply to a general Cassandra audience.
Sean R. Durity
From: Patrick McFadin
Sent: Tuesday, November 29,
To come over the top on this, speaking can be great for your career and
company. And Patrick will help you find a great topic. And you only have to
deal with him for 15min, which is _mostly_ doable ;p
If you need help getting internal approvals - communications or potentially
even budget -, we
Hi everyone,
An update on the current CFP process for Cassandra Summit. There are currently
23 talk submissions, which is far behind what we need. Two days of tracks means
we need 60 approved talks. Ideally, we need over 100 submitted to ensure we
have a good pool of quality talks. We already have
The Cassandra team is pleased to announce the release of Apache Cassandra
version 4.1-rc1.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source
Hello Cassandra Community!
Hopefully, you’ve seen the news that we are having a Cassandra Summit on
March 13, 2023. It’s been years since we have done something this big in
the community. We’re all a little out of practice. In an open source
community like ours, one of the most important things
Not that simple. By making a node listen on both IPv4 and IPv6, they
will accept connections from both, but other nodes will still only
trying to connect to this node on the address it is broadcasting. That
means if a node's broadcasting a IPv4 address, then all other nodes in
the cluster must
So basically listen_address=:: (which should accept both IPv4 and IPv6)
is fine, as long as broadcast_address reports the same single IPv4
address that the node always reported previously?
The presence of broadcast_address removes the "different nodes in the
cluster pick different addresses
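For reference, the setup being discussed would look roughly like this in cassandra.yaml (addresses are placeholders, and whether a wildcard listen_address is accepted can vary by version, so treat this as a sketch):

```yaml
# Listen on both stacks, but keep advertising the existing IPv4 address
listen_address: "::"              # dual-stack wildcard (per the discussion above)
broadcast_address: 192.0.2.10     # the single IPv4 address the node always reported
```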
I would expect that you'll need NAT64 in order to have a cluster with
mixed nodes between IPv6-only servers and dual-stack servers that's
broadcasting their IPv4 addresses. Once all IPv4-broadcasting dual-stack
nodes are replaced with nodes either IPv6-only or dual-stack but
broadcasting IPv6
0.9 was never a seed before.
Based on your comment, I also tried, from having all three nodes up
(following the initial bootstrap), restarting 0.7. This failed with the
same error.
On 2022/11/09 15:37:24 Jeff Jirsa wrote:
> When you say you configured them to talk to .0.31 as a seed, did
Hi All,
We are planning to upgrade the operating system from RHEL 7.9 to RHEL 8.6.
Please share the compatibility of Cassandra 3.11 and 4.x with these RHEL
versions, if anyone knows.
Regards
Ranju
I have a (3.11) cluster running on IPv4 addresses on a set of dual-stack
servers; I'd like to add a new IPv6-only server to the cluster… is it
possible to have the dual-stack ones answer on IPv6 addresses as well
(while keeping the single IPv4 address as broadcast_address, I guess)?
This
Hi,
DataStax Cassandra 4.14 is actually the driver's version, almost the latest:
https://mvnrepository.com/artifact/com.datastax.oss/java-driver-core
It would be useful to know which version of Cassandra you are using, even if I
would be surprised if it is actually the cause of your error.
As it
When you say you configured them to talk to .0.31 as a seed, did you do
that by changing the yaml?
Was 0.9 ever a seed before?
I expect if you start 0.7 and 0.9 at the same time, it all works. This
looks like a logic/state bug that needs to be fixed, though.
(If you're going to upgrade, usually
From the subject, this looks like a client-side timeout (thrown by the
driver). I have seen situations where the client/driver timeout of 2 seconds
is a shorter timeout than on the server side (10 seconds). So, the server
doesn’t really note any problem. Unless this is a very remote client
This is a mailing list for Apache Cassandra, and that's not the same
as the DataStax Enterprise Cassandra you are using. We may still be able to
help here if you could provide more details, such as the queries, table
schema, system stats (cpu, ram, disk io, network, and so on), logs,
table
You should take a snapshot before starting the upgrade process. You
cannot achieve a snapshot of "the most current situation" in a live
cluster anyway, as data are constantly written to the cluster even after
a node is stopped for upgrading. So you've got to accept the outdated
snapshots if
Hi All,
My application has been frequently getting timeout errors for 2 weeks now. I'm
using DataStax Cassandra 4.14.
Can someone help me here?
Thanks,
Shagun
Thanks for the tip Eric. We're actually on 3.2 and the issue isn't with the
Reaper. The issue is with Cassandra. It will report that a table has
pending compactions, but it will never actually start compacting. The
pending number stays at that level until we run a manual compaction.
-richard
On
We had issues where Reaper would never actually start some repairs. The GUI
would say RUNNING but the progress would be 0/.
Datastax support said there is a bug and recommended upgrading to 3.2.
Upgrading Reaper to 3.2 resolved our issue.
Hope this helps.
Eric
From: Richard Hesse
Sent:
Hi all,
On a test setup I am looking to do an upgrade from 4.0.3 to 4.0.6.
Would one typically snapshot before DRAIN or after?
If DRAIN after snapshot, I would have to restart the service to snapshot and
would this not then be accepting new operations/data?
If DRAIN before snapshot, would
Hello everyone,
Sorry for not responding earlier. The GC observed was indeed a symptom. The CPU
spike and the slow Cassandra node responses were due to a massive number of
client process connections. Most probably, this caused the GC as well.
The guides shared have a lot of interesting points, though
Ah.
Version is 4.0.3
nodetool snapshot
From research, ‘nodetool snapshot’ will snapshot all keyspaces by default. So,
as I want to update to a new version, I assume this is what I want. Nodetool
snapshot produces the output as previously posted. None of it makes any sense
to me, especially
Can you help us out by providing more details? When asking questions, it's
always a good idea to include background info such as versions and steps to
replicate the issue. Cheers!
They are really in mebibytes (MiB). In the upcoming release of Cassandra,
the configuration is getting standardised to KiB, MiB, etc, to remove
ambiguity (CASSANDRA-15234 [1]). For more info, see Ekaterina
Dimitrova's blog post [2]. Cheers!
[1]
Hi all,
This is a test setup, so it has been quickly configured with a nominal amount
of test data. Initially, I was getting "Malformed IPv6 address at index 7"
errors, so I appended "-Dcom.sun.jndi.rmiURLParsing=legacy", which removed that.
However, snapshots are not being performed. Is it
Hi, all,
The config has data limits described as KB, MB, etc. Are these KB MB or KiB
MiB? (curses to the lazy modern age for forcing a change) Nodetool status
reports TiB. I assume these are all really base2 numbers but am just seeking to
clarify.
Eg.,
# Default value ("auto") is 1/256th
Sorry about that. 4.0.6
On Sun, Oct 30, 2022, 11:19 AM Dinesh Joshi wrote:
> It would be helpful if you could tell us what version of Cassandra you’re
> using?
>
> Dinesh
>
> > On Oct 30, 2022, at 10:07 AM, Richard Hesse wrote:
> >
> >
> > Hi, I'm hoping to get some help with a vexing issue
It would be helpful if you could tell us what version of Cassandra you’re using?
Dinesh
> On Oct 30, 2022, at 10:07 AM, Richard Hesse wrote:
>
>
> Hi, I'm hoping to get some help with a vexing issue with one of our
> keyspaces. During Reaper repair sessions, one keyspace will end up with
>
Hi, I'm hoping to get some help with a vexing issue with one of our
keyspaces. During Reaper repair sessions, one keyspace will end up with
hanging, non-started compactions. That is, the number of compactions as
reported by nodetool compactionstats stays flat and there are no running
compactions.
Calling all developers!
The Apache Cassandra community invites you to join an action-packed day of
superhero events held simultaneously across 3 cities on November 10 — Santa
Clara CA, Bellevue WA and Houston TX!
Event info
WORKSHOP - Attend in-person
and yes, you need to set the consistency level to ONE in the cassandra.yaml
if it's running in your local machine
denylist_consistency_level: ONE
On Tue, Oct 25, 2022 at 10:41 AM Cheng Wang wrote:
> Awesome! That's great to hear!
> Pls feel free to let me know if you have any questions!
>
>
Awesome! That's great to hear!
Pls feel free to let me know if you have any questions!
Thanks,
Cheng
On Tue, Oct 25, 2022 at 10:36 AM Aaron Ploetz wrote:
> Works!
>
> So I was running on my *local*, and all of my attempts to add to the
> denylist were failing because the
Works!
So I was running on my *local*, and all of my attempts to add to the
denylist were failing because the denylist_consistency_level was set to
QUORUM:
WARN [main] 2022-10-25 11:57:27,238 NoSpamLogger.java:108 - Attempting to
load denylist and not enough nodes are available for a QUORUM
Sequentially, and yes - for some definition of "directly" - but not just
because it's sequential, but also because each sstable has cost in reading
(e.g. JVM garbage created when you open/seek that has to be collected after
the read)
On Tue, Oct 25, 2022 at 8:27 AM Grzegorz Pietrusza
wrote:
>
Hi all,
I can't find any information about how Cassandra handles reads involving
multiple sstables. Are sstables read concurrently or sequentially? Is read
latency directly connected to the number of open sstables?
Regards
Grzegorz
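The sequential, merge-style read described earlier in the thread can be sketched as a k-way merge over sorted per-sstable runs (an illustration of the idea, not Cassandra's actual read path):

```python
import heapq

# Each "sstable" is modeled as a run of (key, value) pairs sorted by key.
# A read merges the runs into one stream; cost grows with the number of
# runs, since each contributes an iterator to the merge.

def merged_read(*sstables):
    """Merge sorted runs; earlier arguments win on duplicate keys
    (i.e. list the newest sstable first)."""
    seen = set()
    for key, value in heapq.merge(*sstables, key=lambda kv: kv[0]):
        if key not in seen:
            seen.add(key)
            yield key, value
```

heapq.merge is stable for equal keys, so listing the newest run first makes its value shadow older ones, loosely mirroring last-write-wins.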
Thanks Erick, indeed with curl and the redirect flag I can see the file
there.
On Mon, Oct 24, 2022 at 8:21 AM Erick Ramirez
wrote:
> redhat.cassandra.apache.org/40x/ redirects to
> apache.jfrog.io/artifactory/cassandra-rpm/40x/. When I curl it on the
> command line, I can see that the
redhat.cassandra.apache.org/40x/ redirects to
apache.jfrog.io/artifactory/cassandra-rpm/40x/. When I curl it on the
command line, I can see that the cassandra-tools package for 4.0.6 is
there. Cheers!
cassandra-4.0.6-1.noarch.rpm
25-Aug-2022 09:05 45.43 MB
cassandra-4.0.6-1.src.rpm
Hey,
It seems that following the release of 4.0.7 a few hours ago, the repo
settings were changed a bit.
Where can one download the 4.0.6 cassandra-tools rpm from?
It's not in https://apache.jfrog.io/ui/native/cassandra-rpm/40x/ and
https://redhat.cassandra.apache.org/40x/ points to a login page with
The Cassandra team is pleased to announce the release of Apache Cassandra
version 4.0.7.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source
The Cassandra team is pleased to announce the release of Apache Cassandra
version 3.11.14.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source
The Cassandra team is pleased to announce the release of Apache Cassandra
version 3.0.28.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source
Awesome. Thank you, Cheng! I’ll give this a shot and let you know.
Thanks,
Aaron
> On Oct 21, 2022, at 12:45 AM, Cheng Wang wrote:
>
>
> Hi Aaron,
>
> After reading through the code, I finally figured out the issue. So back to
> your original question where you failed to run
> $>run
Hi Aaron,
After reading through the code, I finally figured out the issue. So back to
your original question where you failed to run
$>run denylistKey stackoverflow weather_sensor_data "'Minneapolis,
MN',202210"
#IllegalArgumentException: Operation denylistKey with 4 parameters doesn't
exist in
No worries, Cheng!
So I actually pivoted a little and adjusted my example table to use a
single integer-based partition key.
aaron@cqlsh:stackoverflow> SELECT ks_name, table_name, blobAsint(key) FROM
system_distributed.partition_denylist WHERE ks_name='stackoverflow' AND
Hi Aaron,
Sorry for the late reply, was dealing with a production issue (maybe
another topic for Cassandra Summit :-)). Are you running on your local
machine? Then yes, you do need to enable the config for all the following
enable_partition_denylist: true
enable_denylist_writes: true
Just checking, but for this to work, do I have to mess with these settings
in the YAML at all?
partition_denylist_enabled: true
denylist_reads_enabled: true
They're commented out by default.
Thanks,
Aaron
On Mon, Oct 17, 2022 at 4:53 PM Aaron Ploetz wrote:
> Thanks for the help with the
Please read
https://docs.datastax.com/en/upgrading/docs/datastax_enterprise/upgrdCstarToDSE.html#_general_restrictions
The document is written for DSE Cassandra, but most of it applies to
Apache Cassandra too.
In short, watch out for these:
Client side:
* Check client driver
Hi all,
What (if any) problems could we expect from an upgrade?
I.e., if we have 12 nodes and I upgrade them one at a time, some will be on the
new version and others on the old.
Assuming that daily operations continue during this process, could problems
occur with streaming replicas from one
Thanks for the help with the INSERT, Cheng! I'm further along than
before. But it still must not be matching up quite right, because I can
still select that partition.
I have several different combinations of the two keys (and I removed the
space) of "Minneapolis,MN" and 202210. Here's what
Another approach is, instead of using $$, you can put additional pair of
single quote around the 'Minneapolis, MN'
cqlsh> insert into system_distributed.partition_denylist (ks_name,
table_name, key) values ('stackoverflow', 'weather_sensor_data',
textAsBlob('''Minneapolis, MN'', 202210'));
Hi Aaron,
Yes, you can directly insert into the system_distributed.partition_denylist
instead of using JMX. Jordan wrote a blog post for denylist
https://cassandra.apache.org/_/blog/Apache-Cassandra-4.1-Denylisting-Partitions.html
And the syntax error, one way around is to put $$ around like
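The dollar-quoting example appears to have been cut off; based on the single-quote variant quoted elsewhere in the thread, it presumably looks like this (a sketch, not verified):

```cql
INSERT INTO system_distributed.partition_denylist (ks_name, table_name, key)
VALUES ('stackoverflow', 'weather_sensor_data',
        textAsBlob($$'Minneapolis, MN', 202210$$));
```

Inside `$$ ... $$` the embedded single quotes need no doubling, which avoids the syntax error.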
I have this table definition:
CREATE TABLE stackoverflow.weather_sensor_data (
city text,
month int,
recorded_time timestamp,
temp float,
PRIMARY KEY ((city, month), recorded_time)
) WITH CLUSTERING ORDER BY (recorded_time DESC)
Sample data looks like this:
> SELECT * FROM
The limit only bounds what you return, not what you scan.

On Oct 3, 2022, at 10:56 AM, Regis Le Bretonnic wrote:
Hi... We do the same (even if a lot of people will say it's bad and that you
shouldn't...) with "allow filtering", BUT ALWAYS WITHIN A PARTITION AND WITH A
LIMIT CLAUSE TO AVOID A FULL
How many rows are you expecting within your partition?
On Mon, 3 Oct, 2022, 21:56 Karthik K, wrote:
> We have a table designed to retrieve products by name in ascending order.
> OrganisationID and ProductType will be the compound partition key, whereas
> the ProductName will be the clustering
Hi Team,
Is the guardrails config for allowFiltering available in DSE 5.1.26, or is any
other guardrail available?
Please share a help link if available.
Thanks in advance
Adarsh