I see a critical bug,
https://issues.apache.org/jira/browse/CASSANDRA-18507, fixed
in Cassandra 4.0.10. But I also see that a critical bug,
https://issues.apache.org/jira/browse/CASSANDRA-18560, was introduced, and it
is going to be fixed in 4.0.11. Can the fix of
https://issues.apache.org/jira/browse/CASSAN
> hat is possible. 4.0.10 was already released. We cannot
> backport anything to what is already released. I believe you need to update
> to 4.0.11.
>
> Regards
>
> ________
> From: manish khandelwal
> Sent: Monday, July 17, 2023 11:40
Has anybody seen issues running Cassandra on RHEL 8.6? I am seeing a
performance dip while running Cassandra on RHEL 8.6. Also, with RHEL 8.6 I am
seeing swapping happening in spite of a lot of free memory being available. On
RHEL 7.9 we did not see any swapping with the same workload and the same RPM of
Cassandra. Anyo
uide/sysctl/vm.html?highlight=swappiness#swappiness
>
> [4]
> https://www.ibm.com/docs/en/linux-on-systems?topic=settings-setting-swappiness-parameter-zero
>
>
> On 27. Jul 2023, at 10:14, manish khandelwal
> wrote:
>
> Has anybody seen issues running Cassandra on RHEL 8.6. I
> This is obtrusive
> enough that our linux engineering team decided to push a patch through to
> all of our teams. Once we updated the appropriate setting per the link I
> shared, all swap issues went away.
>
> On Thu, Jul 27, 2023 at 5:33 AM manish khandelwal <
> manishkh
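The swappiness change being discussed can be sketched in shell. `vm.swappiness` is the standard sysctl, but the target value and the drop-in file name below are assumptions, not a recommendation from the thread:

```shell
# Inspect the current value (RHEL 8 tuned profiles often default to 60 or 30)
sysctl vm.swappiness

# Lower it at runtime; a small non-zero value discourages swapping
# without disabling swap entirely
sudo sysctl -w vm.swappiness=1

# Persist the setting across reboots (file name is an example)
echo 'vm.swappiness = 1' | sudo tee /etc/sysctl.d/99-cassandra.conf
sudo sysctl --system
```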
Hi All
Is there any special consideration to take while restoring tables with
'counter' type columns? For restoration I wipe out the data directory and
replace it with sstables from the backup directory.
Regards
Manish
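For reference, a typical snapshot-restore flow of that shape looks like the sketch below. The paths, keyspace, and table names are placeholders, and `nodetool refresh` is used so the copied sstables are picked up without a restart:

```shell
# Hypothetical restore of one table from a backup taken on this same node.
# Counter sstables carry local shard state, so copying them back to the
# node/cluster they came from is generally safe.
KEYSPACE=my_ks
TABLE=my_counter_table
DATA_DIR=$(ls -d /var/lib/cassandra/data/${KEYSPACE}/${TABLE}-*)

# Wipe the live sstables for this table only
rm -f "${DATA_DIR}"/*.db

# Copy the backed-up sstables back into place
cp /backups/${KEYSPACE}/${TABLE}/* "${DATA_DIR}/"

# Ask Cassandra to load the newly placed sstables
nodetool refresh -- "${KEYSPACE}" "${TABLE}"
```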
Micah Green, did you apply any tuning parameters to RHEL apart from the
solution provided? I can still see some nodes swapping.
Regards
Manish
On Sat, Jul 29, 2023 at 5:52 PM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:
> Thanks Micah Green. Got the same resolution
What do the logs of /172.16.20.16:7000 say when the repair failed? It
indicates "validation failed". Can you check system.log for /172.16.20.16:7000
and see what it says? It looks like you have some issue with doc/origdoc,
probably some corrupt sstable. Try to run repair for individual tables and
see for wh
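One way to narrow down the failing table along those lines is to repair table by table. The keyspace name and table list below are examples drawn from the names mentioned above:

```shell
# Run a full repair one table at a time so the failing table stands
# out in the logs (keyspace and table names are placeholders)
for table in doc origdoc; do
  echo "Repairing my_keyspace.${table}"
  nodetool repair --full my_keyspace "${table}"
done
```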
From 4.0.11's cassandra.yaml:

## EXPERIMENTAL FEATURES
### Enables materialized view creation on this node.
# Materialized views are considered experimental and are not recommended for production use.
enable_materialized_views: false

So I thin
Hi
Is upgrade from Cassandra 3.11.x to Cassandra 4.1.3 supported?
NEWS.txt has a general guideline that
Snapshotting is fast (especially if you have JNA installed) and takes
effectively zero disk space until you start compacting the live data
files again. Thus, best practice is to A
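A pre-upgrade snapshot along those lines might look like the following; the tag format is just a convention I am assuming, not anything required:

```shell
# Snapshot before upgrading; snapshots are hard links, so they are fast
# and take effectively no space until compaction rewrites the live files
nodetool snapshot -t pre-upgrade-4.1.3

# Confirm the snapshot exists
nodetool listsnapshots

# Later, once the upgrade is verified, reclaim the space:
# nodetool clearsnapshot -t pre-upgrade-4.1.3
```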
ing upgrades is not supported; please ensure
> it is disabled before beginning the upgrade and re-enable after.
>
> – Scott
>
> On Jan 6, 2024, at 10:25 PM, manish khandelwal <
> manishkhandelwa...@gmail.com> wrote:
>
>
> Hi
>
> Is Cassandra upgrade from
In one of our two-datacenter setups (3+3), one Cassandra node is getting a lot
of data streamed from other nodes during repair, to the extent that it fills
up and ends with a full disk. I am not able to understand what could be the
reason that this node is misbehaving in the cluster. Cassandra version is
3
,
> "sstablemetadata" and "sstabledump" commands handy.
>
>
> On 23/01/2024 18:07, manish khandelwal wrote:
>
> In one of our two-datacenter setups (3+3), one Cassandra node is getting a
> lot of data streamed from other nodes during repair to the extent that
They (incremental and full repairs) are required to run separately at
different times. You need to identify a schedule, for example, running
incremental repairs every week for 3 weeks and then running a full repair in
the 4th week.
Regards
Manish
On Sat, Feb 3, 2024 at 7:29 AM Kristijonas Zalys wrote:
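As a sketch, the 3-weeks-incremental/1-week-full cadence described above could be driven by a small helper like this; the function name and the commented cron-style wiring are hypothetical:

```shell
# Pick the repair mode for a given week number: every 4th week runs a
# full repair, the other three run incremental (hypothetical helper)
repair_type() {
  if [ $(( $1 % 4 )) -eq 0 ]; then
    echo "full"
  else
    echo "incremental"
  fi
}

# Example wiring, driven by the current ISO week number:
# week=$(date +%V)
# if [ "$(repair_type "$week")" = "full" ]; then
#   nodetool repair --full my_keyspace
# else
#   nodetool repair my_keyspace
# fi
```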
In a two-datacenter cluster (11 nodes each) we are seeing repair getting
stuck. The issue is that when repair is triggered on a particular keyspace,
the repair session is lost and Cassandra never returns for that particular
session. There are no "WARN" or "ERROR" logs in the Cassandra logs. No
dropped messages are seen
It looks like a critical bug for setups with multiple DCs using a high number
of vnodes and running full repair with the -pr option, since the number of
parallel repair sessions can be as high as the number of vnodes. Thus it can
fill up memory, causing an OOM or a direct buffer memory OOM. It should get
prioritized for relea
Hi Prerna
Compactions are triggered automatically based on the compaction strategy.
Since you are seeing compactions triggered every 6 hours, one possibility is
that your traffic pattern includes a lot of writes every 6 hours.
PS: Please use the user mailing list (us
Hi Team
I see this ticket https://issues.apache.org/jira/browse/CASSANDRA-18831,
which is being worked on for building and running Cassandra with Java 21.
Currently it is marked to be resolved in 5.x. Do we see it being released
with Cassandra 5.0, or does it still need to be discussed, as some comments
in the JIRA suggested
ported by 5.0.0.
>
> Regards
>
> On Mon, Jun 17, 2024 at 6:02 AM manish khandelwal <
> manishkhandelwa...@gmail.com> wrote:
>
>> Hi Team
>>
>> I see this ticket https://issues.apache.org/jira/browse/CASSANDRA-18831,
>> which is being worked on for building
Thanks Erick, Jon and Stefan for the responses. So we can target Java 17 for
Cassandra 5.0.
Regards
Manish
On Mon, Jun 17, 2024 at 3:29 PM Jon Haddad wrote:
> Erick, that blanket statement about it “never” going in 5.1 is incorrect.
> We could absolutely add Java 21 support in 5.1 if we wanted t