I have not tried to run 4.0 with Java 11, but according to the Docker image it
is running with Java 11, see here for reference
https://github.com/docker-library/cassandra/blob/f00a725cc0189ef166ac1d893227651bd81a6996/4.0/Dockerfile
-Tobias
From: Joe Obernberger
Reply to:
Thanx all for the feedback
-Tobias
From: Erick Ramirez
Reply to: "user@cassandra.apache.org"
Date: Monday, 3 May 2021 at 11:16
To: "user@cassandra.apache.org"
Subject: Re: V3.11.10 Docker uses Java 1.8-282, why not Java 11
There are lots of vendors who will continue to support Java 8 given
We are moving to 3.11.10, and we noticed that the Docker image here is based on
Java 1.8 – 282
My understanding is that Java 1.8 is no longer supported (debatable perhaps); some
statements suggest that it has not been supported since Jan-2019
Our company moved to Java 11 some time back, and it does
the write at #1
(or, if you have an unhealthy cluster that's regularly dropping writes due to
much larger problems, then it's even more likely)
On Tue, Sep 1, 2020 at 12:10 AM Tobias Eriksson
<tobias.eriks...@qvantel.com> wrote:
Hi
We are seeing READ REPAIRs happening, and my understanding is this
Setup 2 DCs with lots of Nodes, Replication Factor = 3
1. Data written (on INSERT/UPDATE)
2. Data replicated by Cassandra, but will not finish before (4) below
3. Wait 80 ms on average
4. Data read again with
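The race in the four steps above can be sketched as a toy model (illustration only, not Cassandra's actual digest-comparison code): the write is acked by a LOCAL_QUORUM of 2 of 3 replicas, and a quick follow-up LOCAL_QUORUM read that happens to contact the still-lagging replica sees a digest mismatch and triggers a read repair.

```python
from itertools import combinations

def read_sees_mismatch(replicas_acked, read_set, total_replicas=3):
    """Toy model: a quorum read triggers a read repair when one of the
    replicas it contacts has not yet received the write."""
    stale = set(range(total_replicas)) - set(replicas_acked)
    return bool(set(read_set) & stale)

# Write acked by replicas 0 and 1 (LOCAL_QUORUM, RF=3); replica 2 lags.
assert read_sees_mismatch([0, 1], [1, 2]) is True   # stale replica contacted
assert read_sees_mismatch([0, 1], [0, 1]) is False  # both replicas up to date

# Of the three possible 2-replica read sets, two include the lagging
# replica, so ~2/3 of quick reads would hit the mismatch in this model.
read_sets = list(combinations(range(3), 2))
mismatches = sum(read_sees_mismatch([0, 1], rs) for rs in read_sets)
assert mismatches / len(read_sets) == 2 / 3
```

In other words, an ~80 ms gap between the quorum ack and the read leaves a window in which the third replica is legitimately behind, so occasional read repairs here are expected behaviour, not necessarily a fault.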
Isn’t it so that explicitly setting a column to NULL also results in a tombstone
Then as mentioned, the use of list, set, and map can also result in tombstones
See
https://www.instaclustr.com/cassandra-collections-hidden-tombstones-and-how-to-avoid-them/
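The NULL and collection cases can be sketched with a toy model of which records a single UPDATE produces (illustration only, not Cassandra's real storage format; the point is which operations write tombstones and which do not):

```python
# Toy model of how writes turn into cells/tombstones (illustration only).

def cells_for_update(column_type, value, overwrite=False):
    """Return the (kind, payload) records a single UPDATE would write."""
    if value is None:
        return [("tombstone", None)]          # UPDATE ... SET col = null
    if column_type in ("list", "set", "map") and overwrite:
        # SET col = {...} first deletes the old collection with a range
        # tombstone, then writes the new elements.
        return [("range_tombstone", None)] + [("cell", v) for v in value]
    if column_type in ("list", "set", "map"):
        # SET col = col + {...} appends without writing any tombstone.
        return [("cell", v) for v in value]
    return [("cell", value)]

assert cells_for_update("text", None) == [("tombstone", None)]
assert cells_for_update("set", ["a", "b"], overwrite=True)[0][0] == "range_tombstone"
assert all(k == "cell" for k, _ in cells_for_update("set", ["a", "b"]))
```

This is why the linked article recommends appending to collections rather than replacing them wholesale when possible.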
-Tobias
From: Oleksandr Shulgin
Reply to:
Hi
I am curious about the internals of the Secondary Index, and in particular
how the data is stored
The article dated 2016 here (is that still valid?)
https://www.datastax.com/blog/2016/04/cassandra-native-secondary-index-deep-dive
indicates that a Secondary Index is really represented just
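As I read the article, the native index is kept as a hidden, node-local table keyed by the indexed value; a toy sketch of that layout (illustration only, not the real implementation) shows why an indexed query has to fan out across nodes:

```python
from collections import defaultdict

class NodeLocalIndex:
    """Toy sketch of a native secondary index: each node keeps its own
    hidden mapping from indexed value to the partition keys it owns,
    so an indexed query must consult (potentially) every node."""
    def __init__(self):
        self.index = defaultdict(set)   # indexed value -> partition keys

    def on_write(self, partition_key, indexed_value):
        self.index[indexed_value].add(partition_key)

    def lookup(self, indexed_value):
        return self.index.get(indexed_value, set())

# Two nodes, each indexing only its own partitions.
node_a, node_b = NodeLocalIndex(), NodeLocalIndex()
node_a.on_write("user:1", "stockholm")
node_b.on_write("user:2", "stockholm")

# A query on the indexed value must merge results from all nodes.
hits = node_a.lookup("stockholm") | node_b.lookup("stockholm")
assert hits == {"user:1", "user:2"}
```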
Thanx Erick
Is there a way to turn on tracing based on certain criteria,
I would like to start tracing when there is some sort of failure, i.e. in this
case when a READ REPAIR is triggered as I would like to know why we sometimes
can’t reach one of the nodes
-Tobias
From: Erick Ramirez
Reply
Thanx Erick,
Perhaps this is super obvious but I need a confirmation as you say “…not
subsequent reads for other data unrelated to the read being repaired…”
But this is subsequent reads to the _same_ partition key
So to be more explicit
READ 1 with Local Quorum : SELECT * FROM products WHERE id =
READ REPAIR?
Your schema may have read repair (non-blocking, background) set to 10% (0.1,
for dclocal).
You may have GC pauses causing writes (or reads) to be delayed.
You may be hitting a cassandra bug.
Would need the `TRACING` output to know for sure.
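The 10% figure above is just a sampling probability; a quick toy simulation (not Cassandra's code) of how the pre-4.0 `dclocal_read_repair_chance` table option behaves:

```python
import random

def background_repairs(num_reads, dclocal_read_repair_chance, seed=42):
    """Count how many reads would be sampled for a background read
    repair under the (pre-4.0) dclocal_read_repair_chance setting."""
    rng = random.Random(seed)
    return sum(rng.random() < dclocal_read_repair_chance
               for _ in range(num_reads))

repairs = background_repairs(100_000, 0.1)
# Roughly 10% of reads get a background repair check in this model.
assert abs(repairs / 100_000 - 0.1) < 0.01
```

So seeing read repairs on roughly one read in ten can simply be this sampling at work, independent of any digest mismatch.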
On Mon, Aug 10, 2020 at 10:10 PM Tobia
Hi
We have a Cassandra solution with 2 DCs where each DC has >30 nodes
From time to time we see problems with READ REPAIR, but I am stuck with the
analysis
We have a pattern for these faults where we do
1. INSERT with Local Quorum (2 out of 3)
2. Wait for 0.5 - 1 seconds time window
3.
Hi Jeff
Could you elaborate on the statement that you made :
“CQL Row level tombstones don’t matter in cassandra 3+ - they’re just point
deletes after the storage engine rewrite.”
Are you saying that a row-level delete is not like other tombstones? If so, how
are they different?
I tried to
Thanx all for the good tips
-Tobias
From: Eric Evans
Reply to: "user@cassandra.apache.org"
Date: Tuesday, 21 April 2020 at 16:02
To: "user@cassandra.apache.org"
Subject: Re: Write to SSTables to do really fast initial load of database (e.g.
for migration)
On Tue, Apr 21, 2020 at 4:16 AM
Hi
I would like to evaluate writing my “own” SSTable files and then importing them
into Cassandra
My goal is to beat the performance of traditional CQL INSERTS (yes even with
prepared statements)
I have been trying to find code examples for how to write to SSTables but could
only really find
issues (see Jira). We used
them quite a bit when they first came out and have since rolled back all uses
except for the absolute most basic cases (ex: a table with 30K rows that isn’t
updated). If we were to do it over, we would not use them at all.
- Max
On Feb 28, 2020, a
Hi
A debate has surfaced in my company, whether to keep or remove Materialized
Views
The Datastax FAQ says sure thing, go ahead and use it
https://docs.datastax.com/en/dse/5.1/cql/cql/cql_using/faqMV.html
But know the limitations
Hi
I would like to monitor my Cassandra cluster looking at the JMX beans and
transfer stats to Graphite.
I have done some googling: some suggest using jmxtrans, and some have moved
on to GenericJMX
What would you suggest?
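For what it's worth, whichever agent you pick (jmxtrans, collectd's GenericJMX, or your own bridge), it ultimately emits Graphite's plaintext protocol to carbon (default port 2003). A minimal sketch of that line format; the metric path below is made up for illustration:

```python
import time

def graphite_line(path, value, timestamp=None):
    """Format one metric in Graphite's plaintext protocol:
    '<metric.path> <value> <unix-timestamp>\n' (sent to carbon, port 2003)."""
    ts = int(timestamp if timestamp is not None else time.time())
    return f"{path} {value} {ts}\n"

line = graphite_line("cassandra.node1.ReadLatency.p99", 12.5,
                     timestamp=1_600_000_000)
assert line == "cassandra.node1.ReadLatency.p99 12.5 1600000000\n"
```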
-Tobias
--
Tobias Eriksson
Chief Architect Research – CTO Office
day, 19 June 2017 at 17:44
To: Hannu Kröger <hkro...@gmail.com>
Cc: "user@cassandra.apache.org" <user@cassandra.apache.org>, Tobias Eriksson
<tobias.eriks...@qvantel.com>
Subject: Re: SASI index on datetime column does not filter on minutes
The + in the date format
Hi
I have a table like this (Cassandra 3.5)
Table
id uuid,
lastname text,
firstname text,
address_id uuid,
dateofbirth timestamp,
PRIMARY KEY (id, lastname, firstname)
And a SASI index like this
create custom index indv_birth ON playground.individual(dateofbirth) USING
Hi
I am trying to copy a file of CSV data into a table
But I get an error since sometimes one of the columns (which is a UUID) is empty
Is this a bug or what am I missing?
Here is how it looks
Table
id uuid,
lastname text,
firstname text,
address_id uuid,
dateofbirth
ng into
Cassandra (memory datagrid I think it is called)
Basically, think of it as a big cache
It is an in-memory thingy ☺
And then you can run some super fast queries
-Tobias
From: DuyHai Doan <doanduy...@gmail.com>
Date: Thursday, 8 June 2017 at 15:42
To: Tobias Eriksson <tobias.eriks...@
Hi
Something to consider before moving to Apache Spark and Cassandra
I have a background where we have tons of data in Cassandra, and we wanted to
use Apache Spark to run various jobs
We loved what we could do with Spark, BUT….
We realized soon that we wanted to run multiple jobs in parallel
Some
sink-the-perfect-match/
On Wed, Apr 26, 2017 at 2:49 PM, Tobias Eriksson
<tobias.eriks...@qvantel.com> wrote:
Hi
I would like to make a dump of the database, in JSON format, to KAFKA
The database contains lots of data, millions and in some cases
each row to JSON and dumps the
output into a Kafka topic. It should be fairly straightforward to implement.
Spark will manage the partitioning of "Producer" processes for you - no need
for a "Coordinator" topic.
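The per-row serialization step can be sketched as follows (the `send(topic, key, value)` producer shape here is a stand-in for illustration; kafka-python's `KafkaProducer`, whose `send` takes keyword arguments, would need a thin wrapper):

```python
import json

def rows_to_kafka(rows, producer, topic):
    """Serialize each row (a dict) to JSON and hand it to a producer.
    'producer' is anything with a send(topic, key, value) method."""
    for row in rows:
        key = str(row["id"]).encode()
        value = json.dumps(row, sort_keys=True).encode()
        producer.send(topic, key, value)

class ListProducer:            # stand-in for a real Kafka producer
    def __init__(self):
        self.sent = []
    def send(self, topic, key, value):
        self.sent.append((topic, key, value))

p = ListProducer()
rows_to_kafka([{"id": 1, "name": "a"}, {"id": 2, "name": "b"}], p, "dump")
assert len(p.sent) == 2
assert json.loads(p.sent[0][2]) == {"id": 1, "name": "a"}
```

In a Spark job this function would run inside each partition, so the producers scale out with the executors.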
On Thu, 27 Apr 2017 at 05:49 Tobias Eriksson
<tobias.eriks...
Hi
I would like to make a dump of the database, in JSON format, to KAFKA
The database contains lots of data, millions and in some cases billions of
“rows”
I will provide the customer with an export of the data, where they can read it
off of a KAFKA topic
My thinking was to have it scalable such