The Cassandra team is pleased to announce the release of Apache Cassandra
version 3.0.24.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source
The Cassandra team is pleased to announce the release of Apache Cassandra
version 3.11.10.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source
Could you post full table schema (names obfuscated, if required) with index
creation statements and queries?
On Mon, Feb 4, 2019 at 10:04 AM Jacques-Henri Berthemet <
jacques-henri.berthe...@genesys.com> wrote:
> I’m not sure why it's not allowed by the DataStax driver, but maybe you
> could try
Hi everyone,
There were many people asking similar questions about CASSANDRA-13004.
The issue itself and the release notes may be somewhat hard to grasp or
sound ambiguous, so here's a more elaborate explanation of what
CASSANDRA-13004 means in terms of the upgrade process, how it
You can find information about that in the Cassandra source code, for
example. Search for the serializers, like BytesSerializer:
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BytesSerializer.java
to get an idea of how the data is serialized.
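As a rough illustration of the idea (this is a simplified sketch, not Cassandra's actual serializer code), a serializer turns a typed value into a ByteBuffer and back:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class Utf8SerializerSketch {
    // Serialize a String into a ByteBuffer, conceptually like
    // Cassandra's UTF8-style serializers (simplified).
    public static ByteBuffer serialize(String value) {
        return ByteBuffer.wrap(value.getBytes(StandardCharsets.UTF_8));
    }

    // Deserialize without disturbing the buffer's position,
    // so the same buffer can be read again.
    public static String deserialize(ByteBuffer bytes) {
        byte[] raw = new byte[bytes.remaining()];
        bytes.duplicate().get(raw);
        return new String(raw, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer b = serialize("hello");
        System.out.println(deserialize(b)); // prints "hello"
    }
}
```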
But I'd also check
The problem you're seeing is caused by the Thrift max_message_length,
which is set to 16 MB and is not configurable from outside / in the yaml
file.
If the JDBC wrapper supports paging, you might want to look into
configuring it.
On Tue, Jul 19, 2016 at 8:27 PM Saurabh Kumar
Bloom filters are used to avoid disk seeks when accessing SSTables. As we
don't know where exactly the partition resides, we have to narrow the
search down to the particular SSTables where the data most probably is.
Given that you most likely won't store 50B rows on a single node, you
will most likely
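To make the idea concrete, here is a toy Bloom filter sketch (not Cassandra's implementation; the size, hash scheme, and names here are made up for illustration):

```java
import java.util.BitSet;

public class ToyBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public ToyBloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // Derive k bit positions from the key's hash; a real implementation
    // would use proper independent hash functions (e.g. MurmurHash).
    private int index(String key, int i) {
        return Math.floorMod(key.hashCode() + i * 0x9E3779B9, size);
    }

    public void add(String partitionKey) {
        for (int i = 0; i < hashCount; i++) bits.set(index(partitionKey, i));
    }

    // false => the partition is definitely not here (skip the disk seek);
    // true  => it *may* be here (false positives are possible).
    public boolean mightContain(String partitionKey) {
        for (int i = 0; i < hashCount; i++)
            if (!bits.get(index(partitionKey, i))) return false;
        return true;
    }

    public static void main(String[] args) {
        ToyBloomFilter bf = new ToyBloomFilter(1024, 3);
        bf.add("partition-42");
        System.out.println(bf.mightContain("partition-42")); // prints "true"
    }
}
```

The key property is that a negative answer is always correct, which is exactly what lets a read skip SSTables without touching disk.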
If I understand the problem correctly, tombstone_failure_threshold is never
reached because the ~2M objects might have been collected by different
queries running in parallel, not by one query. No single query ever
reached the threshold, although together they all contributed to the OOM.
Sorry, I completely forgot to mention it in the original message: we have a
rather large commitlog directory (which is usually rather small): 8 GB of
commitlogs. Draining and flushing didn't help.
On Sat, Jun 13, 2015 at 1:39 PM, Oleksandr Petrov
oleksandr.pet...@gmail.com wrote:
Hi,
We're using
Hi,
We're using Cassandra, recently migrated to 2.1.6, and we're experiencing
constant OOMs in one of our clusters.
It's a rather small cluster: 3 nodes, EC2 xlarge (2 CPUs, 8 GB RAM), set up
with the DataStax AMI.
Configs (yaml and env.sh) are rather default: we've changed only concurrent
compactions
Cassaforte [1] is a Clojure client for Apache Cassandra 1.2+. It is built
around CQL 3
and focuses on ease of use. You will likely find that using Cassandra from
Clojure has
never been so easy.
1.2.0 is a minor release that introduces one minor feature, fixes a couple
of bugs, and
makes
Hi Tony, you can check out a guide here:
http://clojurecassandra.info/articles/kv.html which explains most of the
things you need to know about queries, for starters.
It includes CQL code examples; just disregard the Clojure ones, there's
nothing strictly Clojure-driver-specific in that guide.
On
Maybe I'm a bit late to the party, but this can still be useful for future
reference.
We've tried to keep the documentation for the Clojure Cassandra driver as
elaborate and generic as possible, and it contains raw CQL examples, so
you can refer to the docs even if you're using another driver.
Tokens are very useful for pagination and full-table iteration. For
example, when you want to scan an entire table, you want to use the
token() function.
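For illustration, a token-based full scan in CQL could look something like this (the table and key names here are assumed, not from the original thread):

```sql
-- First page: start from the lowest tokens.
SELECT application, token(application) FROM events LIMIT 100;

-- Next page: resume after the last token seen on the previous page.
SELECT application, token(application)
FROM events
WHERE token(application) > token('last-seen-app')
LIMIT 100;
```

Because rows are ordered by token rather than by key, this iterates the whole table without scanning any partition twice.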
You can refer to two guides we've written for the Clojure driver (although
they do not contain much Clojure-specific information).
First one is Data Modelling /
You can refer to the Data Modelling guide here:
http://clojurecassandra.info/articles/data_modelling.html
It includes several things you've mentioned (namely, range queries and
dynamic tables).
Also, it seems that it'd be useful for you to use indexes and to perform
filtering (for things related
WITH COMPACT STORAGE should allow accessing your dataset from CQL2,
actually.
There's a newer driver that supports the binary CQL protocol, namely
https://github.com/iconara/cql-rb which is written by guys from Bart, who
know stuff about Cassandra :)
We're using COMPACT STORAGE for tables we access through
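For reference, COMPACT STORAGE is declared at table creation time; a minimal example (the table and column names here are invented for illustration):

```sql
CREATE TABLE metrics (
  name text,
  bucket timestamp,
  value blob,
  PRIMARY KEY (name, bucket)
) WITH COMPACT STORAGE;
```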
I've submitted a patch that fixes the issue for 1.2.3:
https://issues.apache.org/jira/browse/CASSANDRA-5504
Maybe you guys know a better way to fix it, but that helped me in the meantime.
On Mon, Apr 22, 2013 at 1:44 AM, Oleksandr Petrov
oleksandr.pet...@gmail.com wrote:
If you're using Cassandra
Setting the key back fixed the issue for me.
On Sat, Apr 20, 2013 at 3:05 PM, Oleksandr Petrov
oleksandr.pet...@gmail.com wrote:
Tried to isolate the issue in testing environment,
What I currently have:
That's a setup for test:
CREATE KEYSPACE cascading_cassandra WITH replication = {'class
I can confirm running into the same problem.
I tried ConfigHelper.setThriftMaxMessageLengthInMb(), and tuning the server
side, reducing/increasing the batch size.
Here's the stack trace from Hadoop/Cassandra, maybe it can give a hint:
Caused by: org.apache.thrift.protocol.TProtocolException: Message length
exceeded!
On Sat, Apr 20, 2013 at 1:56 PM, Oleksandr Petrov
oleksandr.pet...@gmail.com wrote:
Hi,
I'm trying to persist some event data. I've tried to identify the
bottleneck, and it seems to work like this:
If I create a table with primary key based on (application, environment,
type and emitted_at):
CREATE TABLE events (application varchar, environment varchar, type
varchar,
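The statement above is truncated in the archive; a complete statement along these lines might look like the following (the remaining columns and the key layout are one plausible reading of the description, not the original schema):

```sql
CREATE TABLE events (
  application varchar,
  environment varchar,
  type varchar,
  emitted_at timestamp,
  payload blob,  -- assumed; the original message is cut off here
  PRIMARY KEY ((application, environment, type), emitted_at)
);
```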
significant performance decrease.
A decrease in performance doing what?
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 19/04/2013, at 4:43 AM, Oleksandr Petrov oleksandr.pet...@gmail.com
wrote:
Hi,
I'm
Yes, execute_cql3_query, exactly.
On Wed, Jan 30, 2013 at 4:37 PM, Michael Kjellman
mkjell...@barracuda.comwrote:
Are you using execute_cql3_query() ?
On Jan 30, 2013, at 7:31 AM, Oleksandr Petrov
oleksandr.pet...@gmail.com wrote:
Hi,
I'm creating a table via cql3 query like
, names are case-insensitive by default, while they were case-sensitive
in CQL2. You can force whatever case you want in CQL3, however, by using
double quotes. So in other words, in CQL3,
USE TestKeyspace;
should work as expected.
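A quick illustration of the quoting rule (keyspace name and replication settings are just an example):

```sql
-- Unquoted identifiers are folded to lower case in CQL3:
USE TestKeyspace;      -- refers to keyspace testkeyspace

-- Double quotes preserve case:
CREATE KEYSPACE "TestKeyspace"
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
USE "TestKeyspace";    -- refers to the case-sensitive TestKeyspace
```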
--
Sylvain
On Sun, Sep 23, 2012 at 9:22 PM, Oleksandr Petrov
Maybe I'm missing the point, but counting in a standard column family would
be a little overkill.
I assume that distributed counting here was more of a map/reduce
approach, where Hadoop (+ Cascading, Pig, Hive, Cascalog) would help you a
lot. We're doing some more complex counting (e.g. based on