Thanks for confirming the tombstones will only get removed during compaction
if they are older than GC_Grace_Seconds for that CF. I didn't find such a
clarification in the documentation. That answered my question.
Since the table that has too many tombstones is in the system keyspace, I
This might well be
https://issues.apache.org/jira/browse/CASSANDRA-8325
try the latest patch for that if you can.
On Jan 13, 2015, at 4:50 AM, Bernardino Mota bernardino.m...@inovaworks.com
wrote:
Hi,
Yes, with JDK1.7 it works but
Your response is full of information; after reading it, I think I designed
something wrong in my system. I will try to present what hardware I have
and what I am trying to achieve.
*Hardware:*
I have 9 machines; each machine has 10 HDDs for data (not SSDs) and 64 GB of
RAM.
*Requirements*
The
I have read that snapshots are basically symlinks and that they do not take much
space. Why does running nodetool clearsnapshot free a lot of space, then? I am
seeing GBs freed...
Hi All,
I want to store PDF documents in a Cassandra database. What is the best way to
store this type of data in Cassandra? How can I insert and select a PDF file
from the database? If possible, can you explain with sample CQL statements?
Thanks in advance
Nil
Hi,
I have read that snapshots are basically symlinks and that they do not take
much space.
Why does running nodetool clearsnapshot free a lot of space, then? I am
seeing GBs freed...
Both together make sense. Creating a snapshot just creates links for all
files under the snapshot directory. This
OK Thanks,
But I also read that repair will take a snapshot. Since I have a replication
factor of 3 for my keyspace, I run nodetool clearsnapshot to keep disk space
usage to a minimum. Will this impact my repair?
On Tuesday, January 13, 2015 4:19 PM, Jan Kesten
Hello,
The data distribution is OK; as I said, there are a few million distinct values
for the keys. I am running repairs like this: node 1 runs repair on the 1st and
15th of the month, node 3 runs repair on the 3rd and 15th of the month, and so on...
On Tuesday, January 13, 2015 1:47 PM, Rahul
you want to store the raw bytes, so look at examples for saving raw bytes.
I generally recommend using Thrift if you're going to do a lot of
read/write of binary data. CQL is good for primitive types, and maps/lists
of primitive types. I'm biased, but it's simpler and easier to use Thrift for
For a new user, there's no point in learning Thrift if that user intends to
upgrade past the version that they start with. Thrift is a deprecated
protocol and there's no new functionality going into it. In 3.0 the
sstable format is being upgraded to work primarily with native CQL
partitions /
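For what it's worth, here is a minimal CQL sketch of the blob approach; the
keyspace/table names, the uuid, and the tiny hex payload are made-up
placeholders, not something taken from this thread:

-- Hypothetical table holding PDFs as raw bytes
CREATE TABLE docs.pdf_files (
    doc_id uuid PRIMARY KEY,
    file_name text,
    content blob
);

-- 0x255044462d312e34 is just the ASCII bytes "%PDF-1.4"; a real application
-- would bind the file's bytes through a driver prepared statement instead
INSERT INTO docs.pdf_files (doc_id, file_name, content)
VALUES (6ab09bec-e68e-48d9-a5f8-97e6fb4c9b47, 'report.pdf', 0x255044462d312e34);

SELECT file_name, content FROM docs.pdf_files
WHERE doc_id = 6ab09bec-e68e-48d9-a5f8-97e6fb4c9b47;

Large files are usually better split into chunks of a few hundred KB per row
rather than stored as one huge blob, since very large cells put pressure on
the heap.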
Snapshot during repair is automatically cleared if repair succeeds.
Unfortunately, you have to delete it manually if repair failed or stalled.
On Tue, Jan 13, 2015 at 8:30 AM, Batranut Bogdan batra...@yahoo.com wrote:
OK Thanks,
But I also read that repair will take a snapshot. Due to the fact
Hi,
Yes, with JDK 1.7 it works, but only in 32-bit mode. It seems the problem
is with the 64-bit versions of JDK 8 and 7. I didn't try other, older
versions.
Unfortunately, with 32 bits I'm more limited in the memory I can make
available for the JVM...
Looking at the Web, there are others
Regarding 4): I definitely have a big problem, because pending tasks: 3094.
The question is: what should I change/monitor? I can present my whole
solution design if it helps.
On Mon, Jan 12, 2015 at 8:32 PM, Ja Sam ptrstp...@gmail.com wrote:
To be more precise about your remarks:
1) About the 30 sec GC. I know that after
Hello,
I have a cluster of 6 C* nodes. All machines have the same hardware. I have
noticed in OpsCenter that when I start reading a lot from the cluster, 2 nodes
show high read latencies, but the rest do not have such high values. The
replication factor for the keyspace is 3. Also, those 2 nodes have
Hi,
Thanks for the reply. Yes, we are definitely CPU bound, and disabling
compression increases IO utilization a lot. However, the compactions still do
not utilize the IO capacity of the machines while spiking the CPU (increasing
the number of concurrent compactors does not seem to help). Oddly
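(For reference, "disabling compression" above is a per-table change; a rough
sketch with a placeholder table name, using what I believe is the 2.1 syntax:

ALTER TABLE mykeyspace.mytable
WITH compression = { 'sstable_compression' : '' };  -- empty string disables SSTable compression

concurrent_compactors itself is a cassandra.yaml setting, not something set
through CQL.)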
Is the data distribution OK? Have you tried running repairs?
Rahul
On Jan 13, 2015, at 5:01 AM, Batranut Bogdan batra...@yahoo.com wrote:
Hello,
I have a cluster of 6 C* nodes. All machines have the same hardware. I have
noticed in opscenter that when I start reading a lot from the
Why can't you use sstable2json?
Rahul
On Jan 12, 2015, at 11:24 PM, Rahul Bhardwaj rahul.bhard...@indiamart.com
wrote:
Hi All,
We are using C* 2.1. We need to export the data of one table (about 10 lakh
records, i.e. 1 million) using the COPY command. After executing the COPY
command, cqlsh hangs and get
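For reference, a basic COPY export from cqlsh looks like the sketch below
(keyspace, table, column, and file names are placeholders); if COPY keeps
hanging on a table this size, sstable2json, as suggested above, or a small
driver script that pages through the table are the usual alternatives:

COPY mykeyspace.mytable (id, col1, col2) TO '/tmp/mytable.csv' WITH HEADER = true;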
I am not sure about the tombstone_failure_threshold, but the tombstones will
only get removed during compaction if they are older than GC_Grace_Seconds for
that CF. How old are these tombstones?
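To illustrate, gc_grace_seconds is a per-table (per-CF) setting; the keyspace
and table names below are placeholders:

-- Check the current value on 2.1
SELECT gc_grace_seconds FROM system.schema_columnfamilies
WHERE keyspace_name = 'mykeyspace' AND columnfamily_name = 'mytable';

-- Adjust it (864000 seconds = 10 days is the default)
ALTER TABLE mykeyspace.mytable WITH gc_grace_seconds = 864000;

Tombstones only become eligible for removal at compaction once they are older
than this value.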
Rahul
On Jan 12, 2015, at 11:27 PM, Xu Zhongxing xu_zhong_x...@163.com wrote:
Hi,
When I
Hi everybody!
We have a problem that we encountered during testing over the weekend.
During the tests we noticed that repairs started to fail. We are running a
three-node cluster with a replication factor of three. It uses a default C*
installation. This error has occurred on multiple non-coordinator
Got it,
Thank you!
On Tuesday, January 13, 2015 5:00 PM, Yuki Morishita mor.y...@gmail.com
wrote:
Snapshot during repair is automatically cleared if repair succeeds.
Unfortunately, you have to delete it manually if repair failed or stalled.
On Tue, Jan 13, 2015 at 8:30 AM, Batranut
Hi!
We're using cassandra to store a time series, using a table similar to:
CREATE TABLE timeseries (
    source_id uuid,
    tstamp timestamp,
    value text,
    PRIMARY KEY (source_id, tstamp)
) WITH CLUSTERING ORDER BY (tstamp DESC);
With that, we do a ranged query with tstamp between x and y to gather
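For example, a slice over a single source looks roughly like the sketch below
(the uuid and the timestamps are placeholder values):

SELECT tstamp, value
FROM timeseries
WHERE source_id = 6ab09bec-e68e-48d9-a5f8-97e6fb4c9b47
  AND tstamp >= '2015-01-01 00:00:00+0000'
  AND tstamp < '2015-01-13 00:00:00+0000';

With the CLUSTERING ORDER BY (tstamp DESC) above, rows come back newest-first,
so adding LIMIT N returns the most recent N points cheaply.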
If you have fallen far behind on compaction, this is a hard situation to
recover from. It means that you're writing data faster than your cluster
can absorb it. The right path forward depends on a lot of factors, but in
general you either need more servers or bigger servers, or else you need to