Hi again,
and thanks for the input. I don't think it is tombstoned data; rather, over a
very long time many rows are inserted over and over again, with some
significant pauses between the inserts. I found some examples
where a specific row (for example pk=xyz, value=123) exists in more
Hi all,
I have a problem with really large sstables which don't get compacted
anymore, and I know there are many duplicated rows in them. I thought that
splitting the sstables into smaller ones to get them compacted again would
help, so I tried sstablesplit, but:
cassandra@cassandra01
Centers and an
RF of 3.
Has anyone encountered this problem, and if so, what steps have you
taken to solve it?
Thanks,
Charu
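For reference, a minimal sketch of how sstablesplit is usually invoked (the keyspace, table, and path below are placeholders; the node has to be stopped first, since sstablesplit must not run against a live node):

```shell
# Stop the node first: sstablesplit must not run while cassandra is live.
sudo service cassandra stop

# Split each sstable of the table into chunks of at most 50 MB
# (paths and names are placeholders; adjust to your data directory layout).
sstablesplit --size 50 /var/lib/cassandra/data/myks/mytable/*-Data.db

sudo service cassandra start
```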
--
Jan Kesten, mailto:j.kes...@enercast.de
Tel.: +49 561/4739664-0 FAX: -9 Mobil: +49 160 / 90 98 41 68
enercast GmbH Universitätsplatz 12 D-34127 Kassel HRB15471
http://www.enercast.de Online-Prognosen für erneuerbare Energien
-1.0-SNAPSHOT-jar-with-dependencies.jar .
Thank you for any help.
Hi,
can you check the size of the data directories on that machine and compare it
to the others?
Have a look for snapshot directories which could still be there from a former
table or keyspace.
Regards,
Jan
On 26 October 2016 at 06:53:03 CEST, Harikrishnan A wrote:
Hi Lahiru,
2.1.0 is also quite old (Sep 2014), and from memory I recall there was an
issue we had with cold_reads_to_omit:
http://grokbase.com/t/cassandra/user/1523sm4y0r/how-to-deal-with-too-many-sstables
Hi Lahiru,
maybe your node was running out of memory before. I have seen this behaviour
when the available heap is low, forcing memtables to be flushed to sstables
quite often.
If this is what is hitting you, you should see that the sstables
are really small.
To clean up, nodetool compact would do
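As a sketch (keyspace and table names below are placeholders), the cleanup step would look like:

```shell
# Trigger a major compaction for one table, merging the many small sstables:
nodetool compact mykeyspace mytable

# Afterwards, verify that the sstable count has dropped:
nodetool cfstats mykeyspace.mytable | grep -i 'sstable count'
```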
Hi Robert,
why do you need the actual text as a key? It sounds a bit unnatural, at
least to me. Keep in mind that you cannot do "like" queries on keys in
cassandra. For performance, and to keep things more readable, I would
prefer hashing your text and using the hash as the key.
You should also take
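The hashing suggestion above can be sketched on the shell; sha256sum is one reasonable choice, and the text below is a placeholder:

```shell
# Derive a fixed-length, evenly distributed key from arbitrary text by
# hashing it, instead of storing the raw text as the row key.
text='the full document text goes here'
key=$(printf '%s' "$text" | sha256sum | cut -d' ' -f1)
echo "$key"   # 64 hex characters, regardless of the input length
```

The hash also distributes keys evenly across the ring, which the raw text would not necessarily do.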
Hi Mickey,
I would strongly suggest setting up an NTP server on your site - it is not
really a big deal and, with some tutorials from the net, quickly done. Then
configure your cassandra nodes (and all the rest if you like) to use your NTP
server instead of public ones. As I have learned the hard way -
Hi,
while migrating the remainder of the thrift operations in my application I
came across a point where I can't find a good hint.
In our old code we used a composite with two strings as row / partition
key and a similar composite as column key like this:
public Composite rowKey() {
Hi Branton,
two cents from me - I didn't look through the script, but for the rsyncs I do
pretty much the same when moving sstables. Since they are immutable I do a
first sync to the new location while everything is up and running, which takes
really long. Meanwhile new ones are created and I sync
Hi,
the embedded cassandra to speed up getting into the project may well work for
developers; we used it for junit. But a simple clone and maven build - I guess
it will end in a single-node cassandra cluster. Remember cassandra is a
distributed database; one will need more than one node to get
Hi,
what kind of compaction strategy do you use? What you are seeing is most
likely a compaction - think of 4 sstables of 50gb each; compacting those can
take up to 200g while the new sstable is being written. After that the old
ones are deleted and the space is freed again.
If using
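The headroom rule of thumb from the example above, as a quick calculation (the sizes are the example's, not a general constant):

```shell
# Size-tiered compaction writes the merged sstable before deleting the
# inputs, so you temporarily need roughly the sum of the input sizes free.
sstable_gb=50
count=4
needed=$((sstable_gb * count))
echo "temporary headroom needed: ~${needed} GB"
```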
nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
>
> Can I safely delete those snapshots? why listsnapshots is not
> showing the snapshots? Also in future, how can we find out if there
> are snapshots?
>
> Thanks,
> Rahul
Hi Rahul,
just an idea - did you have a look at the data directories on disk
(/var/lib/cassandra/data)? It could be that there are some left over from old
keyspaces that were snapshotted and later deleted. Try something like "du -sh
/var/lib/cassandra/data/*" to verify which keyspace is consuming
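A sketch of that check, with snapshot directories included (the data path is the package default; adjust it to your data_file_directories setting):

```shell
data_dir=/var/lib/cassandra/data   # adjust to your data_file_directories

# Per-keyspace disk usage:
du -sh "$data_dir"/*

# Snapshots hide one level below the table directories and are easy to miss:
find "$data_dir" -type d -name snapshots -exec du -sh {} +
```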
Hi,
I have had some problems recently on my cassandra cluster. I am running 12
nodes with 2.2.4, and while repairing with a plain "nodetool repair" I can
find this in system.log:
ERROR [STREAM-IN-/172.17.2.233] 2016-01-08 08:32:38,327
StreamSession.java:524 - [Stream #5f96e8b0-b5e2-11e5-b4da-4321ac9959ef]
Hi,
I found something strange this morning on our secondary cluster. I recently
upgraded to 2.1.3 - hoping for incremental repairs to work - and this morning
OpsCenter showed me that disk usage is very unequal.
Most irritating is that some nodes show data sizes of 3TB,
but they
Hi Batranut,
apart from the other suggestions - do you have ntp running on all your
cluster nodes and are times in sync?
Jan
Hi,
a short hint for those upgrading: in 2.1.3 there is a bug in the config
builder when rpc_interface is used. If you use rpc_address in your
cassandra.yaml you will be fine - I ran into it this morning and filed an
issue for it.
Hi,
I have read that snapshots are basically symlinks and do not take up much
space.
Why then, if I run nodetool clearsnapshot, does it free a lot of space? I am
seeing GBs freed...
Both together make sense. Creating a snapshot just creates hard links for all
files under the snapshot directory. This
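A small demonstration of why both observations hold (pure shell in a temp directory, no cassandra involved): a hard-linked snapshot adds no data blocks at creation time, but it keeps the blocks alive after the original file is deleted, which is why clearsnapshot can free gigabytes once compaction has removed the originals.

```shell
# Simulate an sstable and a snapshot made of hard links.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/data.db" bs=1024 count=1024 2>/dev/null  # 1 MiB "sstable"

mkdir "$dir/snapshot"
ln "$dir/data.db" "$dir/snapshot/data.db"   # hard link: no extra data blocks

du -sk "$dir"            # ~1 MiB total, not 2: the link is free

rm "$dir/data.db"        # "compaction" removes the original
du -sk "$dir/snapshot"   # still ~1 MiB: the snapshot now pins the space
```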
On Dec 19, 2014 at 8:41 AM, Jan Kesten <j.kes...@enercast.de> wrote:
Hi,
while being curious about the new incremental repairs I updated our cluster
to C* version 2.1.2 via the Debian apt repository. Everything went quite
well, but trying to start the tools sstablemetadata and sstablerepairedset
leads to the following error:
root@a01:/home/ifjke#
Hi Or,
I did some sort of this a while ago. If your machines have a free
disk slot - just put another disk in there and use it as an additional
data_file_directory.
If not - as in my case:
- grab a USB dock for disks
- put the new one in there, plug it in, format and mount it to /mnt etc.
- I did an
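For reference, the relevant cassandra.yaml fragment would look roughly like this (the second path is an example mount point):

```yaml
# cassandra.yaml: sstables are spread across all listed directories
data_file_directories:
    - /var/lib/cassandra/data
    - /mnt/disk2/cassandra/data
```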
Hi Jens,
maybe you should have a look at mutagen for cassandra:
https://github.com/toddfast/mutagen-cassandra
It has been a little quiet around it for some months, but it may still be worth a look.
Cheers,
Jan
On 25.11.2014 at 10:22, Jens Rantil wrote:
Hi,
Anyone who is using, or could recommend, a
Hello everyone,
I'm running a cassandra cluster with 2.0.6 and 6 nodes. As far as I
know, routine repairs are still mandatory for handling tombstones - even
though I noticed that the cluster now does a snapshot repair by default.
Now my cluster has been running for a while and has a load of about 200g per
Hi Duncan,
is it actually doing something or does it look like it got stuck?
2.0.7 has a fix for a getting-stuck problem.
It starts by sending merkle trees and streaming for some time (some
hours in fact) and then just seems to hang. So I'll try to update and
see if that solves the
On 07.04.2014 at 13:24, Hari Rajendhran wrote:
1) I am confused why cassandra uses the entire disk space (the /
directory) even when we specify /var/lib/cassandra/data as the data
directory in cassandra.yaml
2) Is it only during compaction that cassandra will use the entire disk
space?
3) What is the
with an exception, but no hint as to which file was affected. So I
replayed the sstables one by one and finally found the corrupt one.
Thanks to all,
Jan
- and a
scrub and repair should fix that, I suppose.
Since the original cluster has a replication factor of 3 - shouldn't the
import from 5 of the 6 snapshots contain all the data? Or is the
sstableloader tool clever enough to avoid importing duplicate data?
Thanks for hints,
Jan
for your data than deleting the entire set
of data on this replica. When that's done, restart the repair.
=Rob
Hello everyone,
after my initial tests everything is up and running; replacing a dead node
was no problem at all. Now I tried to set up encryption between the nodes. I
set up keystores and a truststore as described in the docs. Every node has
its own keystore with one private key and a truststore with
Hello,
while trying out cassandra I read about the steps necessary to replace a
dead node. In my test cluster I use a setup with num_tokens instead of
initial_token. How do I replace a dead node in this scenario?
Thanks,
Jan
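For vnode setups the usual approach is the cassandra.replace_address system property; a sketch (the IP and config path below are placeholders for the dead node's address and your install layout):

```shell
# On the fresh replacement node, before its first start: point
# replace_address at the dead node's IP. With num_tokens the node takes
# over the dead node's token ranges, so no initial_token handling is needed.
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=192.0.2.10"' \
    >> /etc/cassandra/cassandra-env.sh
service cassandra start
```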
Hello Aaron,
thanks for your reply.
Found it just an hour ago on my own; yesterday I accidentally looked at
the 1.0 docs. Right now my replacement node is streaming from the others
- then more testing can follow.
Thanks again,
Jan