There are a couple of steps you can take if compaction is causing GC pressure:
- If you have a lot of wide rows, consider reducing the
in_memory_compaction_limit_in_mb yaml setting. This will slow down compaction
but will reduce memory usage.
- Reduce concurrent_compactors.
Both of these may slow compaction down.
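A minimal cassandra.yaml sketch of the two knobs (the values shown are just a conservative starting point, not a recommendation):

    # Rows above this limit take a slower two-pass path instead of being
    # compacted in memory; lowering it trades compaction speed for heap.
    in_memory_compaction_limit_in_mb: 32
    # Run compactions one at a time to cap concurrent memory pressure.
    concurrent_compactors: 1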
optimize Cassandra for performance in general
It's a lot easier to answer specific questions. Cassandra is fast, and there
are ways to make it faster in specific use cases.
improve the performance for select * from X type of queries
Ah. Are you specifying a row key or are you trying to get
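For illustration, the difference the row key makes (the table and key names below are hypothetical):

    -- touches every row on every node; cost grows with the CF
    SELECT * FROM X LIMIT 10000;
    -- routed only to the replicas that own 'some_key'
    SELECT * FROM X WHERE KEY = 'some_key';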
Thanks Omid,
I've switched to Sun's Java and now it works just fine.
rds /Robban
From: Omid Aladini [mailto:omidalad...@gmail.com]
Sent: 13 August 2012 18:14
To: user@cassandra.apache.org
Subject: Re: 1.1.3 crash when initializing column family
It works
From: mdione@orange.com [mailto:mdione@orange.com]
In particular, I'm thinking of a restore like this:
* the app does something stupid.
* (if possible) I stop writes to the KS or CF.
In fact, given that I'm about to restore the KS/CF to an old state, I can
safely do this:
*
Hi!
It helps, but before I take any further action I want to give you some more
info and ask some questions:
*Related Info*
1. According to my yaml file (where do I see these parameters in JMX?
I couldn't find them):
in_memory_compaction_limit_in_mb: 64
concurrent_compactors: 1, but it
in the initial incremental backup implementation,
the hardlinking to the backup dir was in the CFS.addSSTable() code, so
it's part of the Cassandra code.
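Conceptually the hardlink step is tiny; here is a hypothetical Java sketch of the idea (not the actual Cassandra code, and the names are made up):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    final class BackupLinker {
        // Hard-link a freshly flushed SSTable into the per-CF backups
        // directory so an off-node tool can pick it up later.
        static void linkToBackups(Path sstable) throws IOException {
            Path backups = sstable.getParent().resolve("backups");
            Files.createDirectories(backups);
            // No data is copied: both names point at the same
            // immutable SSTable file.
            Files.createLink(backups.resolve(sstable.getFileName()), sstable);
        }
    }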
I looked at Priam,
https://github.com/Netflix/Priam/blob/master/priam/src/main/java/com/netflix/priam/backup/IncrementalBackup.java
this code
According to cfstats there are some CFs with high Compacted row maximum
sizes (1131752, 4866323 and 25109160). Other CFs' max sizes are around 100. Are
these considered to be problematic, and what can I do to solve that?
They are only about 1, 4 and 25 MB (cfstats reports these sizes in bytes). Not too big.
What should be the values of
The Priam code is looking for the keyspace/columnfamily/backups directory
created by Cassandra during incremental backups. If it finds it, the files are
uploaded to S3.
It's taking the built-in incremental backups off-node. (AFAIK)
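For illustration, the layout Priam scans, assuming the default data directory (the keyspace, CF and file names here are hypothetical):

    $ ls /var/lib/cassandra/data/MyKeyspace/MyCF/backups/
    MyCF-hd-42-Data.db  MyCF-hd-42-Index.db  MyCF-hd-42-Filter.db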
Cheers
-
Aaron Morton
Freelance Developer
The DataStax documentation concisely describes how to configure the
properties and ensure they are used in client access. The question is this:
when using the Thrift API login, does C* use the Authentication class to
determine access privileges based on the access/passwd properties?
These questions
ah... my bad, thanks for the explanation!
access.properties and passwd.properties are only used by the example
implementations, SimpleAuthenticator and SimpleAuthority. Your own
implementation (which requires a custom class) certainly does not have to
use these; it can use any other source to make the authn/authz decision.
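For reference, a sketch of wiring up the example implementations (class names as in the 1.x examples; the file paths are hypothetical, and SimpleAuthenticator reads them from JVM system properties):

    # cassandra.yaml
    authenticator: org.apache.cassandra.auth.SimpleAuthenticator
    authority: org.apache.cassandra.auth.SimpleAuthority
    # cassandra-env.sh
    #   JVM_OPTS="$JVM_OPTS -Dpasswd.properties=/etc/cassandra/passwd.properties"
    #   JVM_OPTS="$JVM_OPTS -Daccess.properties=/etc/cassandra/access.properties"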
Previously, when a node died, I remember the documentation described that it
was better to assign T-1 to the new node,
where T was the token of the dead node.
The new doc for 1.x here
http://wiki.apache.org/cassandra/Operations#Replacing_a_Dead_Node
shows a new way to pass in
Using this method, when choosing the new token, should we still use T-1?
(AFAIK) No.
replace_token is used when you want to replace a node that is dead. In this
case the dead node will be identified by its token.
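For illustration, how the flag from the wiki page above is passed (the token value here is hypothetical; use the dead node's exact token):

    # cassandra-env.sh on the replacement node
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_token=85070591730234615865843651857942052864"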
If so, would the duplicate token (same token but different IP) cause
Hi, I have a CF with a composite type (LongType, IntegerType) with some data
like this:
RowKey: hihi
= (column=1000:1, value=616263)
= (column=1000:2, value=6465)
= (column=1000:3, value=66)
= (column=1000:4, value=6768)
= (column=2000:1, value=616263)
= (column=2000:2, value=6465)
=
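For reference, a CF with this comparator can be created in cassandra-cli like so (the CF name is hypothetical; the values above are hex-encoded bytes, e.g. 616263 is "abc"):

    create column family MyCF
      with comparator = 'CompositeType(LongType, IntegerType)'
      and default_validation_class = 'BytesType';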
Aaron,
Thank you very much. I will do as you suggested.
One last question regarding restart:
I assume I should do it node by node.
Is there anything to do before that, like drain or flush?
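For what it's worth, a sketch of a typical per-node sequence (assumes a packaged install; adjust the restart command to your setup):

    nodetool -h <host> drain     # flush memtables; the node stops accepting writes
    sudo service cassandra restart
    nodetool -h <host> ring      # wait for Up/Normal before moving on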
I am also considering enabling incremental backups on my cluster. Currently
I take a daily full snapshot
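For reference, incremental backups are a single cassandra.yaml switch, read at startup (a sketch):

    # hard-links each newly flushed SSTable into
    # <data_dir>/<keyspace>/<cf>/backups/
    incremental_backups: true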
Thanks Aaron, it has been a while since I last checked the code; I'll read
it to understand it better.
We use Priam to replace nodes using replace_token. We do see some issues
with it (currently on 1.0.9, as well as earlier versions).
Apparently there are some known issues with replace_token. We have experienced
the old nodes sometimes hanging around as unreachable nodes when