Hi!
I posted this question on the hector-users list but no one answered, so I am
trying here as well.
I have production cluster running Cassandra 1.0.8 and a test cluster with
Cassandra 1.1.6.
In my Java app I do not use Maven, but rather have a lib directory with
the jar files I use.
When I ran
If you are talking about the CQL client that comes with Cassandra (cqlsh), it
is actually written in Python:
https://github.com/apache/cassandra/blob/trunk/bin/cqlsh
For information on datatypes (and conversion) take a look at the CQL definition:
I am wondering whether the huge commitlog size is the expected behavior or
not?
Nope.
Did you notice the large log size during or after the inserts?
If after, did the size settle?
Are you using commit log archiving ? (in commitlog_archiving.properties)
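For reference, the cassandra.yaml knobs that bound commitlog growth in the 1.x era look roughly like this (names are from that era's sample config; check your own cassandra.yaml for the exact defaults). Note that a commitlog segment can only be recycled once every column family with mutations in it has been flushed, which with ~700 CFs is a common reason the log grows large:

```yaml
# cassandra.yaml (1.x-era settings; verify names against your version)
# Cap on total commitlog disk usage; exceeding it forces flushes of the
# oldest dirty column families so segments can be recycled.
commitlog_total_space_in_mb: 4096
# Size of each individual commitlog segment file.
commitlog_segment_size_in_mb: 32
```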
and around 700 mini column families. But what is the upper bound? Any rules of thumb?
If you are using the off-heap cache, the upper bound is memory. If you are using
the on-heap cache, it's the JVM heap.
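In 1.1 the cache provider is what decides on-heap versus off-heap. A sketch of the relevant cassandra.yaml settings (names are from the 1.1-era sample config; treat the exact names as an assumption for your version):

```yaml
# cassandra.yaml (1.1-era): global row cache settings
row_cache_size_in_mb: 512
# SerializingCacheProvider stores rows off-heap (bounded by machine memory);
# ConcurrentLinkedHashCacheProvider keeps them on the JVM heap.
row_cache_provider: SerializingCacheProvider
```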
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
1. How many GCInspector warnings per hour are considered 'normal'?
None.
A couple during compaction or repair is not the end of the world. But if you
have enough per hour to think about, it's too many.
2. What should be the next thing to check?
Try to determine if the GC activity
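To put a number on "how many per hour", you can count the GCInspector lines in system.log. A hedged sketch, assuming a 1.0/1.1-era log-line format (adjust the regex to whatever your logs actually emit):

```python
import re

# Assumed GCInspector line shape (verify against your own system.log):
# "INFO [ScheduledTasks:1] 2012-11-18 09:05:01,123 GCInspector.java
#  (line 122) GC for ConcurrentMarkSweep: 2137 ms for 1 collections ..."
GC_LINE = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<hour>\d{2}):\d{2}:\d{2}.*"
    r"GCInspector.*GC for (?P<collector>\w+): (?P<ms>\d+) ms"
)

def gc_warnings_per_hour(lines):
    """Return {(date, hour): count} for GCInspector log lines."""
    counts = {}
    for line in lines:
        m = GC_LINE.search(line)
        if m:
            key = (m.group("date"), m.group("hour"))
            counts[key] = counts.get(key, 0) + 1
    return counts

sample = [
    "INFO [ScheduledTasks:1] 2012-11-18 09:05:01,123 GCInspector.java "
    "(line 122) GC for ConcurrentMarkSweep: 2137 ms for 1 collections",
    "INFO [ScheduledTasks:1] 2012-11-18 09:17:44,456 GCInspector.java "
    "(line 122) GC for ParNew: 412 ms for 2 collections",
]
print(gc_warnings_per_hour(sample))  # → {('2012-11-18', '09'): 2}
```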
As per the DataStax documentation, a manual compaction forces the admin to start
compaction manually and disables automated compaction (at least for major
compactions, but not minor compactions).
It does not disable compaction.
it creates one big file, which will not be compacted until there
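The "one big file" problem falls out of how size-tiered compaction buckets SSTables: minor compactions only merge groups of similar-sized files once enough of them accumulate, so one huge post-major-compaction SSTable sits alone in its bucket. A simplified Python sketch of the bucketing idea (thresholds are illustrative, not the exact Cassandra implementation):

```python
# Simplified size-tiered bucketing sketch. Real Cassandra groups SSTables
# whose size falls within [bucket_low * avg, bucket_high * avg] of a
# bucket's average and compacts a bucket once min_threshold files exist.
def buckets(sstable_sizes, ratio=0.5, min_threshold=4):
    groups = []
    for size in sorted(sstable_sizes):
        for g in groups:
            avg = sum(g) / len(g)
            if avg * (1 - ratio) <= size <= avg * (1 + ratio):
                g.append(size)
                break
        else:
            groups.append([size])
    # Only buckets with at least min_threshold files are compaction candidates.
    return [g for g in groups if len(g) >= min_threshold]

# Four small SSTables form a compactable bucket; the 5000 MB file left by a
# major compaction is alone in its bucket and is never picked.
print(buckets([10, 12, 11, 13, 5000]))  # → [[10, 11, 12, 13]]
```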
time (UTC)    0   1   2   3   4   5   6   7   8   9  10  11  12  13
Good value   88  44  26  35  26  86 187 251 455 389 473 367 453 373
C* counter  149  82  45  68  (remaining values truncated in the original)
If you are using the off-heap cache, the upper bound is memory. If you are
using the on-heap cache, it's the JVM heap.
But as I said earlier, I could not watch the usage of JVM heap while
reading saved caches
Hi Aaron,
Thank you very much for the reply.
The 700 CFs were created in the beginning (before any insertion.)
I did not do anything with commitlog_archiving.properties, so I guess
I was not using commit log archiving.
What I did was a lot of insertions (and some deletions)
using
What consistency level are you writing with? If you were writing with ANY,
try writing with a higher consistency level.
-Tupshin
On Nov 18, 2012 9:05 PM, Chuan-Heng Hsiao hsiao.chuanh...@gmail.com
wrote:
Hi Aaron,
Thank you very much for the reply.
The 700 CFs were created in the
I have RF = 3. Read/Write consistency has already been set as TWO.
It did seem that the data were not yet consistent.
(There are some CFs that I expected to be empty after the operations, but I
still got some data back, and the amount of data decreased each time I
retried fetching all the data from those CFs.)
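For reference, the usual overlap arithmetic behind these consistency levels: a read is guaranteed to see the latest successful write when the read and write replica counts overlap, i.e. R + W > RF. A minimal sketch:

```python
# With replication factor RF, a read at consistency level R and a write at
# W touch overlapping replica sets whenever R + W > RF, so the read sees
# at least one replica holding the latest successful write.
def strongly_consistent(rf, r, w):
    return r + w > rf

print(strongly_consistent(3, 2, 2))  # → True  (RF=3, R=W=TWO)
print(strongly_consistent(3, 1, 1))  # → False (RF=3, R=W=ONE)
```

With RF = 3 and R = W = TWO the overlap condition holds, so for writes that fully succeeded the overlap itself is not the issue.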
Hello Aaron,
Thanks a lot for the reply.
Looks like the documentation is confusing. Here is the link I am referring
to: http://www.datastax.com/docs/1.1/operations/tuning#tuning-compaction
It does not disable compaction.
As per the above URL: "After running a major compaction, automatic
yes, https://issues.apache.org/jira/browse/CASSANDRA-1302
thanks
On Wed, Nov 14, 2012 at 2:04 AM, Tyler Hobbs ty...@datastax.com wrote:
As far as I know, the row cache has never been populated by
get_range_slices(), only normal gets/multigets. The behavior is this way
because
I think Timmy might be referring to the upcoming native CQL Java driver
that might be coming with 1.2 - It was mentioned here:
http://www.datastax.com/wp-content/uploads/2012/08/7_Datastax_Upcoming_Changes_in_Drivers.pdf
I would also be interested in testing that, but I can't find it from