If you are talking about 1.2.x then I also have memory problems on the
idle cluster: Java memory constantly and slowly grows up to the limit, then
spends a long time in GC. I never saw such behaviour with 1.0.x and 1.1.x,
where on an idle cluster the Java memory stays at the same value.
No, I am running Cassandra
Just got a very long GC again. What am I to look for in the logging I just
enabled?
2013/6/17 Joel Samuelsson samuelsson.j...@gmail.com
If you are talking about 1.2.x then I also have memory problems on the
idle cluster: Java memory constantly and slowly grows up to the limit, then
spends a long time
Hi,
Thank you for the information. I have increased the RF, and I think the
increase we have seen in CPU load etc. is due to the counter CFs,
which are almost write-only (reads a few times a day). The load
increase is noticeable, but no problem. Repair went fine. But I
noticed that when I increased
My bet is 5MB is the low end, since many people go with the default. We upped
it to 10MB because at that time no one knew what a good size was, and
the default was only 5MB.
Dean
From: Franc Carter franc.car...@sirca.org.au
Reply-To:
Hi,
I've been running a benchmark on Cassandra and I'm facing a problem
regarding the size of the database.
I performed a load phase and then, when running nodetool ring, I got the
following output:
ubuntu@domU-12-31-39-0E-11-F1:~/cassandra$ bin/nodetool ring
Address DC
A bit of background:
We are in Beta; we have a very small (2 node) cluster that we created with
1.2.1. Being new to this we did not enable vnodes, and we got bitten hard by
the default token generation in production after setting up lots of
development QA clusters without running into the
Load is the size of the storage on disk, as I understand it. This can
fluctuate during normal usage even if records are not being added or
removed; a node's load may be reduced during compaction, for example.
During compaction, especially if you use the Size Tiered compaction strategy
(the default),
At the DataStax Cassandra Summit 2013 last week, Al Tobey from Ooyala
recommended sstable_size_in_mb be set to 256MB unless you have a fairly
small data set. The talk was Extreme Cassandra Optimization, and it was
superbly informative; I highly recommend it once DataStax gets the videos
online.
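For reference, a sketch of how that setting is applied (the option belongs to the Leveled Compaction strategy and is spelled sstable_size_in_mb; the keyspace and table names here are hypothetical):

```sql
-- Hypothetical keyspace/table; sets the per-SSTable target size for LCS.
ALTER TABLE my_keyspace.my_cf
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 256};
```

The change applies per column family, so it can be raised on large CFs while leaving small ones at the default.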
On Sun, Jun 16, 2013 at 5:46 PM, Radim Kolar h...@filez.com wrote:
In case you do not know yet, OpsCenter is sending certain data about your
Cassandra installation back to DataStax.
This fact is not visibly presented to the user; it's the same spyware crap
as EHCache.
Could you expand on this? What
On Mon, Jun 17, 2013 at 8:37 AM, Ben Boule ben_bo...@rapid7.com wrote:
We are in Beta, we have a very small (2 node) cluster that we created with
1.2.1.
https://issues.apache.org/jira/browse/CASSANDRA-5525
May be relevant?
What RF is this cluster? Given beta and cluster size and data size
On Mon, Jun 17, 2013 at 5:33 AM, Vegard Berget p...@fantasista.no wrote:
invalid counter shard detected; (X, Y, Z) and (X, Y, Z2) differ only in
count; will pick highest to self-heal; this indicates a bug or corruption
generated a bad counter shard
That looks correct, and I just double checked that xget behaves normally
for me for that case. What does it actually print? Can you try not
unpacking the tuple in your inner for-loop and print that?
Also, there's a pycassa mailing list (pycassa-disc...@googlegroups.com)
that would be a better
On Wed, May 29, 2013 at 9:33 AM, Hiller, Dean dean.hil...@nrel.gov wrote:
QUESTION: I am assuming 10 compactions should be enough to put enough load
on the disk/CPU/RAM etc., or do you think I should go with 100 CFs?
98% of our data is all in this one CF.
Compaction can only really
Hi all,
I'm experiencing very similar effects. Did you (or anyone for that
matter) have/solve this issue?
I have a 3 node cluster with vnodes having the same number of tokens (256).
In fact, all nodes are configured identically and share similar/same
hardware. The cassandra.yaml settings are fairly standard
OpsCenter collects anonymous usage data and reports it back to DataStax.
For example, number of nodes, keyspaces, column families, etc. Stat
reporting isn't required to run OpsCenter, however. To turn this feature
off, see the docs here (stat_reporter):
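As a sketch of what those docs describe (the file location varies by install method), the reporter is disabled in opscenterd.conf by setting its interval to zero:

```
[stat_reporter]
interval = 0
```

After editing the file, restart the opscenterd service for the change to take effect.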
Cassandra makes the totally reasonable assumption that the entire
cluster is in one routable address space. We unfortunately had a
situation where:
* nodes can talk to each other in the same dc on an internal address,
but not talk to each other over their external 1:1 NAT address.
* nodes can
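The usual approach to that split-address topology (a sketch; both options exist in cassandra.yaml, and the addresses below are hypothetical) is to bind to the internal address while advertising the external NAT address to peers:

```yaml
# cassandra.yaml -- addresses are hypothetical examples
listen_address: 10.0.0.5         # internal address, used within the DC
broadcast_address: 203.0.113.5   # external 1:1 NAT address advertised to other nodes
```

Whether nodes in the same DC can short-circuit to the internal address then depends on the snitch in use.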
On Mon, May 13, 2013 at 9:19 PM, Bryan Talbot btal...@aeriagames.com wrote:
Can the index sample storage be treated more like key cache or row cache
where the total space used can be limited to something less than all
available system ram, and space is recycled using an LRU (or configurable)
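A rough, hypothetical back-of-envelope (all numbers are assumptions for illustration, not measurements): the heap held by index samples scales with keys / index_interval, so raising the interval shrinks it proportionally:

```shell
# All numbers here are assumptions for illustration, not measurements.
keys=1000000000       # hypothetical: 1 billion row keys on the node
interval=128          # index_interval default in this era
bytes_per_sample=32   # rough assumed per-sample overhead (key bytes + position)

# One sample is kept per index_interval keys, so heap use scales as
# keys / interval * bytes_per_sample.
heap_bytes=$(( keys / interval * bytes_per_sample ))
echo "approx index sample heap: $(( heap_bytes / 1024 / 1024 )) MB"
```

Doubling index_interval halves this figure, at the cost of slightly more disk seeking per key lookup.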
From: Robert Coli rc...@eventbrite.com
To: user@cassandra.apache.org
Sent: Monday, June 17, 2013 3:28 PM
Subject: Re: index_interval
On Mon, May 13, 2013 at 9:19 PM, Bryan Talbot btal...@aeriagames.com wrote:
Can the index sample storage be treated more
Hi,
We have a custom authenticator that works well with Cassandra 1.1.5.
When upgrading to C* 1.2.5, authentication failed. It turns out that in
ClientState.login, we make a call to Auth.isExistingUser(user.getName())
if the AuthenticatedUser is not the Anonymous user. This isExistingUser method
It seems to me that isExistingUser should be pushed down to the
IAuthenticator implementation.
Perhaps you should add a ticket to
https://issues.apache.org/jira/browse/CASSANDRA
On 06/17/2013 05:12 PM, Bao Le wrote:
Hi,
We have a custom authenticator that works well with Cassandra
Look for a promotion failure. Bingo if it happened at the time.
Otherwise, post the relevant portion of the log here; someone may find a
hint.
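A minimal sketch of what to look for, assuming GC logging was enabled with something like -Xloggc and -XX:+PrintGCDetails (the log path below is hypothetical):

```shell
# Hypothetical GC log path; adjust to wherever -Xloggc: points on your nodes.
GC_LOG=/var/log/cassandra/gc.log

# Promotion failures and concurrent mode failures are the usual culprits
# behind multi-second CMS pauses.
grep -nE 'promotion failed|concurrent mode failure' "$GC_LOG"

# Flag any stop-the-world pause whose wall-clock ("real=") time exceeds 1 second.
awk -F'real=' 'NF>1 { split($2, a, " "); if (a[1] + 0 > 1.0) print }' "$GC_LOG"
```

If the timestamps of these lines line up with the observed pause, that points at old-gen pressure rather than anything Cassandra-specific.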
On Mon, Jun 17, 2013 at 5:51 PM, Joel Samuelsson
samuelsson.j...@gmail.comwrote:
Just got a very long GC again. What am I to look for in the logging I just
I have a node in my ring (1.2.5) that when it was set up, had the wrong
number of vnodes assigned (double the amount it should have had).
As a result, and because we can't reduce the number of vnodes on a machine
(at least at this point), I need to decommission the node.
The problem is that