Well, that didn't go away after I removed all the caches. What should I do
now?
On Wed, Oct 10, 2012 at 2:15 PM, Manu Zhang owenzhang1...@gmail.com wrote:
Exception encountered during startup: java.lang.RuntimeException:
Attempting to load already loaded column family system_traces.sessions
Hi!
I am re-posting this, now that I have more data and still an *unbalanced ring*:
3 nodes,
RF=3, RCL=WCL=QUORUM
Address       DC       Rack  Status  State   Load      Owns  Token
                                                             113427455640312821154458202477256070485
x.x.x.x       us-east  1c    Up      Normal  24.02 GB
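For context, a balanced RandomPartitioner ring spaces initial tokens evenly over the 0..2**127 range; the token in the listing above, 113427455640312821154458202477256070485, is exactly the third of three evenly spaced tokens. A quick Python sketch of the calculation (this assumes RandomPartitioner; the node count is the only input):

```python
def balanced_tokens(num_nodes):
    """Evenly spaced initial tokens for a RandomPartitioner ring (range 0..2**127)."""
    ring_size = 2 ** 127
    return [i * ring_size // num_nodes for i in range(num_nodes)]

# For a 3-node cluster:
for token in balanced_tokens(3):
    print(token)
# The last token printed is 113427455640312821154458202477256070485,
# matching the token in the ring listing above.
```

Since the tokens are balanced and RF=3 on 3 nodes means every node holds a full replica, the load difference in the listing likely comes from something other than token placement (e.g. compaction state or data that has not been cleaned up).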
Hi Guys,
What known critical bugs are there that would prevent using 1.2 beta 1 in
production?
We don't use cql and secondary indexes.
--
Best regards,
Zotov Alexey
Grid Dynamics
Skype: azotcsit
Hi,
Same thing here:
2 nodes, RF = 2. RCL = 1, WCL = 1.
Like Tamar, I have never run a major compaction; I run repair once a week on each node.
10.59.21.241  eu-west  1b  Up  Normal  133.02 GB  50.00%  0
10.58.83.109  eu-west  1b  Up  Normal   98.12 GB  50.00%
Hi List
I'd like to migrate my nodes in a cluster to new hardware, moving one node
at a time.
I'm running the cluster in Amazon, so I don't get to pick the IP address of
each host myself.
I'd like to decommission, say, the node with token 0, and bring that node up
on the new hardware (which will
The main problem is that this sweet spot is very narrow. We can't have lots
of CFs, we can't have long rows, and we end up with an enormous number of
huge composite row keys and stored metadata about those keys (keep in
mind the overhead of such a scheme, but it looks like nobody really cares
about it
I think Cassandra should provide a configurable option, on a per-column-family
basis, to sort columns by timestamp rather than by column name.
This would be really helpful for maintaining time-sorted columns without using
up the column name as a timestamp, which might otherwise be used to store
most
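As an aside, CQL 3 can already express time-ordered storage through clustering order. A minimal sketch of that workaround, with a hypothetical table (the table and column names here are illustrative, not from the thread):

```sql
-- Rows keyed by event source; columns stored newest-first on disk,
-- so slice queries return the most recent events without reversing.
CREATE TABLE events (
    source  text,
    ts      timestamp,
    payload text,
    PRIMARY KEY (source, ts)
) WITH CLUSTERING ORDER BY (ts DESC);
```

This still uses the timestamp as part of the column key under the hood, so it does not free the column name for other data, which is the limitation the request above is about.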
On Tuesday 09 of October 2012, Brian Tarbox wrote:
I can't imagine why this would be a problem, but I wonder if anyone has
experience with running a mix of 32- and 64-bit nodes in a cluster.
We are running a mixed-userspace 64/32-bit (all kernels 64-bit) Linux 1.0.10
cluster for our daily
I think that would be cool.
/Martin Koch - Issuu - Senior Software Architect
On Wed, Oct 10, 2012 at 11:44 AM, Ertio Lew ertio...@gmail.com wrote:
I think Cassandra should provide a configurable option, on a per-column-family
basis, to sort columns by timestamp rather than by column name.
Well, you could use Amazon VPC, in which case you DO pick the IP yourself ;) ... it
makes life a bit easier.
Dean
From: Martin Koch m...@issuu.com
Reply-To: user@cassandra.apache.org
To: user@cassandra.apache.org
I do believe they could solve this if they wanted to. We are now streaming
5000 virtual CFs into one CF with PlayOrm. Our plan now is to use Storm to do
the processing in place of map/reduce. Each virtual CF can also be
partitioned (you choose the column that is the partition key).
So I
I know what happened here. The node encountering the exception during startup
is running 1.2, while another node is running 1.2-beta2.
https://issues.apache.org/jira/browse/CASSANDRA-4416 includes metadata for
the system keyspace itself in the schema_* tables. Hence, when both nodes were up,
the 1.2-beta2 node streamed
https://issues.apache.org/jira/browse/CASSANDRA/fixforversion/12323284
On Wed, Oct 10, 2012 at 1:41 AM, Alexey Zotov azo...@griddynamics.com wrote:
Hi Guys,
What known critical bugs are there that would prevent using 1.2 beta 1 in
production?
We don't use cql and secondary indexes.
--
Major compaction in production is fine; however, it is a heavy operation on
the node and will take I/O and some CPU.

The only time I have seen this happen is when I have changed the tokens in
the ring, e.g. with nodetool movetoken. Cassandra does not auto-delete data
that it doesn't use anymore just
If you have N nodes in your cluster, add N new nodes using the new
hardware, then decommission the old N nodes.
(And migrate to VPC like Dean said.)
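Sketched as nodetool steps (the host names are placeholders, and this assumes auto_bootstrap is enabled on the new nodes):

```shell
# Bring up each new node and let it bootstrap, then verify it joined:
nodetool -h new-node-1 ring

# Once all N new nodes are Up/Normal, retire the old nodes one at a time;
# decommission streams the departing node's ranges to the remaining replicas:
nodetool -h old-node-1 decommission

# Confirm the node has left the ring before decommissioning the next one:
nodetool -h new-node-1 ring
```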
On Wed, Oct 10, 2012 at 5:23 AM, Hiller, Dean dean.hil...@nrel.gov wrote:
Well, you could use amazon VPC in which case you DO pick the IP yourself
Hi!
Apart from being a heavy load (the compaction), will it have other effects?
Also, will cleanup help if I have replication factor = number of nodes?
Thanks
Tamar Fraenkel
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
I witnessed the same behavior as reported by Edward and James.
Removing the host from its own seed list does not solve the problem. Removing
it from the config of all nodes and restarting each, then restarting the failed
node, worked.
Ron
On Sep 12, 2012, at 4:42 PM, Edward Sargisson wrote:
It should not have any other impact except increased usage of system
resources.
And I suppose cleanup would not have an effect (over normal compaction) if
all nodes contain the same data.
On Wed, Oct 10, 2012 at 12:12 PM, Tamar Fraenkel ta...@tok-media.com wrote:
Hi!
Apart from being heavy
Hi!
Thanks for the answer.
I don't see much change in the load this Cassandra cluster is under, so why
the sudden surge of such messages?
What I did notice while looking at the logs (we are also running
OpsCenter) is that there is some correlation between the dropped reads and
flushes of
On Tue, Oct 9, 2012 at 12:56 PM, Oleg Dulin oleg.du...@gmail.com wrote:
My understanding is that the repair has to happen within the gc_grace period.
[ snip ]
So the question is: is this still needed? Do we even need to run nodetool
repair?
If Hinted Handoff works in your version of Cassandra,
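For reference, the practice being questioned: run anti-entropy repair on every node at least once per gc_grace_seconds (the default is 864000 seconds, i.e. 10 days), so deletes are consistent on all replicas before tombstones are purged. A hypothetical cron entry (the keyspace name is illustrative):

```shell
# Weekly repair of this node's primary ranges, safely inside the
# default 10-day gc_grace_seconds window:
0 2 * * 0  nodetool -h localhost repair -pr my_keyspace
```

Staggering the schedule across nodes avoids running repair everywhere at once.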