- use the join command
normally join-ring is only used when the node is having some sort of problem.
The best approach to bringing up a new node is to explicitly set the token and
start it with auto_bootstrap=true.
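To make that advice concrete, here is a minimal sketch of computing balanced initial tokens for a RandomPartitioner ring. The 4-node count is a hypothetical example; adjust N for your cluster:

```shell
# Balanced tokens for RandomPartitioner: token_i = i * 2**127 / N
# Hypothetical example: a 4-node ring.
N=4
for i in $(seq 0 $((N - 1))); do
  python3 -c "print($i * (2**127) // $N)"
done
```

Each value then goes into initial_token in the corresponding node's cassandra.yaml before that node is started.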
There have been two similar reports in the last few days.
The joining node is unable to discover the cluster information it needs to find
the correct place to join. However, the code appears to have checks in place
to prevent that from happening.
If you have time, can you please try to
Ok I'll give it a try.
Thank you Aaron
On 3/15/12 9:16 AM, aaron morton wrote:
- use the join command
normally join-ring is only used when the node is having some sort of
problem.
The best approach to bringing up a new node is to explicitly set the token
and start it with auto_bootstrap=true.
Hi,
I'm using cassandra (1.0.8) embedded in the same jvm as a
web application. All data is available on all nodes (R = N), read /
write CL.ONE.
Is it correct to assume in the following scenario that the newly added
node has all data locally and that secondary indexes are fully created?
Is there anything going on in the logs ? Are nodes going up and down ? Can you
see any messages about delivering hints ?
If the query to read the hints errors, it will log "HintsCF getEPPendingHints
timed out" at INFO level.
Also checking, do the hinted_handoff_* settings in cassandra.yaml
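For reference, a sketch of the cassandra.yaml fragment in question (the values shown are examples, not a recommendation):

```yaml
# Hints are stored for nodes seen as down, then replayed when they return.
hinted_handoff_enabled: true
# Stop accumulating hints for a node that has been down longer than this.
max_hint_window_in_ms: 3600000  # example: one hour
```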
is there any disadvantage to using supercolumns here?
There are some http://wiki.apache.org/cassandra/CassandraLimitations
I would avoid them if you can. The one thing you cannot do when using
CompositeTypes for column names is a range delete. If you delete a super
column, then you delete
After clearing the system files I was able to successfully join the node.
thanks.
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Thursday, March 15, 2012 2:13 PM
To: user@cassandra.apache.org
Subject: Re: Adding nodes to cluster (Cassandra 1.0.8)
There have been two similar reports
The simple thing to say is: if you send a batch_mutate, the order in which the
rows are written is undefined. So you should not make assumptions such as: if
row C is stored, rows A and B have been stored as well.
They may do but AFAIK it is not part of the API contract.
For the thrift API batch_mutate
Thanks Tyler,
I see that cassandra.yaml has endpoint_snitch:
com.datastax.bdp.snitch.DseSimpleSnitch. Will this pick up the
configuration from the cassandra-topology.properties file as does the
PropertyFileSnitch ? Or is there some other way of telling it which nodes
are in which DC?
Cheers,
On Thursday 15 of March 2012, aaron morton wrote:
1. a running cluster of N=3, R=3
2. upgrade R to 4
You should not be allowed to set the RF higher than the number of nodes.
I wonder why is that restriction ?
It would be easier to increase RF first (similar to having the new node down), add
Watched the video, really good!
One question:
I wonder if it is possible to mix counter columns in Cassandra 1.0.7 with
regular columns in the same CF.
Even if it is possible, I am not sure I should mix them as I use Hector,
and this means I will have to use both Hector and CQL.
Am I right?
No, it's not possible.
On 15/03/2012 10:53, Tamar Fraenkel wrote:
Watched the video, really good!
One question:
I wonder if it is possible to mix counter columns in
Cassandra 1.0.7 with regular columns in the same CF.
I'm not sure why this is not allowed. As long as I do not use CL.all there
will be enough nodes available to satisfy the read / write (at least when I
look at ReadCallback and the WriteResponseHandler). Or am I missing
something here?
According to
Found the debian init script, you'll probably need to change the paths:
http://svn.apache.org/repos/asf/cassandra/trunk/debian/init
You can find the init script for other distros if you peel back to
'/cassandra/trunk/'.
--
Nick Summerlin
Hi,
We are working on a project that initially is going to have very little data,
but we would like to use Cassandra to ease the future scalability. Due to
budget constraints, we were thinking to run a single node Cassandra for now and
then add more nodes as required.
I was wondering if it is
I added a second node to a single-node ring. RF=1. I can't get the new
node to receive any data. Logs look fine. Here's what nodetool reports:
# nodetool -h localhost ring
Address  DC  Rack  Status  State  Load  Owns  Token
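One thing worth checking in a case like this: with RF=1 the new node only receives data for token ranges it actually owns, so the ring usually needs rebalancing. A sketch (hypothetical host addresses) of computing balanced tokens for a 2-node RandomPartitioner ring and the nodetool commands that would apply them:

```shell
# token_i = i * 2**127 / N for a balanced 2-node ring
T0=0
T1=$(python3 -c "print(1 * (2**127) // 2)")
# Print (rather than run) the rebalance commands; hosts are hypothetical.
echo "nodetool -h 10.0.0.1 move $T0"
echo "nodetool -h 10.0.0.2 move $T1"
```

After a move completes, running nodetool cleanup on the node that gave up ranges removes the data it no longer owns.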
Hi Drew,
One other disadvantage is the lack of consistency level and
replication. Both are part of the high availability / redundancy story. So you
would really need to backup your single-node-cluster to some other
external location.
Good luck!
2012/3/15 Drew Kutcharian d...@venarc.com
Hi,
We
Hi,
I'm using a 4 node cassandra cluster running 0.8.10 with rf=3. It's a brand
new setup.
I have a single col family which contains about 10 columns. I have enabled
secondary indices on 3 of them. I used sstableloader to bulk load some data
into this cluster.
I poked around the logs and saw the
Hi, I'm testing the community-version of Cassandra 1.0.8.
We are currently on 0.8.7 in our production-setup.
We have 3 Column Families that each takes between 20 and 35 GB on disk
per node. (8*2 nodes total)
We would like to change to Leveled Compaction - and even try compression
as well to
Sorry for that last message, I was confused because I thought I needed to
use the DseSimpleSnitch but of course I can use the PropertyFileSnitch and
that allows me to get the configuration with 3 data centers explained.
Cheers,
Alex
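For anyone following along: the PropertyFileSnitch reads cassandra-topology.properties, which maps node addresses to DC:rack pairs. A minimal sketch, with hypothetical addresses and the three data centers mentioned:

```properties
# <node IP>=<data center>:<rack>
192.168.1.101=DC1:RAC1
192.168.2.101=DC2:RAC1
192.168.3.101=DC3:RAC1
# Nodes not listed above fall back to this.
default=DC1:RAC1
```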
On Thu, Mar 15, 2012 at 10:56 AM, Alexandru Sicoe
Heya
I'd suggest staying away from Leveled Compaction until 1.0.9.
For the why see this great explanation I got from Maki Watanabe on the
list:
http://mail-archives.apache.org/mod_mbox/cassandra-user/201203.mbox/%3CCALqbeQbQ=d-hORVhA-LHOo_a5j46fQrsZMm+OQgfkgR=4rr...@mail.gmail.com%3E
Keep an eye
So long as data loss and downtime are acceptable risks a one node cluster
is fine.
Personally this is usually only acceptable on my workstation, even my dev
environment is redundant, because servers fail, usually when you least want
them to, like for example when you've decided to save costs by
Thanks for the comments, I guess I will end up doing a 2 node cluster with
replica count 2 and read consistency 1.
-- Drew
On Mar 15, 2012, at 4:20 PM, Thomas van Neerijnen wrote:
So long as data loss and downtime are acceptable risks a one node cluster is
fine.
Personally this is usually
The documentation is correct.
I was mistakenly remembering discussions in the past about RF > #nodes.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 16/03/2012, at 4:34 AM, Doğan Çeçen wrote:
I'm not sure why this is not allowed. As