A restart of node1 fixed the problem.
The only thing I saw in the log of node1 before the problem was the following:
InetAddress /172.27.70.135 is now dead.
InetAddress /172.27.70.135 is now UP
After this, the nodetool ring command showed node 172.27.70.135 as dead.
You mention a stored ring
Yes, I already did a repair and cleanup. Currently my ring looks like this:
Address    DC           Rack   Status  State    Load     Owns     Token
***.89     datacenter1  rack1  Up      Normal   2.44 GB  50.00%   0
***.135    datacenter1  rack1  Up      Normal   6.99 GB
the bulk loader (http://www.datastax.com/dev/blog/bulk-loading), or
the basic and well-known SSTable export/import tools:
http://wiki.apache.org/cassandra/Operations#Import_.2BAC8_export
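For the bulk-loader route, the invocation is roughly as follows. This is only a sketch: the keyspace name and paths below are made up, the on-disk layout sstableloader expects varies by version, and the actual streaming step needs a live ring, so it is left commented out.

```shell
# Sketch only: "MyKeyspace" and all paths are hypothetical.
# sstableloader streams every SSTable it finds in a directory whose
# name matches the target keyspace, so lay the files out first:
SSTABLE_DIR=/tmp/load/MyKeyspace
mkdir -p "$SSTABLE_DIR"
# ... copy MyCF-*-Data.db / -Index.db / -Filter.db into $SSTABLE_DIR ...

# Against a live cluster you would then run (commented out here,
# since it requires a running Cassandra ring to stream to):
# sstableloader "$SSTABLE_DIR"
echo "would stream: $SSTABLE_DIR"
```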
On February 2, 2012 at 15:16, Hefeng Yuan wrote:
Hi,
We need to clone the data between 2 clusters. These 2
Hello, when you set 'commitlog_sync: batch' on all the nodes in a
multi-DC cluster and call writes with CL=ALL, does the operation wait
till the write is flushed to all the disks on all the nodes ?
Thanks.
Thank you Eric :)
Regards,
Tamil Selvan
On Wed, Feb 1, 2012 at 10:08 PM, Eric Evans eev...@acunu.com wrote:
On Wed, Feb 1, 2012 at 5:50 AM, Tamil selvan R.S tamil.3...@gmail.com
wrote:
Where can I follow the progress of Cassandra CQL development and its
release schedule?
We don't
Hi!
We're experimenting with streaming from Hadoop to Cassandra using
BulkOutputFormat, on the cassandra-1.1 branch.
Are there any specific settings we should tune on the Cassandra servers
in order to get the best streaming performance?
Our Cassandra servers have 16 cores (including HT cores)
I'm new to administering Cassandra so please be kind!
I've been tasked with upgrading a 0.6 cluster to 1.0.7. In doing this I
need a rollback plan in case things go sideways since my window for the
upgrade is fairly small. So we've decided to stand up a brand new cluster
running 1.0.7 and then
You mention a “stored ring view”. Can it be that this stored ring view was
out of sync with the actual (gossip) situation?
After checking the code, it holds less than I thought :)
Stored ring state is just the map from IP address to token; I thought it had a
little more in there.
Cheers
Yes.
Be aware that the commit log will block and not flush for
commitlog_sync_batch_window_in_ms
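The two settings involved look like this in cassandra.yaml (the window value below is illustrative, not a recommendation):

```yaml
# cassandra.yaml -- illustrative fragment
# In batch mode a write is not acknowledged until the commit log has
# been fsynced; writes arriving within the batch window are grouped
# into one fsync, and the commit log blocks for up to that window.
commitlog_sync: batch
commitlog_sync_batch_window_in_ms: 50
```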
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 3/02/2012, at 5:44 AM, A J wrote:
Hello, when you set 'commitlog_sync: batch' on all the
It will make your life *a lot* easier to do a 1-to-1 migration from the 0.6
cluster to the 1.X one. If you want to add nodes, do it once you have 1.X happy
and stable; if you need to reduce nodes, threaten to hold your breath until you
pass out.
You can then simply:
* drain and snapshot
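The snapshot step is cheap because Cassandra snapshots are hard links to the existing immutable SSTable files, not copies. A minimal illustration of that mechanism (the file names are made up; this only mimics what `nodetool snapshot` does on disk):

```shell
# Illustration only: mimic what "nodetool snapshot" does, i.e. hard-link
# the immutable SSTable files into a snapshots directory (no data copy).
mkdir -p /tmp/snapdemo/snapshots
echo "sstable contents" > /tmp/snapdemo/MyCF-1-Data.db
ln /tmp/snapdemo/MyCF-1-Data.db /tmp/snapdemo/snapshots/MyCF-1-Data.db
# Both names now point at the same inode, so the link count is 2:
stat -c %h /tmp/snapdemo/MyCF-1-Data.db   # prints 2
```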
hey all!
i have a CF that has ~10 columns in it and now i'm finding the need to
use composite column names. can you, or should you, mix and match composite
and non-composite column names in the same CF? if you can/should, how
does sorting work with a single comparator?
thanks,
deno
sorry to be dense, but which is it? do i get the old version or the new
version? or is it indeterminate?
On 02/02/2012 01:42, Peter Schuller wrote:
i have RF=3, my row/column lives on 3 nodes right? if (for some reason, eg
a timed-out write at quorum) node 1 has a 'new' version of the
sorry to be dense, but which is it? do i get the old version or the new
version? or is it indeterminate?
Indeterminate, depending on which nodes happen to be participating in
the read. Eventually you should get the new version, unless the node
that took the new version permanently crashed
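The "eventually" follows from quorum arithmetic: with RF=3 a quorum is 2, so any quorum read overlaps any quorum write in at least one replica. A quick check of the standard formula (generic arithmetic, not Cassandra source code):

```python
# Quorum overlap arithmetic (generic, not Cassandra source code).
# With R + W > N, every read quorum shares at least one replica with
# every write quorum, so a quorum read after a successful quorum write
# is guaranteed to touch at least one replica holding the new version.
N = 3                   # replication factor
quorum = N // 2 + 1     # 2 when RF=3
R = W = quorum
overlap = R + W - N     # replicas common to any read and any write quorum
print(quorum, overlap)  # -> 2 1
```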
Short answer is no. The slightly longer answer is nope.
All column names in a CF are compared using the same comparator. You will need
to create a new CF.
Cheers.
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 3/02/2012, at 10:25 AM, Deno
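For the record, the new CF with a composite comparator can be created roughly like this in the cassandra-cli of that era (the CF name and component types are made up, and exact syntax may vary by version):

```
create column family EventsByTag
  with comparator = 'CompositeType(UTF8Type, LongType)'
  and default_validation_class = UTF8Type;
```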
this is what i thought. thanks for clarifying.
On 2/2/2012 10:44 PM, aaron morton wrote:
Short answer is no. The slightly longer answer is nope.
All column names in a CF are compared using the same comparator. You
will need to create a new CF.
Well, it seems it's balancing itself, 24 hours later the ring looks like
this:
***.89     datacenter1  rack1  Up  Normal  7.36 GB  50.00%  0
***.135    datacenter1  rack1  Up  Normal  8.84 GB  50.00%  85070591730234615865843651857942052864
Looks pretty normal, right?
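Those are exactly the balanced initial tokens for the RandomPartitioner, where token i = i * 2**127 // N. A quick check of the standard formula (generic, not tied to this cluster):

```python
# Balanced initial tokens for RandomPartitioner: token i = i * 2**127 // N.
def balanced_tokens(n):
    return [i * 2**127 // n for i in range(n)]

print(balanced_tokens(2))
# -> [0, 85070591730234615865843651857942052864]
```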
It will have a performance penalty, so it would be better to spread the
compactions over a period of time. But Cassandra will still take care of
any reads/writes (within the given timeout).
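One way to spread the compactions over time, as suggested, is to trigger the major compaction on one node at a time. A sketch only: the host and keyspace names are hypothetical, and the nodetool command is echoed into a plan file rather than executed, since it needs a live cluster.

```shell
# Sketch: major-compact one node at a time instead of all at once.
# Hosts are hypothetical; swap the echo for the real nodetool call.
for host in node1 node2 node3; do
  echo "nodetool -h $host compact MyKeyspace"
  # sleep 3600   # in practice: wait for compaction to finish first
done > /tmp/compact_plan.txt
cat /tmp/compact_plan.txt
```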
2012/2/3 myreasoner myreaso...@gmail.com
If every node in the cluster is running major compaction, would
If every node in the cluster is running major compaction, would it be able to
answer any read request? And is it wise to write anything to a cluster
while it's doing major compaction?
Compaction is something that is supposed to be continuously running in
the background. As noted, it will have