If I have a replication factor set up as 3 and I want data replicated
across data centers, would I get 3 replicas per DC, or is each replica
placed in a different data center?
On Sun, Dec 4, 2011 at 11:22 AM, Bill Hastings bllhasti...@gmail.com wrote:
The strategy options dictate how many replicas are placed in each data center.
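In other words, with NetworkTopologyStrategy the replica count is given per data center in the strategy options, so you choose whether that means 3 per DC or 3 spread across DCs. A cassandra-cli sketch (keyspace and data center names are illustrative, and the DC names must match what your snitch reports):

```
create keyspace MyKeyspace
  with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
  and strategy_options = {DC1:2, DC2:1};
```

Here the total is three replicas: two in DC1 and one in DC2.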
Hi,
I'm trying to set up a test environment with 2 nodes on one physical
machine with two IPs. I configured both as advised in the
documentation:
cluster_name: 'MyDemoCluster'
initial_token: 0
seed_provider:
- seeds: IP1
listen_address: IP1
rpc_address: IP1
cluster_name: 'MyDemoCluster'
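The listing appears to break off where the second node's file would begin. A sketch of how node 2's cassandra.yaml might differ, assuming the machine's second address (IP2) and a token that splits a two-node ring evenly (2**127 / 2):

```
cluster_name: 'MyDemoCluster'
initial_token: 85070591730234615865843651857942052864
seed_provider:
    - seeds: IP1
listen_address: IP2
rpc_address: IP2
```

Since both instances share one machine, each also needs its own JMX port (set in cassandra-env.sh) and its own data and commitlog directories.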
2011/12/4 Radim Kolar h...@sendmail.cz
C:\cassandra\bin>nodetool -h 10.0.0.9 repair
Starting NodeTool
Error connection to remote JMX agent!
java.rmi.ConnectException: Connection refused to host: 192.168.140.1;
nested exception is:
java.net.ConnectException: Connection timed out:
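The giveaway is that nodetool targeted 10.0.0.9 but the connection was refused to 192.168.140.1: the server's RMI stub is advertising a different interface than the one nodetool connects to. One common fix (an assumption about this particular setup) is to pin the advertised hostname on the node itself:

```
# cassandra-env.sh on the 10.0.0.9 node -- make the RMI stub advertise
# the address that nodetool actually connects to
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=10.0.0.9"
```

After restarting that node, nodetool -h 10.0.0.9 should reach the JMX agent directly.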
I capped heap and the error is still there. So I keep seeing node dead
messages even when I know the nodes were OK. Where and how do I tweak
timeouts?
9d-cfc9-4cbc-9f1d-1467341388b8, endpoint /130.199.185.193 died
INFO [GossipStage:1] 2011-12-04 00:26:16,362 Gossiper.java (line 683)
You can increase phi_convict_threshold in the configuration. However,
I would rather want to find out why they are being marked as down to
begin with.
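For reference, the knob lives in cassandra.yaml; the default is 8, and larger values make the failure detector less eager to declare nodes dead (the value below is only an example):

```
# cassandra.yaml
phi_convict_threshold: 12
```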
Thanks Peter!
I will try to increase phi_convict -- I will just need to restart the
cluster after the edit, right?
I do recall that I see nodes temporarily marked as down, only to pop up
later.
In the current situation, there is no load on the cluster at all,
outside of routine maintenance.
Please disregard the GC part of the question -- I found it.
On 12/4/2011 4:12 PM, Maxim Potekhin wrote:
I will try to increase phi_convict -- I will just need to restart the
cluster after the edit, right?
You will need to restart the nodes for which you want the phi convict
threshold to be different. You might want to do it on, e.g., half of the
cluster to do A/B testing.
I'm seeing this same problem after upgrading to 1.0.3 from 0.8.
Nothing changed with the column family storing the counters, but now it
constantly times out trying to increment them. No errors in the event
logs or any other issues with my cluster.
Did you find a resolution?
From: Carlos
I seem to recall problems when using a cf called indexRegistry, don't
remember much detail now.
Maxim
On 11/30/2011 7:24 PM, Shu Zhang wrote:
Hi, just wondering if this is intentional:
[default@test] create column family index;
Syntax error at position 21: mismatched input 'index' expecting
On Fri, Dec 2, 2011 at 8:13 PM, liangfeng liangf...@made-in-china.com wrote:
1. There is no implementation in Cassandra 1.0 to ensure the conclusion
"Only enough space for 10x the sstable size needs to be reserved for
temporary use by compaction", so one special compaction may need a large
amount of free disk space.
As a side effect of the failed repair (so it seems) the disk usage on the
affected node prevents compaction from working. It still works on
the remaining nodes (we have 3 total).
Is there a way to scrub the extraneous data?
Thanks
Maxim
On 12/4/2011 4:29 PM, Peter Schuller wrote:
As a side effect of the failed repair (so it seems) the disk usage on the
affected node prevents compaction from working. It still works on
the remaining nodes (we have 3 total).
Is there a way to scrub the extraneous data?
This is one of the reasons why killing an in-process repair is a bad idea.
You can set the min compaction threshold to 2 and the max compaction
threshold to 3. If you have enough disk space for a few minor compactions,
this should free up some disk space.
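This can be done at runtime with nodetool, without a restart (the keyspace and column family names below are placeholders):

```
# min=2 allows a minor compaction to trigger with as few as two sstables
nodetool -h 10.0.0.9 setcompactionthreshold MyKeyspace MyColumnFamily 2 3
```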
On Sun, Dec 4, 2011 at 7:17 PM, Peter Schuller
peter.schul...@infidyne.comwrote:
Jonathan Ellis jbellis at gmail.com writes:
You should look at the org.apache.cassandra.db.compaction package and
read the original leveldb implementation notes at
http://leveldb.googlecode.com/svn/trunk/doc/impl.html for more
details.
There is an important rule in
The digest is based on the results of the same query as applied on
different replicas. See the following for more details:
http://wiki.apache.org/cassandra/ReadRepair
http://www.datastax.com/docs/1.0/dml/data_consistency
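The idea behind a digest read can be sketched in a few lines: each replica hashes its result for the same query, and a digest mismatch among replicas triggers read repair. A toy model only (not Cassandra's actual wire format, though it does compare MD5 digests of the serialized result):

```python
import hashlib

def digest(rows):
    """Hash a replica's result set so replicas can be compared
    without shipping the full data back to the coordinator."""
    h = hashlib.md5()
    for key, value in sorted(rows.items()):
        h.update(key.encode())
        h.update(value.encode())
    return h.hexdigest()

# Two replicas answering the same query (names and data are illustrative);
# a digest mismatch means one replica is stale and read repair kicks in.
replica_a = {"user:1": "alice"}
replica_b = {"user:1": "alice-stale"}
needs_repair = digest(replica_a) != digest(replica_b)
```

Since the digest is computed over the query's result rows, replicas that agree on the data produce identical digests regardless of which node answers.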
On Wed, Nov 30, 2011 at 11:38 PM, Thorsten von Eicken
t...@rightscale.com
Lower your heap size if you are testing multiple instances on a single
machine.
https://github.com/apache/cassandra/blob/trunk/conf/cassandra-env.sh#L64
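The linked cassandra-env.sh derives the heap from system memory, so several instances on one box will each try to grab a large heap. Capping it per instance looks like this (sizes are illustrative; the script expects both variables to be set together):

```
# cassandra-env.sh -- override the automatic heap sizing
MAX_HEAP_SIZE="512M"
HEAP_NEWSIZE="128M"
```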
On Sun, Dec 4, 2011 at 11:08 PM, Harald Falzberger h.falzber...@gmail.com wrote:
Hi,
I'm trying to set up a test environment with 2 nodes on