number of replicas per data center?

2015-01-18 Thread Kevin Burton
How do people normally set up multiple data center replication in terms of
number of *local* replicas?

So say you have two data centers, do you have 2 local replicas, for a total
of 4 replicas?  Or do you have 2 in one datacenter, and 1 in another?

If you only have one in a local datacenter then when it fails you have to
transfer all that data over the WAN.



-- 

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
https://plus.google.com/102718274791889610666/posts
http://spinn3r.com


Re: number of replicas per data center?

2015-01-18 Thread Kevin Burton
Ah.. six replicas.  At least it's super inexpensive that way (sarcasm!)



On Sun, Jan 18, 2015 at 8:14 PM, Jonathan Haddad j...@jonhaddad.com wrote:

 Sorry, I left out RF.  Yes, I prefer 3 replicas in each datacenter, and
 that's pretty common.


 On Sun Jan 18 2015 at 8:02:12 PM Kevin Burton bur...@spinn3r.com wrote:

  3 what? :-P replicas per datacenter or 3 data centers?

 So if you have 2 data centers you would have 6 total replicas with 3
 local replicas per datacenter?

 On Sun, Jan 18, 2015 at 7:53 PM, Jonathan Haddad j...@jonhaddad.com
 wrote:

 Personally I wouldn't go < 3 unless you have a good reason.


 On Sun Jan 18 2015 at 7:52:10 PM Kevin Burton bur...@spinn3r.com
 wrote:

 How do people normally set up multiple data center replication in terms
 of number of *local* replicas?

 So say you have two data centers, do you have 2 local replicas, for a
 total of 4 replicas?  Or do you have 2 in one datacenter, and 1 in another?

 If you only have one in a local datacenter then when it fails you have
 to transfer all that data over the WAN.



 --

 Founder/CEO Spinn3r.com
 Location: *San Francisco, CA*
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 https://plus.google.com/102718274791889610666/posts
 http://spinn3r.com




 --

 Founder/CEO Spinn3r.com
 Location: *San Francisco, CA*
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 https://plus.google.com/102718274791889610666/posts
 http://spinn3r.com




-- 

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
https://plus.google.com/102718274791889610666/posts
http://spinn3r.com


Re: keyspace not exists?

2015-01-18 Thread Jason Wee
The log does not show anything fishy. Because it is a just-for-fun cluster,
we can actually wipe the Cassandra dirs (data, saved_caches, commitlog) on
all 3 nodes and start all over, and we still encounter the same problem.

Two nodes are running Cassandra 2.1.2 and one is running Cassandra 2.1.1.

I took a look at the issue in the link Tyler gave, patched my cqlsh, and it
works now, thank you. More information is given below. I am actually doing
the tutorial from this blog: http://www.datastax.com/dev/blog/thrift-to-cql3

$ cqlsh 192.168.0.2 9042
Warning: schema version mismatch detected; check the schema versions of
your nodes in system.local and system.peers.
Connected to just4fun at 192.168.0.2:9042.
[cqlsh 5.0.1 | Cassandra 2.1.1 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh> DESCRIBE KEYSPACES;

system_traces  jw_schema1  system

cqlsh> use jw_schema1;
cqlsh:jw_schema1> desc tables;

user_profiles

cqlsh:jw_schema1> quit;

$ cassandra-cli -h 192.168.0.2 -p 9160
Connected to: just4fun on 192.168.0.2/9160
Welcome to Cassandra CLI version 2.1.1

The CLI is deprecated and will be removed in Cassandra 3.0.  Consider
migrating to cqlsh.
CQL is fully backwards compatible with Thrift data; see
http://www.datastax.com/dev/blog/thrift-to-cql3

Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.

[default@unknown] show keyspaces;

WARNING: CQL3 tables are intentionally omitted from 'show keyspaces' output.
See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

Keyspace: jw_schema1:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
Options: [replication_factor:3]
  Column Families:
ColumnFamily: user_profiles
  Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
  Default column value validator:
org.apache.cassandra.db.marshal.BytesType
  Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.0
  DC Local Read repair chance: 0.1
  Caching: KEYS_ONLY
  Default time to live: 0
  Bloom Filter FP chance: 0.01
  Index interval: default
  Speculative Retry: 99.0PERCENTILE
  Built indexes: []
  Column Metadata:
Column Name: first_name
  Validation Class: org.apache.cassandra.db.marshal.UTF8Type
Column Name: year_of_birth
  Validation Class: org.apache.cassandra.db.marshal.Int32Type
Column Name: last_name
  Validation Class: org.apache.cassandra.db.marshal.UTF8Type
  Compaction Strategy:
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
  Compression Options:
sstable_compression: org.apache.cassandra.io.compress.LZ4Compressor
Keyspace: system:
..
..
..
Keyspace: system_traces:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
Options: [replication_factor:2]
  Column Families:
[default@unknown] use jw_schema1;
Authenticated to keyspace: jw_schema1
[default@jw_schema1] list user_profiles;
Using default limit of 100
Using default cell limit of 100

0 Row Returned.
Elapsed time: 728 msec(s).
[default@jw_schema1]

On Sat, Jan 17, 2015 at 6:41 AM, Tyler Hobbs ty...@datastax.com wrote:

 This might be https://issues.apache.org/jira/browse/CASSANDRA-8512 if
 your cluster has a schema disagreement.  You can apply the patch on that
 ticket with patch -p1 < 8512-2.1.txt from the top-level cassandra
 directory and see if it helps.

 On Fri, Jan 16, 2015 at 11:58 AM, Julien Anguenot jul...@anguenot.org
 wrote:

 Hey Jason,

 Your RF=3, do you have 3 nodes up and running in this DC? We have seen
 this issue with 2.1.x and cqlsh where schema changes would trigger the
 keyspace not found error in cqlsh if not all nodes were up and
 running when altering KS schema in a DC with NetworkTopologyStrategy
 and RF=3. For us, bringing all the nodes up to meet RF would then fix
 the problem.

 As well, you might want to restart the node and see if the keyspace
 not found still occurs: same here, since 2.1.x we've had cases where
 a restart was required for cqlsh and / or drivers to see the schema
 changes.
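
 A quick way to check both of those conditions (a sketch; plain nodetool,
 default host/port):

 $ nodetool status           # all nodes should show UN (Up/Normal)
 $ nodetool describecluster  # should list a single schema version for all nodes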

J.

 On Fri, Jan 16, 2015 at 3:56 AM, Jason Wee peich...@gmail.com wrote:
  $ cqlsh 192.168.0.2 9042
  Connected to just4fun at 192.168.0.2:9042.
  [cqlsh 5.0.1 | Cassandra 2.1.1 | CQL spec 3.2.0 | Native protocol v3]
  Use HELP for help.
  cqlsh> DESCRIBE KEYSPACES
 
  <empty>
 
  cqlsh> create keyspace foobar with replication =
 {'class':'SimpleStrategy',
  'replication_factor':3};
  errors={}, last_host=192.168.0.2
  cqlsh> DESCRIBE KEYSPACES;
 
  <empty>
 
  cqlsh> use foobar;
  cqlsh:foobar> DESCRIBE TABLES;
 
  Keyspace 'foobar' not found.
 
 
  Just trying Cassandra 2.1 and encountered the above error. Can anyone
 explain
  why this is and where to even begin troubleshooting?
 
  Jason




 --
 Tyler Hobbs
 DataStax http://datastax.com/



Re: Many really small SSTables

2015-01-18 Thread Roland Etzenhammer

Hi,

Just as a short follow-up: it worked - all nodes now have 20-30 SSTables
instead of thousands.


Cheers,
Roland


Re: Compaction failing to trigger

2015-01-18 Thread Flavien Charlon
It's set on all the tables, as I'm using the default for all of them. But
for that particular table there are 41 SSTables between 60MB and 85MB, and
it should only take 4 for the compaction to kick in.

As this is probably a bug, and going back through the mailing list archive
it seems it has already been reported:

   - Is there a workaround?
   - What is the JIRA ticket number?
   - Will it be fixed in 2.1.3?

Thanks
Flavien

On 19 January 2015 at 01:23, 严超 yanchao...@gmail.com wrote:

 It seems size-tiered compaction is configured per table; on which table
 did you set the compaction strategy?
 A minor compaction does not involve all the tables in a keyspace.
 Ref:

 http://datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_configure_compaction_t.html

 http://datastax.com/documentation/cql/3.1/cql/cql_reference/tabProp.html?scroll=tabProp__moreCompaction

 Best Regards!

 Chao Yan
 My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727
 My Weibo: http://weibo.com/herewearenow

 2015-01-19 3:51 GMT+08:00 Flavien Charlon flavien.char...@gmail.com:

 Hi,

 I am using Size Tier Compaction (Cassandra 2.1.2). Minor compaction is
 not triggering even though it should. See the SSTables on disk:
 http://pastebin.com/PSwZ5mrT

 You can see that we have 41 SSTables between 60MB and 85MB, which should
 trigger compaction unless I am missing something.

 Is that a bug?

 Thanks,
 Flavien





Re: number of replicas per data center?

2015-01-18 Thread Kevin Burton
 3 what? :-P replicas per datacenter or 3 data centers?

So if you have 2 data centers you would have 6 total replicas with 3 local
replicas per datacenter?

On Sun, Jan 18, 2015 at 7:53 PM, Jonathan Haddad j...@jonhaddad.com wrote:

 Personally I wouldn't go < 3 unless you have a good reason.


 On Sun Jan 18 2015 at 7:52:10 PM Kevin Burton bur...@spinn3r.com wrote:

 How do people normally set up multiple data center replication in terms of
 number of *local* replicas?

 So say you have two data centers, do you have 2 local replicas, for a
 total of 4 replicas?  Or do you have 2 in one datacenter, and 1 in another?

 If you only have one in a local datacenter then when it fails you have to
 transfer all that data over the WAN.



 --

 Founder/CEO Spinn3r.com
 Location: *San Francisco, CA*
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 https://plus.google.com/102718274791889610666/posts
 http://spinn3r.com




-- 

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
https://plus.google.com/102718274791889610666/posts
http://spinn3r.com


Re: keyspace not exists?

2015-01-18 Thread Jason Wee
Hi,

Immediately after a repair, I execute cqlsh and still get the schema version mismatch warning:

[2015-01-19 13:50:49,979] Repair session
19c67350-9f9f-11e4-8b56-a322c40b8b81 for range
(-725731847063341791,-718486959589605925] finished
[2015-01-19 13:50:49,980] Repair session
1a612cb0-9f9f-11e4-8b56-a322c40b8b81 for range
(-5366440687164990017,-5357952536457207248] finished
[2015-01-19 13:50:49,980] Repair session
1afcd070-9f9f-11e4-8b56-a322c40b8b81 for range
(-2871651679602006497,-2860883420245139806] finished
[2015-01-19 13:50:49,980] Repair session
1b99acb0-9f9f-11e4-8b56-a322c40b8b81 for range
(-394095345040964045,-391878264832686281] finished
[2015-01-19 13:50:49,981] Repair session
1c352960-9f9f-11e4-8b56-a322c40b8b81 for range
(8830377476646048271,8848086816619852308] finished
[2015-01-19 13:50:49,981] Repair session
1cd1de90-9f9f-11e4-8b56-a322c40b8b81 for range
(4538653889569069241,4549572313549299652] finished
[2015-01-19 13:50:49,985] Repair session
1d6ebad0-9f9f-11e4-8b56-a322c40b8b81 for range
(6052068628404624993,6058413940102734921] finished
[2015-01-19 13:50:49,986] Repair command #1 finished
jason@localhost:~$ cqlsh 192.168.0.2 9042
Warning: schema version mismatch detected; check the schema versions of
your nodes in system.local and system.peers.
Connected to just4fun at 192.168.0.2:9042.
[cqlsh 5.0.1 | Cassandra 2.1.1 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh>
cqlsh> desc keyspaces;

system_traces  jw_schema1  system

cqlsh> use jw_schema1;
cqlsh:jw_schema1> desc tables;

user_profiles

cqlsh:system> select host_id,schema_version from system.peers;

 host_id  | schema_version
--+--
 d21e3d11-5bfb-4888-97cd-62af90e83f56 | b5291c1d-6635-3627-928f-f5a0f0c27ec1
 d21e3d11-5bfb-4888-97cd-62af90e83f56 | c7a2ebda-89f7-36f0-a735-a0dffc400124
 69bd2306-c919-411b-83f3-341b4f7f54b4 | f6f3835e-ed12-34f4-9f4b-f2a72bb57c30
 e1444216-4412-45d5-9703-a463ee50aec2 | f6f3835e-ed12-34f4-9f4b-f2a72bb57c30

(4 rows)
cqlsh:system> select host_id,schema_version from system.local;

 host_id  | schema_version
--+--
 d21e3d11-5bfb-4888-97cd-62af90e83f56 | f6f3835e-ed12-34f4-9f4b-f2a72bb57c30

(1 rows)


On Mon, Jan 19, 2015 at 12:55 PM, Jason Wee peich...@gmail.com wrote:

 The log does not show anything fishy. Because it is a just-for-fun cluster,
 we can actually wipe the Cassandra dirs (data, saved_caches, commitlog) on
 all 3 nodes and start all over, and we still encounter the same problem.

 Two nodes are running Cassandra 2.1.2 and one is running Cassandra 2.1.1.

 I took a look at the issue in the link Tyler gave, patched my cqlsh, and it
 works now, thank you. More information is given below. I am actually doing
 the tutorial from this blog: http://www.datastax.com/dev/blog/thrift-to-cql3

 $ cqlsh 192.168.0.2 9042
 Warning: schema version mismatch detected; check the schema versions of
 your nodes in system.local and system.peers.
 Connected to just4fun at 192.168.0.2:9042.
 [cqlsh 5.0.1 | Cassandra 2.1.1 | CQL spec 3.2.0 | Native protocol v3]
 Use HELP for help.
 cqlsh> DESCRIBE KEYSPACES;

 system_traces  jw_schema1  system

 cqlsh> use jw_schema1;
 cqlsh:jw_schema1> desc tables;

 user_profiles

 cqlsh:jw_schema1> quit;

 $ cassandra-cli -h 192.168.0.2 -p 9160
 Connected to: just4fun on 192.168.0.2/9160
 Welcome to Cassandra CLI version 2.1.1

 The CLI is deprecated and will be removed in Cassandra 3.0.  Consider
 migrating to cqlsh.
 CQL is fully backwards compatible with Thrift data; see
 http://www.datastax.com/dev/blog/thrift-to-cql3

 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.

 [default@unknown] show keyspaces;

 WARNING: CQL3 tables are intentionally omitted from 'show keyspaces'
 output.
 See https://issues.apache.org/jira/browse/CASSANDRA-4377 for details.

 Keyspace: jw_schema1:
   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
 Options: [replication_factor:3]
   Column Families:
 ColumnFamily: user_profiles
   Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
   Default column value validator:
 org.apache.cassandra.db.marshal.BytesType
   Cells sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.0
   DC Local Read repair chance: 0.1
   Caching: KEYS_ONLY
   Default time to live: 0
   Bloom Filter FP chance: 0.01
   Index interval: default
   Speculative Retry: 99.0PERCENTILE
   Built indexes: []
   Column Metadata:
 Column Name: first_name
   Validation Class: org.apache.cassandra.db.marshal.UTF8Type
 Column Name: year_of_birth
   Validation Class: org.apache.cassandra.db.marshal.Int32Type
 Column Name: last_name
   Validation Class: 

Re: Compaction failing to trigger

2015-01-18 Thread Roland Etzenhammer

Hi Flavien,

I hit a problem with minor compactions recently (just a few days ago), but
with many more tables. In my case compactions did not get triggered; you
can check this with nodetool compactionstats.
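
You can eyeball it like this (a sketch; the exact output format varies a
bit by version):

$ nodetool compactionstats
pending tasks: 0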


The reason for me was that those minor compactions did not get triggered,
since there were almost no reads on those tables. Setting
'cold_reads_to_omit' to 0 did the job for me:


ALTER TABLE tablename WITH compaction = {'class': 
'SizeTieredCompactionStrategy', 'min_threshold': '4', 'max_threshold': '32', 
'cold_reads_to_omit': 0.0};

Credits to Tyler and Eric for the pointers.

Cheers,
Roland


Re: number of replicas per data center?

2015-01-18 Thread Jonathan Haddad
Sorry, I left out RF.  Yes, I prefer 3 replicas in each datacenter, and
that's pretty common.
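
In CQL that looks something like this (a sketch; the keyspace name is a
placeholder and the datacenter names must match what your snitch reports):

CREATE KEYSPACE my_keyspace WITH replication =
  {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};

(Changing an existing keyspace is ALTER KEYSPACE with the same map,
followed by a repair.)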

On Sun Jan 18 2015 at 8:02:12 PM Kevin Burton bur...@spinn3r.com wrote:

  3 what? :-P replicas per datacenter or 3 data centers?

 So if you have 2 data centers you would have 6 total replicas with 3 local
 replicas per datacenter?

 On Sun, Jan 18, 2015 at 7:53 PM, Jonathan Haddad j...@jonhaddad.com
 wrote:

 Personally I wouldn't go < 3 unless you have a good reason.


 On Sun Jan 18 2015 at 7:52:10 PM Kevin Burton bur...@spinn3r.com wrote:

 How do people normally set up multiple data center replication in terms
 of number of *local* replicas?

 So say you have two data centers, do you have 2 local replicas, for a
 total of 4 replicas?  Or do you have 2 in one datacenter, and 1 in another?

 If you only have one in a local datacenter then when it fails you have
 to transfer all that data over the WAN.



 --

 Founder/CEO Spinn3r.com
 Location: *San Francisco, CA*
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 https://plus.google.com/102718274791889610666/posts
 http://spinn3r.com




 --

 Founder/CEO Spinn3r.com
 Location: *San Francisco, CA*
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 https://plus.google.com/102718274791889610666/posts
 http://spinn3r.com




Re: number of replicas per data center?

2015-01-18 Thread Jonathan Haddad
Personally I wouldn't go < 3 unless you have a good reason.

On Sun Jan 18 2015 at 7:52:10 PM Kevin Burton bur...@spinn3r.com wrote:

 How do people normally set up multiple data center replication in terms of
 number of *local* replicas?

 So say you have two data centers, do you have 2 local replicas, for a
 total of 4 replicas?  Or do you have 2 in one datacenter, and 1 in another?

 If you only have one in a local datacenter then when it fails you have to
 transfer all that data over the WAN.



 --

 Founder/CEO Spinn3r.com
 Location: *San Francisco, CA*
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 https://plus.google.com/102718274791889610666/posts
 http://spinn3r.com




Re: number of replicas per data center?

2015-01-18 Thread Colin
I like to have 3 replicas across 3 racks in each datacenter as a rule of thumb.
You can vary that, but it depends upon the use case and the SLAs for latency.

This can get a little complicated if you're using the cloud and automated
deployment strategies, as I like to use the same abstractions externally as
internally.
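
For example, with GossipingPropertyFileSnitch each node advertises its
datacenter and rack in cassandra-rackdc.properties (a sketch; the dc/rack
values are placeholders):

# conf/cassandra-rackdc.properties
dc=dc1
rack=rack1

NetworkTopologyStrategy will then try to place a DC's replicas on distinct
racks.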

--
Colin Clark 
+1-320-221-9531
 

 On Jan 18, 2015, at 9:49 PM, Kevin Burton bur...@spinn3r.com wrote:
 
 How do people normally set up multiple data center replication in terms of
 number of *local* replicas?
 
 So say you have two data centers, do you have 2 local replicas, for a total 
 of 4 replicas?  Or do you have 2 in one datacenter, and 1 in another?
 
 If you only have one in a local datacenter then when it fails you have to 
 transfer all that data over the WAN.
 
 
 
 -- 
 Founder/CEO Spinn3r.com
 Location: San Francisco, CA
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 


Re: Which files should I back up for data restoring/data migration?

2015-01-18 Thread 严超
Check out this doc, I think it may help you:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_backup_restore_c.html
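
In short, the usual approach there is a snapshot rather than copying live
files (a sketch; 'mykeyspace' is a placeholder):

$ nodetool snapshot mykeyspace   # flushes memtables, then hard-links the SSTables
$ ls /var/lib/cassandra/data/mykeyspace/*/snapshots/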

Best Regards!

Chao Yan
My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow

2015-01-18 21:06 GMT+08:00 孔嘉林 kongjiali...@gmail.com:

 Hi,
 I want to backup the files needed for data restoring/data migration. There
 are several directories:
 /var/lib/cassandra/
- commitlog/
- data/
 - mytable/
 - system/
 - system_traces/
- saved_caches/
 /var/log/cassandra/

 So if I want a new machine to start the Cassandra service with the old
 data, which of the files are needed? Is the /var/lib/cassandra/data/mytable/
 directory alone enough? Or should I copy all of the above files to the new
 machine?

 Thanks very much,
 Joy



Re: Compaction failing to trigger

2015-01-18 Thread 严超
It seems size-tiered compaction is configured per table; on which table did
you set the compaction strategy?
A minor compaction does not involve all the tables in a keyspace.
Ref:
http://datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_configure_compaction_t.html
http://datastax.com/documentation/cql/3.1/cql/cql_reference/tabProp.html?scroll=tabProp__moreCompaction

Best Regards!

Chao Yan
My twitter: Andy Yan @yanchao727 https://twitter.com/yanchao727
My Weibo: http://weibo.com/herewearenow

2015-01-19 3:51 GMT+08:00 Flavien Charlon flavien.char...@gmail.com:

 Hi,

 I am using Size Tier Compaction (Cassandra 2.1.2). Minor compaction is not
 triggering even though it should. See the SSTables on disk:
 http://pastebin.com/PSwZ5mrT

 You can see that we have 41 SSTables between 60MB and 85MB, which should
 trigger compaction unless I am missing something.

 Is that a bug?

 Thanks,
 Flavien



Re: “Not enough replica available” when consistency is ONE?

2015-01-18 Thread Kevin Burton
OK.. so if I’m running with 2 replicas, then BOTH of them need to be online
for this to work.  Correct?  Because with two replicas I need 2 to form a
quorum.

This is somewhat confusing then.  Because if you have two replicas, and
you’re depending on these types of transactions, then this is a VERY
dangerous state.  Because if ANY of your Cassandra nodes goes offline, then
your entire application crashes.  So the more nodes you have, the HIGHER
the probability that your application will crash.

Which is just what happened to me.  And in retrospect, this makes total
sense, but of course I just missed this in the application design.

So ConsistencyLevel.ONE and IF NOT EXISTS are essentially mutually
incompatible, and shouldn't the driver throw an exception if the user
requests this configuration?

It’s dangerous enough that it probably shouldn’t be supported.
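
For what it's worth, the driver does expose the Paxos round as a separate
knob (a sketch, assuming the DataStax Java driver 2.x and the acquireInsert
statement from above), though the quorum requirement itself can't be turned
off:

// The normal consistency level applies to the write itself; the CAS
// (IF NOT EXISTS) round is governed by the serial consistency level:
acquireInsert.setConsistencyLevel( ConsistencyLevel.ONE );
acquireInsert.setSerialConsistencyLevel( ConsistencyLevel.SERIAL );

With multiple datacenters, ConsistencyLevel.LOCAL_SERIAL limits that quorum
to the local DC.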



On Sun, Jan 18, 2015 at 7:43 AM, Eric Stevens migh...@gmail.com wrote:

 Check out
 http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_tunable_consistency_c.html

  Cassandra 2.0 uses the Paxos consensus protocol, which resembles
 2-phase commit, to support linearizable consistency. All operations are
 quorum-based ...

 This kicks in whenever you do CAS operations (eg, IF NOT EXISTS).
 Otherwise a cluster which became network partitioned would end up being
 able to have two separate CAS statements which both succeeded, but which
 disagreed with each other.

 On Sun, Jan 18, 2015 at 8:02 AM, Kevin Burton bur...@spinn3r.com wrote:

 I’m really confused here.

 I’m calling:

 acquireInsert.setConsistencyLevel( ConsistencyLevel.ONE );

 but I’m still getting the exception:

 com.datastax.driver.core.exceptions.UnavailableException: Not enough
 replica available for query at consistency SERIAL (2 required but only 1
 alive)

 Does it matter that I’m using:

 ifNotExists();

 and that maybe cassandra needs two because it’s using a coordinator?

 If so then an exception should probably be thrown when I try to set a
 wrong consistency level.

 which would be weird because I *do* have at least two replicas online. I
 have 4 nodes in my cluster right now...

 --

 Founder/CEO Spinn3r.com
 Location: *San Francisco, CA*
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 https://plus.google.com/102718274791889610666/posts
 http://spinn3r.com





-- 

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
https://plus.google.com/102718274791889610666/posts
http://spinn3r.com


“Not enough replica available” when consistency is ONE?

2015-01-18 Thread Kevin Burton
I’m really confused here.

I’m calling:

acquireInsert.setConsistencyLevel( ConsistencyLevel.ONE );

but I’m still getting the exception:

com.datastax.driver.core.exceptions.UnavailableException: Not enough
replica available for query at consistency SERIAL (2 required but only 1
alive)

Does it matter that I’m using:

ifNotExists();

and that maybe cassandra needs two because it’s using a coordinator?

If so then an exception should probably be thrown when I try to set a wrong
consistency level.

which would be weird because I *do* have at least two replicas online. I
have 4 nodes in my cluster right now...

-- 

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
https://plus.google.com/102718274791889610666/posts
http://spinn3r.com


Re: “Not enough replica available” when consistency is ONE?

2015-01-18 Thread Eric Stevens
Check out
http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_tunable_consistency_c.html

 Cassandra 2.0 uses the Paxos consensus protocol, which resembles 2-phase
commit, to support linearizable consistency. All operations are quorum-based
 ...

This kicks in whenever you do CAS operations (eg, IF NOT EXISTS).
Otherwise a cluster which became network partitioned would end up being
able to have two separate CAS statements which both succeeded, but which
disagreed with each other.
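
For reference, this is the shape of statement that takes the Paxos path
(table and values are made up for illustration):

-- Any conditional (CAS) write goes through Paxos, so it needs a quorum of
-- replicas regardless of the normal consistency level set on the statement:
INSERT INTO users (id, name) VALUES (1, 'alice') IF NOT EXISTS;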

On Sun, Jan 18, 2015 at 8:02 AM, Kevin Burton bur...@spinn3r.com wrote:

 I’m really confused here.

 I’m calling:

 acquireInsert.setConsistencyLevel( ConsistencyLevel.ONE );

 but I’m still getting the exception:

 com.datastax.driver.core.exceptions.UnavailableException: Not enough
 replica available for query at consistency SERIAL (2 required but only 1
 alive)

 Does it matter that I’m using:

 ifNotExists();

 and that maybe cassandra needs two because it’s using a coordinator?

 If so then an exception should probably be thrown when I try to set a
 wrong consistency level.

 which would be weird because I *do* have at least two replicas online. I
 have 4 nodes in my cluster right now...

 --

 Founder/CEO Spinn3r.com
 Location: *San Francisco, CA*
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 https://plus.google.com/102718274791889610666/posts
 http://spinn3r.com




Which files should I back up for data restoring/data migration?

2015-01-18 Thread 孔嘉林
Hi,
I want to back up the files needed for data restoring/data migration. There
are several directories:
/var/lib/cassandra/
   - commitlog/
   - data/
      - mytable/
      - system/
      - system_traces/
   - saved_caches/
/var/log/cassandra/

So if I want a new machine to start the Cassandra service with the old data,
which of the files are needed? Is the /var/lib/cassandra/data/mytable/
directory alone enough? Or should I copy all of the above files to the new
machine?

Thanks very much,
Joy


Compaction failing to trigger

2015-01-18 Thread Flavien Charlon
Hi,

I am using Size Tier Compaction (Cassandra 2.1.2). Minor compaction is not
triggering even though it should. See the SSTables on disk:
http://pastebin.com/PSwZ5mrT

You can see that we have 41 SSTables between 60MB and 85MB, which should
trigger compaction unless I am missing something.

Is that a bug?

Thanks,
Flavien