Can I run upgradesstables on many nodes at one time?
Hi all,

I'm trying to upgrade my 6-node cluster from 2.0.11 to 2.1.8. I'm following this upgrade procedure: http://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgradeCassandraDetails.html and step 8 says:

    If you are upgrading from a major version (for example, from Cassandra 1.2 to 2.0) or a major point release (for example, from Cassandra 2.0 to 2.1), upgrade the SSTables on each node.

    $ nodetool upgradesstables

If I understand correctly, I should run nodetool upgradesstables on every node after upgrading the version on that node. Is that right? Since it is a really time-consuming operation, I wonder if I could run upgradesstables on multiple nodes at the same time (in parallel)?

Regards,
Ola
RE: Can I run upgradesstables on many nodes at one time?
Yes, you should run upgradesstables on each node. If the SSTable structure has changed, you will need this completed before you can do streaming operations like repairs or adding nodes.

As for running in parallel, that should be fine. It is a "within the node" operation that pounds I/O (but is capped by the compaction throughput setting). You need to look at the level of activity from normal operations, though. If Cassandra is running without much stress, go ahead and run 2 at once. (Conservatively, that's all I would do on 6 nodes.) If the cluster is inactive, let it fly on all nodes.

Sean Durity
Lead Cassandra Admin, Big Data Team

From: Ola Nowak [mailto:ola.nowa...@gmail.com]
Sent: Thursday, August 13, 2015 5:30 AM
To: user@cassandra.apache.org
Subject: Can I run upgrade sstables on many nodes on one time

The information in this Internet Email is confidential and may be legally privileged. It is intended solely for the addressee. Access to this Email by anyone else is unauthorized. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. When addressed to our clients any opinions or advice contained in this Email are subject to the terms and conditions expressed in any applicable governing The Home Depot terms of business or client engagement letter. The Home Depot disclaims all responsibility and liability for the accuracy and content of this attachment and for any damages or losses arising from any inaccuracies, errors, viruses, e.g., worms, trojan horses, etc., or other items of a destructive nature, which may be contained in this attachment and shall not be liable for direct, indirect, consequential or special damages in connection with this e-mail message or its attachment.
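The batch-by-batch approach Sean describes can be sketched as a small script. This is only an illustration: the hostnames, the passwordless-SSH setup, and the batch size of 2 are assumptions, not part of the advice above.

```python
# Sketch: run "nodetool upgradesstables" over SSH on a few nodes at a time.
# Hostnames and the batch size of 2 are illustrative assumptions.
import subprocess
from concurrent.futures import ThreadPoolExecutor

NODES = ["cass-node1", "cass-node2", "cass-node3",
         "cass-node4", "cass-node5", "cass-node6"]  # hypothetical hostnames
BATCH_SIZE = 2  # conservative choice for a busy 6-node cluster

def batches(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def upgrade_node(node):
    """Run upgradesstables on one node; assumes passwordless SSH access."""
    return subprocess.call(["ssh", node, "nodetool", "upgradesstables"])

def rolling_upgrade(nodes=NODES, batch_size=BATCH_SIZE):
    """Upgrade SSTables batch by batch, waiting for each batch to finish."""
    for batch in batches(nodes, batch_size):
        with ThreadPoolExecutor(max_workers=batch_size) as pool:
            results = list(pool.map(upgrade_node, batch))
        if any(rc != 0 for rc in results):
            raise RuntimeError("upgradesstables failed on batch %s" % batch)
```

Waiting for each batch to finish before starting the next keeps at most `BATCH_SIZE` nodes under compaction load at any moment, matching the "2 at once" suggestion.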
RE: limit the size of data type LIST
This sounds like something you do on the client side BEFORE you insert. Or are you wanting to limit the size of the list coming out to the client?

Sean Durity
Lead Cassandra Admin, Big Data Team

From: yuankui [mailto:kui.y...@fraudmetrix.cn]
Sent: Thursday, August 13, 2015 9:06 AM
To: user@cassandra.apache.org
Subject: limit the size of data type LIST
Column family ID mismatch
Hi All,

My keyspace is created as:

    CREATE KEYSPACE some_keyspace WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '2'} AND durable_writes = true;

However, I am running a single-node cluster:

    ./nodetool status some_keyspace
    Datacenter: datacenter1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address         Load     Tokens  Owns (effective)  Host ID  Rack
    XX  XXX.XXX.XXX.XX  3.73 GB  256     100.0%            uid      rack1

Things were still running fine until today, when we encountered:

    ERROR [MigrationStage:1] 2015-08-13 01:58:49,249 CassandraDaemon.java:153 - Exception in thread Thread[MigrationStage:1,5,main]
    java.lang.RuntimeException: org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch (found uid; expected uid)
        at org.apache.cassandra.config.CFMetaData.reload(CFMetaData.java:1125) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.db.DefsTables.updateColumnFamily(DefsTables.java:422) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.db.DefsTables.mergeColumnFamilies(DefsTables.java:295) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.db.DefsTables.mergeSchemaInternal(DefsTables.java:194) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:166) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.service.MigrationManager$2.runMayThrow(MigrationManager.java:393) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_65]
        at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_65]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_65]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
    Caused by: org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch (found uid; expected uid)
        at org.apache.cassandra.config.CFMetaData.validateCompatility(CFMetaData.java:1208) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.config.CFMetaData.apply(CFMetaData.java:1140) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.config.CFMetaData.reload(CFMetaData.java:1121) ~[apache-cassandra-2.1.2.jar:2.1.2]
        ... 11 common frames omitted

I ran nodetool repair, which probably didn't help, so after a few minutes I did a restart, and then the problem went away. I need help understanding what caused it and how it was resolved.

Thanks
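One way to spot a schema disagreement like this before it causes errors is `nodetool describecluster`, which lists the schema version each node reports; on a healthy cluster there is exactly one version. A minimal sketch of checking that output programmatically (the sample output format is approximate and may differ between Cassandra versions):

```python
# Sketch: detect schema disagreement from "nodetool describecluster" output.
# The exact output layout varies by version; adjust the parsing as needed.
import re

def schema_versions(describecluster_output):
    """Return {schema_version_uuid: [node addresses]} parsed from the text."""
    versions = {}
    for line in describecluster_output.splitlines():
        # Lines under "Schema versions:" look like "<uuid>: [addr1, addr2]"
        m = re.match(r"\s*([0-9a-f-]{36}):\s*\[(.*)\]", line)
        if m:
            versions[m.group(1)] = [a.strip() for a in m.group(2).split(",")]
    return versions

def schema_agrees(describecluster_output):
    """True if every node reports the same schema version."""
    return len(schema_versions(describecluster_output)) <= 1
```

If more than one schema version shows up, a rolling restart (as the poster ended up doing) is the usual way to force the nodes to reconverge.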
Mutagen for Cassandra
We are looking for a schema upgrade management tool for Cassandra. Does anyone have experience using Mutagen for Cassandra in a production environment? Any other recommendations?
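For comparison while evaluating tools like Mutagen: the core of a schema-migration runner is small enough to sketch. Everything below is a hypothetical illustration, not Mutagen's actual API: the `NNN_name.cql` file convention, the `schema_migrations` bookkeeping table, and the cassandra-driver `Session` usage are all assumptions.

```python
# Sketch of a minimal schema-migration runner: apply numbered .cql files in
# order and record which versions have been applied. All names are illustrative.
import os
import re

def migration_files(directory):
    """Return .cql files sorted by numeric prefix, e.g. 001_init.cql."""
    files = [f for f in os.listdir(directory) if re.match(r"\d+_.*\.cql$", f)]
    return sorted(files, key=lambda f: int(f.split("_", 1)[0]))

def pending(all_files, applied):
    """Migrations not yet recorded as applied, in order."""
    return [f for f in all_files if f not in applied]

def apply_migrations(session, directory, applied):
    # session is a cassandra-driver Session; 'schema_migrations' is a
    # hypothetical bookkeeping table: (version text PRIMARY KEY).
    for fname in pending(migration_files(directory), applied):
        with open(os.path.join(directory, fname)) as fh:
            for stmt in fh.read().split(";"):
                if stmt.strip():
                    session.execute(stmt)
        session.execute(
            "INSERT INTO schema_migrations (version) VALUES (%s)", (fname,))
```

A dedicated tool adds the parts this sketch omits (checksums, rollback, multi-node coordination), which is usually why teams reach for one rather than rolling their own.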
Best strategy for hiring from OSS communities.
Mildly off topic, but we are looking to hire someone with Cassandra experience. I don't necessarily want to spam the list, though. We'd like someone from the community who contributes to Open Source, etc. Are there forums for Apache / Cassandra jobs? I couldn't find one.

--
Founder/CEO Spinn3r.com
Location: San Francisco, CA
blog: http://burtonator.wordpress.com
... or check out my Google+ profile: https://plus.google.com/102718274791889610666/posts
Re: limit the size of data type LIST
Sorry for not making myself clear, and thank you for your reply.

I want to know if there is a way to automatically remove old items from the list on the SERVER SIDE once the size of the list reaches a certain limit (say 1000). The client would not need to care about this; it would just do inserts and gets, and would always get the latest 1000 messages for a user. Is that possible?

On August 14, 2015, at 12:55 AM, sean_r_dur...@homedepot.com wrote:

This sounds like something you do on the client side BEFORE you insert. Or are you wanting to limit the size of the list coming out to the client?

Sean Durity
Lead Cassandra Admin, Big Data Team
Re: limit the size of data type LIST
This is not currently possible, though it has been proposed in the past and may potentially be implemented in the future: https://issues.apache.org/jira/browse/CASSANDRA-9110

- Jeff

From: yuankui
Reply-To: user@cassandra.apache.org
Date: Thursday, August 13, 2015 at 6:24 PM
To: user@cassandra.apache.org
Subject: Re: limit the size of data type LIST
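Until something like CASSANDRA-9110 exists, the trimming has to happen on the client, along the lines Sean suggested. A sketch of the read-trim-write approach is below; the function names and the limit are illustrative, and note that this read-before-write is not atomic, so concurrent writers could briefly exceed the limit.

```python
# Sketch: emulate Redis LTRIM client-side for a Cassandra list<text> column.
# Not atomic: concurrent writers to the same row can race each other.
MAX_MESSAGES = 1000  # illustrative limit

def trim_messages(messages, limit=MAX_MESSAGES):
    """Keep only the newest `limit` items (newest assumed at the end)."""
    return messages[-limit:] if len(messages) > limit else list(messages)

def append_and_trim(session, user_name, new_message, limit=MAX_MESSAGES):
    # session is a cassandra-driver Session; read, trim, write the whole list.
    row = session.execute(
        "SELECT message_details FROM message_history WHERE user_name = %s",
        (user_name,)).one()
    current = list(row.message_details or []) if row else []
    current.append(new_message)
    session.execute(
        "UPDATE message_history SET message_details = %s WHERE user_name = %s",
        (trim_messages(current, limit), user_name))
```

For a real message-history workload, a clustering column on `time` with a per-query `LIMIT` (instead of a collection) would likely scale better, since large lists are read and rewritten whole.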
limit the size of data type LIST
hi, friends

I am designing a message history table:

    CREATE TABLE message_history (
        user_name text PRIMARY KEY,
        time timestamp,
        message_details list<text>
    );

so that I can query a user's messages via the primary key `user_name` at once. But the `message_details` list may grow very long, so I want to limit its size. Is there a way to solve this, like the Redis operation `LTRIM`? http://redis.readthedocs.org/en/latest/list/ltrim.html