Migrating from MySQL to Cassandra

2013-03-03 Thread John Grogan
Hi, we have decided to explore moving our database from MySQL to Cassandra. I am now installing it on my machine (an OS X system) and need to think about how to do a data export from MySQL. Using phpMyAdmin, I have a range of different options to export the database. However, the crux is figuring out an easy way to import the data into Cassandra. Does anyone have any thoughts they can share?

Re: Migrating from MySQL to Cassandra

2013-03-03 Thread Tyler Hobbs
On Sun, Mar 3, 2013 at 5:06 AM, John Grogan vangu...@dir-uk.org wrote: However, the crux is figuring out an easy way to import the data into Cassandra. Does anyone have any thoughts they can share? If you don't have a very large dataset and you're not pressed for time, just iterating over
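For illustration, a minimal sketch of the "just iterate over the rows and re-insert them" approach Tyler is hinting at, assuming the Python MySQLdb and pycassa client libraries; the keyspace, table, column family and column names here are all hypothetical:

```python
# Minimal sketch: read rows out of MySQL and write them into a Cassandra
# column family. Assumes MySQLdb and pycassa are installed; all names
# (myapp keyspace, users table/CF, columns) are placeholders.
import MySQLdb
import pycassa

mysql = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='myapp')
cursor = mysql.cursor()
cursor.execute("SELECT id, name, email FROM users")

pool = pycassa.ConnectionPool('myapp', server_list=['localhost:9160'])
users = pycassa.ColumnFamily(pool, 'users')

# Batch the writes so we don't pay a network round trip per row.
batch = users.batch(queue_size=100)
for user_id, name, email in cursor:
    batch.insert(str(user_id), {'name': name, 'email': email})
batch.send()

mysql.close()
pool.dispose()
```

This is fine for modest data sizes; for very large datasets a bulk-loading approach would be worth investigating instead.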

Re: Migrating from MySQL to Cassandra

2013-03-03 Thread Marco Matarazzo
There's DataStax OpsCenter, which has a free Community Edition: http://www.datastax.com/products/opscenter Is OpsCenter working with Cassandra 1.2 with vnodes already? -- Marco Matarazzo

Re: Migrating from MySQL to Cassandra

2013-03-03 Thread Tyler Hobbs
On Sun, Mar 3, 2013 at 11:38 AM, Marco Matarazzo marco.matara...@hexkeep.com wrote: Is OpsCenter working with Cassandra 1.2 with vnodes already? Yes, it's compatible with vnode-enabled clusters, but doesn't support vnode-specific things like running shuffle. For now, it basically randomly

Re: Compaction statistics information

2013-03-03 Thread Tyler Hobbs
It's a description of how many of the compacted SSTables the rows were spread across prior to compaction. In your case, 15 rows were spread across two of the four sstables, 68757 rows were spread across three of the four sstables, and 6865 were spread across all four. On Fri, Mar 1, 2013 at
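To make the arithmetic concrete, a tiny Python sketch using the figures above (the dictionary is just an illustration of the histogram being described, not Cassandra's actual log format):

```python
# Worked example of the merge counts Tyler describes: how many rows were
# merged from how many of the 4 input SSTables in this compaction.
merge_counts = {2: 15, 3: 68757, 4: 6865}

total_rows = sum(merge_counts.values())
weighted = sum(n_sstables * rows for n_sstables, rows in merge_counts.items())

print("rows merged: %d" % total_rows)                                  # 75637
print("avg sstables per row: %.2f" % (weighted / float(total_rows)))   # ~3.09
```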

Re: Compaction statistics information

2013-03-03 Thread Jabbar Azam
Thanks, Tyler. On 3 Mar 2013 18:55, Tyler Hobbs ty...@datastax.com wrote: It's a description of how many of the compacted SSTables the rows were spread across prior to compaction. In your case, 15 rows were spread across two of the four sstables, 68757 rows were spread across three of the four

Re: no other nodes seen on priam cluster

2013-03-03 Thread Ben Bromhead
Glad you got it going! There is a REST call you can make to Priam telling it to double the cluster size (/v1/cassconfig/double_ring); it will pre-fill all the SimpleDB entries for when the nodes come online, and you then change the number of nodes on the autoscaling group. Now that Priam supports C* 1.2
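As an illustration of that REST call, a hedged Python sketch; the base URL (host, port and any path prefix in front of /v1/cassconfig/double_ring) depends entirely on how Priam is deployed, so PRIAM_BASE below is an assumption to adjust:

```python
# Hedged sketch of calling the Priam endpoint Ben mentions. Only the
# /v1/cassconfig/double_ring path comes from the thread; host, port and any
# servlet path prefix are deployment-specific assumptions.
import urllib2

PRIAM_BASE = "http://localhost:8080"   # assumption: Priam's web container
url = PRIAM_BASE + "/v1/cassconfig/double_ring"

response = urllib2.urlopen(url)
print(response.read())
```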

Re: no backwards compatibility for thrift in 1.2.2? (we get utter failure)

2013-03-03 Thread Michael Kjellman
Dean, I think if you look back through previous mailing list threads you'll find answers to this already, but to summarize: tables created prior to 1.2 will continue to work after the upgrade. Newly created tables are not exposed by the Thrift API. It is up to client developers to upgrade the client to

Re: no backwards compatibility for thrift in 1.2.2? (we get utter failure)

2013-03-03 Thread Hiller, Dean
It was an issue for existing tables: in QA, I ran an upgrade from 1.1.4 with simple data, and then after moving to 1.2.2 I could not access the data and ended up with timeouts. After that I cleared everything and just started a fresh 1.2.2, as I wanted to see if a base install of 1.2.2 with no upgrade worked, which

Re: no backwards compatibility for thrift in 1.2.2? (we get utter failure)

2013-03-03 Thread Edward Capriolo
Your other option is to create tables 'WITH COMPACT STORAGE'. Basically, if you use COMPACT STORAGE, you can create tables as you did before. https://issues.apache.org/jira/browse/CASSANDRA-2995 From an application standpoint, if you can't do sparse, wide rows, you break compatibility with 90% of
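For illustration, a hedged sketch of what such a table definition might look like, using the old python cql (DB-API) driver; the keyspace, table and column names are hypothetical, and the driver choice is an assumption about your client setup:

```python
# Hedged sketch of Edward's suggestion: create the table WITH COMPACT STORAGE
# so it stays visible to Thrift-based clients. Names are placeholders.
import cql

conn = cql.connect('localhost', 9160, 'myks', cql_version='3.0.0')
cursor = conn.cursor()

cursor.execute("""
    CREATE TABLE events (
        key text,
        column1 text,
        value text,
        PRIMARY KEY (key, column1)
    ) WITH COMPACT STORAGE
""")
conn.close()
```

A compact-storage table with one clustering column like this maps onto a classic Thrift wide row, which is what keeps older clients working.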

Re: no backwards compatibility for thrift in 1.2.2? (we get utter failure)

2013-03-03 Thread aaron morton
Dean, is this an issue with tables created using CQL 3? Or an issue with tables created in 1.1.4 using the CLI not being readable after an in-place upgrade to 1.2.2? I did a quick test and it worked. Cheers - Aaron Morton Freelance Cassandra Developer New Zealand

Re: Select X amount of column families in a super column family in Cassandra using PHP?

2013-03-03 Thread aaron morton
You'll probably have better luck asking the author directly. Check the tutorial http://cassandra-php-client-library.com/tutorial/fetching-data and tell them what you have tried. For future reference, we are trying to direct client-specific queries to the client-dev list. Cheers

Re: Column Slice Query performance after deletions

2013-03-03 Thread aaron morton
I need something to keep the deleted columns away from my query fetch, not only the tombstones. It looks like the min compaction threshold might help with this, but I'm not sure yet what a reasonable value for it would be. Your tombstones will not be purged in a compaction until after
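As a hedged illustration of the knobs being discussed, a sketch that lowers min_compaction_threshold and gc_grace_seconds with pycassa's SystemManager; the keyspace/CF names and values are placeholders, not recommendations, and shrinking gc_grace_seconds has repair implications:

```python
# Hedged sketch: adjusting compaction and tombstone-purge settings on a
# column family. Assumes pycassa; 'myks'/'mycf' and the values are placeholders.
from pycassa.system_manager import SystemManager

sys_mgr = SystemManager('localhost:9160')
sys_mgr.alter_column_family(
    'myks', 'mycf',
    min_compaction_threshold=2,   # compact as soon as 2 SSTables in a size tier exist
    gc_grace_seconds=86400,       # tombstones become purgeable only after this window
)
sys_mgr.close()
```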

Re: reading the updated values

2013-03-03 Thread aaron morton
My question is: how do I get the data updated in Cassandra in the last hour or so indexed in Elasticsearch? You cannot. The best approach is to update Elasticsearch at the same time you update Cassandra. Cheers - Aaron Morton Freelance Cassandra Developer New Zealand
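For illustration, a hedged sketch of the dual-write pattern Aaron suggests, using pycassa for the Cassandra write and Elasticsearch's HTTP index API; the hosts, index name and column family are hypothetical:

```python
# Hedged sketch: application code writes to Cassandra, then indexes the same
# document in Elasticsearch so search stays in step with the data store.
import json
import urllib2

import pycassa

pool = pycassa.ConnectionPool('myks', server_list=['localhost:9160'])
items = pycassa.ColumnFamily(pool, 'items')


def save_item(item_id, fields):
    # 1. Write to Cassandra (the source of truth).
    items.insert(item_id, fields)

    # 2. Index the same document in Elasticsearch (PUT /index/type/id).
    req = urllib2.Request(
        'http://localhost:9200/myindex/items/%s' % item_id,
        data=json.dumps(fields),
        headers={'Content-Type': 'application/json'},
    )
    req.get_method = lambda: 'PUT'
    urllib2.urlopen(req)


save_item('42', {'title': 'example', 'body': 'hello world'})
```

If the two writes must never drift apart, a queue or a periodic reconciliation job is a more robust design than best-effort dual writes, at the cost of extra moving parts.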

old data / tombstones are not deleted after ttl

2013-03-03 Thread Matthias Zeilinger
Hi, I'm running Cassandra 1.1.5 and have the following issue. I'm using a 10-day TTL on my CF. I can see a lot of tombstones in there, but they aren't deleted after compaction. I have tried a nodetool cleanup and also a restart of Cassandra, but nothing happened. total 61G drwxr-xr-x 2