Re: entire range of node out of sync -- out of the blue

2012-12-06 Thread Andras Szerdahelyi
Thanks! I'm also thinking a repair run without -pr could maybe have caused this? Andras Szerdahelyi Solutions Architect, IgnitionOne | 1831 Diegem E.Mommaertslaan 20A M: +32 493 05 50 88 | Skype: sandrew84 On 06 Dec 2012, at 04:05, aaron morton
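For reference, `nodetool repair` without `-pr` repairs every token range the node replicates, so running it on each node in turn repairs each range RF times, while `-pr` restricts repair to the node's primary range only. A sketch of the two invocations (the host and keyspace names are illustrative):

```
# Full repair: covers every range this node is a replica for;
# run on all nodes, each range gets repaired RF times.
nodetool -h node1.example.com repair my_keyspace

# Primary-range repair: only the range this node owns;
# run on every node to cover the whole ring exactly once.
nodetool -h node1.example.com repair -pr my_keyspace
```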

Re: What is substituting keys_cached column family argument

2012-12-06 Thread Edward Capriolo
Rob, have you played with this? I have many CFs: some big, some small, some using large caches, some using small ones, some that take many requests, some that take a few. Over time I have cooked up a strategy for how to share the cache love, even though it may not be the best solution to the

RE: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered compaction.

2012-12-06 Thread Poziombka, Wade L
Having so much data on each node is a potential bad day. Is this discussed somewhere in the Cassandra documentation (limits, practices, etc.)? We are also trying to load up quite a lot of data and have hit memory issues (bloom filters etc.) in 1.0.10. I would like to read up on big data usage

Slow Reads in Cassandra with Hadoop

2012-12-06 Thread Ralph Romanos
Hello Cassandra users, I am trying to read and process data in Cassandra using Hadoop. I have a 4-node Cassandra cluster and an 8-node Hadoop cluster: 1 Namenode/Jobtracker and 7 Datanodes/Tasktrackers (4 of them are also hosting Cassandra). I am using Cassandra 1.2 beta, Hadoop 0.20.2, java
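When reads through the Cassandra Hadoop input format are slow, the knobs usually discussed are the input split size and the range batch size set via ConfigHelper. A hedged sketch of the relevant job setup (the class, host, keyspace, and tuning values here are illustrative assumptions, not the poster's actual job):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;

public class CassandraReadJob {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "cassandra-read");
        job.setInputFormatClass(ColumnFamilyInputFormat.class);

        Configuration conf = job.getConfiguration();
        // Where and what to read (names are illustrative).
        ConfigHelper.setInputInitialAddress(conf, "node1");
        ConfigHelper.setInputRpcPort(conf, "9160");
        ConfigHelper.setInputPartitioner(conf,
                "org.apache.cassandra.dht.RandomPartitioner");
        ConfigHelper.setInputColumnFamily(conf, "my_keyspace", "my_cf");

        // The usual read-throughput knobs: rows per input split,
        // and rows fetched per Thrift round trip.
        ConfigHelper.setInputSplitSize(conf, 65536);
        ConfigHelper.setRangeBatchSize(conf, 4096);
    }
}
```

A slice predicate must also be set before the job will run; it is omitted here to keep the sketch short.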

Re: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered compaction.

2012-12-06 Thread Edward Capriolo
http://wiki.apache.org/cassandra/LargeDataSetConsiderations On Thu, Dec 6, 2012 at 9:53 AM, Poziombka, Wade L wade.l.poziom...@intel.com wrote: “Having so much data on each node is a potential bad day.” Is this discussed somewhere on the Cassandra documentation (limits,

Cassandra V/S Hadoop

2012-12-06 Thread Yogesh Dhari
Hi all, Hadoop has its own file system (HDFS) and Cassandra has its own file system (CFS). Hadoop has a great ecosystem (Hive {data warehouse}, HBase {database}, etc.), while Cassandra (a database) itself provides its own file system. Although we can run Hadoop's ecosystem on Cassandra (If

Re: reversed=true for CQL 3

2012-12-06 Thread Shahryar Sedghi
Thanks Rob. I am on 1.1.4 now (I can go to 1.1.6 if needed) and apparently it is broken. I defined the table like this: CREATE TABLE events (interval int, id bigint, containerName varchar, objectName varchar, objectType varchar, status int, severity int, event varchar, eventType
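For context, the usual CQL 3 way to get newest-first ordering is to declare it in the table definition rather than reversing per query. A minimal sketch, assuming a cut-down schema (the column names below are illustrative, not the poster's full table):

```cql
-- Cluster rows in descending id order so "latest first" reads
-- need no ORDER BY at query time.
CREATE TABLE events (
    interval int,
    id bigint,
    event varchar,
    PRIMARY KEY (interval, id)
) WITH CLUSTERING ORDER BY (id DESC);

-- Reads within a partition then come back newest-first by default:
SELECT * FROM events WHERE interval = 201212;
```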

Re: how to take consistant snapshot?

2012-12-06 Thread aaron morton
For background: http://wiki.apache.org/cassandra/Operations?highlight=%28snapshot%29#Consistent_backups If you do it for a single node then yes, there is a chance of inconsistency across CFs. If you have multiple nodes, the snapshots you take on the later nodes will help. If you use CL QUORUM for
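For reference, a cluster-wide snapshot is usually taken by firing `nodetool snapshot` at every node at roughly the same time; each node flushes its memtables first. A sketch (host names, tag, and keyspace are illustrative):

```
# Take a named snapshot of one keyspace on every node at
# (approximately) the same moment.
for host in node1 node2 node3; do
    nodetool -h "$host" snapshot -t backup-20121206 my_keyspace &
done
wait
```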

Re: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered compaction.

2012-12-06 Thread aaron morton
Meaning terabyte-size databases. Lots of people have TB-sized systems. Just add more nodes. 300 to 400 GB is just a rough guideline. The bigger picture is considering how routine and non-routine maintenance tasks are going to be carried out. Cheers - Aaron Morton

Re: how to take consistant snapshot?

2012-12-06 Thread Andrey Ilinykh
On Thu, Dec 6, 2012 at 7:34 PM, aaron morton aa...@thelastpickle.com wrote: For background http://wiki.apache.org/cassandra/Operations?highlight=%28snapshot%29#Consistent_backups If you it for a single node

Re: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered compaction.

2012-12-06 Thread Wei Zhu
I think Aaron meant 300-400GB instead of 300-400MB. Thanks. -Wei - Original Message - From: Wade L Poziombka wade.l.poziom...@intel.com To: user@cassandra.apache.org Sent: Thursday, December 6, 2012 6:53:53 AM Subject: RE: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered

Re: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered compaction.

2012-12-06 Thread Michael Kjellman
+1 On Dec 6, 2012, at 10:06 PM, Wei Zhu wz1...@yahoo.com wrote: I think Aaron meant 300-400GB instead of 300-400MB. Thanks. -Wei - Original Message - From: Wade L Poziombka wade.l.poziom...@intel.com To: user@cassandra.apache.org Sent: Thursday, December 6, 2012 6:53:53 AM