Thanks!
I'm also thinking a repair run without -pr could have caused this, maybe?
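(For reference: -pr limits a repair to the node's primary token range; without it, the run also repairs the replica ranges the node holds, which can rewrite far more data. A typical invocation might look like the sketch below; the host and keyspace names are placeholders:

    # Repair only this node's primary range; run once per node in the cluster.
    nodetool -h <host> repair -pr MyKeyspace
)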
Andras Szerdahelyi
Solutions Architect, IgnitionOne | 1831 Diegem E.Mommaertslaan 20A
M: +32 493 05 50 88 | Skype: sandrew84
On 06 Dec 2012, at 04:05, aaron morton
Rob,
Have you played with this? I have many CFs: some big, some small, some using
large caches, some using small ones, some that take many requests, some that
take a few.
Over time I have cooked up a strategy for how to share the cache love, even
though it may not be the best solution to the
Having so much data on each node is a potential bad day.
Is this discussed somewhere in the Cassandra documentation (limits, practices,
etc.)? We are also trying to load quite a lot of data and have hit memory
issues (bloom filters, etc.) in 1.0.10. I would like to read up on big data
usage.
Hello Cassandra users,
I am trying to read and process data in Cassandra using Hadoop. I have a 4-node
Cassandra cluster and an 8-node Hadoop cluster:
- 1 Namenode/Jobtracker
- 7 Datanodes/Tasktrackers (4 of them also hosting Cassandra)
I am using Cassandra 1.2 beta, Hadoop 0.20.2, Java
http://wiki.apache.org/cassandra/LargeDataSetConsiderations
On Thu, Dec 6, 2012 at 9:53 AM, Poziombka, Wade L
wade.l.poziom...@intel.com wrote:
“Having so much data on each node is a potential bad day.”
Is this discussed somewhere in the Cassandra documentation (limits,
Hi all,
Hadoop has its own file system (HDFS), and Cassandra has a different file
system (CFS).
Hadoop has a great ecosystem (Hive as a data warehouse, HBase as a database,
etc.), while Cassandra is a database that provides its own file system.
Although we can run Hadoop's ecosystem on Cassandra (If
Thanks Rob.
I am on 1.1.4 now (I can go to 1.1.6 if needed) and apparently it is
broken. I defined the table like this:
CREATE TABLE events (
    interval int,
    id bigint,
    containerName varchar,
    objectName varchar,
    objectType varchar,
    status int,
    severity int,
    event varchar,
    eventType
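(The message is cut off after "eventType". For reference, a complete definition along these lines might look like the sketch below; the final column's type and the PRIMARY KEY clause are assumptions, not taken from the original:

    -- Hypothetical completion: the eventType column type and the
    -- PRIMARY KEY clause are assumptions, since the original is truncated.
    CREATE TABLE events (
        interval int,
        id bigint,
        containerName varchar,
        objectName varchar,
        objectType varchar,
        status int,
        severity int,
        event varchar,
        eventType varchar,
        PRIMARY KEY (interval, id)
    );
)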
For background
http://wiki.apache.org/cassandra/Operations?highlight=%28snapshot%29#Consistent_backups
If you take it for a single node then yes, there is a chance of inconsistency
across CFs.
If you have multiple nodes, the snapshots you take on the later nodes will help.
If you use CL QUORUM for
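(For what it's worth, snapshots are taken per node with nodetool; tagging each run with the same name makes the per-node snapshots easy to correlate later. A sketch, with placeholder host, tag, and keyspace names:

    # Take a tagged snapshot of one keyspace on one node; repeat on every node.
    nodetool -h <host> snapshot -t backup-20121206 MyKeyspace
)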
Meaning terabyte-sized databases.
Lots of people have TB-sized systems. Just add more nodes.
300 to 400 GB is just a rough guideline. The bigger picture is considering how
routine and non-routine maintenance tasks are going to be carried out.
Cheers
-
Aaron Morton
On Thu, Dec 6, 2012 at 7:34 PM, aaron morton aa...@thelastpickle.com wrote:
For background
http://wiki.apache.org/cassandra/Operations?highlight=%28snapshot%29#Consistent_backups
If you take it for a single node
I think Aaron meant 300-400GB instead of 300-400MB.
Thanks.
-Wei
- Original Message -
From: Wade L Poziombka wade.l.poziom...@intel.com
To: user@cassandra.apache.org
Sent: Thursday, December 6, 2012 6:53:53 AM
Subject: RE: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered
+1
On Dec 6, 2012, at 10:06 PM, Wei Zhu wz1...@yahoo.com wrote:
I think Aaron meant 300-400GB instead of 300-400MB.