The Cassandra team is pleased to announce the release of Apache Cassandra
version 1.2.2.
Cassandra is a highly scalable second-generation distributed database,
bringing together Dynamo's fully distributed design and Bigtable's
ColumnFamily-based data model. You can read more here:
Hmmm, ok, that makes sense. I suspect the same is true with leveled
compaction as well?
Thanks,
Dean
On 2/25/13 6:47 AM, Edward Capriolo edlinuxg...@gmail.com wrote:
Mostly but not 100%. You have a bloom filter for each sstable, so
going to disk means finding the row in each sstable if you end
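For anyone wanting to see this in practice, one way (a sketch, assuming the usual nodetool cfstats output on 1.1/1.2) is to look at the per-column-family bloom filter statistics, which report lines along the lines of "Bloom Filter False Ratio" and "Bloom Filter Space Used":

    # dump stats for every keyspace/column family, then look at the bloom filter lines
    nodetool -h 127.0.0.1 cfstats | grep -i "bloom"

Each live sstable contributes its own filter to the space-used number, which is why the total only shrinks as sstables get compacted away.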
I am confused. I thought running compact turns off minor compactions
and that users are actually supposed to run upgradesstables (maybe I am on
old documentation?)
Well, that's not true. What happens is that compaction uses sstables of
approximately the same size. So if you run a major
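To keep the two operations straight (a quick, hedged reference; the keyspace and column family names are placeholders and exact option syntax may vary by nodetool version):

    # Major compaction: merges every sstable of the column family into one large sstable
    nodetool -h 127.0.0.1 compact MyKeyspace MyColumnFamily

    # upgradesstables: rewrites each sstable individually into the current on-disk format
    # (typically run after a version upgrade); it does not merge them into one file
    nodetool -h 127.0.0.1 upgradesstables MyKeyspace MyColumnFamily

Neither command disables minor compactions; the issue after a major compaction is only that the single big sstable no longer has similarly sized peers to be bucketed with.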
Result
+1: Stephen Connolly, Mikhail Mazursky
0: Fred Cooke
-1:
-Stephen
On 14 February 2013 09:28, Stephen Connolly stephen.alan.conno...@gmail.com
wrote:
Hi,
I'd like to release version 1.2.1-1 of Mojo's Cassandra Maven Plugin
to sync up with the 1.2.1 release of Apache Cassandra.
We
On 02/22/2013 07:47 PM, aaron morton wrote:
dropped this secondary index after a while.
I assume you use UPDATE COLUMN FAMILY in the CLI.
yes
How can I avoid this secondary index building on node join?
Check the schema using show schema in the cli.
I see no indexes for CF in show
After running a major compaction, automatic minor compactions are no
longer triggered, frequently requiring you
... Because of the size difference between the big sstable generated and
the new sstables flushed/compacted. Compactions are not stopped, they are
just no longer triggered for a while.
Sweet, thanks for the info.
Dean
From: Alain RODRIGUEZ arodr...@gmail.com
Reply-To: user@cassandra.apache.org
Date: Monday, February 25, 2013 7:41 AM
To: user@cassandra.apache.org
Dear users,
We have been seeing very strange behaviour from our Hadoop cluster after upgrading
Cassandra from 1.1.5 to Cassandra 1.2.1. We have a 5-node Cassandra cluster,
where three of the nodes are Hadoop slaves. Now when we submit a job through a
Pig script, only one map task runs on one of the Hadoop
Hello!
We have a 1.0.7 multi-DC Cassandra setup with strict time limits for reads
(15 ms). We use RF=1 per DC and read with CL=ONE. Data in the datacenters
is in sync, but we have the following problem:
when the application looks up a key which is not yet in the database, the coordinator
waits for digests from remote
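One knob that is often relevant here (a hedged suggestion, not a confirmed diagnosis of this setup; the column family name below is illustrative, and the syntax is cassandra-cli) is the per-CF read repair chance, which controls how often a CL=ONE read also sends digest requests to the other replicas:

    update column family mycf with read_repair_chance = 0.1;

Later releases also expose a datacenter-local variant (dclocal_read_repair_chance) so that most read repair traffic can stay within the local DC.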
Hi,
I'm trying to use describe_splits_ex to get splits for local records only.
When I call it, I always get a list with only one CfSplit. The start_token and
end_token are always the same as the ones I passed as input, and row_count is always 128.
I'm using 1.1.9. What am I doing wrong?
Thanks,
Hermán
You should be able to use LOCAL_QUORUM with RF=1. Did you try it and get
some error?
On Mon, Feb 25, 2013 at 10:01 AM, Igor i...@4friends.od.ua wrote:
Hello!
We have a 1.0.7 multi-DC Cassandra setup with strict time limits for reads
(15 ms). We use RF=1 per DC and read with CL=ONE. Data in
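With RF=1 per DC, LOCAL_QUORUM only needs the single local replica to respond, so it should not block on remote datacenters. A minimal sketch of such a keyspace (CQL3 syntax as in 1.2; the keyspace and datacenter names are made up, and on 1.0.x the keyspace would instead be defined through the CLI):

    CREATE KEYSPACE myks
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 1};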
Hmm, my upgrade completed and then I added the node back in and ran my repair.
What is weird is that my nreldata column family still shows 156 MB of memory
in use (down from 2 GB though!!) and a false positive ratio of 0.99576
when I have the filter completely disabled (i.e. set to 1.0). I
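Worth noting (a hedged sketch; the keyspace name is made up and nreldata is taken from the message above): changing bloom_filter_fp_chance only affects sstables written afterwards, so the existing filters stay on disk and in memory until those sstables are rewritten, e.g.:

    # in cassandra-cli: raise the false-positive chance to 1.0 (effectively disables the filter)
    update column family nreldata with bloom_filter_fp_chance = 1.0;

    # then rewrite the existing sstables so the old filters are actually dropped
    nodetool scrub MyKeyspace nreldata

Until that rewrite happens, nodetool cfstats keeps reporting the size and false positive ratio of the old filters, which could explain the numbers above.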
Hi all,
I have a cluster with 2 data centers and an RF=2 keyspace using
NetworkTopologyStrategy on 1.1.10. I would like to configure it such that some of the data is
not replicated across data centers but is replicated between the nodes of the
local data center. I assume my only options are to
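One common approach (a sketch, not the only option; the keyspace and datacenter names below are hypothetical, and the syntax is cassandra-cli as used on 1.1.x) is to split the data across two keyspaces, since replication is configured per keyspace:

    create keyspace ks_local_only
      with placement_strategy = 'NetworkTopologyStrategy'
      and strategy_options = {DC1 : 2};

    create keyspace ks_both_dcs
      with placement_strategy = 'NetworkTopologyStrategy'
      and strategy_options = {DC1 : 2, DC2 : 2};

Data that should stay in one data center goes into column families in the first keyspace; everything else goes into the second.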
Hello everyone!
I'd like to know if there is any guide or description of the Cassandra
server log (system.log).
I mean, how should I interpret each log event, and what information may I
retain from it?
I've been away from Cassandra for a while and wondered what the
consensus is on using 1.2.2 as a primary data store?
Our app has a typical OLTP workload but we have high availability
requirements. The data set is just under 1TB and I don't see us growing
to more than a small Cassandra cluster.
How big will each mutation be roughly? 1MB, 5MB, 16MB?
On 2/25/13 3:32 PM, Chris Dean ctd...@sokitomi.com wrote:
I've been away from Cassandra for a while and wondered what the
consensus is on using 1.2.2 as a primary data store?
Our app has a typical OLTP workload but we have high availability
Michael Kjellman mkjell...@barracuda.com writes:
How big will each mutation be roughly? 1MB, 5MB, 16MB?
On the small end. Say 1MB.
Cheers,
Chris Dean
I do this, and have done so with C*, since 0.8.6
Pitfalls:
1) Large mutations are a pain, which is why it's not really a recommended
use case for C*; I limit mine to 5MB
2) Repairs can get ugly and replication can get ugly due to the fact that
your hints will grow very quickly if you have an
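On the hint growth point, the relevant cassandra.yaml settings (shown with what I believe are the usual defaults; double-check against your own config) cap how long hints are collected for a dead replica:

    # stop storing hints for a node after it has been unreachable this long (3 hours)
    max_hint_window_in_ms: 10800000

    # hinted handoff can be disabled entirely if hint volume becomes a problem
    hinted_handoff_enabled: true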
Here's a sample request trace (Cassandra 1.2.1), where there's a gap of
almost 60ms between one of the two local quorum nodes receiving a message
and the row cache getting hit. There's then a further almost 60ms delay
between the response enqueue and the actual send. Please see 54.234.178.159 in
Hi - I am doing a performance run using a modified YCSB client and was able to
populate 8TB on a node and then ran some read workloads. I am seeing an average
TPS of 930 ops/sec for random reads. There is no key cache/row cache. Question -
Will the read TPS degrade if the data size increases to
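If the goal is to see how much caching would help, the key cache is controlled globally in cassandra.yaml and per column family (a hedged sketch; the size value is arbitrary, the table name is made up, and the per-table syntax is CQL3 as in 1.1/1.2):

    # cassandra.yaml: global key cache capacity in MB (leaving it empty uses an automatic default)
    key_cache_size_in_mb: 100

    -- per table (CQL3): cache keys only, not whole rows
    ALTER TABLE mytable WITH caching = 'keys_only';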
No, I did not look at nodetool gossipinfo, but from the ring output on both
the pre-upgrade nodes and the nodes upgraded to 1.2.1, what I observed was the
described behavior.
On Sat, Feb 23, 2013 at 1:26 AM, Michael Kjellman
mkjell...@barracuda.comwrote:
This was a bug with 1.2.0 but resolved in 1.2.1. Did you
Aaron,
Would 50 CFs be pushing it? According to
http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-improved-memory-and-disk-space-management,
"This has been tested to work across hundreds or even thousands of
ColumnFamilies."
What is the bottleneck, IO?
Thanks,
Javier
On Sun, Feb 24,