Sorry, no - you are not doing it wrong :)
Yes, the Cassandra partitioner is based on a hash ring. Doubling the number of nodes is
the best cluster-extending policy I've ever seen, because it's zero-overhead.
Hash ring: you take the MD5 maximum (2^128 - 1), divide it by the number of nodes (partitions)
to get N points, and then distribute them evenly across the ring.
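The zero-overhead doubling argument can be sketched numerically (a minimal illustration following the 128-bit MD5 token space described above; the class and method names are hypothetical, not Cassandra APIs):

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class HashRing {
    // Token space as described above: MD5 gives 128-bit tokens.
    static final BigInteger RING = BigInteger.ONE.shiftLeft(128);

    // N evenly spaced tokens: token_i = i * ringSize / n
    static List<BigInteger> evenTokens(int n) {
        List<BigInteger> tokens = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            tokens.add(RING.multiply(BigInteger.valueOf(i))
                           .divide(BigInteger.valueOf(n)));
        }
        return tokens;
    }
}
```

When you go from n to 2n nodes, every old token i * RING / n is also the token 2i * RING / (2n), so existing nodes keep their positions; each new node lands exactly midway in one existing range and takes half of exactly one neighbor's data, with nothing else moving.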
1. What do you mean by "on top of Ceph"?
2. What's the goal?
-- Original Message --
From: Colin Taylor colin.tay...@gmail.com
To: user@cassandra.apache.org
Sent: 01.02.2015 12:26:42
Subject: Cassandra on
Alah Akbar
-- Original Message --
From: Servando Muñoz G. smg...@gmail.com
To: user@cassandra.apache.org
Sent: 04.12.2014 16:12:32
Subject: RE: Test
Greetings…
Who are you?
From: Castelain, Alain
We have 380k of them in some of our rows and it's ok.
-- Original Message --
From: Hannu Kröger hkro...@gmail.com
To: user@cassandra.apache.org
Sent: 14.11.2014 16:13:49
Subject: Re: how wide can wide rows
What have you tried?
-- Original Message --
From: srinivas rao pinnakasrin...@gmail.com
To: Cassandra Users user@cassandra.apache.org
Sent: 11.11.2014 22:51:54
Subject: Better option to load data to cassandra
Hi Team,
Personally I'm not sure if that should be the case.
The node
Thanks
Jabbar Azam
On 8 November 2014 02:56, Plotnik, Alexey aplot...@rhonda.ru wrote:
Cassandra is a cluster itself; it's not necessary to make each node redundant -
Cassandra has replication for that. Cassandra is also designed to run in
multiple data centers, so I don't think that redundancy policy is applicable for you.
The only thing from what you're saying that you could deploy is RAID 10; other
After rebalance and cleanup I have a leveled CF (SSTable size = 100MB) and a
compaction task that is going to process ~750GB:
root@da1-node1:~# nodetool compactionstats
pending tasks: 10556
  compaction type   keyspace   column family   completed   total   unit
On Feb 18, 2014 at 2:58 PM, Plotnik, Alexey aplot...@rhonda.ru wrote:
Compression buffers are located in the heap - I saw them in a heap dump. That is:
==
public class CompressedRandomAccessReader extends RandomAccessReader {
    …..
    private ByteBuffer compressed; // the transfer buffer seen in the heap dump
}
==
1. How many parallel threads is safe to have for sub-range repair process
running for a single node?
2. Is the repair process affected by the `concurrent_compactors` parameter? Should
`concurrent_compactors` be sized to match the needs of the multi-threaded repair process?
UPD: here we can see that the repair process can be executed in parallel:
http://www.datastax.com/dev/blog/advanced-repair-techniques
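One way to drive parallel sub-range repair is to split a node's token range into equal subranges and hand each one to its own `nodetool repair -st <start> -et <end>` invocation. The splitter below is a hypothetical helper sketching that arithmetic (it is not a Cassandra API):

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class SubrangeSplitter {
    // Split the token range (start, end] into `parts` contiguous subranges,
    // each of which could be repaired independently in its own thread.
    static List<BigInteger[]> split(BigInteger start, BigInteger end, int parts) {
        BigInteger span = end.subtract(start);
        List<BigInteger[]> ranges = new ArrayList<>();
        BigInteger prev = start;
        for (int i = 1; i <= parts; i++) {
            // Integer arithmetic keeps the boundaries exact and monotonic.
            BigInteger next = start.add(span.multiply(BigInteger.valueOf(i))
                                            .divide(BigInteger.valueOf(parts)));
            ranges.add(new BigInteger[] { prev, next });
            prev = next;
        }
        return ranges;
    }
}
```

For example, splitting (0, 1000] into 4 parts yields the subranges (0, 250], (250, 500], (500, 750], (750, 1000].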
From: Plotnik, Alexey
Sent: 24 February 2014 12:42
To: user@cassandra.apache.org
Subject: Multi-threaded sub-range repair
My SSTable size is 100MB. Last time, after I removed the leveled manifest, compaction
was running for 3 months.
From: Robert Coli [mailto:rc...@eventbrite.com]
Sent: 19 February 2014 6:24
To: user@cassandra.apache.org
Subject: Re: Turn off compression (1.2.11)
On Mon, Feb 17, 2014 at 4:35 PM, Plotnik, Alexey aplot...@rhonda.ru wrote:
As an aside, 1.2.0 beta moved a bunch of data related to compression off the
heap. If you were
Each compressed SSTable uses an additional transfer buffer in its
CompressedRandomAccessReader instance.
After analyzing the heap I saw that this buffer is about 70KB per SSTable, and I
have more than 30K SSTables per node.
I want to turn off compression for this column family to save some heap. How
can I do that?
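The arithmetic behind the concern is straightforward: at the figures quoted in the message (~70KB per buffer, ~30K SSTables per node), these buffers alone hold on the order of 2GB of heap. A minimal sketch using those numbers (the class is illustrative, not Cassandra code):

```java
public class CompressionHeapCost {
    // Figures from the message above: ~70KB transfer buffer per
    // compressed SSTable, ~30,000 SSTables per node.
    static long totalBufferBytes(long bufferKb, long sstables) {
        return bufferKb * 1024 * sstables;
    }

    public static void main(String[] args) {
        long total = totalBufferBytes(70, 30_000);
        // Roughly 2.0 GB of heap held just by per-SSTable compression buffers.
        System.out.printf("%.1f GB%n", total / (1024.0 * 1024 * 1024));
    }
}
```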
For some reason, when my map-reduce job is almost complete, all mappers (~40)
begin to connect to a single Cassandra node. This node then dies due to a Java heap
space error.
It looks like Hadoop is misconfigured. The valid behavior for me:
each mapper should iterate only over its local node. How can I achieve that?
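What "each mapper should iterate only over its local node" means can be sketched as a locality check: a split is local when one of its replica hosts matches the host the map task runs on, and a high non-local count is what produces the pile-up on one node. The helper below is a hypothetical illustration of that check, not a Hadoop or Cassandra API:

```java
import java.util.List;

public class SplitLocality {
    // A split is "local" if one of its replica hosts is the task's own host.
    static boolean isLocal(List<String> replicaHosts, String taskHost) {
        return replicaHosts.contains(taskHost);
    }

    // Count splits whose data would have to be streamed from a remote node.
    // If this number is large, mappers will hammer remote Cassandra nodes
    // instead of reading their own ranges.
    static long countNonLocal(List<List<String>> splitReplicas, String taskHost) {
        return splitReplicas.stream()
                            .filter(replicas -> !isLocal(replicas, taskHost))
                            .count();
    }
}
```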
If you are talking about scaling: Cassandra scaling is absolutely horizontal,
without name nodes or other Mongo-bullshit-like intermediate daemons. And that's
why one big cluster has the same throughput as many smaller clusters.
What will you do when your small clusters exceed their capacity?