hey there,
I'm trying to scale a Cassandra cluster with OpenStack, but I'm seeing strange
behavior on scaleup (a new node is added) and scaledown (one node is
removed). (Don't worry, the seeds are stable.)
I start my cluster with 2 machines, one seed and one server, then create
the database.
At the beginning it looks like this:
[root@demo-server-seed-k6g62qr57nok ~]# nodetool status
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens  Owns  Host ID  Rack
UN  40.0.0.208  128.73 KB  248
Could you provide the result of :
- nodetool status
- nodetool status YOURKEYSPACE
When there's a scaledown action, I make sure to decommission the node
first. But still, I don't understand why I'm seeing this behaviour. Is it
normal? What do you normally do to remove a node? Is it related to tokens?
I'm assigning each of my nodes a different token based on its IP
address.
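For what it's worth, here is a minimal sketch of what "a different token per node, derived from its IP" could look like. This is my assumption about the scheme, not necessarily yours: it hashes the IP into the signed 64-bit range that Murmur3Partitioner uses.

```python
import hashlib

def token_for_ip(ip: str) -> int:
    """Deterministically map an IP address into the Murmur3Partitioner
    token range [-2**63, 2**63 - 1]. MD5 here is just a stable hash,
    not what Cassandra itself uses."""
    digest = hashlib.md5(ip.encode("ascii")).digest()
    unsigned = int.from_bytes(digest[:8], "big")  # 0 .. 2**64 - 1
    return unsigned - 2**63                       # shift into signed range

# Two distinct node IPs get distinct, stable tokens.
print(token_for_ip("40.0.0.208") != token_for_ip("40.0.0.209"))  # True
```

The point of such a scheme is that a node always rejoins with the same token, so scaleup/scaledown doesn't shuffle token assignments for surviving nodes.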
>> Aren't you using RandomPartitioner or
>> Murmur3Partitioner?
>>
>> C*heers,
>>
>> Alain
>>
>>
>>
>> 2015-09-07 12:01 GMT+02:00 Edouard COLE <edouard.c...@rgsystem.com>:
>>
>>> Please, don't mail me directly
>>>
>>>
>>> Also, what happens if you query using "CONSISTENCY LOCAL_QUORUM;" (or
>>> ALL) before your select ? If not using cqlsh, set the Consistency Level of
>>> your client to LOCAL_QUORUM or ALL and try to select again.
>>>
>>> Also, I am not sure of the meani
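For context on the LOCAL_QUORUM suggestion above: a quorum is a majority of the replicas, floor(RF/2) + 1. A tiny sketch in plain Python, just to illustrate the arithmetic:

```python
def quorum(replication_factor: int) -> int:
    """Number of replicas that must respond for a (LOCAL_)QUORUM
    read or write to succeed."""
    return replication_factor // 2 + 1

# With RF=3 a quorum is 2 nodes (one node may be down);
# with RF=2 a quorum is also 2, so no node may be down.
for rf in (1, 2, 3, 5):
    print(f"RF={rf} -> quorum={quorum(rf)}")
```

This is why a 2-node cluster behaves surprisingly at QUORUM: losing either node makes quorum reads fail.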
>>> On Sep 7, 2015 at 8:19 AM Alain RODRIGUEZ <arodr...@gmail.com>
>>> wrote:
>>>
>>>> Hi Sara,
>>>>
>>>> Can you detail actions performed, like how you load data, what scaleup
>>>> / scaledown are and precise if you let it decommission
> And answering with "Sorry, I can't help" is pointless :)
>
> Wait for the community to answer
>
> From: ICHIBA Sara [mailto:ichi.s...@gmail.com]
> Sent: Monday, September 07, 2015 11:34 AM
> To: user@cassandra.apache.org
> Subject: Re: cassandra scalability
Hi folks,
we just posted a detailed Netflix technical blog entry on this
http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html
Hope you find it interesting/useful
Cheers Adrian
<3 the straight line. Fantastic!
On Thu, Nov 3, 2011 at 6:41 PM, Adrian Cockcroft
adrian.cockcr...@gmail.com wrote:
Hi folks,
we just posted a detailed Netflix technical blog entry on this
http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html
Hope you find
Yes, it is true.
Current Cassandra has many limitations or bad implementations, especially at
the storage level.
In my opinion, these limitations or bad implementations are just
implementation details, not the original intention of the design.
And I also want to give a suggestion/advice to the project leaders,
Hi Paul,
I do not have any pressure to build software using Cassandra right now.
I am studying and exploring Cassandra now, hence I have a big curiosity
about it. OK, I will continue my study and wait for better documentation.
Dir.
On Mon, Apr 19, 2010 at 1:44 PM, Paul Prescod
On Sun, Apr 18, 2010 at 11:14, dir dir sikerasa...@gmail.com wrote:
Hi Gary,
The main reason is that the compaction operation (removing deleted
values) currently requires that an entire row be read into memory.
Thank you for your explanation, but I still do not understand what you
mean.
On Sat, Apr 17, 2010 at 10:50, dir dir sikerasa...@gmail.com wrote:
What problems can’t it solve?
No flexible indices
No querying on non-PK values
Not good for binary data (64 MB) unless you chunk
Row contents must fit in available memory
Gary Dusbabek says: Row contents must fit in
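On the "unless you chunk" point above: the usual workaround for large binary values is to split them into fixed-size pieces and store each piece under its own key, then reassemble on read. A rough sketch (the 64 MB figure comes from the list above; the helper names are made up):

```python
CHUNK_SIZE = 64 * 1024 * 1024  # the 64 MB limit mentioned in the list

def chunk_blob(blob: bytes, size: int = CHUNK_SIZE) -> list[bytes]:
    """Split a large value into pieces small enough to store individually."""
    return [blob[i:i + size] for i in range(0, len(blob), size)]

def reassemble(parts: list[bytes]) -> bytes:
    """Concatenate the chunks back into the original value."""
    return b"".join(parts)

# Demo with a tiny blob and a 4-byte chunk size:
parts = chunk_blob(b"x" * 10, size=4)
print([len(p) for p in parts])         # [4, 4, 2]
print(reassemble(parts) == b"x" * 10)  # True
```

Each chunk would then be written as a separate column/row, keeping every individual value well under the limit.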
On Sun, Apr 18, 2010 at 7:41 AM, Gary Dusbabek gdusba...@gmail.com wrote:
On Sat, Apr 17, 2010 at 10:50, dir dir sikerasa...@gmail.com wrote:
On Sun, Apr 18, 2010 at 8:26 AM, Brandon Williams dri...@gmail.com wrote:
On Sun, Apr 18, 2010 at 8:00 AM, Mason Hale ma...@onespot.com wrote:
This is a statement I wish I had run across sooner. Our first
implementation (which we're changing now) included some very big rows. We
ran into
In my opinion, the row contents actually must fit in available memory.
From: Mason Hale ma...@onespot.com
Sent: Sunday, April 18, 2010 8:53am
To: user@cassandra.apache.org
Subject: Re: Regarding Cassandra Scalability
I think you might be forgetting just how tiny tweets are. The last numbers
I heard, Twitter gets 55,000,000 messages a day. They've been around for
roughly 4 years.
I read in the news on the internet that in the beginning Twitter used the
RDBMS MySQL, until Twitter reached 1 million tweets per day.
On Sat, Apr 17, 2010 at 10:50 AM, dir dir sikerasa...@gmail.com wrote:
Hi Mason,
Honestly, I am a beginner user of Cassandra. I am rather confused following
this database. I asked the forum about the reason twitter.com uses
Cassandra
because I want to know the basic reason why we choose
hi,
I have been working with Hadoop for the past 1 year, but am quite new to
Cassandra. I would like to get a few things clarified regarding the
scalability of Cassandra. Can it scale up to TBs of data?
Please provide me some links regarding this.
--
With Love
Lin N
http://www.google.ca/search?hl=en&q=cassandra+terabyte
On Thu, Apr 15, 2010 at 11:28 PM, Linton N gabrialmarialin...@gmail.com wrote:
Thank you very much. Sorry for the trouble. I could have done it
myself.
On Fri, Apr 16, 2010 at 1:29 PM, Paul Prescod pres...@gmail.com wrote:
http://www.google.ca/search?hl=en&q=cassandra+terabyte
On 04/16/2010 01:38 AM, dir dir wrote:
I hear Facebook.com and twitter.com are using the
Cassandra database. In my opinion Facebook and
Twitter have hundreds of TB of data, because their users reach hundreds of
millions of people.
Also, people with 1M followers tend to have public tweets, which means
really I think it would be the same as subscribing to an RSS feed or
whatever. You aren't getting a local copy because you will always have
access to the tweet, as will everyone else. Also, tweets don't change,
AFAIK, so no point
PM, Stu Hood wrote:
http://twitter.com/jromeh/status/12295736793
-Original Message-
From: Mike Gallamore mike.e.gallam...@googlemail.com
Sent: Friday, April 16, 2010 3:46pm
To: user@cassandra.apache.org
Subject: Re: Regarding Cassandra Scalability
Also people with 1M followers tend to have
The redundancy/denormalization takes advantage of cheap writes to make
reads really quick. Imagine a query that returns one row with your
whole tweet stream vs. having to do 50 separate lookups per tweet.
Space is cheap and the upside is performance, especially if you're
getting a lot of fail
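The read-path tradeoff described above can be sketched with plain dictionaries. This is a toy model, not Cassandra's API; the table names are made up:

```python
# Normalized layout: the timeline row stores only tweet ids,
# so reading an N-tweet stream costs N extra lookups.
tweets = {1: "first!", 2: "hello", 3: "lunch"}
timeline_ids = {"alice": [3, 1]}

def read_normalized(user: str) -> list[str]:
    return [tweets[tid] for tid in timeline_ids[user]]  # one lookup per tweet

# Denormalized layout: each tweet was copied into every follower's
# timeline row at write time (cheap writes), so a read is a single
# row fetch regardless of stream length.
timeline_rows = {"alice": ["lunch", "first!"]}

def read_denormalized(user: str) -> list[str]:
    return timeline_rows[user]  # one row read

print(read_normalized("alice") == read_denormalized("alice"))  # True
```

The denormalized copy costs extra writes and storage per follower, which is exactly the "space is cheap, writes are cheap" bet described above.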