Older versions of Cassandra had a request scheduler API.
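
For context (this is from memory, so treat the details as approximate): it was
the request_scheduler option in cassandra.yaml, available through the 3.x line
and removed along with Thrift in 4.0, and as far as I remember it only
throttled Thrift clients, not the native CQL protocol. A rough sketch of the
round-robin-per-keyspace setup, with made-up keyspace names and weights:

    # Schedule incoming client requests round-robin across keyspaces so one
    # noisy tenant cannot starve the others. Thrift-only; removed in 4.0.
    request_scheduler: org.apache.cassandra.scheduler.RoundRobinScheduler
    request_scheduler_id: keyspace    # queue requests per keyspace
    request_scheduler_options:
        throttle_limit: 80            # requests beyond this limit get queued
        default_weight: 5             # weight for keyspaces not listed below
        weights:                      # per-keyspace weights (hypothetical names)
            tenant_hot: 1
            tenant_batch: 10

Not a substitute for real multi-tenancy, but it was the closest thing to
per-keyspace throttling that shipped in the box.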

On Monday, February 20, 2017, Ben Slater <ben.sla...@instaclustr.com> wrote:

> We’ve actually had several customers where we’ve done the opposite: split
> large clusters apart to separate use cases. We found that this allowed us
> to better align hardware with use case requirements (for example, using AWS
> c3.2xlarge for very hot data at low latency and m4.xlarge for more
> general-purpose data), and we can also tune JVM settings, etc. to suit
> those use cases.
>
> Cheers
> Ben
>
> On Mon, 20 Feb 2017 at 22:21 Oleksandr Shulgin <oleksandr.shul...@zalando.de> wrote:
>
>> On Sat, Feb 18, 2017 at 3:12 AM, Abhishek Verma <ve...@uber.com> wrote:
>>
>>> Cassandra is being used on a large scale at Uber. We usually create
>>> dedicated clusters for each of our internal use cases; however, that is
>>> difficult to scale and manage.
>>>
>>> We are investigating the approach of using a single shared cluster with
>>> 100s of nodes that handles 10s to 100s of different use cases for
>>> different products. We can define separate keyspaces for each of them,
>>> but that does not help with noisy neighbors.
>>>
>>> Does anybody in the community have similar large shared clusters and/or
>>> face noisy neighbor issues?
>>>
>>
>> Hi,
>>
>> We've never tried this approach, and given my limited experience I would
>> consider it a terrible idea from a maintenance perspective (remember the
>> old saying about eggs and baskets?).
>>
>> What potential benefits do you see?
>>
>> Regards,
>> --
>> Alex
>>
>> --
> ————————
> Ben Slater
> Chief Product Officer
> Instaclustr: Cassandra + Spark - Managed | Consulting | Support
> +61 437 929 798
>


-- 
Sorry, this was sent from mobile. Expect less grammar and spell checking than
usual.
