Another question is what type of SSDs you are using. There is a big
difference not just between vendors of SSDs but also between sizes, as
their internals make a big difference in how the OS interacts with them.
This link is still very useful today:
Are these nodes connected to dedicated or shared (in the sense that
other workloads are running on them) network switches? How fast (1G, 10G
or faster) are the interfaces? Also, how much RAM are you using? There's
a rule of thumb that says you should dedicate at least 1 GB of RAM for
each 1 TB of raw storage.
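As a quick illustration of that rule of thumb, here is a minimal sizing sketch. The numbers (48 TB raw per node) are assumptions for the example, not figures from this thread:

```shell
#!/bin/sh
# Sketch of the "1 GB of RAM per 1 TB of raw storage" rule of thumb.
# raw_tb is an assumed per-node raw capacity, not a value from the thread.
raw_tb=48                        # assumed raw OSD capacity on one node, in TB
gb_per_tb=1                      # the rule of thumb: 1 GB RAM per TB raw
min_ram_gb=$(( raw_tb * gb_per_tb ))
echo "${raw_tb} TB raw -> plan at least ${min_ram_gb} GB RAM"
```

Keep in mind this is a floor, not a target: OSDs use extra memory during recovery and backfill, so real deployments usually plan above it.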
>> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do
>> you handle Nova on Ceph?
>>
>> Excerpts from Adam Kijak's message of 2016-10-12 12:23:41 +:
> From: Clint Byrum <cl...@fewbar.com>
> Sent: Wednesday, October 12, 2016 10:46 PM
> To: openstack-operators
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do
> you handle Nova on Ceph?
>
> From: Warren Wang <war...@wangspeed.com>
> Sent: Wednesday, October 12, 2016 10:02 PM
> To: Adam Kijak
> Cc: Abel Lopez; openstack-operators
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do
> you handle Nova on Ceph?
>
> If fault domai
A Ceph cluster that can't be grown without the users noticing is an
over-subscribed Ceph cluster. My understanding is that one is always
advised to reserve a certain amount of cluster capacity for growth and
for re-replicating data onto replacement drives.
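To make that advice concrete, here is a rough, hypothetical capacity sketch (the raw size, replication factor, and headroom percentage below are assumptions, not figures from this thread):

```shell
#!/bin/sh
# Rough capacity sketch: raw capacity divided by the replication factor,
# minus a reserve kept free for growth and for re-replicating data after
# a drive failure. All inputs are assumed example values.
raw_tb=300            # assumed raw cluster capacity, in TB
replicas=3            # common 3x replication
headroom_pct=20       # assumed reserve kept free, in percent
usable_tb=$(( raw_tb / replicas ))
plannable_tb=$(( usable_tb * (100 - headroom_pct) / 100 ))
echo "raw=${raw_tb}TB usable=${usable_tb}TB plan-for=${plannable_tb}TB"
```

With these example numbers, 300 TB raw yields only 100 TB of usable space, and reserving headroom leaves 80 TB you should actually plan to fill before growing the cluster.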
> There are bugs in Ceph which can cause data corruption.
16 at 8:35 AM, Adam Kijak <adam.ki...@corp.ovh.com> wrote:
> ___
> From: Abel Lopez <alopg...@gmail.com>
> Sent: Monday, October 10, 2016 9:57 PM
> To: Adam Kijak
> Cc: openstack-operators
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do
> you handle Nova on Ceph?
>
> From: Xav Paice <xavpa...@gmail.com>
> Sent: Monday, October 10, 2016 8:41 PM
> To: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do
> you handle Nova on Ceph?
Have you thought about dedicated pools for cinder/nova and a separate pool for
glance, and any other uses you might have?
You need to set up secrets on kvm, but then you can have cinder creating
volumes from glance images quickly in different pools.
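A sketch of that pool split, loosely following the Ceph RBD/OpenStack integration docs of that era. The pool names, PG counts, and the secret UUID placeholder are assumptions for illustration, and these commands need a live Ceph cluster and libvirt host, so treat them as a template rather than a recipe:

```shell
# Separate pools for each consumer (names and PG counts are assumed)
ceph osd pool create volumes 128     # cinder volumes
ceph osd pool create vms 128         # nova ephemeral disks
ceph osd pool create images 128      # glance images

# A cephx user for cinder/nova with access limited to its pools
ceph auth get-or-create client.cinder \
  mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'

# Register the key as a libvirt secret on each compute node so
# qemu/kvm can attach RBD volumes -- this is the "secrets on kvm" step.
# SECRET_UUID is a placeholder: virsh secret-define prints the UUID.
cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret "$SECRET_UUID" \
  --base64 "$(ceph auth get-key client.cinder)"
```

Splitting pools this way also lets you set different replication or placement policies per use case, and keeps glance image churn isolated from volume I/O.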
> On Oct 10, 2016, at 6:29 AM, Adam Kijak
On Mon, 2016-10-10 at 13:29 +, Adam Kijak wrote:
> Hello,
>
> We use a Ceph cluster for Nova (Glance and Cinder as well), and over
> time more and more data is stored there. We can't keep growing the
> cluster indefinitely because of Ceph's limitations. Sooner or later it
> needs to be closed for adding