From: Xav Paice
Sent: Monday, October 10, 2016 8:41 PM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

I'm really keen to hear

From: Clint Byrum
Sent: Wednesday, October 12, 2016 10:46 PM
To: openstack-operators
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

Excerpts from Adam Kijak's message of 2016-10-12 12:23:41 +0000:

From: Warren Wang
Sent: Wednesday, October 12, 2016 10:02 PM
To: Adam Kijak
Cc: Abel Lopez; openstack-operators
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

If fault domain is a concern, you can a
ts,
Warren
On Wed, Oct 12, 2016 at 8:35 AM, Adam Kijak wrote:

From: Abel Lopez
Sent: Monday, October 10, 2016 9:57 PM
To: Adam Kijak
Cc: openstack-operators
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

Have you thought about dedicated pools for cinder/nova and a separate pool for
glance, and any other uses you might have?
You need to set up secrets on kvm, but you can have cinder creating volumes from
glance images quickly in different pools.
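
For reference, the plumbing for that usually looks something like the sketch
below. The pool names, cephx users, PG counts and the secret UUID are only
placeholders here, so adjust them for your own cluster; show_image_direct_url
on the glance side is what lets cinder/nova do copy-on-write clones out of the
images pool instead of full copies.

# one pool per service (names and PG counts are examples only)
ceph osd pool create images 128
ceph osd pool create volumes 128
ceph osd pool create vms 128

# on every compute node, register the cinder key as a libvirt secret
# (secret.xml carries the same UUID referenced in cinder.conf/nova.conf)
virsh secret-define --file secret.xml
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
    --base64 "$(ceph auth get-key client.cinder)"

# glance-api.conf
[DEFAULT]
show_image_direct_url = True
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance

# cinder.conf
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

# nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = vms
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Separate pools also mean each service's cephx key (client.glance,
client.cinder) can be capped to just its own pool, and replication and
quotas can be tuned per pool.
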
> On Oct 10, 2016, at 6:29 AM, Adam Kijak wrote:
>
On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote:
> Hello,
>
> We use a Ceph cluster for Nova (Glance and Cinder as well) and over time,
> more and more data is stored there. We can't keep the cluster so big because of
> Ceph's limitations. Sooner or later it needs to be closed for adding