Re: [Openstack-operators] Hypervisor Tuning Guide

2015-12-08 Thread Joe Topjian
Update on the Hypervisor Tuning Guide!

The plan mentioned earlier is still in effect and is in the midst of Step
2. All etherpad notes have been migrated to the OpenStack wiki and I've
recently finished cleaning them up. You can see the current work here[1].

For those who may be wondering "why yet another guide?", I guarantee that 9
out of 10 people who read the guide in its current state will learn
something new. Imagine how much more you could learn once it's complete.

If you're interested in contributing, please see the How To Contribute
section[2]. In short: just add what you know to the wiki.

Here's a list of items that would be great to have:

* Information about Hypervisors other than libvirt/KVM.
* Information about operating systems other than Linux.
* Real options, settings, and values that you have found to be successful
in production (see the illustrative sketch after this list).
* And ongoing: Continue to expand and elaborate on existing areas.
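
For example (a purely illustrative sketch with made-up values, not a
recommendation from the guide), a contribution under the Memory section
might record the exact sysctl settings you run in production:

  # /etc/sysctl.d/99-hypervisor.conf -- example values only
  # keep the hypervisor from swapping guest memory too eagerly
  vm.swappiness = 1
  # start background writeback earlier to smooth out I/O spikes
  vm.dirty_background_ratio = 5
  vm.dirty_ratio = 10
  # apply with: sysctl -p /etc/sysctl.d/99-hypervisor.conf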

As mentioned before, there is no definitive timeline for this guide.
There's no plan to have formal meetings or anything like that at the
moment, either. Just an occasional poke to add what you know. However, if
you'd like to see this guide fall under a more formal schedule and would
like to lead that effort, please get in contact with me.

Thanks,
Joe

1: https://wiki.openstack.org/wiki/Documentation/HypervisorTuningGuide
2:
https://wiki.openstack.org/wiki/Documentation/HypervisorTuningGuide#How_to_Contribute


On Tue, Oct 27, 2015 at 9:02 PM, Joe Topjian  wrote:

> We had a great Hypervisor Tuning Guide session yesterday!
>
> We agreed on an initial structure for the guide, which will include four
> core sections (CPU, Memory, Network, and Disk) with common subsections for
> each. The etherpad[1] has this structure defined, and during the session we
> went through and added brief notes about what should be included.
>
> Another agreement was that this guide should be detailed. It should give
> specific actions such as "change the following sysctl setting to nnn"
> rather than broader, more generic advice such as "make sure you aren't
> swapping". One disadvantage is that the guide might become out of date
> sooner than if it were more general. We felt this was an acceptable tradeoff.
>
> Our current plan is the following:
>
> 1. We're going to leave the etherpad active for the next two weeks to
> allow people to continue adding notes at their leisure. I'll send a
> reminder about this a few days before the deadline.
>
> 2. We'll then transfer the etherpad notes to the OpenStack wiki and begin
> creating a rough draft of the guide. Brief notes will be elaborated on and
> supporting documentation will be added. Areas that have no information will
> be highlighted for help. Everyone is encouraged to edit the wiki during
> this time.
>
> 3. Once a decent rough draft has been created, we'll look into creating a
> formal OpenStack document.
>
> We're all very busy, so there are no definitive timelines for completing
> steps 2 and 3. At a minimum, we'll continue to touch base with this during
> the Summits and mid-cycles. If there's enough interest, we could try to
> schedule a large block of time to do a doc sprint during one of these
> events.
>
> Thanks,
> Joe
>
> 1: https://etherpad.openstack.org/p/TYO-ops-hypervisor-tuning-guide
>


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-08 Thread Edgar Magana
Awesome code! I just tried it on a small testbed and it worked nicely!

Edgar




On 12/8/15, 7:16 PM, "Tom Fifield"  wrote:

>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
>> Hey fellow oppers!
>>
>> I was wondering if anyone has any experience doing a migration from
>> nova-network to neutron. We're looking at an in-place swap on an Icehouse
>> deployment; I don't have parallel infrastructure available.
>>
>> I came across a couple of things in my search:
>>
>> https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo
>> http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
>>
>> But neither of them has much in the way of details.
>>
>> Looking to disrupt as little as possible, but of course with something like 
>> this there's going to be an interruption.
>>
>> If anyone has any experience, pointers, or thoughts I'd love to hear about 
>> it.
>>
>> Thanks!
>>
>> -- Kevin
>
>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron)
>successfully to do a live nova-net to neutron migration on Juno.
>
>
>


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-08 Thread Sam Morrison

> On 9 Dec 2015, at 2:16 PM, Tom Fifield  wrote:
> 
> NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron)
> successfully to do a live nova-net to neutron migration on Juno.

That’s correct, except we were on Kilo. I’m not sure I would try to do this on
Icehouse, though; neutron was pretty immature back then, so it could be a lot
of pain.

Cheers,
Sam




Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-08 Thread Tom Fifield

On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:

Hey fellow oppers!

I was wondering if anyone has any experience doing a migration from
nova-network to neutron. We're looking at an in-place swap on an Icehouse
deployment; I don't have parallel infrastructure available.

I came across a couple of things in my search:

https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo
http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html

But neither of them has much in the way of details.

Looking to disrupt as little as possible, but of course with something like 
this there's going to be an interruption.

If anyone has any experience, pointers, or thoughts I'd love to hear about it.

Thanks!

-- Kevin


NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron)
successfully to do a live nova-net to neutron migration on Juno.






Re: [Openstack-operators] Potential updates to networking guide deployment scenarios

2015-12-08 Thread James Dempsey
Hi Matt,

Commentary in-line.

On 05/12/15 14:03, Matt Kassawara wrote:
> The networking guide [1] contains deployment scenarios [2] that describe
> the operation of several common OpenStack Networking (neutron)
> architectures including functional configuration examples.
> 
> Currently, the legacy and L3HA scenarios [3][4][5][6] only support
> attaching VMs to private/internal/project networks (managed by projects)
> with a combination of routers and floating IPs that provide connectivity to
> external networks such as the Internet. However, L3 support regardless of
> architecture adds complexity and can introduce redundancy/performance
> concerns.
> 
> On the other hand, the provider networks scenarios [7][8] only support
> attaching VMs to public/external/provider networks (managed by
> administrators) and exclude components such as private networks, routers,
> and floating IPs.
> 
> Turns out... you can do both. In fact, the installation guide for Liberty
> [9] supports attaching VMs to both public and private networks. No choosing
> between the simplicity of provider networks and the "self-service" nature
> of true cloud networking in your deployment.
> 
> So, I propose that we update the legacy and L3HA scenarios in the
> networking guide to support attaching VMs to both public and private
> networks using one of the following options:
> 
> 1) Add support for attaching VMs to public networks to the existing
> scenarios.
> 2) Create additional scenarios that support attaching VMs to both public
> and private networks.
> 3) Restructure the existing scenarios by starting out with simple provider
> networks architectures for both Open vSwitch and Linux bridge and
> optionally adding L3 support to them. The installation guide for Liberty
> uses this approach.
> 
> Option 1 somewhat increases complexity of scenarios that our audience may
> already find difficult to comprehend. Option 2 proliferates the scenarios
> and makes it more difficult for our audience to choose the best one for a
> particular deployment. In addition, it can lead to duplication of content
> that becomes difficult to keep consistent. Option 3 requires a more complex
> documentation structure that our audience may find difficult to follow. As
> the audience, I would like your input on the usefulness of these potential
> updates and which option works best... or add another option.
> 


I'm not crazy about option 1 because I think it could over-complicate the
simpler scenarios.

With respect to option 2, would you be doubling the number of documented
scenarios?

It sounds like the provider network and "Legacy"/L3HA scenarios are
orthogonal enough that they could be separate from each other.  I don't
think it is too much to ask of operators to read a couple of sections
and compose them, provided the requirements and prerequisites are clear.
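
(Purely as an illustration of what composing the two might look like -- the
names and placeholder IDs below are made up, not taken from the guide -- the
provider piece plus the self-service piece could be roughly:

  # admin: a flat provider network that instances can attach to directly
  neutron net-create provider --shared --router:external \
      --provider:network_type flat --provider:physical_network provider
  neutron subnet-create provider 203.0.113.0/24 --name provider-subnet \
      --allocation-pool start=203.0.113.101,end=203.0.113.200

  # project user: boot an instance on both the provider network and a
  # self-service project network
  nova boot --flavor m1.small --image cirros \
      --nic net-id=<provider-net-uuid> --nic net-id=<project-net-uuid> vm1

with routers and floating IPs coming from the existing "Legacy"/L3HA
scenarios.)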


While not specifically pertaining to the re-structure, I will make a
couple of comments about the deploy/scenario sections, if they are being
updated...

a. I think these sections are bound to be confusing regardless of how
they are structured or re-structured.  Perhaps there should be a
high-level comparison of the different scenarios to help operators
decide which scenario best fits their use case.  Maybe even a table
comparing them?

b. Does 'Legacy' just mean 'No HA/DVR Routing?'  I think that within the
context of OpenStack Networking, it is risky to call anything aside from
Nova Network 'Legacy.'  It seems like a 'single L3 agent' scenario is a
perfectly valid use case... It reduces complexity and cost while still
letting users create whatever topology they want.  Let me know if I'm
reading this wrong.

Cheers,
James



> Thanks,
> Matt
> 
> [1] http://docs.openstack.org/networking-guide/
> [2] http://docs.openstack.org/networking-guide/deploy.html
> [3] http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html
> [4] http://docs.openstack.org/networking-guide/scenario_legacy_lb.html
> [5] http://docs.openstack.org/networking-guide/scenario_l3ha_ovs.html
> [6] http://docs.openstack.org/networking-guide/scenario_l3ha_lb.html
> [7] http://docs.openstack.org/networking-guide/scenario_provider_ovs.html
> [8] http://docs.openstack.org/networking-guide/scenario_provider_lb.html
> [9] http://docs.openstack.org/liberty/install-guide-ubuntu/
> 
> 
> 
> 


-- 
James Dempsey
Senior Cloud Engineer
Catalyst IT Limited
+64 4 803 2264
--



Re: [Openstack-operators] Galera setup testing

2015-12-08 Thread Kevin Benton
Probably, but it's not something currently tested in the gate so I don't
know how we would prevent regressions.

On Mon, Dec 7, 2015 at 6:56 PM, Fox, Kevin M  wrote:

> Awesome news. Should there be a tag added for "Galera multi-master safe" to
> let us ops know about these things?
>
> Thanks,
> Kevin
>
> --
> From: Kevin Benton
> Sent: Monday, December 07, 2015 6:08:12 PM
> To: Matteo Panella
> Cc: OpenStack Operators
> Subject: Re: [Openstack-operators] Galera setup testing
>
> Neutron has protection against these now as of Liberty, with API-level
> retry operations, so this shouldn't be a problem for Neutron any more.
> On Dec 7, 2015 4:06 PM, "Matteo Panella" 
> wrote:
>
>> On 2015-12-07 22:45, James Dempsey wrote:
>>
>>> +1 for designating one node as primary.  This helped us reduce some
>>> deadlocks that we were seeing when balancing sessions between DB hosts.
>>>
>>
>> Keystone most likely won't be affected by writeset certification failures,
>> but other services (especially Neutron) are going to be hit by one sooner
>> or later.
>>
>> Unfortunately, Galera doesn't take very kindly to "SELECT ... FOR UPDATE",
>> so you can't really do proper load balancing (aside from designating a
>> different node as master in multiple listen stanzas, each one for a
>> different set of services).
>>
>> Regards,
>> --
>> Matteo Panella
>> INFN CNAF
>> Via Ranzani 13/2 c - 40127 Bologna, Italy
>> Phone: +39 051 609 2903
>>
>>
>


-- 
Kevin Benton


[Openstack-operators] Nova-network -> Neutron Migration

2015-12-08 Thread Kevin Bringard (kevinbri)
Hey fellow oppers!

I was wondering if anyone has any experience doing a migration from
nova-network to neutron. We're looking at an in-place swap on an Icehouse
deployment; I don't have parallel infrastructure available.

I came across a couple of things in my search:

https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo
http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html

But neither of them has much in the way of details.

Looking to disrupt as little as possible, but of course with something like 
this there's going to be an interruption.

If anyone has any experience, pointers, or thoughts I'd love to hear about it.

Thanks!

-- Kevin


Re: [Openstack-operators] Downgrade in Trove

2015-12-08 Thread Telles Nobrega
I totally agree. I will bring it up at the next Trove meeting to decide how
we will proceed with this.

Thanks,

On Tue, Dec 8, 2015 at 4:04 PM Flavio Percoco  wrote:

> On 07/12/15 13:11 +, Telles Nobrega wrote:
> >Makes sense, Flavio. We have approved the spec to delete the downgrade,
> >but we can wait a bit more and decide whether to add a deprecation warning
> >and delete in N, or do it now as long as no one is actually using it.
>
> My main concern is that, whenever you're going to delete something
> from your service, the feedback is rarely enough and deprecation paths
> should always be used.
>
> Cheers,
> Flavio
>
> >
> >
> >On Mon, Dec 7, 2015 at 10:01 AM Flavio Percoco  wrote:
> >
> >On 03/12/15 12:08 +, Telles Nobrega wrote:
> >>Hello all,
> >>
> >>we from Trove want to remove the downgrade functionality from our SQL
> >>schema. This was a TC-approved spec for all projects across OpenStack[1],
> >>and we need to follow this process as well.
> >>We would like to know whether anyone is using this functionality and
> >>whether removing it would seriously disrupt your environment.
> >>If anyone here is against this, please speak up; we are going to wait 48h
> >>and, if we get no negative response, move forward and remove downgrade.
> >
> >Hey Telles,
> >
> >It's always recommended to wait for a bit more than 48h (1 week?).
> >
> >That being said, we (Glance) moved forward with this by first adding a
> >deprecation warning on downgrades and deferring the deletion to N so
> >that operators who are actually using it (no idea if there are) can move
> >away from it.
> >
> >Flavio
> >
> >>
> >>Thanks in advance,
> >>
> >>
> >>[1] http://specs.openstack.org/openstack/openstack-specs/specs/
> >>no-downward-sql-migration.html
> >>--
> >>Telles Nobrega
> >>Software Engineer @ Red Hat
> >
> >
> >
> >--
> >@flaper87
> >Flavio Percoco
> >
> >--
> >Telles Nobrega
> >Software Engineer @ Red Hat
>
> --
> @flaper87
> Flavio Percoco
>
-- 
Telles Nobrega
Software Engineer @ Red Hat


Re: [Openstack-operators] Liberty cinder ceph architecture

2015-12-08 Thread Jesse Keating
The cinder-volume service just needs to know how to contact the ceph cluster
via rbd. It does not need to run on the ceph nodes that hold the storage
locally.
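
As a rough sketch (pool, user, and UUID values are placeholders, not from
this thread), the controller-side cinder.conf backend section looks
something like:

  [DEFAULT]
  enabled_backends = ceph

  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_user = cinder
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_secret_uuid = <libvirt secret uuid>

plus the ceph.conf and keyring on whichever node runs cinder-volume.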


- jlk

On Tue, Dec 8, 2015 at 12:26 AM, Ignazio Cassano 
wrote:

> Hi all, I am going to install OpenStack Liberty and I have already installed
> two ceph nodes. Now I need to know where the cinder components must be
> installed.
> In an NFS scenario I installed some cinder components on the controller node
> and some on the NFS server, but with ceph I would like to avoid installing
> cinder components directly on the ceph nodes.
> Any suggestions? My controller environment is made up of a cluster of
> physical nodes.
> Compute is made up of two KVM nodes.
> Must I install cinder-api, cinder-scheduler, cinder-volume, and cinder-backup
> on the controller nodes, or is it more convenient for performance to split
> them across different nodes?
> Another question is related to object storage: is ceph radosgw supported as
> a replacement for swift?
> Regards
>


Re: [Openstack-operators] Downgrade in Trove

2015-12-08 Thread Flavio Percoco

On 07/12/15 13:11 +, Telles Nobrega wrote:

Makes sense, Flavio. We have approved the spec to delete the downgrade, but we
can wait a bit more and decide whether to add a deprecation warning and delete
in N, or do it now as long as no one is actually using it.


My main concern is that, whenever you're going to delete something
from your service, the feedback is rarely enough and deprecation paths
should always be used.
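
As a purely illustrative sketch (not Glance's or Trove's actual migration
code), a deprecate-first downgrade stub could be as simple as:

  # hypothetical sqlalchemy-migrate style migration script
  import warnings


  def downgrade(migrate_engine):
      # warn loudly for a cycle before the downgrade path is removed
      warnings.warn(
          "SQL schema downgrades are deprecated and will be removed in "
          "the N release; restore from a database backup instead.",
          DeprecationWarning)
      # ...the existing downgrade logic would still run here for now...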

Cheers,
Flavio




On Mon, Dec 7, 2015 at 10:01 AM Flavio Percoco  wrote:

   On 03/12/15 12:08 +, Telles Nobrega wrote:
   >Hello all,
   >
   >we from Trove want to remove the downgrade functionality from our SQL
   >schema. This was a TC-approved spec for all projects across OpenStack[1],
   >and we need to follow this process as well.
   >We would like to know whether anyone is using this functionality and
   >whether removing it would seriously disrupt your environment.
   >If anyone here is against this, please speak up; we are going to wait 48h
   >and, if we get no negative response, move forward and remove downgrade.

   Hey Telles,

   It's always recommended to wait for a bit more than 48h (1 week?).

   That being said, we (Glance) moved forward with this by first adding a
   deprecation warning on downgrades and deferring the deletion to N so
   that operators who are actually using it (no idea if there are) can move
   away from it.

   Flavio

   >
   >Thanks in advance,
   >
   >
   >[1] http://specs.openstack.org/openstack/openstack-specs/specs/
   >no-downward-sql-migration.html
   >--
   >Telles Nobrega
   >Software Engineer @ Red Hat



   --
   @flaper87
   Flavio Percoco

--
Telles Nobrega
Software Engineer @ Red Hat


--
@flaper87
Flavio Percoco



[Openstack-operators] [app-catalog] IRC Meeting Thursday December 10th at 17:00UTC

2015-12-08 Thread Christopher Aedo
Greetings! Our next OpenStack Community App Catalog meeting will take
place this Thursday, December 10th, at 17:00 UTC in #openstack-meeting-3.

The agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Please add agenda items if there's anything specific you would like to
discuss (or, if the meeting time is not convenient for you, join us any
time on IRC in #openstack-app-catalog).

Please join us if you can!

-Christopher



Re: [Openstack-operators] Galera setup testing

2015-12-08 Thread Nick Jones


Neutron has protection against these now as of Liberty, with API-level retry
operations, so this shouldn't be a problem for Neutron any more.


Jay Pipes wrote an awesome post about the problem itself and a solution 
earlier this year: 
http://www.joinfu.com/2015/01/understanding-reservations-concurrency-locking-in-nova/


That said, we’re running the same setup as others have described - a
multi-writer mode Galera cluster with haproxy in front ensuring that only
one node is actually written to - and it’s worked very well for us thus far.
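
For anyone curious, the relevant haproxy fragment is roughly the following
(host names, addresses, ports, and the check user are made up for
illustration):

  # write traffic: one Galera node active, the others only as backups
  listen galera-writer
      bind 10.0.0.10:3306
      mode tcp
      option tcpka
      option mysql-check user haproxy_check
      server db1 10.0.0.11:3306 check
      server db2 10.0.0.12:3306 check backup
      server db3 10.0.0.13:3306 check backup

so all writes land on db1 unless it fails its health check; a second listen
stanza could point a different set of services at a different node, as
Matteo describes below.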


—

-Nick


On Dec 7, 2015 4:06 PM, "Matteo Panella" wrote:

On 2015-12-07 22:45, James Dempsey wrote:

+1 for designating one node as primary. This helped us reduce some
deadlocks that we were seeing when balancing sessions between DB hosts.

Keystone most likely won't be affected by writeset certification failures,
but other services (especially Neutron) are going to be hit by one sooner
or later.

Unfortunately, Galera doesn't take very kindly to "SELECT ... FOR UPDATE",
so you can't really do proper load balancing (aside from designating a
different node as master in multiple listen stanzas, each one for a
different set of services).


--
DataCentred Limited registered in England and Wales no. 05611763



Re: [Openstack-operators] cinder-api with rbd driver ignores ceph.conf

2015-12-08 Thread Saverio Proto
Hello there,

finally, yesterday I found a fast way to backport the rbd driver change to
the Juno glance_store.

I found this repository with the right patch I was looking for:
https://github.com/vumrao/glance_store.git (branch rbd_default_features)

I reworked the patch on top of stable juno:
https://github.com/zioproto/glance_store/commit/564129f865e10e7fcd5378a0914847323139f901

and I created my ubuntu packages.

Now everything works. I am testing the deb packages in my staging cluster.
Both cinder and glance now honor the ceph.conf default features, and all
volumes and images are created in the ceph backend with the object map.
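
A quick way to check (assuming the "volumes" pool and cinder user from this
thread; the image name is a placeholder):

  rbd --id cinder -p volumes info volume-<uuid> | grep features
  # the features line should now list "object-map"

and on Infernalis and later, existing images can be converted with
"rbd feature enable" and "rbd object-map rebuild", as Josh notes in the
quoted messages below.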

If anyone is running Juno and wants to enable this feature, we have
packages published here:
http://ubuntu.mirror.cloud.switch.ch/engines/packages/

Saverio



2015-11-26 11:36 GMT+01:00 Saverio Proto :
> Hello,
>
> I think it is worth updating the list on this issue, because a lot of
> operators are running Juno and might want to enable the object map
> feature in their rbd backend.
>
> our cinder backport seems to work great.
>
> However, most volumes are CoW clones of glance images, and Glance uses
> the rbd backend as well.
>
> This means that if glance images do not have the rbd object map
> feature, the cinder volumes will carry the "object map invalid" flag.
>
> So, we are now trying to backport this feature of the rbd driver to
> glance as well.
>
> Saverio
>
>
>
> 2015-11-24 13:12 GMT+01:00 Saverio Proto :
>> Hello there,
>>
>> we were finally able to backport the patch to Juno:
>> https://github.com/zioproto/cinder/tree/backport-ceph-object-map
>>
>> we are testing this version. Everything good so far.
>>
>> this will require in your ceph.conf
>> rbd default format = 2
>> rbd default features = 13
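>> # 13 = layering (1) + exclusive-lock (4) + object-map (8)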
>>
>> if anyone is willing to test this on his Juno setup I can also share
>> .deb packages for Ubuntu
>>
>> Saverio
>>
>>
>>
>> 2015-11-16 16:21 GMT+01:00 Saverio Proto :
>>> Thanks,
>>>
>>> I tried to backport this patch to Juno but it is not that trivial for
>>> me. I have 2 tests failing, about volume cloning and creating a volume
>>> without layering.
>>>
>>> https://github.com/zioproto/cinder/commit/0d26cae585f54c7bda5ba5b423d8d9ddc87e0b34
>>> https://github.com/zioproto/cinder/commits/backport-ceph-object-map
>>>
>>> I guess I will stop trying to backport this patch and wait for our
>>> OpenStack installation to be upgraded to Kilo to get the feature.
>>>
>>> If anyone ever backported this feature to Juno it would be nice to
>>> know, so I can use the patch to generate deb packages.
>>>
>>> thanks
>>>
>>> Saverio
>>>
>>> 2015-11-12 17:55 GMT+01:00 Josh Durgin :
 On 11/12/2015 07:41 AM, Saverio Proto wrote:
>
> So here is my best guess.
> Could it be that I am missing this patch?
>
>
> https://github.com/openstack/cinder/commit/6211d8fa2033c2a607c20667110c5913cf60dd53


 Exactly, you need that patch for cinder to use rbd_default_features
 from ceph.conf instead of its own default of only layering.

 In Infernalis and later versions of ceph you can also add the object map to
 existing rbd images via the 'rbd feature enable' and 'rbd object-map
 rebuild' commands.

 Josh

> proto@controller:~$ apt-cache policy python-cinder
> python-cinder:
>Installed: 1:2014.2.3-0ubuntu1.1~cloud0
>Candidate: 1:2014.2.3-0ubuntu1.1~cloud0
>
>
> Thanks
>
> Saverio
>
>
>
> 2015-11-12 16:25 GMT+01:00 Saverio Proto :
>>
>> Hello there,
>>
>> I am investigating why my cinder is slow deleting volumes.
>>
>> you might remember my email from few days ago with subject:
>> "cinder volume_clear=zero makes sense with rbd ?"
>>
>> so it turns out that volume_clear has nothing to do with the rbd driver.
>>
>> cinder was not guilty; it was really ceph rbd itself that was slow to
>> delete big volumes.
>>
>> I was able to reproduce the slowness just using the rbd client.
>>
>> I was also able to fix the slowness just using the rbd client :)
>>
>> This is fixed in the ceph Hammer release, which introduces a new feature.
>>
>>
>> http://www.sebastien-han.fr/blog/2015/07/06/ceph-enable-the-object-map-feature/
>>
>> With the object map feature enabled, rbd is now super fast at deleting
>> large volumes.
>>
>> However, now I am in trouble with cinder. It looks like my cinder-api
>> (running Juno here) ignores the changes in my ceph.conf file.
>>
>> cat cinder.conf | grep rbd
>>
>> volume_driver=cinder.volume.drivers.rbd.RBDDriver
>> rbd_user=cinder
>> rbd_max_clone_depth=5
>> rbd_ceph_conf=/etc/ceph/ceph.conf
>> rbd_flatten_volume_from_snapshot=False
>> rbd_pool=volumes
>> rbd_secret_uuid=secret
>>
>> But when I create a volume with cinder, the options in ceph.conf are
>> ignored:
>>
>> cat /etc/ceph/ceph.conf | grep rbd
>> rbd default format = 2
>> rbd default fea

[Openstack-operators] Liberty cinder ceph architecture

2015-12-08 Thread Ignazio Cassano
Hi all, I am going to install OpenStack Liberty and I have already installed
two ceph nodes. Now I need to know where the cinder components must be
installed.
In an NFS scenario I installed some cinder components on the controller node
and some on the NFS server, but with ceph I would like to avoid installing
cinder components directly on the ceph nodes.
Any suggestions? My controller environment is made up of a cluster of
physical nodes.
Compute is made up of two KVM nodes.
Must I install cinder-api, cinder-scheduler, cinder-volume, and cinder-backup
on the controller nodes, or is it more convenient for performance to split
them across different nodes?
Another question is related to object storage: is ceph radosgw supported as a
replacement for swift?
Regards