[openstack-dev] [Glance][Heat] Murano split discussion
During the last Atlanta summit there were a couple of discussions about Application Catalog and Application-space projects in OpenStack. These cross-project discussions occurred as a result of the Murano incubation request [1] during the Icehouse cycle. At the TC meeting devoted to Murano incubation there was an idea of splitting Murano into parts which might belong to different programs [2]. Today, I would like to initiate a discussion about potentially splitting Murano between two or three programs.

*App Catalog API to Catalog Program* The Application Catalog part can belong to the Catalog program; the package repository will move to the artifacts repository, where the Murano team already participates. The API part of the App Catalog will add a thin layer of API methods specific to Murano applications and can potentially be implemented as a plugin to the artifacts repository. This API layer will also expose third-party APIs such as the CloudFoundry ServiceBroker API, which is used by the CloudFoundry marketplace feature, to provide an integration layer between OpenStack application packages and third-party PaaS tools.

*Murano Engine to Orchestration Program* The Murano engine orchestrates Heat template generation. Complementary to Heat's declarative approach, the Murano engine uses an imperative approach, so that it is possible to control the whole flow of template generation. The engine uses Heat updates to modify Heat templates to reflect changes in the application layout. The Murano engine has a concept of actions - special flows which can be called at any time after application deployment to change application parameters or update stacks.
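The imperative flow described above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not actual Murano code: the function name, flavor/image values, and template layout are hypothetical, and a real engine would feed the result to python-heatclient stack-create/update calls.

```python
# Hypothetical sketch of imperative Heat template generation.
# Unlike a static HOT file, the whole flow is under program control:
# we can branch, loop, and react to application-specific events.

def build_app_template(instance_count, with_failover=False):
    """Imperatively assemble a HOT-style template as a plain dict."""
    template = {
        "heat_template_version": "2013-05-23",
        "resources": {},
    }
    for i in range(instance_count):
        template["resources"]["server_%d" % i] = {
            "type": "OS::Nova::Server",
            "properties": {"flavor": "m1.small", "image": "app-image"},
        }
    if with_failover:
        # Imperative logic: only HA setups get the extra standby node.
        template["resources"]["standby"] = {
            "type": "OS::Nova::Server",
            "properties": {"flavor": "m1.small", "image": "app-image"},
        }
    return template

# A Murano-style "action" could later regenerate the template with new
# parameters and pass it to a Heat stack-update call.
scaled = build_app_template(3, with_failover=True)
print(len(scaled["resources"]))  # prints 4 (3 servers + 1 standby)
```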
The engine is actually complementary to the Heat engine and adds the following:
- Orchestration of multiple Heat stacks - DR deployments, HA setups, multiple-datacenter deployments
- Initiating and controlling stack updates on application-specific events
- Error handling and self-healing - being imperative, Murano allows you to handle issues and implement additional logic around error handling and self-healing

*Murano UI to Dashboard Program* The Application Catalog requires a UI focused on user experience. Currently there is a Horizon plugin for the Murano App Catalog which adds an Application Catalog page to browse, search and filter applications. It also adds dynamic UI functionality to render Horizon forms without writing actual code.

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-February/027736.html
[2] http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-03-04-20.02.log.txt

-- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Glance][Heat] Murano split discussion
*Murano UI to Dashboard Program* The Application Catalog requires a UI focused on user experience. Currently there is a Horizon plugin for the Murano App Catalog which adds an Application Catalog page to browse, search and filter applications. It also adds dynamic UI functionality to render Horizon forms without writing actual code. Are we going to wait for the generic UI (Merlin) or get murano-dashboard into Horizon then work on Merlin? Merlin will be a generic library/framework in Horizon for application projects (Heat, Murano, Solum). We still need specific UI implementations for each project, but these projects will reuse the common code. We can put the existing Murano UI into Dashboard or inside the Catalog program as a separate repo. I think it might make sense to keep UI components closer to the project rather than keeping the UI in a separate program. On Thu, Aug 21, 2014 at 6:35 PM, Angus Salkeld asalk...@mirantis.com wrote: On Thu, Aug 21, 2014 at 6:14 AM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: During the last Atlanta summit there were a couple of discussions about Application Catalog and Application-space projects in OpenStack. These cross-project discussions occurred as a result of the Murano incubation request [1] during the Icehouse cycle. At the TC meeting devoted to Murano incubation there was an idea of splitting Murano into parts which might belong to different programs [2]. Today, I would like to initiate a discussion about potentially splitting Murano between two or three programs. *App Catalog API to Catalog Program* The Application Catalog part can belong to the Catalog program; the package repository will move to the artifacts repository, where the Murano team already participates. The API part of the App Catalog will add a thin layer of API methods specific to Murano applications and can potentially be implemented as a plugin to the artifacts repository.
This API layer will also expose third-party APIs such as the CloudFoundry ServiceBroker API, which is used by the CloudFoundry marketplace feature, to provide an integration layer between OpenStack application packages and third-party PaaS tools. Seems to make sense, tho' I am not a glance-core. *Murano Engine to Orchestration Program* The Murano engine orchestrates Heat template generation. Complementary to Heat's declarative approach, the Murano engine uses an imperative approach, so that it is possible to control the whole flow of template generation. The engine uses Heat updates to modify Heat templates to reflect changes in the application layout. The Murano engine has a concept of actions - special flows which can be called at any time after application deployment to change application parameters or update stacks. The engine is actually complementary to the Heat engine and adds the following: - orchestration of multiple Heat stacks - DR deployments, HA setups, multiple-datacenter deployments - initiating and controlling stack updates on application-specific events - error handling and self-healing - being imperative, Murano allows you to handle issues and implement additional logic around error handling and self-healing. +1 Are the teams going to work as-is from a core reviewer PoV? (I'd assume so, just clarifying.) I am just wondering how we can get the Heat and Murano teams to know what each other is doing - basically work at least somewhat together. *Murano UI to Dashboard Program* The Application Catalog requires a UI focused on user experience. Currently there is a Horizon plugin for the Murano App Catalog which adds an Application Catalog page to browse, search and filter applications. It also adds dynamic UI functionality to render Horizon forms without writing actual code. Are we going to wait for the generic UI (Merlin) or get murano-dashboard into Horizon then work on Merlin?
-Angus

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-February/027736.html
[2] http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-03-04-20.02.log.txt
Re: [openstack-dev] [Glance][Heat] Murano split discussion
Hi, Let me comment on the reasons why we’ve suggested adding the Murano engine to the Orchestration program. If we consider the previous Murano incubation discussions, we see that an overlap with the Orchestration program was one of the TC’s concerns. We also find that it was the Heat team that proposed splitting Murano into specific parts. As for items a) and b) highlighted by Zane, we definitely share some of these concerns, but we were guided by TC feedback from the previous Murano incubation review that recommended splitting Murano into separate components. We are fine with having a separate program or joining existing programs. In the current situation - with the Glance mission changed to a more generic Catalog mission and the Orchestration program mission covering everything - it is better to join existing programs rather than trying to add a new one and stepping on everyone’s toes. If the Orchestration program team believes that Murano does not intersect with the mission of the Orchestration program and should start its own program, let’s send this message to the TC and the Murano team is ready to go this route. Thanks Georgy On Fri, Aug 22, 2014 at 6:29 AM, Zane Bitter zbit...@redhat.com wrote: On 21/08/14 04:30, Thierry Carrez wrote: Georgy Okrokvertskhov wrote: During the last Atlanta summit there were a couple of discussions about Application Catalog and Application-space projects in OpenStack. These cross-project discussions occurred as a result of the Murano incubation request [1] during the Icehouse cycle. At the TC meeting devoted to Murano incubation there was an idea of splitting Murano into parts which might belong to different programs [2]. Today, I would like to initiate a discussion about potentially splitting Murano between two or three programs. [...] I think the proposed split makes a lot of sense. Let's wait for the feedback of the affected programs to see if it's compatible with their own plans.
I want to start out by saying that I am a big proponent of doing stuff that makes sense, and wearing my PTL hat I will support the consensus of the community on whatever makes the most sense. With the PTL hat off again, here is my 2c on what I think makes sense: * The Glance thing makes total sense to me. Murano's requirements should be pretty much limited to an artifact catalog with some metadata - that's bread and butter for Glance. Murano folks should join the Glance team and drive their requirements into the artifact catalog. * The Horizon thing makes some sense. I think at least part of the UI should be in Horizon, but I suspect there's also some stuff in there that is pretty specific to the domain that Murano is tackling and it might be better for that to live in the same program as the Murano engine. I believe that there's a close analogue here with Tuskar and the TripleO program, so maybe we could ask them about any lessons learned. Georgy suggested elsewhere that the Merlin framework should be in Horizon and the rest in the same program as the engine, and that would make total sense to me. * The Heat thing doesn't make a lot of sense IMHO. I now understand that apparently different projects in the same program can have different core teams - which just makes me more confused about what a program is for, since I thought it was a single team. Nevertheless, I don't think that the Murano project would be well-served by being represented by the Heat PTL (which is, I guess, the only meaning still attached to a program). I don't think they want the Heat PTL triaging their bugs, and I don't think it's even feasible for one person to do that for both projects (that is to say, I already have a negative amount of extra time available for Launchpad just handling Heat). 
I don't think they want the Heat PTL to have control over their design summit sessions, and if I were the PTL doing that I would *hate* to be in the position of trying to balance the interests of the two projects - *especially*, given that I am in Clint's camp of not seeing a lot of value in Murano, when one project has not gone through the incubation process and therefore there would be no guidance available from the TC or consensus in the wider community as to whether that project warranted any time at all devoted to it. In fact, I would go so far as to say that it's completely unreasonable to put a single PTL in that position. So, I don't think putting the Murano engine into the Orchestration program is being proposed because it makes sense. I think it's being proposed, despite not making sense, because people consider it unlikely that the TC would grant Murano a separate program due to some combination of: (a) People won't think Murano is a good (enough) idea - in which case we shouldn't do it (yet); and/or (b) People have an irrational belief that projects are lightweight but programs are heavyweight, when the reverse is true, and will block any new programs for fear of letting another person call themselves a PTL
[openstack-dev] [Solum] Language pack attributes schema
Hi, As a part of the Language pack workgroup session we created an etherpad for the language pack attributes definition. Please find a first draft of the language pack attributes here: https://etherpad.openstack.org/p/Solum-Language-pack-json-format We have identified a minimal list of attributes which should be supported by the language pack API. Please provide your feedback and/or ideas in this etherpad. Once it is reviewed, we can use this as a basis for language packs in the PoC. Thanks Georgy
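For illustration, a language pack record along the lines discussed in this thread might look like the sketch below. The attribute names (name, build_toolchain, OS, language platform, versions) are taken from the summary later in this archive; the actual schema lives in the etherpad and may differ, so treat every field here as a hypothetical example rather than the agreed format.

```python
import json

# Hypothetical language pack attribute record; the real schema is
# being drafted in the etherpad and may differ in names and nesting.
language_pack = {
    "name": "java-openjdk7",
    "build_toolchain": [{"name": "maven", "version": "3.0"}],
    "os_platform": {"OS": "Ubuntu", "version": "12.04"},
    "language_pack_type": "java",
    "language_implementation": {"name": "openjdk", "version": "7.0"},
}

# A JSON document like this is what the language pack API would
# presumably accept and return.
print(json.dumps(language_pack, indent=2, sort_keys=True))
```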
[openstack-dev] [qa] [Solum] [tempest] Use of pecan test framework in functional tests
Hi, In the Solum project we are currently creating test environments for future tests. We split unit tests and functional tests in order to use the Tempest framework from the beginning. The Tempest framework assumes that you run your service and test API endpoints by sending HTTP requests. Solum uses the Pecan WSGI framework, which has its own test framework based on WebTest. This framework allows testing the application without sending actual HTTP traffic. It mocks the low-level transport but keeps the high-level WSGI parts as in a real application/service. There is a question to the QA/Tempest teams: what do you think about using the Pecan test framework in Tempest for Pecan-based applications? Thanks Georgy
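The in-process idea can be shown without Pecan or WebTest themselves: a WSGI application is just a callable, so a test can invoke it directly with a fabricated environ dict instead of opening a socket. The app and helper below are illustrative stand-ins (not Solum code, and a simplification of what WebTest actually does), assuming a trivial JSON endpoint.

```python
import io
import json

def app(environ, start_response):
    """A trivial WSGI endpoint standing in for a Pecan controller."""
    body = json.dumps({"path": environ["PATH_INFO"]}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

def call_wsgi(application, path):
    """Invoke the app in-process - no HTTP traffic, just a call."""
    captured = {}

    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers

    environ = {
        "REQUEST_METHOD": "GET",
        "PATH_INFO": path,
        "wsgi.input": io.BytesIO(b""),
    }
    body = b"".join(application(environ, start_response))
    return captured["status"], json.loads(body)

status, data = call_wsgi(app, "/v1/assemblies")
print(status, data)  # prints: 200 OK {'path': '/v1/assemblies'}
```

Tempest, by contrast, would send a real GET over the network to a deployed service, which is why the thread below converges on plain HTTP for Tempest tests.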
Re: [openstack-dev] [qa] [Solum] [tempest] Use of pecan test framework in functional tests
Thanks everyone for the feedback. We will follow the standard approach with HTTP requests in Tempest tests. Thanks Georgy On Tue, Dec 10, 2013 at 2:47 PM, Sean Dague s...@dague.net wrote: Pretty much 100% agree with Russell and Ryan. WebTest is interesting for in-tree testing with Solum, because it's specifically *not* bringing up the full stack. When it comes to Tempest, you are hitting a live OpenStack cloud, most likely not on the same machine as Tempest is on (not true in the gate today... but we try to act like it is). So you must hit HTTP. -Sean On 12/10/2013 04:24 PM, Ryan Petrello wrote: My opinion is that there’s value in both. Writing functional tests for Solum’s test suite using WebTest can be pretty useful for testing the API’s logic without having to involve HTTP (to e.g., call API endpoints with certain POST arguments and assert that certain mocked functions end up being called down the line). When you involve Tempest, though, you’re generally pointing at a real HTTP server and testing for correctness, so using HTTP here makes sense (imo). --- Ryan Petrello Senior Developer, DreamHost ryan.petre...@dreamhost.com On Dec 10, 2013, at 4:12 PM, Russell Bryant rbry...@redhat.com wrote: On 12/10/2013 04:10 PM, Georgy Okrokvertskhov wrote: Hi, In the Solum project we are currently creating test environments for future tests. We split unit tests and functional tests in order to use the Tempest framework from the beginning. The Tempest framework assumes that you run your service and test API endpoints by sending HTTP requests. Solum uses the Pecan WSGI framework, which has its own test framework based on WebTest. This framework allows testing the application without sending actual HTTP traffic. It mocks the low-level transport but keeps the high-level WSGI parts as in a real application/service. There is a question to the QA/Tempest teams: what do you think about using the Pecan test framework in Tempest for Pecan-based applications? I don't think that makes sense.
Then we're not using the code like it would be used normally (via HTTP). -- Russell Bryant

-- Sean Dague http://dague.net

-- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [heat] [glance] Heater Proposal
Hi, To keep this thread alive I would like to share a small screencast I've recorded for the Murano Metadata repository. I would like to share with you what we have in Murano and start a conversation about metadata repository development in OpenStack. Here is a link to the screencast: http://www.youtube.com/watch?v=Yi4gC4ZhvPg Here is a link https://wiki.openstack.org/wiki/Murano/SimplifiedMetadataRepository to a detailed specification of the PoC for the metadata repository currently implemented in Murano. There is an etherpad (here: https://etherpad.openstack.org/p/MuranoMetadata) for the new MetadataRepository design we started to write after the lessons-learned phase of the PoC. This is the future version of the repository we want to have. This proposal can be used as an initial basis for the metadata repository design conversation. It would be great if we start a conversation with the Glance team to understand how this work can be organized. As was revealed in this thread, the most probable candidate for the metadata repository service implementation is the Glance program. Thanks, Georgy On Mon, Dec 9, 2013 at 3:24 AM, Thierry Carrez thie...@openstack.org wrote: Vishvananda Ishaya wrote: On Dec 6, 2013, at 10:07 AM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: I am really inspired by this thread. Frankly speaking, Glance for Murano was a kind of sacred entity, as it is a service with a long history in OpenStack. We did not even think in the direction of changing Glance. After spending a night with these ideas, I am kind of having a dream about a unified catalog where the full range of different entities is presented. Just imagine that we have everything as first-class citizens of the catalog, treated equally: a single VM (image), a Heat template (fixed number of VMs / autoscaling groups), a Murano application (generated Heat templates), Solum assemblies. Projects like Solum will highly benefit from this catalog as they can use all varieties of VM configurations talking with one service.
This catalog will be able not just to list all possible deployable entities but can also be a registry of already deployed configurations. This is perfectly aligned with the goal for the catalog to be a kind of marketplace which provides billing information too. OpenStack users will also benefit from this, as they will have a unified approach for managing deployments and deployable entities. I doubt that it could be done by a single team. But if all teams join this effort we can do it. From my perspective, this could be a part of the Glance program and it is not necessary to add a new program for that. As was mentioned earlier in this thread, the idea of a marketplace for images in Glance has been here for some time. I think we can extend it to the idea of creating a marketplace for a deployable entity regardless of the way of deployment. As Glance is a core project, which means it always exists in an OpenStack deployment, it makes sense to use it as a central catalog for everything. +1 +1 too. I don't think that Glance is collapsing under its current complexity yet, so extending Glance to a general catalog service that can serve more than just reference VM images makes sense IMHO. -- Thierry Carrez (ttx)
Re: [openstack-dev] [heat] [glance] Heater Proposal
Hi, I think a BP is the right way to organize this. I will submit a BP for the metadata service from our side too. Thanks Georgy On Wed, Dec 11, 2013 at 3:53 PM, Randall Burt randall.b...@rackspace.com wrote: On Dec 11, 2013, at 5:44 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, To keep this thread alive I would like to share a small screencast I've recorded for the Murano Metadata repository. I would like to share with you what we have in Murano and start a conversation about metadata repository development in OpenStack. Here is a link to the screencast: http://www.youtube.com/watch?v=Yi4gC4ZhvPg Here is a link to a detailed specification of the PoC for the metadata repository currently implemented in Murano. There is an etherpad (here) for the new MetadataRepository design we started to write after the lessons-learned phase of the PoC. This is the future version of the repository we want to have. This proposal can be used as an initial basis for the metadata repository design conversation. It would be great if we start a conversation with the Glance team to understand how this work can be organized. As was revealed in this thread, the most probable candidate for the metadata repository service implementation is the Glance program. Thanks, Georgy Thanks for the link and info. I think the general consensus is this belongs in Glance; however, I think details are being deferred until the mid-summit meet-up in Washington D.C. (I could be totally wrong about this). In any case, I think I'll also start converting the existing HeatR blueprints to Glance ones. Perhaps it would be a good idea at this point to propose specific blueprints and have further ML discussions focused on specific changes? On Mon, Dec 9, 2013 at 3:24 AM, Thierry Carrez thie...@openstack.org wrote: Vishvananda Ishaya wrote: On Dec 6, 2013, at 10:07 AM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: I am really inspired by this thread.
Frankly speaking, Glance for Murano was a kind of sacred entity, as it is a service with a long history in OpenStack. We did not even think in the direction of changing Glance. After spending a night with these ideas, I am kind of having a dream about a unified catalog where the full range of different entities is presented. Just imagine that we have everything as first-class citizens of the catalog, treated equally: a single VM (image), a Heat template (fixed number of VMs / autoscaling groups), a Murano application (generated Heat templates), Solum assemblies. Projects like Solum will highly benefit from this catalog as they can use all varieties of VM configurations talking with one service. This catalog will be able not just to list all possible deployable entities but can also be a registry of already deployed configurations. This is perfectly aligned with the goal for the catalog to be a kind of marketplace which provides billing information too. OpenStack users will also benefit from this, as they will have a unified approach for managing deployments and deployable entities. I doubt that it could be done by a single team. But if all teams join this effort we can do it. From my perspective, this could be a part of the Glance program and it is not necessary to add a new program for that. As was mentioned earlier in this thread, the idea of a marketplace for images in Glance has been here for some time. I think we can extend it to the idea of creating a marketplace for a deployable entity regardless of the way of deployment. As Glance is a core project, which means it always exists in an OpenStack deployment, it makes sense to use it as a central catalog for everything. +1 +1 too. I don't think that Glance is collapsing under its current complexity yet, so extending Glance to a general catalog service that can serve more than just reference VM images makes sense IMHO.
-- Thierry Carrez (ttx)
[openstack-dev] [Heat] [Murano] [Solum] Metadata repository initiative discussion for Glance
Hi, Recently a Heater proposal was announced on the openstack-dev mailing list. This discussion led to a decision to add unified metadata service / catalog capabilities into Glance. At the Glance weekly meeting this initiative was discussed, and the Glance team agreed to take a look at the BPs and API documents for the metadata repository / catalog, in order to understand what can be done during the Icehouse release and how to organize this work in general. There will be a separate meeting devoted to this initiative on Tuesday 12/17 in the #openstack-glance channel. The exact time is not defined yet, and I need time preferences from all parties. Here is a link to a doodle poll: http://doodle.com/9f2vxrftizda9pun . Please select the time slot which is suitable for you. The agenda for this meeting is the following:
1. Define project goals in general
2. Discuss the API for this service and find out what can be implemented during the Icehouse release.
3. Define organizational matters, such as how this initiative should be developed (a branch of Glance or a separate project within the Glance program)
Here is an etherpad https://etherpad.openstack.org/p/MetadataRepository-API for the initial API version for this service. All projects which are interested in the metadata repository are welcome to discuss the API and the service itself. Currently there are several possible use cases for this service:
1. Heat template catalog
2. HOT software orchestration scripts/recipes storage
3. Murano Application Catalog object storage
4. Solum assets storage
Thanks Georgy
Re: [openstack-dev] [Heat] [Murano] [Solum] [Glance]Metadata repository initiative discussion for Glance
Hi, Doodle shows that the most suitable time is 10 AM PST on Tuesday. Let's keep this time for the Metadata Repository / Catalog meeting in the #openstack-glance IRC channel. See you tomorrow! Thanks Georgy On Fri, Dec 13, 2013 at 12:09 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, It looks like I forgot to add Glance. Fixing this now. I am sorry for duplicating the thread. Thanks Georgy On Fri, Dec 13, 2013 at 12:02 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Yes. It is Pacific Standard Time. Thanks Georgy On Fri, Dec 13, 2013 at 12:01 PM, Keith Bray keith.b...@rackspace.com wrote: PT as in Pacific Standard Time? -Keith On Dec 13, 2013 1:56 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, It is PT. I will add this info to the doodle poll. Thanks Georgy On Fri, Dec 13, 2013 at 11:50 AM, Keith Bray keith.b...@rackspace.com wrote: What timezone is the poll in? It doesn't say on the Doodle page. Thanks, -Keith From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Friday, December 13, 2013 12:21 PM To: OpenStack Development Mailing List openstack-dev@lists.openstack.org Subject: [openstack-dev] [Heat] [Murano] [Solum] Metadata repository initiative discussion for Glance Hi, Recently a Heater proposal was announced on the openstack-dev mailing list. This discussion led to a decision to add unified metadata service / catalog capabilities into Glance. At the Glance weekly meeting this initiative was discussed, and the Glance team agreed to take a look at the BPs and API documents for the metadata repository / catalog, in order to understand what can be done during the Icehouse release and how to organize this work in general. There will be a separate meeting devoted to this initiative on Tuesday 12/17 in the #openstack-glance channel. The exact time is not defined yet, and I need time preferences from all parties.
Here is a link to a doodle poll: http://doodle.com/9f2vxrftizda9pun . Please select the time slot which is suitable for you. The agenda for this meeting is the following: 1. Define project goals in general 2. Discuss the API for this service and find out what can be implemented during the Icehouse release. 3. Define organizational matters, such as how this initiative should be developed (a branch of Glance or a separate project within the Glance program) Here is an etherpad https://etherpad.openstack.org/p/MetadataRepository-API for the initial API version for this service. All projects which are interested in the metadata repository are welcome to discuss the API and the service itself. Currently there are several possible use cases for this service: 1. Heat template catalog 2. HOT software orchestration scripts/recipes storage 3. Murano Application Catalog object storage 4. Solum assets storage Thanks Georgy -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation
Hi, The Murano project volunteers to be the first project applying to the Emerging Projects program. Murano has been around for quite a long time and was developed taking into account all OpenStack development processes. I think Murano is a good candidate to be the first project in this new program, as it should be quite painless to try different OpenStack requirements on Murano in order to specify the Emerging Projects requirements in the future. What kind of application process should there be for the Emerging Projects program? Thanks Georgy On Tue, Dec 17, 2013 at 6:27 AM, Flavio Percoco fla...@redhat.com wrote: On 17/12/13 14:59 +0100, Thierry Carrez wrote: Mark McLoughlin wrote: On Tue, 2013-12-17 at 13:44 +0100, Thierry Carrez wrote: Mark McLoughlin wrote: How about if we had an emerging projects page where the TC feedback on each project would be listed? That would give visibility to our feedback, without making it a yes/no blessing. Ok, whether to list any feedback about the project on the page is a yes/no decision, but at least it allows us to fully express why we find the project promising, what people need to help with in order for it to be incubated, etc. With a formal yes/no status, I think we'd struggle with projects which we're not quite ready to even bless with an emerging status but we still want to encourage them - this allows us to bless a project as emerging but be explicit about our level of support for it. I agree that being able to express our opinion on a project in shades of grey is valuable... The main drawback of using a non-boolean status for that is that you can't grant any benefit to it. So we'd not be able to say emerging projects get design summit space. They can still collaborate in unconference space or around empty tables, but then we are back to the problem we are trying to solve: increasing the visibility of promising projects pre-incubation.
Have an emerging projects track and leave it up to the track coordinator to decide how to prioritize the most interesting sessions and the most advanced projects (according to the TC's feedback)? I guess that /could/ work. I don't expect we'll have space for more than one session per project, but that may be enough for self-organization if we nail the collaboration spaces correctly. I'm fine with giving that page a try (we can always revisit if it's not working any better...). I'm not sure about the page as a medium for this but I like the idea. This is pretty much a way to incubate programs, which is basically what I proposed in my previous emails. Let's not consider Programs official right away; let's give them a place where they can grow a bit with the projects they have under their umbrella. Let's also, as Mark suggested, use that place to add comments and help them grow. And I'm also in favor of having an emerging projects track. +1 Cheers, FF -- @flaper87 Flavio Percoco ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Glance] [Metadatarepository] Metadata repository initiative status
Hi, the metadata repository meeting occurred this Tuesday in the #openstack-glance channel. The main item discussed was the API for the new metadata functions and where this API should appear. During the discussion it was agreed that the main functionality will be storage for different objects and the metadata associated with them. Initially all objects will have a specific type which defines specific attributes in the metadata. There will also be a common set of attributes for all objects stored in Glance. There was input from different projects (Heat, Murano, Solum) about what kinds of objects should be stored for each project and what functionality is minimally required. Here is a list of potential objects:
Heat:
- HOT template
Potential attributes: version, tag, keywords, etc.
Required features:
- Object and metadata versioning
- Search by specific attribute/attributes value
Murano:
- Murano files
- UI definition
- workflow definition
- HOT templates
- Scripts
Required features:
- Object and metadata versioning
- Search by specific attribute
Solum:
- Solum Language Packs
Potential attributes: name, build_toolchain, OS, language platform, versions
Required features:
- Object and metadata versioning
- Search by specific attribute
After the discussion it was concluded that the best way will be to add a new API endpoint, /artifacts. This endpoint will be used to work with objects' common attributes, while type-specific attributes and methods will be accessible through the /artifacts/object-type endpoint. The /artifacts endpoint will be used for filtering objects by searching for specific attribute values. Type-specific attribute search should also be possible via the /artifacts endpoint. For each object type there will be a separate table for attributes in the database. Currently it is expected that the metadata repository API will be implemented inside Glance within the v2 version, without changing the existing API for images. 
In the future, the v3 Glance API can fold the images-related API into the common artifacts API. The new artifacts API will reuse as much as possible of the existing Glance functionality. Most of the stored objects will be non-binary, so it is necessary to check how the Glance code handles this. AI: all project teams should start submitting BPs for new functionality in Glance. These BPs will be discussed on the ML and at the Glance weekly meetings. Related resources: Etherpad for Artifacts API design: https://etherpad.openstack.org/p/MetadataRepository-ArtifactRepositoryAPI Heat templates repo BP for Heat: https://blueprints.launchpad.net/heat/+spec/heat-template-repo Initial API discussion etherpad: https://etherpad.openstack.org/p/MetadataRepository-API Thanks Georgy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
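The endpoint scheme agreed above (common attributes at /artifacts, type-specific ones at /artifacts/object-type) can be illustrated with a small routing sketch. This is a hypothetical illustration only, not Glance code; the type names and attribute sets are assumptions drawn from the use cases listed in this thread.

```python
# Hypothetical sketch of the /artifacts endpoint scheme discussed above.
# Common attributes are queryable at /artifacts; type-specific ones under
# /artifacts/<object-type>. Not actual Glance code.

COMMON_ATTRS = {"id", "name", "version", "tags"}

# Assumed per-type attribute sets, per the use cases in the thread.
TYPE_ATTRS = {
    "hot-template": {"keywords"},
    "language-pack": {"build_toolchain", "os", "language_platform"},
}


def endpoint_for(attr, object_type=None):
    """Return the endpoint that can filter on a given attribute."""
    if attr in COMMON_ATTRS:
        return "/artifacts"
    if object_type and attr in TYPE_ATTRS.get(object_type, set()):
        return "/artifacts/%s" % object_type
    raise KeyError("unknown attribute: %s" % attr)


print(endpoint_for("version"))                           # /artifacts
print(endpoint_for("build_toolchain", "language-pack"))  # /artifacts/language-pack
```

A separate attribute table per object type, as proposed, maps naturally onto this split: the common table backs /artifacts queries, the per-type tables back the /artifacts/object-type ones.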
[openstack-dev] [Tempest][Solum] Writing functional tests in tempest style
Hi, in the Solum project we decided to write functional/integration tests from the very beginning. Initially we used the Pecan testing framework, but after discussion we moved to the standard HTTP client approach used in other projects. In order to simplify further integration with Tempest when Solum applies for incubation, we started to think about how to write functional test cases so as to minimize the effort of tempest integration in the future. After studying the tempest code we figured out that direct use of the existing tempest code would be overcomplicated at this stage. We decided to use the tempest approach and part of the tempest framework independently of tempest itself. Here is a patch with an example of how we use the tempest approach by extracting core tempest parts and using them independently: https://review.openstack.org/#/c/64165/ It would be great to have some feedback from the tempest team. If this approach is valid, it can be used by other projects that want to write tempest-like tests without having the whole tempest infrastructure. I think some parts of tempest could be extracted and converted into a common testing framework, probably as an oslo library. Thanks, Georgy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
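The "standard HTTP client approach" mentioned above can be sketched as a thin REST client that carries the keystone token and base URL so that test cases stay declarative, in the tempest rest_client style. The class and endpoint names below are hypothetical stand-ins, not the actual code from the linked review.

```python
# Minimal sketch of the tempest-style client pattern: a thin HTTP client
# that injects the keystone token into every request. Names here are
# hypothetical, not the actual Solum test code.
import json


class RestClient(object):
    def __init__(self, base_url, auth_token):
        self.base_url = base_url
        self.auth_token = auth_token

    def _headers(self):
        return {
            "X-Auth-Token": self.auth_token,
            "Content-Type": "application/json",
            "Accept": "application/json",
        }

    def build_request(self, method, path, body=None):
        """Assemble the pieces of a request; a real client would send it."""
        url = self.base_url.rstrip("/") + path
        data = json.dumps(body) if body is not None else None
        return method, url, self._headers(), data


# A test case only needs keystone credentials and a base URL to run.
client = RestClient("http://127.0.0.1:9777", "fake-token")
method, url, headers, data = client.build_request("GET", "/v1/assemblies")
print(url)                      # http://127.0.0.1:9777/v1/assemblies
print(headers["X-Auth-Token"])  # fake-token
```

Because the client depends only on a token and a URL, such tests can run on a laptop against any deployment, with or without the full tempest harness.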
Re: [openstack-dev] [trove][mistral] scheduled tasks
Hi, I would suggest talking with the Mistral team about scheduling. As far as I know, they are currently writing PoC code which will have scheduling functionality. Mistral, in contrast to QonoS, does not require a worker process. In Mistral you can create a task with a schedule and an API callback, so that Mistral can call some API endpoint with specific parameters at a predefined time. I suspect that this covers your use case. The security concerns can be addressed, and it would be great if you submitted a specific BP to Mistral with all the security requirements and scheduling features required by Trove. Thanks Georgy On Mon, Dec 30, 2013 at 12:26 PM, Brian Rosmaita brian.rosma...@rackspace.com wrote: Greg, This might be useful: https://wiki.openstack.org/wiki/Qonos-scheduling-service There's a link to the code repository at the bottom of the document. It might be kind of overkill for what you want, though. cheers, brian -- *From:* Greg Hill [greg.h...@rackspace.com] *Sent:* Monday, December 30, 2013 12:59 PM *To:* OpenStack Development Mailing List (not for usage questions) *Subject:* [openstack-dev] [trove][mistral] scheduled tasks I've begun working on the scheduled tasks feature that will allow automated backups (and other things) in trove. Here's the blueprint: https://wiki.openstack.org/wiki/Trove/scheduled-tasks I've heard some mention that mistral might be an option rather than building something into trove. I did some research and it seems like it *might* be a good fit, but it also seems like a bit of overkill for something that could be built in to trove itself pretty easily. There's also the security concern of having to give mistral access to the trove management API in order to allow it to fire off backups and other tasks on behalf of users, but maybe that's just my personal paranoia and it's really not much of a concern. 
My current plan is to not use mistral, at least for the original implementation, because it's not yet ready and we have a fairly urgent need for the functionality. We could make it an optional feature later for people who are running mistral and want to use it for this purpose. I'd appreciate any and all feedback before I get too far along. Greg ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
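The scheduler-plus-callback pattern described for Mistral in this thread (fire a callback against an API endpoint at a predefined time, with no dedicated worker process in the consuming service) can be illustrated generically with the stdlib. This is not Mistral's actual interface, just the shape of the pattern; the instance id and endpoint in the comment are made up.

```python
# Generic illustration of the scheduler + API-callback pattern: register
# a callback (standing in for an HTTP call to a service endpoint) to
# fire at a predefined time. Pure stdlib; not Mistral's actual API.
import sched
import time

fired = []


def call_backup_api(instance_id):
    # A real scheduler service would issue an HTTP request here, e.g.
    # POST to a (hypothetical) /instances/{id}/backups endpoint.
    fired.append(instance_id)


scheduler = sched.scheduler(time.time, time.sleep)
# Schedule the callback 0.1 seconds from now; a nightly backup would use
# a much larger delay or a recurring trigger.
scheduler.enter(0.1, 1, call_backup_api, ("db-instance-42",))
scheduler.run()
print(fired)  # ['db-instance-42']
```

The security question raised above amounts to what credentials the scheduling service presents when it makes that callback, which is why capturing the requirements in a BP matters.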
Re: [openstack-dev] [Tempest][Solum] Writing functional tests in tempest style
Hi Jay, thank you for your input. Right now this approach allows running integration tests with and without tempest. I think this is valuable for the project, as anyone can run integration tests on their laptop with only keystone available. It would be great to have some input from the Tempest team. Can we extract some core tempest components to create a testing framework for projects on stackforge? Having a common integration test framework in the tempest style will help further project integration into the OpenStack ecosystem during incubation. Thanks Georgy On Thu, Dec 26, 2013 at 2:50 PM, Jay Pipes jaypi...@gmail.com wrote: On 12/26/2013 03:34 PM, Georgy Okrokvertskhov wrote: Hi, In Solum project we decided to write functional/integration tests from the very beginning. ++! :) Initially we used pecan testing framework, but after discussion we moved to standard HTTP client approach used in other projects. In order to simplify further integration with Tempest when Solum will apply for incubation, we started to think how to write functional test cases to minimize efforts for tempest integration in the future. After some learning of tempest code we figured out that direct usage of existing tempest code will be overcomplicated at this stage. Yes, because unfortunately at this time, Tempest does not have a Python lib that can be import'd and used easily by other projects. We really should have such a thing, to make adding functional integration tests to non-integrated projects like Solum easier. We decided to use tempest approach and part of tempest framework independently from tempest itself. Here is a patch with the example how we use tempest approach by extracting core tempest parts and using them independently. https://review.openstack.org/#/c/64165/ It will be great to have some feedback from tempest team. 
If this approach is valid it can be used by other projects that want to write tempest-like tests without having the whole tempest infrastructure. I think the approach you've taken in the above review is the appropriate one at this time. It will make eventual inclusion into tempest when/if Solum is integrated quite easy. I think some parts of tempest can be extracted and converted into a common testing framework, probably as an oslo library. ++ Best, -jay Thanks, Georgy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy
Hi, in the Solum project we will need to implement security and ACLs for the Solum API. Currently we use the Pecan framework for the API. Pecan has its own security model based on the SecureController class. At the same time, OpenStack widely uses a policy mechanism which uses JSON files to control access to specific API methods. I wonder if anyone has experience implementing security and ACLs using the Pecan framework. What is the right way to provide security for the API? Thanks Georgy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy
Hi Doug, thank you for pointing to this code. As I see it, you use the OpenStack policy framework but not the Pecan security features. How do you implement fine-grained access control, e.g. users allowed read-only access vs. writers and admins? Can you block part of the API methods for a specific user, like access to create methods for a specific user role? Thanks Georgy On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote: On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, In Solum project we will need to implement security and ACL for Solum API. Currently we use Pecan framework for API. Pecan has its own security model based on SecureController class. At the same time OpenStack widely uses policy mechanism which uses json files to control access to specific API methods. I wonder if someone has any experience with implementing security and ACL stuff using Pecan framework. What is the right way to provide security for API? In ceilometer we are using the keystone middleware and the policy framework to manage arguments that constrain the queries handled by the storage layer. http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/acl.py and http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/controllers/v2.py#n337 Doug Thanks Georgy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Tempest][Solum] Writing functional tests in tempest style
Hi Jay, Thank you very much for working on that! Thanks Georgy On Tue, Jan 7, 2014 at 12:50 PM, Jay Pipes jaypi...@gmail.com wrote: On Mon, 2014-01-06 at 11:46 -0800, Georgy Okrokvertskhov wrote: Thank you for your input. Right now this approach allows to run integration tests with and without tempest. I think this is valuable for the project as anyone can run integration tests on their laptop having only keystone available. It will be great to have some input from Tempest team. Can we extract some core tempest component to create a testing framework for projects on stackforge? Having common integration test framework in tempest style will help further project integration to OpenStack ecosystem during incubation. Hi Georgy, I created a blueprint for tracking this work: https://blueprints.launchpad.net/tempest/+spec/split-out-reusable-tempest-library If I have some time this week, I'll look into estimating various breakout work items for the blueprint. Best, -jay ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Solum] Devstack gate is failing
Should we rather revert the patch to get the gate working? Thanks Georgy On Tue, Jan 7, 2014 at 7:19 PM, Murali Allada murali.all...@rackspace.com wrote: I'm ok with making this non-voting for Solum until this gets fixed. -Murali On Jan 7, 2014, at 8:53 PM, Noorul Islam Kamal Malmiyoda noo...@noorul.com wrote: Hi team, After merging [1] devstack gate started failing. There is already a thread [2] related to this in mailing list. Until this gets fixed shall we make this job non-voting? Regards, Noorul [1] https://review.openstack.org/64226 [2] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg12440.html ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Solum] Devstack gate is failing
Hi, I do understand why there is push-back on this patch. This patch is for an infrastructure project which works for multiple projects. Infra maintainers should not need to know the specifics of each project in detail. If this patch is a temporary solution, then who will be responsible for removing it? If we need to unblock this gate, I propose reverting all the patches which led to this inconsistent state and applying a workaround in the Solum repository, which is under the Solum team's full control and review. We need to open a bug in the Solum project to track this. Thanks Georgy On Wed, Jan 8, 2014 at 7:09 AM, Noorul Islam K M noo...@noorul.com wrote: Anne Gentle a...@openstack.org writes: On Wed, Jan 8, 2014 at 8:26 AM, Noorul Islam Kamal Malmiyoda noo...@noorul.com wrote: On Jan 8, 2014 6:11 PM, Sean Dague s...@dague.net wrote: On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote: On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Should we rather revert patch to make gate working? I think it is always good to have test packages reside in test-requirements.txt. So -1 on reverting that patch. Here [1] is a temporary solution. Regards, Noorul [1] https://review.openstack.org/65414 If Solum is trying to be on the road to being an OpenStack project, why would it go out of its way to introduce an incompatibility in the way all the actual OpenStack packages work in the gate? Seems very silly to me, because you'll have to add oslo.sphinx back into test-requirements.txt the second you want to be considered for incubation. I am not sure why it seems silly to you. We are not anyhow removing oslo.sphinx from the repository. We are just removing it before installing the packages from test-requirements.txt in the devstack gate. How does that affect incubation? Am I missing something? Docs are a requirement, and contributor docs are required for applying for incubation. 
[1] Typically these are built through Sphinx and consistency is gained through oslo.sphinx; eventually we can also offer consistent extensions. So a perception that you're skipping docs would be a poor reflection on your incubation application. I don't think that's what's happening here, but I want to be sure you understand the consistency and doc needs. See also http://lists.openstack.org/pipermail/openstack-dev/2014-January/023582.html for similar issues; we're trying to figure out the best solution. Stay tuned. I have seen that, and also posted the solum issue [1] there yesterday. I started this thread to get consensus on making the solum devstack gate non-voting until the issue gets fixed. I also proposed a temporary solution [2] with which we can solve the issue for the time being. Since the gate is failing for all the patches, it is affecting every patch. Regards, Noorul [1] http://lists.openstack.org/pipermail/openstack-dev/2014-January/023618.html [2] https://review.openstack.org/65414 1. https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements Regards, Noorul -Sean -- Sean Dague Samsung Research America s...@dague.net / sean.da...@samsung.com http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. 
+1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy
Hi Kurt, as for WSGI middleware, I am thinking about Pecan hooks which can be added before the actual controller call. Here is an example of how we added a hook for keystone information collection: https://review.openstack.org/#/c/64458/4/solum/api/auth.py What do you think, will this approach with Pecan hooks work? Thanks Georgy On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths kurt.griffi...@rackspace.com wrote: You might also consider doing this in WSGI middleware: Pros: - Consolidates policy code in one place, making it easier to audit and maintain - Simple to turn policy on/off – just don’t insert the middleware when off! - Does not preclude the use of oslo.policy for rule checking - Blocks unauthorized requests before they have a chance to touch the web framework or app. This reduces your attack surface and can improve performance (since the web framework has yet to parse the request). Cons: - Doesn't work for policies that require knowledge that isn’t available this early in the pipeline (without having to duplicate a lot of code) - You have to parse the WSGI environ dict yourself (this may not be a big deal, depending on how much knowledge you need to glean in order to enforce the policy). - You have to keep your HTTP path matching in sync with your route definitions in the code. If you have full test coverage, you will know when you get out of sync. That being said, API routes tend to be quite stable in relation to other parts of the code implementation once you have settled on your API spec. I’m sure there are other pros and cons I missed, but you can make your own best judgement whether this option makes sense in Solum’s case. From: Doug Hellmann doug.hellm...@dreamhost.com Reply-To: OpenStack Dev openstack-dev@lists.openstack.org Date: Tuesday, January 7, 2014 at 6:54 AM To: OpenStack Dev openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. 
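The Pecan-hook approach asked about here has a simple shape: a hook's before() runs ahead of every controller call and can reject or annotate the request. The sketch below is written framework-free so it runs standalone; the stub State/Request classes and the names stand in for pecan's hook machinery and are not the actual solum/api/auth.py code.

```python
# Shape of a Pecan-style auth hook, framework-free for illustration:
# before() runs ahead of each controller call, like the before() method
# of a pecan hook. The stub State/Request below are stand-ins.


class Unauthorized(Exception):
    pass


class AuthHook(object):
    """Runs before each controller call and can reject the request."""

    def before(self, state):
        token = state.request.headers.get("X-Auth-Token")
        if not token:
            raise Unauthorized("missing X-Auth-Token")
        # A real hook would validate the token against keystone here and
        # stash the resulting user/tenant context on the request.
        state.request.context = {"token": token}


class Request(object):
    def __init__(self, headers):
        self.headers = headers
        self.context = None


class State(object):
    def __init__(self, request):
        self.request = request


state = State(Request({"X-Auth-Token": "abc123"}))
AuthHook().before(state)
print(state.request.context)  # {'token': 'abc123'}
```

Compared with WSGI middleware, the hook runs after the framework has parsed the request, so it trades Kurt's "blocks before the framework" benefit for easy access to the parsed request object.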
Nova policy On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi Doug, Thank you for pointing to this code. As I see you use OpenStack policy framework but not Pecan security features. How do you implement fine-grained access control, e.g. users allowed read-only access vs. writers and admins? Can you block part of the API methods for a specific user, like access to create methods for a specific user role? The policy enforcement isn't simple on/off switching in ceilometer, so we're using the policy framework calls in a couple of places within our API code (look through v2.py for examples). As a result, we didn't need to build much on top of the existing policy module to interface with pecan. For your needs, it shouldn't be difficult to create a couple of decorators to combine with pecan's hook framework to enforce the policy, which might be less complex than trying to match the operating model of the policy system to pecan's security framework. This is the sort of thing that should probably go through Oslo and be shared, so please consider contributing to the incubator when you have something working. Doug Thanks Georgy On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote: On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, In Solum project we will need to implement security and ACL for Solum API. Currently we use Pecan framework for API. Pecan has its own security model based on SecureController class. At the same time OpenStack widely uses policy mechanism which uses json files to control access to specific API methods. I wonder if someone has any experience with implementing security and ACL stuff using Pecan framework. What is the right way to provide security for API? In ceilometer we are using the keystone middleware and the policy framework to manage arguments that constrain the queries handled by the storage layer. 
http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/acl.py and http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/controllers/v2.py#n337 Doug Thanks Georgy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo
Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy
Hi, keeping policy control in one place is a good idea. We can use the standard policy approach and keep the access control configuration in a JSON file, as is done in Nova and other projects. Keystone uses a wrapper function for methods. Here is the wrapper code: https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L111. Each controller method has a @protected() wrapper, so the method information is available through Python's f.__name__ instead of URL parsing. It means that some RBAC parts are anyway scattered through the code. If we want to avoid RBAC scattered through the code, we can use the URL parsing approach and have all the logic inside a hook. In a pecan hook the WSGI environment is already created and there is full access to the request parameters/content. We can map a URL to a policy key. So we have two options: 1. Add a wrapper to each API method, like all other projects did. 2. Add a hook with URL parsing which maps the path to a policy key. Thanks Georgy On Wed, Jan 8, 2014 at 9:05 AM, Kurt Griffiths kurt.griffi...@rackspace.com wrote: Yeah, that could work. The main thing is to try and keep policy control in one place if you can rather than sprinkling it all over the place. From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com Reply-To: OpenStack Dev openstack-dev@lists.openstack.org Date: Wednesday, January 8, 2014 at 10:41 AM To: OpenStack Dev openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy Hi Kurt, As for WSGI middleware I am thinking about Pecan hooks which can be added before the actual controller call. Here is an example of how we added a hook for keystone information collection: https://review.openstack.org/#/c/64458/4/solum/api/auth.py What do you think, will this approach with Pecan hooks work? 
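The two options just described can be sketched side by side. The in-memory policy table below stands in for the JSON policy file, and the resource names and verb mapping are made-up illustrations, not Nova's or Keystone's actual rules.

```python
# Sketch of the two options: (1) a @protected-style wrapper per method,
# deriving the policy key from the function name, and (2) a single
# URL-parsing mapper suitable for a hook. The policy table is a toy
# stand-in for the JSON policy file.

POLICY = {
    "assemblies:create": {"admin"},
    "assemblies:get": {"admin", "member"},
}


def enforce(action, roles):
    return bool(POLICY.get(action, set()) & set(roles))


# Option 1: wrap each controller method (the keystone @protected style,
# using func.__name__ instead of URL parsing).
def protected(resource):
    def decorator(func):
        def wrapper(context, *args, **kwargs):
            action = "%s:%s" % (resource, func.__name__)
            if not enforce(action, context["roles"]):
                raise PermissionError(action)
            return func(context, *args, **kwargs)
        return wrapper
    return decorator


@protected("assemblies")
def create(context):
    return "created"


# Option 2: map the URL path to a policy key inside a single hook.
def policy_key_for(method, path):
    resource = path.strip("/").split("/")[1]      # /v1/assemblies -> assemblies
    verb = {"POST": "create", "GET": "get"}[method]
    return "%s:%s" % (resource, verb)


print(create({"roles": ["admin"]}))               # created
print(policy_key_for("POST", "/v1/assemblies"))   # assemblies:create
```

Option 1 keeps the check next to the method it guards; option 2 keeps all enforcement in one hook at the cost of keeping the path-to-key mapping in sync with the routes.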
Thanks Georgy On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths kurt.griffi...@rackspace.com wrote: You might also consider doing this in WSGI middleware: Pros: - Consolidates policy code in one place, making it easier to audit and maintain - Simple to turn policy on/off – just don’t insert the middleware when off! - Does not preclude the use of oslo.policy for rule checking - Blocks unauthorized requests before they have a chance to touch the web framework or app. This reduces your attack surface and can improve performance (since the web framework has yet to parse the request). Cons: - Doesn't work for policies that require knowledge that isn’t available this early in the pipeline (without having to duplicate a lot of code) - You have to parse the WSGI environ dict yourself (this may not be a big deal, depending on how much knowledge you need to glean in order to enforce the policy). - You have to keep your HTTP path matching in sync with your route definitions in the code. If you have full test coverage, you will know when you get out of sync. That being said, API routes tend to be quite stable in relation to other parts of the code implementation once you have settled on your API spec. I’m sure there are other pros and cons I missed, but you can make your own best judgement whether this option makes sense in Solum’s case. From: Doug Hellmann doug.hellm...@dreamhost.com Reply-To: OpenStack Dev openstack-dev@lists.openstack.org Date: Tuesday, January 7, 2014 at 6:54 AM To: OpenStack Dev openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi Doug, Thank you for pointing to this code. As I see you use OpenStack policy framework but not Pecan security features. How do you implement fine-grained access control, e.g. users allowed read-only access vs. writers and admins? 
Can you block part of API methods for specific user like access to create methods for specific user role? The policy enforcement isn't simple on/off switching in ceilometer, so we're using the policy framework calls in a couple of places within our API code (look through v2.py for examples). As a result, we didn't need to build much on top of the existing policy module to interface with pecan. For your needs, it shouldn't be difficult to create a couple of decorators to combine with pecan's hook framework to enforce the policy, which might be less complex than trying to match the operating model of the policy system to pecan's security framework. This is the sort of thing that should probably go through Oslo and be shared, so please consider contributing to the incubator when you have something working. Doug Thanks Georgy On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote: On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, In Solum project we will need
Re: [openstack-dev] [Ceilometer] Add a filter between auth_token and v2
Hi, here is how we are doing this for Solum: Keystone auth: https://github.com/stackforge/solum/blob/master/solum/api/auth.py Additional hook: https://review.openstack.org/#/c/64458/ (auth.py for the hook code and config.py for the hooks) Here is an e-mail thread with the discussion: http://lists.openstack.org/pipermail/openstack-dev/2014-January/023524.html Hope this helps, Georgy On Wed, Jan 8, 2014 at 3:02 PM, Pendergrass, Eric eric.pendergr...@hp.com wrote: I need to add an additional layer of authorization between auth_token and the reporting API. I know it’s as simple as creating a WSGI element and adding it to the pipeline. Examining the code I haven’t figured out where to begin doing this. I’m not using Apache and mod_wsgi, just the reporting API and Pecan. Any pointers on where to start and what files control the pipeline would be a big help. Thanks Eric ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
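The "WSGI element in the pipeline" Eric describes can be sketched with plain WSGI and no framework. The X-Roles header follows the keystone auth_token middleware convention of annotating validated requests; the single-role rule itself is a made-up example, not ceilometer code.

```python
# Minimal WSGI filter of the kind asked about: it sits after auth_token
# in the pipeline and rejects requests whose already-validated roles
# (exposed via the X-Roles header, per the keystone middleware
# convention) don't permit access. The role rule is illustrative only.


def role_filter(app, required_role="admin"):
    def middleware(environ, start_response):
        roles = [r.strip() for r in environ.get("HTTP_X_ROLES", "").split(",")]
        if required_role not in roles:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return middleware


def api_app(environ, start_response):
    # Stand-in for the reporting API application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"report data"]


app = role_filter(api_app)

# Simulate one request that auth_token has already annotated with roles.
status = {}


def start_response(s, headers):
    status["value"] = s


body = app({"HTTP_X_ROLES": "member,admin"}, start_response)
print(status["value"], body)  # 200 OK [b'report data']
```

Wiring it in is then a matter of wrapping the API app with role_filter at the point in the pipeline right after auth_token.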
Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy
Hi Adam, this looks very interesting. When do you expect to have this code available in oslo? Do you have a development guide which describes best practices for using this authorization approach? I think that for Pecan it will be possible to get rid of the @protected wrapper and use the SecureController class as a parent. It has a method which will be called before each controller method call. I saw that Pecan was moved to stackforge, so it is probably a good idea to talk with the Pecan developers and discuss how this part of keystone can be integrated/supported by the Pecan framework. On Wed, Jan 8, 2014 at 8:34 PM, Adam Young ayo...@redhat.com wrote: We are working on cleaning up the Keystone code with an eye to Oslo and reuse: https://review.openstack.org/#/c/56333/ On 01/08/2014 02:47 PM, Georgy Okrokvertskhov wrote: Hi, Keeping policy control in one place is a good idea. We can use the standard policy approach and keep the access control configuration in a JSON file, as is done in Nova and other projects. Keystone uses a wrapper function for methods. Here is the wrapper code: https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L111. Each controller method has a @protected() wrapper, so the method information is available through Python's f.__name__ instead of URL parsing. It means that some RBAC parts are anyway scattered through the code. If we want to avoid RBAC scattered through the code, we can use the URL parsing approach and have all the logic inside a hook. In a pecan hook the WSGI environment is already created and there is full access to the request parameters/content. We can map a URL to a policy key. So we have two options: 1. Add a wrapper to each API method, like all other projects did. 2. Add a hook with URL parsing which maps the path to a policy key. Thanks Georgy On Wed, Jan 8, 2014 at 9:05 AM, Kurt Griffiths kurt.griffi...@rackspace.com wrote: Yeah, that could work. The main thing is to try and keep policy control in one place if you can rather than sprinkling it all over the place. 
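The SecureController idea mentioned above (a parent class whose check runs before each controller method) can be mimicked framework-free. The class and method names below are stand-ins for illustration; pecan's actual SecureController works through its routing layer, not an explicit dispatch call.

```python
# Framework-free mimic of the SecureController idea: a base class whose
# dispatch consults check_permissions() before invoking any handler.
# Names are stand-ins, not pecan's actual implementation.


class SecureControllerBase(object):
    @classmethod
    def check_permissions(cls, context):
        raise NotImplementedError

    def dispatch(self, method_name, context, *args):
        if not self.check_permissions(context):
            raise PermissionError(method_name)
        return getattr(self, method_name)(context, *args)


class AssembliesController(SecureControllerBase):
    @classmethod
    def check_permissions(cls, context):
        # One central check per controller, instead of a wrapper on
        # every method. A real check would consult the policy rules.
        return "admin" in context.get("roles", [])

    def index(self, context):
        return ["assembly-1"]


ctl = AssembliesController()
print(ctl.dispatch("index", {"roles": ["admin"]}))  # ['assembly-1']
```

This gives one audit point per controller, at the cost of coarser granularity than a per-method @protected() wrapper.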
From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com Reply-To: OpenStack Dev openstack-dev@lists.openstack.org Date: Wednesday, January 8, 2014 at 10:41 AM To: OpenStack Dev openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy Hi Kurt, As for WSGI middleware, I am thinking about Pecan hooks which can be added before the actual controller call. Here is an example of how we added a hook for keystone information collection: https://review.openstack.org/#/c/64458/4/solum/api/auth.py What do you think, will this approach with Pecan hooks work? Thanks Georgy On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths kurt.griffi...@rackspace.com wrote: You might also consider doing this in WSGI middleware:

Pros:
- Consolidates policy code in one place, making it easier to audit and maintain
- Simple to turn policy on/off – just don’t insert the middleware when off!
- Does not preclude the use of oslo.policy for rule checking
- Blocks unauthorized requests before they have a chance to touch the web framework or app. This reduces your attack surface and can improve performance (since the web framework has yet to parse the request).

Cons:
- Doesn't work for policies that require knowledge that isn’t available this early in the pipeline (without having to duplicate a lot of code)
- You have to parse the WSGI environ dict yourself (this may not be a big deal, depending on how much knowledge you need to glean in order to enforce the policy).
- You have to keep your HTTP path matching in sync with your route definitions in the code. If you have full test coverage, you will know when you get out of sync. That being said, API routes tend to be quite stable in relation to other parts of the code implementation once you have settled on your API spec.

I’m sure there are other pros and cons I missed, but you can make your own best judgement whether this option makes sense in Solum’s case.
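A minimal sketch of the middleware option Kurt describes — rejecting unauthorized requests before they ever reach the web framework. The rule-table shape and the X-Roles header (a convention commonly set by auth middleware) are illustrative assumptions, not any project's actual code:

```python
# Hypothetical WSGI middleware enforcing a per-route role requirement
# before the request touches the web framework. The rules layout and the
# X-Roles header are assumptions for the sketch.

class PolicyMiddleware(object):
    def __init__(self, app, rules):
        self.app = app
        self.rules = rules  # {(method, path): required_role}

    def __call__(self, environ, start_response):
        key = (environ['REQUEST_METHOD'], environ.get('PATH_INFO', ''))
        required = self.rules.get(key)
        roles = environ.get('HTTP_X_ROLES', '').split(',')
        if required is not None and required not in roles:
            # Blocked here: the framework never parses this request.
            start_response('403 Forbidden', [('Content-Type', 'text/plain')])
            return [b'Forbidden']
        return self.app(environ, start_response)


def demo_app(environ, start_response):
    """Trivial downstream WSGI app standing in for the real API."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']
```

Turning policy off is then just a matter of not wrapping the app, exactly as the pros list says.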
From: Doug Hellmann doug.hellm...@dreamhost.com Reply-To: OpenStack Dev openstack-dev@lists.openstack.org Date: Tuesday, January 7, 2014 at 6:54 AM To: OpenStack Dev openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi Doug, Thank you for pointing to this code. As I see, you use the OpenStack policy framework but not the Pecan security features. How do you implement fine-grained access control, for example users allowed read-only access versus writers and admins? Can you block some API methods for a specific user, for example access to create methods for a specific user role? The policy enforcement isn't simple on/off switching in ceilometer, so we're using the policy framework calls in a couple
Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy
Hi Ryan, Thank you for sharing your view on SecureController. It is always good to hear from the developers who are deeply familiar with the code base. I like the idea with hooks. If we go this path, we will need information about the method of the particular controller which will be called if authorization is successful. In the current keystone implementation this is done by a wrapper which knows the actual method name it wraps. This allows one to write simple rules for specific methods, like identity:get_policy: rule:admin_required. Do you know whether, from inside hook code, there is a way to obtain information about the route and the method which will be called after the hook? Thanks Georgy On Thu, Jan 9, 2014 at 2:48 PM, Ryan Petrello ryan.petre...@dreamhost.com wrote: As a Pecan developer, I’ll chime in and say that I’m actually *not* a fan of SecureController and its metaclass approach. Maybe it’s just too magical for my taste. I’d give a big thumbs up to an approach that involves utilizing pecan’s hooks. Similar to Kurt’s suggestion with middleware, they give you the opportunity to hook in security *before* the controller call, but they avoid the nastiness of parsing the WSGI environ by hand and writing code that duplicates pecan’s route-to-controller resolution. --- Ryan Petrello Senior Developer, DreamHost ryan.petre...@dreamhost.com On Jan 9, 2014, at 3:04 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi Adam, This looks very interesting. When do you expect to have this code available in oslo? Do you have a development guide which describes best practices for using this authorization approach? I think that for Pecan it will be possible to get rid of the @protected wrapper and use the SecureController class as a parent. It has a method which will be called before each controller method call.
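The keystone-style wrapper discussed in this thread — a decorator that derives the policy key from f.__name__ so no URL parsing is needed — reduces to a toy like the following. The RULES table and Forbidden exception are illustrative, not the actual keystone @protected() implementation:

```python
# Toy version of the @protected() idea: the wrapper knows the name of
# the method it wraps, so rules such as identity:get_policy can be keyed
# off f.__name__. RULES and Forbidden are made up for the sketch.
import functools

RULES = {'identity:get_policy': 'admin'}


class Forbidden(Exception):
    pass


def protected(prefix='identity'):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(self, context, *args, **kwargs):
            # Policy key derived from the wrapped method's name.
            key = '%s:%s' % (prefix, f.__name__)
            required = RULES.get(key)
            if required is not None and required not in context.get('roles', []):
                raise Forbidden(key)
            return f(self, context, *args, **kwargs)
        return wrapper
    return decorator


class PolicyController(object):
    @protected()
    def get_policy(self, context):
        return {'policy': 'contents'}
```

The trade-off is exactly the one named in the thread: the policy keys follow method names for free, but the checks end up scattered across every controller.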
[openstack-dev] [Solum][Keystone] Best practices for storing keystone trusts information
Hi, In the Solum project we want to use Keystone trusts to work with other OpenStack services on behalf of a user. Trusts are long-term entities and a service should keep them for a long time. I want to understand what the best practices are for working with trusts and storing them in a service. What are the options for keeping a trust? I see obvious approaches like keeping them in a service DB or keeping them in memory. Are there any other approaches? Is there a proper way to renew a trust? For example, if I have a long-term task which is waiting for an external event, how do I keep the trust fresh for a long and unpredictable period? Thanks Georgy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Glance] [Metadatarepository] Metadata repository initiative status
Hi Travis, I think it will be discussed at the mini-summit, which will be on Jan 27th-28th in Washington DC. Here is an etherpad with the summit agenda: https://etherpad.openstack.org/p/glance-mini-summit-agenda I hope that after the F2F discussion all BPs will have a priority and an assignee. Thanks Georgy On Fri, Jan 17, 2014 at 10:11 AM, Tripp, Travis S travis.tr...@hp.com wrote: Hello All, I just took a look at this blueprint and see that it doesn’t have any priority. Was there a discussion on priority? Any idea what, if any, of this will make it into Icehouse? Also, are there going to be any further design sessions on it? Thanks, Travis *From:* Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com] *Sent:* Friday, December 20, 2013 3:43 PM *To:* OpenStack Development Mailing List *Subject:* [openstack-dev] [Glance] [Metadatarepository] Metadata repository initiative status Hi, The metadata repository meeting occurred this Tuesday in the #openstack-glance channel. The main item discussed was an API for the new metadata functions and where this API should appear. During the discussion it was decided that the main functionality will be storage for different objects and the metadata associated with them. Initially all objects will have a specific type which defines specific attributes in metadata. There will also be a common set of attributes for all objects stored in Glance. During the discussion there was input from different projects (Heat, Murano, Solum) on what kind of objects should be stored for each project and what kind of functionality is minimally required. Here is a list of potential objects:

Heat:
- HOT template
  Potential Attributes: version, tag, keywords, etc.
  Required Features:
  - Object and metadata versioning
  - Search by specific attribute/attributes value

Murano:
- Murano files (UI definition, workflow definition, HOT templates, scripts)
  Required Features:
  - Object and metadata versioning
  - Search by specific attribute

Solum:
- Solum Language Packs
  Potential Attributes: name, build_toolchain, OS, language platform, versions
  Required Features:
  - Object and metadata versioning
  - Search by specific attribute

After the discussion it was concluded that the best way will be to add a new API endpoint, /artifacts. This endpoint will be used to work with an object's common attributes, while type-specific attributes and methods will be accessible through the /artifact/object-type endpoint. The /artifacts endpoint will be used for filtering objects by searching for a specific attribute's value. Type-specific attribute search should also be possible via the /artifacts endpoint. For each object type there will be a separate table for attributes in the database. Currently it is supposed that the metadata repository API will be implemented inside Glance within the v2 version, without changing the existing API for images. In the future, the v3 Glance API can fold the image-related API into the common artifacts API. The new artifacts API will reuse as much as possible from existing Glance functionality. Most of the stored objects will be non-binary, so it is necessary to check how the Glance code handles this. AI: All project teams should start submitting BPs for new functionality in Glance. These BPs will be discussed on the ML and at Glance weekly meetings.
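The filtering behaviour proposed for the /artifacts endpoint — matching on attributes common to all artifacts as well as on type-specific ones — might look roughly like this in-memory sketch. The record layout is invented for illustration and is not the Glance schema:

```python
# In-memory sketch of /artifacts-style filtering: common fields (type,
# name, version) on every record, type-specific fields under 'attrs'.
# The layout is illustrative, not the Glance schema.

ARTIFACTS = [
    {'type': 'heat_template', 'name': 'wordpress', 'version': '1.0',
     'attrs': {'keywords': 'blog'}},
    {'type': 'language_pack', 'name': 'java7', 'version': '2.1',
     'attrs': {'build_toolchain': 'maven', 'os': 'ubuntu'}},
]


def search(artifacts, **filters):
    """Return artifacts matching every filter; common attributes are
    checked first, then type-specific ones."""
    def matches(artifact):
        for key, value in filters.items():
            found = artifact.get(key, artifact['attrs'].get(key))
            if found != value:
                return False
        return True
    return [a for a in artifacts if matches(a)]
```

A real implementation would push these filters into per-type database tables, as the message describes, but the query semantics would be the same.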
Related Resources:
- Etherpad for Artifacts API design: https://etherpad.openstack.org/p/MetadataRepository-ArtifactRepositoryAPI
- Heat templates repo BP for Heat: https://blueprints.launchpad.net/heat/+spec/heat-template-repo
- Initial API discussion Etherpad: https://etherpad.openstack.org/p/MetadataRepository-API

Thanks Georgy
Re: [openstack-dev] [Solum][Keystone] Best practices for storing keystone trusts information
Hi Adrian, Barbican looks good for this purpose. I will do a prototype with it. Thanks Georgy On Fri, Jan 17, 2014 at 11:43 AM, Adrian Otto adrian.o...@rackspace.com wrote: Georgy, For Solum, let's refrain from storing any secrets, whether they be passwords, trusts, or tokens. I definitely don't want to be in the business of managing how to secure them in an SQL database. I don't even want admin password values to appear in the configuration files. I'd prefer to take a hard dependency on barbican [1], and store them in there, where they can be centrally fortified with encryption and access controls, accesses can be logged, they can be revoked, and we have a real auditing story for enterprises who have strict security requirements. Thanks, Adrian [1] https://github.com/stackforge/barbican On Jan 17, 2014, at 11:26 AM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi Lance, Thank you for the documentation link. It really solves the problem with trust expiration. I really like the idea of restricting a trust to specific roles. This is great. As you mentioned, you use sql to store trusts information. Do you use any encryption for that? I am thinking from a security perspective: if you have trust information in the DB it might not be safe, as the trust is a long-term authentication. Thanks Georgy On Fri, Jan 17, 2014 at 10:31 AM, Lance D Bragstad ldbra...@us.ibm.com wrote: Hi Georgy, The following might help with some of the trust questions you have, if you haven't looked at it already: https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md As far as storage implementation, trust uses sql and kvs backends.
Trusts can be given an expiration, but if an expiration is not given the trust is valid until it is explicitly revoked (taken from the link above): Optionally, the trust may only be valid for a specified time period, as defined by expires_at. If no expires_at is specified, then the trust is valid until it is explicitly revoked. Trusts can also be given 'uses' so that you can set a limit to how many times a trust will issue a token to the trustee. That functionality hasn't landed yet but it is up for review: https://review.openstack.org/#/c/56243/ Hope this helps! Best Regards, Lance Bragstad
-- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
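The expiry rule Lance quotes can be captured in a few lines. A hedged sketch, with the trust modeled as a plain dict rather than the keystone object:

```python
# Sketch of the trust validity rule from the identity-api trust
# extension text: no expires_at means valid until explicitly revoked;
# otherwise valid only before expires_at. The dict shape is illustrative.
from datetime import datetime


def trust_is_valid(trust, now):
    if trust.get('revoked'):
        return False
    expires_at = trust.get('expires_at')
    return expires_at is None or now < expires_at
```

This is also the rule that answers the original "how do I keep the trust fresh" question: a trust created without expires_at needs no renewal, only eventual revocation.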
Re: [openstack-dev] [Glance] Meetup Schedule Posted!
Hi Mark, Happy Martin Luther King Jr. Day! Will a Google Hangout or Skype meeting be available for remote participants? I know a few engineers who will not be able to attend this mini-summit in person, but they will be happy to join remotely. Thanks, Georgy On Mon, Jan 20, 2014 at 1:22 AM, Mark Washenberger mark.washenber...@markwash.net wrote: Hi folks, First things first: Happy Martin Luther King Jr. Day! Our mini summit / meetup for the Icehouse cycle will take place in one week's time. To ensure we are all ready and know what to expect, I have started a wiki page tracking the event details and a tentative schedule. Please have a look if you plan to attend. https://wiki.openstack.org/wiki/Glance/IcehouseCycleMeetup I have taken the liberty of scheduling several of the topics we have already discussed. Let me know if anything in the existing schedule creates a conflict for you. There are also presently 4 unclaimed slots in the schedule. If your topic is not yet scheduled, please tell me the time you want and I will update accordingly. EXTRA IMPORTANT: If you plan to attend the meetup but have not spoken with me, please respond as soon as possible to let me know your plans. We have a limited number of seats remaining. Cheers, markwash Our only hope today lies in our ability to recapture the revolutionary spirit and go out into a sometimes hostile world declaring eternal hostility to poverty, racism, and militarism. I knew that I could never again raise my voice against the violence of the oppressed in the ghettos without having first spoken clearly to the greatest purveyor of violence in the world today, my own government. - Martin Luther King, Jr. -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob.
+1 650 996 3284
Re: [openstack-dev] [Solum] Oslo Context and SecurityContext
Hi, From my experience a context is usually more than just storage for user credentials and the specifics of a request. A context usually defines the area within which the called method should act. Probably the class name RequestContext is a bit confusing. The actual goal of the context should be defined by the service design. If you have a lot of independent components you will probably need to pass a lot of parameters to specify the specifics of the work, so it is just more convenient to have a dictionary-like object which carries all the necessary contextual information. This context can be used to pass information between different components of the service. On Mon, Jan 27, 2014 at 4:27 PM, Angus Salkeld angus.salk...@rackspace.com wrote: On 27/01/14 22:53 +, Adrian Otto wrote: On Jan 27, 2014, at 2:39 PM, Paul Montgomery paul.montgom...@rackspace.com wrote: Solum community, I created several different approaches for community consideration regarding Solum context, logging and data confidentiality. Two of these approaches are documented here: https://wiki.openstack.org/wiki/Solum/Logging A) Plain Oslo Log/Config/Context is in the Example of Oslo Log and Oslo Context section. B) A hybrid Oslo Log/Config/Context where SecurityContext inherits the RequestContext class and adds some confidentiality functions is in the Example of Oslo Log and Oslo Context Combined with SecurityContext section. None of this code is production ready or tested by any means. Please just examine the general architecture before I polish too much. I hope that this is enough information for us to agree on a path A or B. I honestly am not tied to either path very tightly, but it is time that we reach a final decision on this topic IMO. Thoughts? I have a strong preference for using the SecurityContext approach. The main reason for my preference is outlined in the Pro/Con sections of the Wiki page.
With the A approach, leakage of confidential information might happen with *any* future addition of a logging call, a discipline which may be forgotten or overlooked during future code reviews. The B approach handles the classification of data not when logging, but when placing the data into the SecurityContext. This is much safer from a long-term maintenance perspective. I think we should separate this out into: 1) we need to be security aware whenever we log information handed to us by the user. (I totally agree with this general statement) 2) should we log structured data, non-structured data, or use the notification mechanism (which is structured)? There have been some talks at summit about the potential merging of the logging and notification api; I honestly don't know what happened to that, but I have no problem with structured logging. We should use the notification system so that ceilometer can take advantage of the events. 3) should we use a RequestContext in the spirit of oslo-incubator (and inherited from it too), OR one different from all other projects? IMHO we should just use the oslo-incubator RequestContext. Remember the context is not a generic dumping ground for I want to log stuff so lets put it into the context. It is for user credentials and things directly associated with the request (like the request_id). I don't see why we need a generic dict-style approach; this is more likely to result in a programming error: context.set_priv('userid', bla) instead of: context.set_priv('user_id', bla) I think my point is: we should very quickly zero in on the attributes we need in the context and they will seldom change. As far as security goes, Paul has shown a good example of how to change the logging_context_format_string to achieve structured and secure logging of the context. The oslo log module does not log whatever is in the context but only what is configured in solum.conf (via logging_context_format_string).
So I don't believe that the new/different RequestContext provides any improved security. -Angus -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
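The difference between the two approaches debated in this thread — classifying data when it enters the context (B) versus trusting every future logging call (A) — can be shown with a toy SecurityContext. The set_priv/set_pub names echo the thread's own example; the masking format is my illustration, not Paul's wiki code:

```python
# Toy SecurityContext illustrating approach B: confidentiality is
# decided once, when data is placed into the context, so any later
# logging of the context is safe by construction. Names and the masking
# format are illustrative.

class SecurityContext(object):
    def __init__(self):
        self._public = {}
        self._private = {}

    def set_pub(self, key, value):
        self._public[key] = value

    def set_priv(self, key, value):
        self._private[key] = value

    def to_log_dict(self):
        # Private values are masked here, in one place, rather than
        # relying on every logging call site to remember to redact them.
        masked = dict(self._public)
        masked.update((key, '***') for key in self._private)
        return masked
```

This is the maintenance argument from above in code form: a new `LOG.info(ctx.to_log_dict())` added years later cannot leak a private value.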
Re: [openstack-dev] [Solum] Oslo Context and SecurityContext
Hi Angus, Let me share my view on this. I think we need to distinguish implementation and semantics. Context means that you provide information for a method, but the method will not keep or store this information. The method does not own the context but can modify it. The context does not explicitly define what information will be used by the method. A context is usually used when you keep some state and this state is shared between methods. Parameters, in contrast, are part of the method definition and strictly define that the method requires them. So semantically there is a difference between context and parameters, while the implementation can be the same. Let's take this example: https://review.openstack.org/#/c/69308/5/solum/objects/plan.py There is a class Plan which defines a model for a specific entity. The method definition def create(self, context): shows us that there are no required parameters, but the method result might be affected by the context and the context itself might be affected by this method. It does not say what the behavior will be and what the resulting plan will be, but even with an empty context it will return something meaningful. Also it would be reasonable to expect that I will have mostly the same result for different contexts, like a RequestContext in an API call and an ExecutionContext in working code when a worker executes this plan. Now I am reading the test https://review.openstack.org/#/c/69308/5/solum/tests/objects/test_plan.py, test case test_check_data. From what I see here, I can figure out that Plan actually stores all values from the context inside the plan object as its attributes and just adds an additional attribute id. There is a question: is plan just a copy of Context with an id? Why do we need it? What are the functions of plan and what does it consist of? If plan needs parameters and a context, it's really just a container for parameters; let's use **kwargs or something more meaningful which clearly defines how to use Plan and what its methods are. We want to define a data model for a Plan entity.
Let's clearly express what data is mandatory for a plan object, like Plan.create(project_id, user_id, raw_data, context). Let's keep the data model clear and well defined instead of blurring it with meaningless contexts. On Tue, Jan 28, 2014 at 3:26 PM, Angus Salkeld angus.salk...@rackspace.com wrote: On 28/01/14 07:13 -0800, Georgy Okrokvertskhov wrote: Hi, From my experience a context is usually more than just storage for user credentials and the specifics of a request. A context usually defines the area within which the called method should act. Probably the class name RequestContext is a bit confusing. The actual goal of the context should be defined by the service design. If you have a lot of independent components you will probably need to pass a lot of parameters to specify the specifics of the work, so it is just more convenient to have a dictionary-like object which carries all the necessary contextual information. This context can be used to pass information between different components of the service. I think we should be using the nova style objects for passing data between solum services (they can be serialized for rpc). But you hit on a point - this context needs to be called something else; it is not a RequestContext (we need the RequestContext regardless). I'd also suggest we don't build it until we know we need it (I am just suspicious, as the other openstack services I have worked on don't have such a thing). Normally we just pass arguments to methods. How about we keep things simple and don't get into designing a boeing; we can always add these things later if they are really needed. I get the feeling we are being distracted from our core problem of getting this service functional by nice-to-haves.
-Angus
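Georgy's suggested signature, Plan.create(project_id, user_id, raw_data, context), versus the context-only create(self, context) can be contrasted with a toy model. The Plan class below is illustrative, not the code in Solum's objects/plan.py:

```python
# Toy Plan model with the explicit signature proposed in the thread:
# mandatory inputs are named parameters, and the context only carries
# request-scoped extras. Illustrative, not Solum's objects/plan.py.

class Plan(object):
    _next_id = 1

    def __init__(self, plan_id, project_id, user_id, raw_data):
        self.id = plan_id
        self.project_id = project_id
        self.user_id = user_id
        self.raw_data = raw_data

    @classmethod
    def create(cls, project_id, user_id, raw_data, context=None):
        # The data model stays well defined: a reader sees exactly what
        # a plan is made of without inspecting what is in the context.
        plan = cls(cls._next_id, project_id, user_id, raw_data)
        cls._next_id += 1
        return plan
```

With create(self, context), the same information would be implicit in whatever happened to be stuffed into the context, which is the blurring the message argues against.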
Re: [openstack-dev] [Heat] How to model resources in Heat
Hi, There is a stackforge project, Mistral, which is aimed at providing a generic workflow service. I believe Zane mentioned it in his previous e-mail. Currently, this project is at a pilot stage. Mistral has a working pilot with all core components implemented, and right now we are finalizing the DSL syntax for task definitions. Mistral can call any API endpoint which is defined in a task, and Mistral exposes hooks to trigger workflow execution on some external event. There will be a meetup where Renat Akhmerov (Mistral lead) will present Mistral, its use cases and the current status of the project, followed by a demo. Here is a link: http://www.meetup.com/openstack/events/163020092/ We plan to finish Mistral core development during the Icehouse release and apply for incubation. I think in the J release Mistral can be used by other OpenStack projects, as all bits and pieces will be available at that time. Thanks Georgy On Fri, Jan 31, 2014 at 4:53 AM, Hugh Brock hbr...@redhat.com wrote: On Jan 31, 2014, at 1:30 AM, Clint Byrum cl...@fewbar.com wrote: Excerpts from Zane Bitter's message of 2014-01-30 19:30:40 -0800: On 30/01/14 16:54, Clint Byrum wrote: I'm pretty sure it is useful to model images in Heat. Consider this scenario:

resources:
  build_done_handle:
    type: AWS::CloudFormation::WaitConditionHandle
  build_done:
    type: AWS::CloudFormation::WaitCondition
    properties:
      handle: {Ref: build_done_handle}
  build_server:
    type: OS::Nova::Server
    properties:
      image: build-server-image
      userdata:
        join [ "",
          - "#!/bin/bash\n"
          - "build_an_image\n"
          - "cfn-signal -s SUCCESS "
          - {Ref: build_done_handle}
          - "\n" ]
  built_image:
    type: OS::Glance::Image
    depends_on: build_done
    properties:
      fetch_url:
        join [ "", ["http://", {get_attribute: [build_server, fixed_ip]}, "/image_path"] ]
  actual_server:
    type: OS::Nova::Server
    properties:
      image: {Ref: built_image}

Anyway, seems rather useful. Maybe I'm reaching. Well, consider that when this build is complete you'll still have the server you used to build the image still sitting around.
Of course you can delete the stack to remove it - and along with it will go the image in Glance. Still seem useful? No, not as such. However, I have also discussed with other users having an OS::Heat::TemporaryServer which is deleted after a wait condition is signaled (resurrected on each update). This would be useful for hosting workflow code, as the workflow doesn't actually need to be running all the time. It would also be useful for heat resources that want to run code that needs to be contained in their own VM/network, such as the port probe thing that came up a few weeks ago. Good idea? I don't know. But it is the next logical step my brain keeps jumping to for things like this. (I'm conveniently ignoring the fact that you could have set DeletionPolicy: Retain on the image to hack your way around this.) What you're looking for is a workflow service (I think it's called Mistral this week?). A workflow service would be awesome, and Heat is pretty awesome, but Heat is not a workflow service. Totally agree. I think workflow and orchestration have an unusual relationship though, because orchestration has its own workflow that users will sometimes need to defer to. This is why we use wait conditions, right? So yeah, Glance images in Heat might be kinda useful, but at best as a temporary hack to fill in a gap because the Right Place to implement it doesn't exist yet. That's why I feel ambivalent about it. I think you've nudged me away from optimistic at least closer to ambivalent as well. We (RH tripleo folks) were having a similar conversation around Heat and stack upgrades the other day. There is unquestionably a workflow involving stack updates when a user goes to upgrade their overcloud, and it's awkward trying to shoehorn it into Heat (Steve Dake agreed). Our first thought was "Tuskar should do that", but our second thought was "Whatever the workflow service is should do that, and Tuskar should maybe provide a shorthand API for it".
I feel like we (tripleo) need to take a harder look at getting a working workflow thing available for our needs, soon... --Hugh ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [heat] non-trivial example - IBM Connections
Hi Mike, Thank you for sharing this. It looks pretty impressive. Could you please share some details about the DSL syntax, if possible? In our Murano project we are also solving the problem of Heat template generation. At this moment we are working on a new DSL for Murano, to move from an XML-based DSL to a simplified, lightweight YAML-based DSL syntax. Here is an example of how this new DSL looks: https://github.com/istalker2/MuranoDsl/blob/master/meta/com.mirantis.murano.services.windows.activeDirectory.PrimaryController/manifest.yaml Using the Murano DSL it is possible to combine multiple applications defined by their manifests, and then the Murano engine will create a Heat template for the whole environment by using Heat snippets available in Murano. Thanks Georgy On Thu, Feb 6, 2014 at 11:16 PM, Mike Spreitzer mspre...@us.ibm.com wrote: Thanks to work by several colleagues, I can share a non-trivial Heat template for a system that we have been using as an example to work with. It is in the TAR archive here: https://drive.google.com/file/d/0BypF9OutGsW3Z2JqVTcxaW1BeXc/edit?usp=sharing Start at connections.yaml. Also in the archive are other files referenced from that template. The system described is a non-trivial J2EE application, IBM Connections. It is a suite of collaboration applications, including things like a wiki, file sharing, blogs, and a directory of people. It is deployed into a system of WebSphere application servers. The servers are organized into four clusters of four servers each; each server is a VM with a single application server. The applications are partitioned among the four clusters. There is also a deployment manager, a VM and process used to manage the application servers. There is also a pair of HTTP servers --- the IBM HTTP Server (IHS), basically Apache httpd. There is also a database server, running DB2. This system makes reference to an external (not deployed by the template) NFS server and an external LDAP server.
The template describes both the VMs and the configuration of the software on them. The images used have the IBM software installed but not configured. This template (and the referenced files) were produced by automatic translation from sources expressed in Weaver into a Heat template that is ready to run. Weaver is a system for describing both infrastructure and software configuration (based on Chef). A Weaver description is a modular Ruby program that computes a model of the infrastructure and software; Weaver includes certain constructs tailored to this job, so you may think of this as a DSL embedded in Ruby. The cross-machine dependencies are described abstractly in the source, connected to things in the Chef code. For these dependencies, Weaver uses an implementation that is different from the one being implemented now; Weaver's is based on communication through Zookeeper. You will see in the Heat template the convolution of the user's input and Weaver's implementation (as Heat has no built-in support for this mechanism). (I say these things not as a sales pitch, but to explain what you will see.) You will not be able to instantiate this template, as it has several references to external servers that are part of our environment, including those mentioned above. In fact, I have edited the external references to remove some private details. This template is presented as an example to look at. Regards, Mike ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [heat] non-trivial example - IBM Connections [and Murano]
Hi Mike, Thank you for the clarification. I like your approach with Ruby, and I think this is the right way to solve tasks like DSL creation. In Murano we use YAML and Python just to avoid introducing a whole new language like Ruby to OpenStack. As for software configurations in Heat, we are eager to have them available for use. We use Heat in Murano, and we want to pass as much work as possible to the Heat engine. Murano itself is intended to be an Application Catalog for managing available application packages, and it focuses on UI and user experience rather than on deployment details. We still use a DSL for several things: to have something working while waiting for the Heat implementation, and to have an imperative workflow engine, which is useful when you need to orchestrate complex workflows. The latter is very powerful when you need explicit control over the deployment sequence, with conditional branches orchestrated among several different instances. When Mistral becomes available, we plan to use its workflow engine for task orchestration. Again, thank you for sharing the work you are doing at IBM. This is very good feedback for the OpenStack community and helps us understand how OpenStack components are used in enterprise use cases. Thanks Georgy On Sat, Feb 8, 2014 at 10:52 AM, Mike Spreitzer mspre...@us.ibm.com wrote: From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com ... Thank you for sharing this. It looks pretty impressive. Could you please share some details about the DSL syntax, if possible? I will respond briefly, and pass your request along to the people working on that. In the Weaver language there are distinct concepts for software configuration vs. things (such as VMs) that can host software configs. As in the current software config work, there are distinct concepts for defining a software configuration vs. applying one to some particular infrastructure.
Weaver supposes that local configuration is described in Chef; Weaver adds a way of connecting chef variables across machines. So you see, there are a lot of similarities with the current work on software config, which I support. Weaver takes advantage of the power of Ruby metaprogramming to add pretty natural and concise constructs for modeling infrastructure and software config. The source does not look like it is one level off, it looks like it is describing a thing rather than describing how to build a model of the thing (even though that is what it is actually doing). Embedding in Ruby adds powerful and familiar support for abstraction and customized repetition in descriptions of distributed systems. This is going to be hard to do nicely in a language (regardless of whether it uses JSON or YAML syntax) that is primarily for data representation. One of the points of sharing this example was to show a system for which you would like such power. What is the unique problem that Murano is addressing? The current software config work can deploy multiple configs to a target. Supposing that the local config (the non-distributed base configuration management tool involved, which is open in the current software config work) is expressed using something sufficiently powerful (chef and bash are examples), the local config language can do abstraction and composition in the description of local configuration. Regards, Mike ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Solum] Regarding language pack database schema
Hi Arati, I would vote for Option #2 as a short-term solution. Probably later we can consider using a NoSQL DB, or MariaDB, which has a Column_JSON type to store complex types. Thanks Georgy On Thu, Feb 13, 2014 at 8:12 AM, Arati Mahimane arati.mahim...@rackspace.com wrote: Hi All, I have been working on defining the Language pack database schema. Here is a link to my review which is still a WIP - https://review.openstack.org/#/c/71132/3. There are a couple of different opinions on how we should be designing the schema. Language pack has several complex attributes which are listed here - https://etherpad.openstack.org/p/Solum-Language-pack-json-format We need to support search queries on language packs based on various criteria. One example could be 'find a language pack where type='java' and version > 1.4'. Following are the two options that are currently being discussed for the DB schema: *Option 1:* Having a separate table for each complex attribute, in order to achieve normalization. The current schema follows this approach. However, this design has certain drawbacks. It will result in a lot of complex DB queries and each new attribute will require a code change. *Option 2:* We could have a predefined subset of attributes on which we would support search queries. In this case, we would define columns (separate tables in case of complex attributes) only for this subset of attributes and all other attributes will be a part of a JSON blob. With this option, we will have to go through a schema change in case we decide to support search queries on other attributes at a later stage. I would like to know everyone's thoughts on these two approaches so that we can take a final decision and go ahead with one approach. Suggestions regarding any other approaches are welcome too!
Thanks, Arati ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
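Option #2 can be sketched in a few lines with SQLite from the standard library. This is a hypothetical illustration, not Solum's actual schema: the promoted columns (`type`, `version`) are the predefined searchable subset, while all remaining attributes ride along in a JSON blob that SQL never introspects and that the application decodes in Python.

```python
import json
import sqlite3

# Illustrative schema for Option #2: indexed columns for the
# predefined searchable attributes, everything else in a JSON blob.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE language_pack (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        type TEXT,            -- searchable, e.g. 'java'
        version TEXT,         -- searchable
        attributes TEXT       -- JSON blob; never queried with SQL
    )
""")

pack = {
    "name": "java-pack",
    "type": "java",
    "version": "1.4",
    # arbitrary extra attributes live only in the blob
    "os_platform": {"OS": "Ubuntu", "version": "12.04"},
}
conn.execute(
    "INSERT INTO language_pack (name, type, version, attributes)"
    " VALUES (?, ?, ?, ?)",
    (pack["name"], pack["type"], pack["version"], json.dumps(pack)),
)

# Search queries touch only the promoted columns...
row = conn.execute(
    "SELECT attributes FROM language_pack WHERE type = ? AND version >= ?",
    ("java", "1.4"),
).fetchone()

# ...and the blob is decoded in Python, not introspected by SQL.
attrs = json.loads(row[0])
print(attrs["os_platform"]["OS"])  # Ubuntu
```

Adding a new searchable attribute later means a schema migration that promotes one key out of the blob into its own column, which is the trade-off the thread discusses.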
Re: [openstack-dev] [Murano] Need a new DSL for Murano
Re: [openstack-dev] [Solum] Regarding language pack database schema
That is exactly option #2, which proposes to store attributes in columns. So there will be a limited set of attributes, and each of them will have its own column in a table. Thanks Georgy On Tue, Feb 18, 2014 at 10:55 AM, Paul Montgomery paul.montgom...@rackspace.com wrote: Maybe a crazy idea but... What if we simply don't store the JSON blob data for M1, instead of storing it in a way we don't like long term? This way, there is no need to remember to change something later, even though a bug could be created anyway. I believe the fields that would be missing/not stored in the blob are: * Compiler version * Language platform * OS platform Can we live with that for M1? On 2/18/14 12:07 PM, Adrian Otto adrian.o...@rackspace.com wrote: I agree. Let's proceed with option #2, and submit a wishlist bug to track this as tech debt. We would like to come back to this later and add an option to use a blob store for the JSON blob content, as Georgy mentioned. These could be stored in swift, or a K/V store. It might be nice to have a thin get/set abstraction there to allow alternates to be implemented as needed. I'm not sure exactly where we can track Paul Czarkowski's suggested restriction. We may need to just rely on reviewers to prevent this, because if we ever start introspecting the JSON blob, we will be using an SQL anti-pattern. I'm generally opposed to putting arbitrarily sized text and blob entries into a SQL database, because eventually you may run into the maximum allowable size (i.e. max-allowed-packet) and cause unexpected error conditions. Thanks, Adrian On Feb 18, 2014, at 8:48 AM, Paul Czarkowski paul.czarkow...@rackspace.com wrote: I'm also a +1 for #2. However, as discussed on IRC, we should clearly spell out that the JSON blob should never be treated in a SQL-like manner. The moment somebody says 'I want to make that item in the JSON searchable' is the time to discuss adding it as part of the SQL schema.
On 2/13/14 4:39 PM, Clayton Coleman ccole...@redhat.com wrote: I like option #2, simply because we should force ourselves to justify every attribute that is extracted as a queryable parameter, rather than making them queryable at the start. - Original Message - Hi Arati, I would vote for Option #2 as a short term solution. Probably later we can consider using NoSQL DB or MariaDB which has Column_JSON type to store complex types. Thanks Georgy On Thu, Feb 13, 2014 at 8:12 AM, Arati Mahimane arati.mahim...@rackspace.com wrote: Hi All, I have been working on defining the Language pack database schema. Here is a link to my review which is still a WIP - https://review.openstack.org/#/c/71132/3. There are a couple of different opinions on how we should be designing the schema. Language pack has several complex attributes which are listed here - https://etherpad.openstack.org/p/Solum-Language-pack-json-format We need to support search queries on language packs based on various criteria. One example could be 'find a language pack where type='java' and version > 1.4'. Following are the two options that are currently being discussed for the DB schema: Option 1: Having a separate table for each complex attribute, in order to achieve normalization. The current schema follows this approach. However, this design has certain drawbacks. It will result in a lot of complex DB queries and each new attribute will require a code change. Option 2: We could have a predefined subset of attributes on which we would support search queries. In this case, we would define columns (separate tables in case of complex attributes) only for this subset of attributes and all other attributes will be a part of a json blob. With this option, we will have to go through a schema change in case we decide to support search queries on other attributes at a later stage. I would like to know everyone's thoughts on these two approaches so that we can take a final decision and go ahead with one approach.
Suggestions regarding any other approaches are welcome too! Thanks, Arati ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [solum] async / threading for python 2 and 3
Hi Angus, I think that we need to keep Python 2, as a lot of enterprise customers still use Python 2.x versions. If you target Solum only at Python 3 users you will significantly reduce potential customer adoption. I would rather spend some time and research around option #3. I saw in the openstack-dev mailing list that there is work going on around asyncio, which is native in py3 and has a port for Python 2. Here is the related BP: https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio Here is the e-mail thread: http://lists.openstack.org/pipermail/openstack-dev/2014-February/026237.html Thanks Gosha On Tue, Feb 18, 2014 at 4:53 PM, Angus Salkeld angus.salk...@rackspace.com wrote: Hi all, I need to use some async / threaded behaviour in Solum for image creation (all I need right now is to run a job asynchronously). eventlet is python 2 only; tulip is python 3 only; tornado (supports 2 + 3) http://www.tornadoweb.org; twisted; pyev; etc...
Options:
1) use eventlet and have the same migration path as the rest of OpenStack. Basically give up python 3 for now.
2) use tulip and give up python 2
3) choose an existing framework that supports both py2+3
Thoughts? Since we are starting out fresh, I'd suggest 2). This will mean some learning, but that is always fun and would be beneficial to other projects to see how this code looks. I am not sure how important support for python 2 is; I'd suggest supporting python 3 is more important. -Angus ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
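For readers arriving later: tulip eventually became the standard-library asyncio module, so option 2 can be sketched with modern asyncio syntax (async/await, which postdates this thread; in 2014 the same thing used @coroutine and yield from). The function names below are illustrative, not Solum's actual API.

```python
import asyncio

async def build_image(name):
    # Stand-in for a long-running image-build job; awaiting here lets
    # other jobs run concurrently instead of blocking a thread.
    await asyncio.sleep(0.01)
    return name + ":built"

async def main():
    # Kick off two builds concurrently; gather preserves result order.
    return await asyncio.gather(
        build_image("pack-a"),
        build_image("pack-b"),
    )

results = asyncio.run(main())
print(results)  # ['pack-a:built', 'pack-b:built']
```

The design trade-off in the thread still holds: this style requires the whole call chain to be async-aware, whereas eventlet monkey-patches blocking code, which is why migration paths diverged between the two.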
[openstack-dev] Incubation Request: Murano
All, Murano is the OpenStack Application Catalog service, which has been developed on stackforge for almost 11 months. Murano was presented at the HK summit on the unconference track, and now we would like to apply for incubation during the Juno release. As the first step we would like to get feedback from the TC on Murano's readiness from the OpenStack processes standpoint, as well as open up a conversation around its mission and how it fits the OpenStack ecosystem. The Murano incubation request form is here: https://wiki.openstack.org/wiki/Murano/Incubation As a part of the incubation request, we are looking for advice from the TC on the governance model for Murano. Murano may potentially fit the expanding scope of the Image program, if it is transformed into a Catalog program. It also potentially fits the Orchestration program, and as a third option there might be value in creating a new standalone Application Catalog program. We have a pros and cons analysis in the Murano incubation request form. The Murano team has been working on Murano as a community project. All our code and bugs/specs are hosted on OpenStack Gerrit and Launchpad, respectively. Unit tests and all pep8/hacking checks are run on OpenStack Jenkins, and we have integration tests running on our own Jenkins server for each patch set. Murano also has all the necessary scripts for devstack integration. We have been holding weekly IRC meetings for the last 7 months, discussing architectural questions there and on the openstack-dev mailing list as well. Murano-related information is here: Launchpad: https://launchpad.net/murano Murano Wiki page: https://wiki.openstack.org/wiki/Murano Murano Documentation: https://wiki.openstack.org/wiki/Murano/Documentation Murano IRC channel: #murano With this we would like to start the incubation application review process. Thanks Georgy -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob.
+1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Automatic Evacuation
Hi, If I am not mistaken, the Mistral team listed live migration as a potential use case for the workflow engine. There are not many details though: https://wiki.openstack.org/wiki/Mistral#Live_migration As far as I know, Mistral plans to implement a generic event handling mechanism where one can bind any kind of workflow to an external event triggered by Ceilometer or another monitoring system. This bound workflow can actually define the live migration logic. Thanks Georgy On Thu, Feb 20, 2014 at 3:04 PM, Sean Dague s...@dague.net wrote: On 02/20/2014 05:32 PM, Russell Bryant wrote: On 02/20/2014 05:05 PM, Costantino, Leandro I wrote: Hi, I would like to know if there's any interest in having an 'automatic evacuation' feature when a compute node goes down. I found 3 bps related to this topic:
[1] Adding a periodic task and using the ServiceGroup API for compute-node status
[2] Using Ceilometer to trigger the evacuate API
[3] Including some kind of H/A plugin by using a 'resource optimization service'
Most of those BPs have comments like 'this logic should not reside in nova', so that's why I am asking what the best approach would be to have something like that. Should this be ignored, relying instead on external monitoring tools to trigger the evacuation? There are complex scenarios that require a lot of logic that won't fit into Nova or any other OS component. (For instance: sometimes it will be faster to reboot the node or nova-compute than to start the evacuation, but if that fails X times then trigger an evacuation, etc.) Any thoughts/comments about this? Regards Leandro
[1] https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken
[2] https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically
[3] https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
My opinion is that I would like to see this logic done outside of Nova.
Right now Nova is the only service that really understands the compute topology of hosts, though its understanding of liveness is really not sufficient to handle this kind of HA thing anyway. I think that's the real problem to solve: how to provide notifications to somewhere outside of Nova on host death. And the question is, should Nova be involved in just that part, keeping track of node liveness and signaling up for someone else to deal with it? Honestly, that part I'm more on the fence about, because putting another service in place just to handle that monitoring seems overkill. I 100% agree that all the policy, reacting, and logic for this should be outside of Nova. Be it Heat or somewhere else. -Sean -- Sean Dague Samsung Research America s...@dague.net / sean.da...@samsung.com http://dague.net ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
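The "reboot first, evacuate only after repeated failures" policy Leandro describes is easy to keep outside of Nova, as the thread recommends. A minimal sketch, where the reboot/evacuate callables are hypothetical hooks an external monitoring service would supply (none of this is real Nova or Ceilometer API):

```python
# Threshold is illustrative; a real service would make it configurable.
MAX_REBOOT_ATTEMPTS = 3

def handle_host_down(host, failure_count, reboot_node, evacuate_host):
    """Decide how to react to a compute-host death notification."""
    if failure_count < MAX_REBOOT_ATTEMPTS:
        # A reboot is usually faster than evacuating every instance.
        reboot_node(host)
        return "reboot"
    # The host keeps failing: move its instances elsewhere.
    evacuate_host(host)
    return "evacuate"

# Simulate five consecutive failure notifications for one host,
# with no-op hooks standing in for the real reboot/evacuate calls.
actions = [
    handle_host_down("compute-1", n, lambda h: None, lambda h: None)
    for n in range(5)
]
print(actions)  # ['reboot', 'reboot', 'reboot', 'evacuate', 'evacuate']
```

Keeping the decision in a tiny external function like this is exactly the separation Sean argues for: Nova only reports host death, and the policy lives elsewhere.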
Re: [openstack-dev] [Murano] Object-oriented approach for defining Murano Applications
, defines a machine which can join an Active Directory, and adds a "join domain" workflow step.
4. DomainController - inherits WindowsInstance, adds an "Install AD Role" workflow step and extends the Deploy step to call it.
5. PrimaryController - inherits DomainController, adds a "Configure as Primary DC" workflow step and extends the Deploy step to call it. Also adds a domainIpAddress property which is set during the deployment.
6. SecondaryController - inherits both DomainMember and DomainController. Adds a "Configure as Secondary DC" workflow step and extends the Deploy() step to call it along with the "join domain" step inherited from the DomainMember class.
7. ActiveDirectory - the primary class which defines an Active Directory application. Defines properties for the PrimaryController and SecondaryControllers, and a Deploy workflow which calls the appropriate workflows on the controllers.
The simplified class diagram may look like this: [class diagram not included]
So, this approach allows us to decompose the AD deployment workflow into simple, isolated parts, explicitly manage the state, and create reusable entities (of course, classes like Instance, WindowsInstance and DomainMember may be used by other Murano applications). For me this looks much, much better than the current implicit state machine which we run based on XML rules. What do you think about this approach, folks? Do you think it will be easily understood by application developers? Will it be easy to write workflows this way? Do you see any drawbacks here? Waiting for your feedback. -- Regards, Alexander Tivelkov ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Sincerely yours Stanislav (Stan) Lagun Senior Developer Mirantis 35b/3, Vorontsovskaya St.
Moscow, Russia Skype: stanlagun www.mirantis.com sla...@mirantis.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
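The class hierarchy in the numbered list above can be sketched in plain Python to show how each Deploy step extends its parent's. The real proposal is a Murano DSL, so the class and step names here are illustrative only; note how SecondaryController's multiple inheritance (item 6) picks up both the domain-join and AD-role steps.

```python
class WindowsInstance:
    def deploy(self):
        return ["provision windows instance"]

class DomainMember(WindowsInstance):
    # "Defines a machine which can join an Active Directory" (item 3).
    def deploy(self):
        return super().deploy() + ["join domain"]

class DomainController(WindowsInstance):
    # Item 4: adds the Install AD Role step.
    def deploy(self):
        return super().deploy() + ["install AD role"]

class PrimaryController(DomainController):
    # Item 5: extends Deploy with the primary-DC configuration.
    def deploy(self):
        return super().deploy() + ["configure as primary DC"]

class SecondaryController(DomainMember, DomainController):
    # Item 6: inherits both parents; Python's MRO runs each ancestor's
    # contribution exactly once, so provisioning is not repeated.
    def deploy(self):
        return super().deploy() + ["configure as secondary DC"]

steps = SecondaryController().deploy()
print(steps)
```

Running this yields the step list `['provision windows instance', 'install AD role', 'join domain', 'configure as secondary DC']`, which is the decomposition into reusable, isolated parts the mail argues for.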
Re: [openstack-dev] Incubation Request: Murano
projects ? -- Thierry Carrez (ttx) ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Mistral] Plugin architecture for custom actions?
Hi Winson, I think it is a good idea to support a pluggable interface for actions. I think you can submit a BP for that. There is a Python library, stevedore, developed in the OpenStack community. I don't know the details, but it looks like this library is intended to help build plugins. Thanks Georgy On Mon, Feb 24, 2014 at 5:43 PM, W Chan m4d.co...@gmail.com wrote: Will Mistral be supporting custom actions developed by users? If so, should the Actions module be refactored into individual plugins, with a dynamic process for action type mapping/lookup? Thanks. Winson ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
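The dynamic mapping/lookup Winson describes boils down to resolving an action name to a class at runtime. stevedore does this by discovering setuptools entry points; since entry points require installed packages, this self-contained sketch substitutes a plain dict for the entry-point namespace. All names here ("std.echo", the classes) are illustrative, not Mistral's actual API.

```python
class Action:
    """Base interface every pluggable action would implement."""
    def run(self, **kwargs):
        raise NotImplementedError

class EchoAction(Action):
    def run(self, **kwargs):
        # A trivial action that just returns its inputs.
        return kwargs

# With stevedore this mapping would be populated from an entry-point
# namespace (declared in each plugin's setup.cfg) instead of a dict,
# so third-party packages could register actions without code changes.
ACTION_REGISTRY = {"std.echo": EchoAction}

def load_action(name):
    """Dynamic action-type lookup, as the plugin proposal suggests."""
    return ACTION_REGISTRY[name]()

result = load_action("std.echo").run(message="hello")
print(result)  # {'message': 'hello'}
```

The point of the indirection is that the workflow engine only ever deals with the `Action` interface and a name, so custom user actions need no changes to the engine itself.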
Re: [openstack-dev] Incubation Request: Murano
will be a requirement, it can be done with the Murano engine, as it has all the necessary concepts; this is a question of translation between different syntaxes. There are probably some parts still missing, but we take a pragmatic approach, introducing new concepts and ideas when necessary. Zane, thanks again for this question. I think my explanation of the relationship between Murano and TOSCA will help to explain what value Murano adds for OpenStack. In this e-mail we actually discussed only one Murano component, the one responsible for application package processing. I did not touch the Application Catalog part, as that is not part of the TOSCA standard. [1] - TOSCA TC Scope of work: https://www.oasis-open.org/committees/tosca/charter.php [2] - TOSCA standard document: http://docs.oasis-open.org/tosca/TOSCA/v1.0/TOSCA-v1.0.html [3] - DSL discussion e-mail thread: http://lists.openstack.org/pipermail/openstack-dev/2014-February/027938.html On Mon, Mar 3, 2014 at 6:33 PM, Zane Bitter zbit...@redhat.com wrote: On 25/02/14 05:08, Thierry Carrez wrote: The second challenge is that we have only started to explore the space of workload lifecycle management, with what look like slightly overlapping solutions (Heat, Murano, Solum, and the OpenStack-compatible PaaS options out there), and it might be difficult, or too early, to pick a winning complementary set. I'd also like to add that there is already a codified OASIS standard (TOSCA) that covers application definition at what appears to be a similar level to Murano. Basically it's a more abstract version of what Heat does, plus workflows for various parts of the lifecycle (e.g. backup). Heat and TOSCA folks have been working together since around the time of the Havana design summit, with the aim of eventually getting a solution for launching TOSCA applications on OpenStack. Nothing is set in stone yet, but I would like to hear from the Murano folks how they are factoring compatibility with existing standards into their plans. cheers, Zane.
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] Incubation Request: Murano
Hi, Here is an etherpad page with the current Murano status: http://etherpad.openstack.org/p/murano-incubation-status. Thanks Georgy On Mon, Mar 3, 2014 at 9:04 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi Zane, Thank you very much for this question. First of all, let me highlight that the Murano DSL was much inspired by TOSCA. We carefully read this standard before our move to the Murano DSL. The TOSCA standard has a lot of very well-designed concepts and ideas, which we reused in Murano. There is one obvious drawback of TOSCA - a very heavy and verbose XML-based syntax. Taking into account that OpenStack itself is clearly moving away from XML-based representations, it would be strange to bring this huge XML monster back at a higher level. Frankly, the current Murano workflow language is XML-based, and it is quite painful to write workflows without an additional instrument like an IDE. Now let me remind you that TOSCA has a defined scope of responsibility. There is a list of areas which are out of scope. For Murano it was important to see that the following items are out of TOSCA's scope. Citations from [1]:
... 2. The definition of concrete plans, i.e. the definition of plans in any process modeling language like BPMN or BPEL. 3. The definition of a language for defining plans (i.e. a new process modeling language). ...
Plans in the TOSCA understanding are something similar to workflows. This is what we address with Murano workflows. Now let me go through the TOSCA ideas and show how they are reflected in Murano. It will be a long piece of text, so feel free to skip it. Taking this into account, let's review what we have in Murano as an application package. Inside an application package we have:
1. Application metadata which describes the application, its relations and properties
2. Heat template snippets
3. Scripts for deployment
4. Workflow definitions
In the TOSCA document, in section 3.2.1, Service Templates are introduced.
These templates are declarative descriptions of service components and service topologies. Service templates can be stored in a catalog, to be found and used by users. This service template description is abstracted from the actual infrastructure implementation, and each cloud provider maps the definition to its actual cloud infrastructure. This is definitely a part which is already covered by Heat. The same section says the following: "Making a concrete instance of a Topology Template can be done by running a corresponding Plan (so-called instantiating management plan, a.k.a. build plan). This build plan could be provided by the service developer who also creates the Service Template." This plan part, which is out of scope of TOSCA, is essentially what Murano adds as a part of the application definition. Section 3.3 of the TOSCA document introduces a new entity - artifacts. Artifact is a name for content which is needed for service deployment, including scripts, executables, binaries and images. That is why Murano has a metadata service to store artifacts as a part of an application package. Moreover, Murano is working with the Glance team to move this metadata repository from Murano to Glance, providing a generic artifact repository which can be used not only by Murano but by any other service. Sections 3.4 and 3.5 explain the idea of Relationships with Capabilities and Service Composition. Murano actually implements all these high-level features. The application definition has a section with contract definitions. This contract syntax is not just a declaration of relations and capabilities, but also a way to make assertions and do on-the-fly type validation and conversion if needed. Section 10 reveals the details of requirements. It explains that requirements can be complex: they can inherit each other and be abstract, to define a broad set of required values. For example, when a service requires a relational database, it will require type=RDBMS without assuming the actual DB implementation: MySQL, PostgreSQL or MSSQL.
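The idea of an abstract requirement satisfied by any concrete implementation maps naturally onto a class hierarchy. A hedged Python sketch (the real mechanism is a Murano DSL; these class names are illustrative): a service declares that it needs some relational database without naming a concrete engine, and any class derived from the abstract type satisfies the requirement.

```python
import abc

class RDBMS(abc.ABC):
    """Abstract requirement type: 'some relational database'."""
    @abc.abstractmethod
    def deploy(self):
        ...

class MySQL(RDBMS):
    def deploy(self):
        return "mysql deployed"

class PostgreSQL(RDBMS):
    def deploy(self):
        return "postgresql deployed"

def satisfies(requirement_type, candidate):
    # Walk the candidate's class hierarchy, the way the text describes
    # checking an object's (possibly abstract) parent classes.
    return isinstance(candidate, requirement_type)

print(satisfies(RDBMS, MySQL()))       # True
print(satisfies(RDBMS, PostgreSQL()))  # True
```

Either concrete class can be plugged in wherever the abstract requirement appears, which is the substitution the following paragraph describes in Murano DSL terms.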
In order to solve the problem of complex requirements, relations and service composition, we introduced classes in our DSL. This was presented and discussed in the e-mail thread [3]. The Murano DSL syntax allows an application package writer to compose applications from existing classes, using inheritance and class properties with complex types (i.e. classes). It is possible to define a requirement using abstract classes, to describe specific types of applications and services like databases, web servers and others. Using class inheritance, Murano will be able to satisfy a requirement with a specific object that has the proper parent class, by checking the whole hierarchy of the object's parent classes, which can be abstract.

I don't want to cover all the entities defined in TOSCA, as the most important ones were discussed already. There are implementations of many TOSCA concepts in Murano, like class properties for TOSCA Properties, class methods for TOSCA Operations, etc. TL;DR Summarizing
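The requirement-matching mechanism described above can be illustrated in plain Python. This is a hedged sketch: the class names and the `satisfies` helper are hypothetical illustrations of the idea, not actual Murano DSL entities.

```python
import abc


class Database(abc.ABC):
    """Abstract requirement type, roughly 'type=RDBMS' in TOSCA terms."""

    @abc.abstractmethod
    def connection_string(self):
        ...


class MySQL(Database):
    def connection_string(self):
        return "mysql://db.example.local"


class PostgreSQL(Database):
    def connection_string(self):
        return "postgresql://db.example.local"


def satisfies(candidate_cls, required_cls):
    # Walk the whole hierarchy of the candidate's parent classes
    # (which may themselves be abstract) to see whether the abstract
    # requirement type appears among them.
    return required_cls in candidate_cls.__mro__
```

Here any concrete database class satisfies a `Database` requirement purely through its ancestry, mirroring how an abstract class in an application definition can be satisfied by any specific service whose parent-class hierarchy contains it.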
Re: [openstack-dev] Incubation Request: Murano
Hi Thomas, Zane, Thank you for bringing TOSCA into the discussion. I think this is an important topic, as it will help to find better alignment or even a future merge of the Murano DSL and Heat templates. The Murano DSL uses a YAML representation too, so we can easily reuse constructions from Heat and probably any other YAML-based TOSCA formats. I will be glad to join the TOSCA TC. Is there any formal process for that?

I also would like to use this opportunity to start a conversation with the Heat team about the Heat roadmap and feature set. As Thomas mentioned in his previous e-mail, the TOSCA topology story is quite well covered by HOT. At the same time there are entities like Plans which are covered by Murano. We had a discussion about bringing workflows into the Heat engine before the HK summit, and it looks like the Heat team has no plans to bring workflows into Heat. That is actually why we mentioned the Orchestration program as a potential place for the Murano DSL, as Heat+Murano together will cover everything which is defined by TOSCA. I think the TOSCA initiative can be a great place to collaborate. I think it will then be possible to use the Simplified TOSCA format for application descriptions, as TOSCA is intended to provide such descriptions. Is there a team who is driving the TOSCA implementation in the OpenStack community? I feel that such a team is necessary. Thanks, Georgy

On Tue, Mar 4, 2014 at 2:36 PM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote: Excerpt from Zane Bitter's message on 04/03/2014 23:16:21: From: Zane Bitter zbit...@redhat.com To: openstack-dev@lists.openstack.org Date: 04/03/2014 23:20 Subject: Re: [openstack-dev] Incubation Request: Murano On 04/03/14 00:04, Georgy Okrokvertskhov wrote: It so happens that the OASIS TOSCA technical committee is working as we speak on a TOSCA Simple Profile that will hopefully make things easier to use and includes a YAML representation (the latter is great IMHO, but the key to being able to do it is the former).
Work is still at a relatively early stage, and in my experience they are very much open to input from implementers. Nice, I was probably also writing a mail with this information at about the same time :-) And yes, we are very much interested in feedback from implementers and open to suggestions. If we can find gaps and fill them with good proposals, now is the right time. I would strongly encourage you to get involved in this effort (by joining the TOSCA TC), and also to architect Murano in such a way that it can accept input in multiple formats (this is something we are making good progress toward in Heat). Ideally the DSL format for Murano+Heat should be a trivial translation away from the relevant parts of the YAML representation of the TOSCA Simple Profile. Right, having a straightforward translation would be really desirable. The way to get there can actually be two-fold: (1) any feedback we get from the Murano folks on the TOSCA simple profile and YAML can help us make TOSCA capable of addressing the right use cases, and (2) on the other hand, make sure the implementation goes in a direction that is in line with what TOSCA YAML will look like. cheers, Zane. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] Incubation Request: Murano
Hi Steve, Thank you for sharing your thoughts. I believe that is what we were trying to get as feedback from the TC. The current definition of a program actually suggests the scenario you described. A new project will appear under the Orchestration umbrella. Let's say there will be two projects: one is Heat and another is Workflow (no specific name here, probably some part of Murano). The program will have one PTL (the current Heat PTL) and two separate code teams, one for each project. That was our understanding of what we want. I am not sure that this was stressed enough at the TC meeting. There were no intentions to add anything to Heat. Not at all. We just discussed the possibility of splitting the current Murano App Catalog into two parts. The catalog part would go to the Catalog program, to land the App Catalog code near the Glance project and integrate them, as Glance will store application packages for the Murano App Catalog service. The second part of Murano, related to environment processing (deployment, life cycle management, events), would go to the Orchestration program as a new project, like Murano workflows or Murano environment control or anything else. As I mentioned in one of the previous e-mails, we already discussed workflows in Heat with the Heat team before the HK summit. We understand very well that workflows will not fit Heat, and we perfectly understand the reasons why. I think that the good result of the last TC meeting was the official mandate to discuss alignment and integration between the Glance, Heat, Murano and probably other projects. Right now we consider the following: 1) Continue the discussion around the Catalog program mission and how the Murano App Catalog will fit into this program. 2) Start a conversation with the Heat team in two directions: a) TOSCA and its implementation: how Murano can extend TOSCA and how TOSCA can help Murano to define an application package. Murano should reuse as much as possible from TOSCA to implement this open standard. b) Define the alignment between Heat and Murano.
How workflows can coexist with HOT, and what will be the best way to develop both Heat and Workflows within the Orchestration program. 3) Explore the Application space for OpenStack. As Thierry mentioned at the TC meeting, there are concerns that it is probably too early for OpenStack to take a new step up the stack. Thanks, Georgy

On Wed, Mar 5, 2014 at 7:47 PM, Steven Dake sd...@redhat.com wrote: On 03/05/2014 02:16 AM, Thomas Spatzier wrote: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 05/03/2014 00:32:08: From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: 05/03/2014 00:34 Subject: Re: [openstack-dev] Incubation Request: Murano Hi Thomas, Zane, Thank you for bringing TOSCA into the discussion. I think this is an important topic, as it will help to find better alignment or even a future merge of the Murano DSL and Heat templates. The Murano DSL uses a YAML representation too, so we can easily reuse constructions from Heat and probably any other YAML-based TOSCA formats. I will be glad to join the TOSCA TC. Is there any formal process for that? The first part is that your company must be a member of OASIS. If that is the case, I think you can simply go to the TC page [1] and click a button to join the TC. If your company is not yet a member, you could get in touch with the TC chairs Paul Lipton and Simon Moser and ask for the best next steps. We recently had people from GigaSpaces join the TC, and since they are also doing a very TOSCA-aligned implementation in Cloudify, their input will probably help a lot to advance TOSCA. I also would like to use this opportunity to start a conversation with the Heat team about the Heat roadmap and feature set. As Thomas mentioned in his previous e-mail, the TOSCA topology story is quite well covered by HOT. At the same time there are entities like Plans which are covered by Murano.
We had a discussion about bringing workflows into the Heat engine before the HK summit, and it looks like the Heat team has no plans to bring workflows into Heat. That is actually why we mentioned the Orchestration program as a potential place for the Murano DSL, as Heat+Murano together will cover everything which is defined by TOSCA. I remember the discussions about whether to bring workflows into Heat or not. My personal opinion is that workflows are probably out of the scope of Heat (i.e. everything but the derived orchestration flows the Heat engine implements). So there could well be a layer on top of Heat that lets Heat deal with all topology-related declarative business and adds workflow-based orchestration around it. TOSCA could be a way to describe the respective overarching models and then hand the different processing tasks to the right engine to deal with them. My general take is that workflow would fit in the Orchestration program, but not be integrated into the heat repo specifically. It would be a different repo
Re: [openstack-dev] [Oslo] oslo.messaging on VMs
Hi Julien, I think there are valid reasons why we can consider an MQ approach for communicating with VM agents. The first obvious reason is scalability and performance. A user can ask the infrastructure to create 1000 VMs and configure them. With an HTTP approach this will lead to a corresponding number of connections to a REST API service. Taking into account that a cloud has multiple clients, the load on the infrastructure will be pretty significant. You can address this by introducing load balancing for each service, but it will significantly increase the management overhead and complexity of the OpenStack infrastructure.

The second issue is connectivity and security. I think that in a typical production deployment VMs will not have access to OpenStack infrastructure services. That is fine for core infrastructure services like Nova and Cinder, as they do not work directly with VMs. But it is a huge problem for VM-level services like Savanna, Heat, Trove and Murano, which have to be able to communicate with VMs. The solution here is to put in an intermediary to create a controllable way of communication. In the case of HTTP you will need to have a proxy with QoS and firewalls or policies, to be able to restrict access to specific URLs or services, and to throttle the number of connections and the bandwidth to protect services from DDoS attacks from the VM side. In the case of MQ usage you can have a separate MQ broker for communication between services and VMs. Typical brokers have a throttling mechanism, so you can protect the service from DDoS attacks via MQ. Using different queues and even vhosts you can effectively segregate different tenants. For example, we use this approach in the Murano service when it is installed by Fuel. The default deployment configuration for Murano produced by Fuel is to have a separate RabbitMQ instance for Murano-VM communications.
This configuration does not expose the OpenStack internals to VMs, so even if someone breaks into the Murano RabbitMQ instance, OpenStack itself will be unaffected and only the Murano part will be broken. Thanks, Georgy

On Thu, Mar 6, 2014 at 7:46 AM, Julien Danjou jul...@danjou.info wrote: On Thu, Mar 06 2014, Dmitry Mescheryakov wrote: So, messaging people, what is your opinion on that idea? I've already raised that question on the list [1], but it seems like not everybody who has something to say participated. So I am resending with a different topic. For example, yesterday we started discussing security of the solution in the openstack-oslo channel. Doug Hellmann at the start raised two questions: is it possible to separate different tenants or applications with credentials and ACLs so that they use different queues? My opinion is that it is possible using the RabbitMQ/Qpid management interface: for each application we can automatically create a new user with permission to access only her queues. Another question raised by Doug is how to mitigate a DoS attack coming from one tenant so that it does not affect another tenant. The thing is, though different applications will use different queues, they are going to use a single broker. What about using HTTP and REST APIs? That's what's supposed to be the world-facing interface of OpenStack. If you want to receive messages, it's still possible to use long-polling connections. -- Julien Danjou ;; Free Software hacker ;; http://julien.danjou.info ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
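The segregation scheme described in the message above (a dedicated broker for service-to-VM traffic, with a separate vhost per tenant) can be sketched as follows. The host name, credentials and naming convention here are purely illustrative assumptions, not Murano's or Fuel's actual configuration:

```python
def agent_transport_url(tenant_id,
                        host="murano-rabbit.example.local",
                        user="murano-agent",
                        password="secret"):
    """Build an AMQP URL pointing at a per-tenant vhost on a broker
    that is separate from the one OpenStack services use internally.

    Queues of different tenants never share a vhost, so a tenant that
    floods its own queues cannot see or touch another tenant's, and a
    compromise of this broker does not expose the internal one.
    """
    vhost = "murano-%s" % tenant_id
    return "amqp://%s:%s@%s:5672/%s" % (user, password, host, vhost)
```

In a real deployment the broker would additionally enforce per-user permissions and throttling, which is exactly the protection the message above argues MQ brokers provide out of the box.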
Re: [openstack-dev] [Oslo] oslo.messaging on VMs
On Thu, Mar 6, 2014 at 8:59 AM, Julien Danjou jul...@danjou.info wrote: On Thu, Mar 06 2014, Georgy Okrokvertskhov wrote: I think there are valid reasons why we can consider an MQ approach for communicating with VM agents. The first obvious reason is scalability and performance. A user can ask the infrastructure to create 1000 VMs and configure them. With an HTTP approach this will lead to a corresponding number of connections to a REST API service. Taking into account that a cloud has multiple clients, the load on the infrastructure will be pretty significant. You can address this by introducing load balancing for each service, but it will significantly increase the management overhead and complexity of the OpenStack infrastructure. Uh? I'm having trouble imagining any large OpenStack deployment without load-balancing services. I don't think we ever designed OpenStack to run without load-balancers at large scale. Not all services require load balancer instances. It makes sense to use a LB for API services, but even in Nova there are components which use MQ RPC for communication, and one doesn't need to put them behind a LB as they scale naturally just by using MQ concurrently. I believe this change to MQ RPC was done exactly to address the problems of scalability for internal services. I agree that LBs are supposed to be in a production-grade deployment, but this solution is not a silver bullet and has a lot of limitations and overall design implications. The second issue is connectivity and security. I think that in a typical production deployment VMs will not have access to OpenStack infrastructure services. Why? Should they be different than other VMs? Are you running another OpenStack cloud to run your OpenStack cloud? There are use cases and security requirements that usually enforce very limited access to OpenStack infrastructure components. As cloud admins do not control the workloads on VMs, there is a significant security risk of being attacked from a VM.
A common requirement we see in production deployments is to enable SSL for everything, including MySQL, MQ and the Nova metadata service. I would also like to highlight that even Nova/Neutron, for working with cloud-init, enables access to metadata only temporarily, by managing routes on the VM. So for design purposes it is better to assume that there will be no access to OpenStack services from the VM side, and if you need it, you will have to configure it properly. That is fine for core infrastructure services like Nova and Cinder, as they do not work directly with VMs. But it is a huge problem for VM-level services like Savanna, Heat, Trove and Murano, which have to be able to communicate with VMs. The solution here is to put in an intermediary to create a controllable way of communication. In the case of HTTP you will need to have a proxy with QoS and firewalls or policies, to be able to restrict access to specific URLs or services, and to throttle the number of connections and bandwidth to protect services from DDoS attacks from the VM side. This really sounds like weak arguments. You probably already need firewalls, QoS, and throttling for your users if you're deploying a cloud and want to mitigate any kind of attack. I don't argue about the existence of such components in an OpenStack deployment. I just point out that with an increasing number of services, one will have to manage the complexity of such a configuration. Taking into account the number of possible Neutron configurations, the possibility of overlapping subnets in virtual networks, and the existence of fully private networks which are not attached through a router to an external network, connectivity and access control look like a really complex task which will be a headache for cloud admins and devops. In the case of MQ usage you can have a separate MQ broker for communication between services and VMs. Typical brokers have a throttling mechanism, so you can protect the service from DDoS attacks via MQ.
Yeah, and I'm pretty sure a lot of HTTP servers have throttling for connection rate and/or bandwidth limitation. I'm not really convinced. Yes, some of them have it, and you will need to configure them properly. Using different queues and even vhosts you can effectively segregate different tenants. Sounds like you could do the same thing with the HTTP protocol. For example, we use this approach in the Murano service when it is installed by Fuel. The default deployment configuration for Murano produced by Fuel is to have a separate RabbitMQ instance for Murano-VM communications. This configuration does not expose the OpenStack internals to VMs, so even if someone breaks into the Murano RabbitMQ instance, OpenStack itself will be unaffected and only the Murano part will be broken. It really sounds like you already settled on the solution being RabbitMQ, so I'm not sure what/why you ask in the first place. :) Is there any problem with starting VMs on a network that is connected to your internal
Re: [openstack-dev] [Oslo] [Marconi] oslo.messaging on VMs
the communication. Bootstrapping seems like a second obvious problem with this model. I prefer a point to point model, much as the metadata service works today. Although rpc.messaging is a really nice framework (I know, I just ported heat to oslo.messaging!) it doesn't fit this problem well because of the security implications. Regards -steve ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Murano] New API methods for App Catalog UI
Hi, Murano is moving towards App Catalog functionality, and in order to support this new aspect in the UI we need to add new API methods to cover App Catalog operations. Currently the vision for the App Catalog API is the following: 1) All app create operations will be covered by the metadata repository API, which will eventually be a part of the Glance Artifacts functionality. New application creation will technically be the creation of a new artifact and uploading it to the metadata repository. The sharing and distribution aspects will be covered by the same artifact repository functionality. 2) App listing and App Catalog rendering will be covered by a new Murano API. The reason for that is to keep the UI thin and keep package representation aspects out of the general artifacts repository. The list of new API functions is available here: https://etherpad.openstack.org/p/MuranoAppCatalogAPI This is a first draft to cover minimal UI rendering requirements. Thanks, Georgy -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
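The split sketched in points 1) and 2) above — generic storage in the artifact repository, a thin Murano layer for catalog listing — could look roughly like this. The field names and artifact shape are hypothetical, not the actual Glance Artifacts schema or the API from the etherpad:

```python
def list_applications(artifacts, search=None):
    """Thin catalog layer: filter the generic artifact repository's
    output down to Murano packages and apply a UI search term.

    Creation, upload and sharing are deliberately *not* here -- they
    stay with the artifact repository itself, as described above.
    """
    apps = [a for a in artifacts if a.get("type") == "murano/package"]
    if search:
        needle = search.lower()
        apps = [a for a in apps if needle in a["name"].lower()]
    return sorted(apps, key=lambda a: a["name"])
```

The point of the design is visible even in this toy version: the catalog API only shapes and filters what the repository returns, so it stays thin enough to live as a plugin next to the artifact repository.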
[openstack-dev] [Heat][Murano][TOSCA] Murano team contrib. to Heat TOSCA activities
Hi, Thomas and Zane initiated a good discussion about the Murano DSL and the TOSCA initiatives in Heat. I think it will be beneficial for both teams to contribute to TOSCA, and Mirantis is working on the organizational part with OASIS. I would like to understand the current view on the TOSCA and HOT relations. It looks like TOSCA can cover all aspects: declarative components (HOT templates) and imperative workflows, which can be covered by Murano. What do you think about that? I think the TOSCA format can be used as a description of applications, and heat-translator can actually convert TOSCA descriptions to both HOT and Murano files which can then be used for actual application deployment. Both Heat and Murano workflows can coexist in the Orchestration program and cover both declarative template and imperative workflow use cases. -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat][Murano][TOSCA] Murano team contrib. to Heat TOSCA activities
Hi Randall, I saw only the original XML-based TOSCA 1.0 standard; I heard that there is a new YAML-based version but I have not seen it. The original TOSCA standard covered all aspects with TOSCA topology templates (Heat templates) and TOSCA Plans (workflows). I hope they will still use this approach. Mirantis is going to join OASIS and participate in TOSCA standard development too. As for Mistral, it will have its own format for defining tasks and flows. Murano can perfectly coexist with Mistral, as it provides application-specific workflows. It is possible to define different actions for an application, like Application.backup, and this action can be called by Mistral to run a backup on a schedule. Thanks, Georgy

On Mon, Mar 10, 2014 at 11:51 AM, Randall Burt randall.b...@rackspace.com wrote: On Mar 10, 2014, at 1:26 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, Thomas and Zane initiated a good discussion about the Murano DSL and the TOSCA initiatives in Heat. I think it will be beneficial for both teams to contribute to TOSCA. Wasn't TOSCA developing a simplified version in order to converge with HOT? While Mirantis is working on the organizational part for OASIS, I would like to understand the current view on the TOSCA and HOT relations. It looks like TOSCA can cover all aspects: declarative components (HOT templates) and imperative workflows, which can be covered by Murano. What do you think about that? Aren't workflows covered by Mistral? How would this be different than including Mistral support in Heat? I think the TOSCA format can be used as a description of applications, and heat-translator can actually convert TOSCA descriptions to both HOT and Murano files which can then be used for actual application deployment. Both Heat and Murano workflows can coexist in the Orchestration program and cover both declarative template and imperative workflow use cases.
-- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
cannot have Python code as a part of your DSL if you need to evaluate that DSL on the server side. Using Python's eval() is not secure, and you don't have enough control over what the evaluated code is allowed to do. MuranoPL, on the contrary, is fully sandboxed. You have absolute control over what functions/methods/APIs are exposed to the DSL, and DSL code can do nothing except what it is allowed to do. Besides, you typically do want your DSL to be domain-specific, so a general-purpose language like Python can be suboptimal. I don't say MuranoPL is good for all projects. It has many Murano-specific things after all. In most cases you don't need all those OOP features that MuranoPL has. But the code organization, how it uses YAML, the block structures and especially the YAQL expressions can be of great value to many projects. For examples of MuranoPL classes you can browse the https://github.com/istalker2/MuranoDsl/tree/master/meta folder. This is my private repository that I was using to develop a PoC for the MuranoPL engine. We are on the way to creating a production-quality implementation with unit tests etc. in the regular Murano repositories on stackforge.

On Sun, Mar 9, 2014 at 7:33 AM, Joshua Harlow harlo...@yahoo-inc.com wrote: To continue from the other thread: Personally I believe that YAQL-based DSLs similar to (but probably simpler than) MuranoPL can be of great value for many OpenStack projects that have DSLs of different kinds. Murano for the App Catalog, Mistral for workflows, Heat for HOT templates, Ceilometer for alarm counter aggregation rules, Nova for customizable resource scheduling and probably many more. Isn't Python a better host language for said DSLs than MuranoPL? I am still pretty wary of a DSL that grows so many features to encompass other DSLs, since then it's not really a DSL but a non-DSL, and we already have one that everyone is familiar with (Python).
Are there any good examples of MuranoPL that I can look at that take advantage of MuranoPL classes, functions and namespaces, and which can be compared to their Python equivalents? Has such an analysis/evaluation been done? Sent from my really tiny device... ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Sincerely yours, Stanislav (Stan) Lagun Senior Developer Mirantis 35b/3, Vorontsovskaya St. Moscow, Russia Skype: stanlagun www.mirantis.com sla...@mirantis.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
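The sandboxing argument in the thread above — eval() gives away too much, whereas a purpose-built evaluator exposes only what you whitelist — can be demonstrated with a small sketch. This is an illustration of the principle only, not MuranoPL's or YAQL's actual implementation:

```python
import ast
import operator

# Only these binary operations are allowed; everything else is rejected.
_ALLOWED_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
}


def safe_eval(expression, names):
    """Evaluate a tiny arithmetic expression over explicitly exposed
    names. Unlike eval(), nothing outside `names` and _ALLOWED_OPS is
    reachable -- no builtins, no imports, no attribute access."""
    def _ev(node):
        if isinstance(node, ast.Expression):
            return _ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return names[node.id]  # KeyError if the name is not exposed
        if isinstance(node, ast.BinOp) and type(node.op) in _ALLOWED_OPS:
            return _ALLOWED_OPS[type(node.op)](_ev(node.left), _ev(node.right))
        raise ValueError("operation not allowed: %s" % type(node).__name__)

    return _ev(ast.parse(expression, mode="eval"))
```

With this evaluator, `safe_eval("a + 2 * b", {"a": 1, "b": 3})` yields 7, while `safe_eval("__import__('os')", {})` is rejected, because a function call is not in the whitelist — which is exactly the "absolute control over what is exposed" property the thread attributes to a sandboxed DSL.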
Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
Hi, I think the notification mechanism proposed in Heat will work fine for integration with external workflows. The approach which uses workflows outside of the Heat engine sounds consistent with our current approach in Murano. I am looking into the new TOSCA YAML format, and I have also asked Mirantis management to consider joining OASIS. The decision is not made yet, but hopefully it will be made next week. We are eager to jump onto the TOSCA standard work and contribute the plan-related parts. Thanks, Georgy

On Wed, Mar 19, 2014 at 1:38 PM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote: Excerpts from Zane Bitter's message on 19/03/2014 18:18:34: From: Zane Bitter zbit...@redhat.com To: openstack-dev@lists.openstack.org Date: 19/03/2014 18:21 Subject: Re: [openstack-dev] [Murano][Heat] MuranoPL questions? On 19/03/14 05:00, Stan Lagun wrote: Steven, I agree with your opinion on HOT expansion. I see that inclusion of imperative workflows and ALM would require a major Heat redesign and would probably be impossible without losing compatibility with the previous HOT syntax. It would blur Heat's mission, confuse current users and raise a lot of questions about what should and what should not be in Heat. That's why we chose to build a system on top of Heat rather than extending HOT. +1, I agree (as we have discussed before) that it would be a mistake to shoehorn workflow stuff into Heat. I do think we should implement the hooks I mentioned at the start of this thread to allow tighter integration between Heat and a workflow engine (i.e. Mistral). +1 on not putting workflow stuff into Heat. Rather, let's come up with a nice way for Heat and a workflow service to work together. That could be done in two ways: (1) let Heat hand off to a workflow service for certain tasks, or (2) let people define workflow tasks that can easily work on Heat-deployed resources. Maybe both make sense, but right now I am leaning more towards (2). So building a system on top of Heat is good.
Building it on top of Mistral as well would also be good, and that was part of the feedback from the TC. To me, building on top means building on top of the languages (which users will have to invest a lot of work in learning) as well, rather than having a completely different language and only using the underlying implementation(s). That all sounds logical to me and would keep things clean, i.e. keep the HOT language clean by not mixing it with imperative expressions, and keep the Heat engine clean by not blowing it up to act as a workflow engine. When I think about the two aspects being brought up in this thread (declarative description of a desired state, and workflows), my thinking is that much (actually as much as possible) can be done declaratively the way Heat does it with HOT. Then for bigger lifecycle management there will be a need for additional workflows on top, because at some point it will be hard to express management logic declaratively in a topology model. Those additional flows on top will have to be aware of the instance created from a declarative template (i.e. a Heat stack), because they need to act on the respective resources to do something in addition. So when thinking about a domain-specific workflow language, it should be possible to define tasks (in a template-aware manner) like "on resource XYZ of the template, do something", or "update resource XYZ of the template with this state, then do this", etc. At runtime this would resolve to the actual resource instances created from the resource templates. Making such constructs available to workflow authors would make sense. Having a workflow service able to execute this via the right underlying APIs would be the execution part. I think from an instance API perspective, Heat already brings a lot for this with the stack model, so workflow tasks could be written to use the stack API to access instance information. Things like updates of resources are also already there.
BTW, we have a similar concept (or are working on a refinement of it based on latest discussions) in TOSCA and call it the plan portability API, i.e. an API that a declarative engine would expose so that fit-for-purpose workflow tasks can be defined on-top. Regards, Thomas ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Architect, OpenStack Platform Products, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
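The "template-aware task" idea described in the message above — a workflow step names a logical resource from the template and is resolved to the concrete instance at runtime — can be sketched with plain data structures. The stack and task shapes here are hypothetical stand-ins, not the actual Heat stack API or a Mistral task definition:

```python
def run_task(stack, task):
    """Resolve a task's logical resource name against the stack's
    resource map (much like Heat's stack resource listing does) and
    report what would be acted upon at runtime."""
    logical_name = task["resource"]
    try:
        resource = stack["resources"][logical_name]
    except KeyError:
        raise LookupError("template has no resource %r" % logical_name)
    return "%s on %s (physical id %s)" % (
        task["action"], logical_name, resource["physical_id"])
```

The task is written against the logical name from the template ("on resource XYZ, do something"), while the stack supplies the binding to the physical resource — which is exactly the separation between declarative topology and workflow-based management discussed in the thread.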
Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
to integrate more with HOT and eliminate all duplication between projects. I think that Murano and Heat are complementary products that can effectively coexist. Murano provides access to all HOT features and relies on Heat for most of its activities. I believe that we need to find an optimal way to integrate Heat, Murano, Mistral, Solum, Heater and TOSCA, do some integration between ex-Thermal and the Murano Dashboard, be united regarding Glance usage for metadata, and so on. +1 To me that implies that Murano should be a relatively thin wrapper that ties together HOT and Mistral's DSL. We are okay with throwing MuranoPL out if the issues it solves can be addressed by HOT. If you have a vision of how HOT can address the same domain MuranoPL does, or any plans for such features in upcoming Heat releases, I would ask you to share it. [1] https://wiki.openstack.org/wiki/Murano/DSL/Blueprint#Object_model
Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
Hi Thomas, I think we have gone into a second loop of the discussion about generic language concepts. Murano does not use a new language for the sole purpose of having parameters, constraints and polymorphism. These are generic concepts common to different languages, so continuing to argue about them is a holy war, like Python vs. C. It is like saying that we don't need Python because functions and parameters already exist in C, which is used under the hood in Python. Yes, the Murano DSL has some generic concepts similar to HOT. I think this is a benefit, as users will see familiar syntax constructions and the threshold to start using the Murano DSL will be lower. In a simplified view, Murano uses a DSL for application definitions to solve several particular problems: a) control UI rendering of the Application Catalog; b) control HOT template generation. These aspects are not covered in HOT and probably should not be. I don't like the idea of expressing HOT template generation in HOT itself, as that sounds like creating another Lisp-like language :-) I don't think that your statement that most of the people in the community are against a new DSL is an accurate summary. There are some disagreements about how it should look and what the goals are. You will probably be surprised, but we are not the first to use a DSL for HOT template generation. Here is an e-mail thread about a Ruby-based DSL used at IBM for the same purpose: http://lists.openstack.org/pipermail/openstack-dev/2014-February/026606.html The term "orchestration" is quite generic. Saying that orchestration should be Heat's job sounds like the well-known Henry Ford phrase: "You can have any colour as long as it's black." I think this again reflects a lack of understanding of the difference between the Orchestration program and the Heat project.
There are many aspects of orchestration, and OpenStack has the Orchestration program for projects which focus on some aspect of orchestration. Heat is one of the projects inside the Orchestration program, but that does not mean Heat should cover everything. That is why we discussed in this thread how workflow aspects should be aligned and how they should be placed into this Orchestration program. Thanks, Georgy On Mon, Mar 24, 2014 at 8:28 AM, Dmitry mey...@gmail.com wrote: MuranoPL is supposed to provide a solution for the real need to manage services in a centralized manner and to allow cloud provider customers to create their own services. An application catalog similar to AppDirect (www.appdirect.com), natively supported by OpenStack, is a huge step forward. Think about Amazon, which provides different services for different needs: Amazon CloudFormation, Amazon OpsWorks and Amazon Beanstalk. Following similar logic (which fully makes sense to me), OpenStack should provide resource reservation and orchestration (Heat and Climate), an Application Catalog (Murano) and PaaS (Solum). Every project can live in harmony with the others and contribute to the cloud service provider's service completeness. This is my opinion, and I would be happy to use Murano in our internal solution. On Mon, Mar 24, 2014 at 5:13 AM, Thomas Herve thomas.he...@enovance.com wrote: Hi Stan, Comments inline. Zane, I appreciate your explanations on Heat/HOT. This really makes sense. I didn't mean to say that MuranoPL is better for Heat. Actually HOT is good for Heat's mission; I completely acknowledge it. I've tried to avoid comparison between languages, and I'm sorry if it felt that way.
This is not productive, as I am not proposing to replace HOT with MuranoPL (although I believe that certain elements of MuranoPL syntax could be contributed to HOT and would be a valuable addition there). Also, people tend to protect what they have developed and invested in, and to be fair, this is what we did in this thread to a great extent. What I'm trying to achieve is that you and the rest of the Heat team understand why it was designed the way it is. I don't feel that Murano can become a full-fledged member of the OpenStack ecosystem without a blessing from the Heat team. And it would be even better if we agreed on a certain design, joined our efforts and contributed to each other for the sake of the Orchestration program. Note that I feel that most people outside of the Murano project are against the idea of using a DSL. You should be aware that it could block the integration into OpenStack. I'm sorry for the long mails written in not-so-good English, and I appreciate your patience in reading and answering them. Having said that, let me step backward and explain our design
Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
On Tue, Mar 25, 2014 at 3:32 AM, Thomas Herve thomas.he...@enovance.com wrote: Hi Thomas, I think we have gone into a second loop of the discussion about generic language concepts. Murano does not use a new language for the sole purpose of having parameters, constraints and polymorphism. These are generic concepts common to different languages, so continuing to argue about them is a holy war, like Python vs. C. It is like saying that we don't need Python because functions and parameters already exist in C, which is used under the hood in Python. Yes, the Murano DSL has some generic concepts similar to HOT. I think this is a benefit, as users will see familiar syntax constructions and the threshold to start using the Murano DSL will be lower. In a simplified view, Murano uses a DSL for application definitions to solve several particular problems: a) control UI rendering of the Application Catalog; b) control HOT template generation. These aspects are not covered in HOT and probably should not be. I don't like the idea of expressing HOT template generation in HOT itself, as that sounds like creating another Lisp-like language :-) I'm not saying that HOT will cover all your needs. I think it will cover a really good portion. And I'm saying that for the remaining part, you can use an existing language and not create a new one. As a user can't run arbitrary Python code in OpenStack, we used the Python language to create a new API for the remaining parts. This API service accepts a YAML-based description of what should be done. There is no intention to create a new generic programming language. We used the OpenStack approach and created a service for the specific functions around Application Catalog features. Due to the dynamic nature of applications, we had to add a bit of dynamism to the service input, for the same reason that Heat uses templates.
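The imperative template-generation idea described above can be sketched in a few lines: a service takes a small application description and programmatically emits a HOT-style structure, making decisions a purely declarative template could not make by itself. The input format below is invented for illustration; it is not the actual Murano object model.

```python
# Illustrative sketch of imperative HOT template generation. The
# application-description format is invented for this example; it is
# not the Murano object model or API.

def generate_hot(app):
    template = {"heat_template_version": "2013-05-23", "resources": {}}
    for name, spec in app["instances"].items():
        template["resources"][name] = {
            "type": "OS::Nova::Server",
            "properties": {"flavor": spec["flavor"], "image": spec["image"]},
        }
        # Imperative control: conditionally emit extra resources,
        # a decision a static declarative template cannot make.
        if spec.get("assign_floating_ip"):
            template["resources"][name + "_fip"] = {
                "type": "OS::Neutron::FloatingIP",
                "properties": {"floating_network": "public"},
            }
    return template

app = {"instances": {"web": {"flavor": "m1.small", "image": "ubuntu",
                             "assign_floating_ip": True}}}
hot = generate_hot(app)
print(sorted(hot["resources"]))  # ['web', 'web_fip']
```

The generated structure would then be serialized to YAML and handed to Heat, which performs the actual deployment.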
I don't think that your statement that most of the people in the community are against a new DSL is an accurate summary. There are some disagreements about how it should look and what the goals are. You will probably be surprised, but we are not the first to use a DSL for HOT template generation. Here is an e-mail thread about a Ruby-based DSL used at IBM for the same purpose: http://lists.openstack.org/pipermail/openstack-dev/2014-February/026606.html The term "orchestration" is quite generic. Saying that orchestration should be Heat's job sounds like the well-known Henry Ford phrase: "You can have any colour as long as it's black." That worked okay for him :). Not really. The world acknowledged his inventions and new approaches. Other manufacturers adopted his ideas and moved forward, providing more variety, while Ford stuck with his Model T, which was nevertheless very successful. History shows that variety won out over the single approach, and now we have different colors, shapes and engines :-) I think this is again a lack of understanding of the difference between the Orchestration program and the Heat project. There are many aspects of orchestration, and OpenStack has the Orchestration program for projects which focus on some aspect of orchestration. Heat is one of the projects inside the Orchestration program, but that does not mean Heat should cover everything. That is why we discussed in this thread how workflow aspects should be aligned and how they should be placed into this Orchestration program. Well, today Heat is the one and only project in the Orchestration program. If and when you have orchestration needs that are not covered, we are there to assess whether Heat is the best place to handle them. The answer will probably not be Heat forever, but we need good use cases to delegate those needs to another project. That is exactly the reason why we have these discussions :-) We have the use cases for new functionality, and we are trying to find a place for it.
-- Thomas
Re: [openstack-dev] [Murano][Heat] MuranoPL questions?
/HOT, Python, etc., but none of them was designed with ALM in mind as a goal. Solum [3] is specifically designed for ALM and purpose-built for OpenStack... It has declared that it will generate HOT templates and set up other services, including putting together or executing a supplied workflow definition (using Mistral if applicable). Like Murano, Solum is also not an OpenStack-incubated project, but it has been designed with community collaboration (based on shared pain across multiple contributors) with the ALM goal in mind from the very beginning. I think that HOT template generation is not a key feature of Solum; it is just an implementation detail. Other services like TripleO also generate Heat templates, but that does not mean it is their primary goal. ALM is a broad term. I think Solum has a different area of responsibility, covering the Application Development area, which is not covered in OpenStack yet. The high-level goals listed on solum.io are:
- Provisioning Speed
- CI/CD
- Git Push
- Integration with common IDEs (Eclipse, IntelliJ, etc.)
- Application lifecycle management (connected environments - Dev, Test, Prod)
Given that, I don't see a huge overlap with Murano functionality: even if Solum stated that a Heat template will be generated as part of the solution, it does not necessarily mean that Solum itself will do this. From what is listed on the Solum page, in the Solum sense ALM is the way an application built from source is promoted between different CI/CD environments: Dev, QA, Stage, Production. Solum can use other services to do this, keeping its own focus on the target area. Specifically for environments, Solum can use Murano environments, which for Murano are just a logical unit of multiple applications. Solum can add CI/CD-specific functionality on top while using the Murano API for environment management under the hood.
Again, this is a typical OpenStack approach: have different services integrated to achieve the larger goal, while keeping the services themselves very focused. -Keith [1] I regularly speak with DevOps, Application Specialists, and cloud customers, specifically about Orchestration and Heat. HOT is somewhat simple enough for the most technical of them (DevOps App Specialists) to grasp and have interest in adopting, but there is strong pushback from the folks I talk to about having to learn one more thing... Since Heat adopters are exactly the same people who have the knowledge to define the overall system capabilities, including component connectivity and how the UI should be rendered, I'd like to keep it simple for them. The more we can do to have the Orchestration service look/feel like one thing (even if it's Engine + Other things under the hood), or reuse other OpenStack core components (e.g. Glance), the better for adoption. [2] https://wiki.openstack.org/wiki/Heat/htr [3] http://solum.io
Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows
mechanism as a part of the Convection proposal, which would allow other OpenStack projects to benefit by using this. We would like to discuss whether such a design could become a part of a future Heat version, as well as other possible contributions from the Murano team. I am really happy that you want to get involved, and this sounds like it functionally matches quite well to the blueprints at the top. -Angus Regards, Stan Lagun -- Senior Developer Mirantis 35b/3, Vorontsovskaya St. Moscow, Russia Skype: stanlagun www.mirantis.com sla...@mirantis.com
Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows
for the dependence. 2. Support from the Heat engine / analyzer in supporting the runtime ordering, coordination between resources, and also the communication of the values. What are your thoughts?
Re: [openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack
Hi Clint, This is done in collaboration with Joshua Harlow and the taskflow team, who came up with the original proposal. We had a hangout session where we defined the project scope and its relationship with the taskflow library. We can't use the name Convection, as it is a trademark owned by Microsoft. We saw the naming problems with Quantum, so we decided to pick a name which is not associated with any IT company. Thanks, Georgy On Mon, Oct 14, 2013 at 1:04 PM, Clint Byrum cl...@fewbar.com wrote: Excerpts from Renat Akhmerov's message of 2013-10-14 12:40:28 -0700: Hi OpenStackers, I am proud to announce the official launch of the Mistral project. At Mirantis we have a team ready to start contributing to the project right away. We invite anybody interested in task service state management to join the initiative. Mistral is a new OpenStack service designed for task flow control, scheduling, and execution. The project will implement the Convection proposal (https://wiki.openstack.org/wiki/Convection) and provide an API and domain-specific language that enables users to manage tasks and their dependencies, and to define workflows, triggers, and events. The service will provide the ability to schedule tasks, as well as to define and manage external sources of events to act as task execution triggers. Why exactly aren't you just calling this Convection and/or collaborating with the developers who came up with it?
Re: [openstack-dev] [Heat] HOT Software configuration proposal
, or making things very difficult for new developers, or both :( cheers, Zane.
Re: [openstack-dev] [Heat] HOT Software configuration proposal
Hi Thomas, I agree with you on the semantics part. At the same time I see a potential question which might appear: if the semantics is limited to a few states visible to the Heat engine, then who actually does software orchestration? Would it be reasonable then to have software orchestration as a separate subproject of Heat, as part of the Orchestration OpenStack program? The Heat engine would then do dependency tracking and use components as a reference for a software orchestration engine, which would perform the actual deployment and high-level software component coordination. This separate software orchestration engine could address all the specific requirements proposed by different teams in this thread without affecting the existing Heat engine. Thanks, Georgy On Tue, Oct 22, 2013 at 12:06 PM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 22.10.2013 20:01:19: Hi, I would agree with Stan that we need to discuss definitions before going deep into the implementation. The first example on https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config shows components like install_mysql and install_wordpress. Good point. I also think that at least the examples currently used are more of an automation step than a component. IMO a component should represent some kind of software installation (e.g. a web server, a DBMS, etc.), where automation is used under the covers to install and configure that piece of software. From an orchestration point of view, a reasonable semantics would be that when a component is in state CREATE_COMPLETE it is ready to use, e.g. a web server is ready to serve applications.
With respect to the automation that was used to bring up the component, it would return successfully (and this would be signaled to Heat) when the component setup is done. For example, the following could represent an Apache web server component, installed using Chef:

components:
  apache:
    type: OS::Heat::SoftwareConfig_Chef
    cookbook: http://www.example.com/my_apache_cookbook.zip
    properties:
      http_port: 8080

'apache' is just a name here that indicates, of course, what you get. The type indicates that a component provider able to invoke Chef automation is used. The cookbook attribute points to the cookbook to use, which will install and configure Apache. By setting the http_port property to 8080, you provide input to the Chef cookbook. The SoftwareConfig_Chef component provider will implement the logic to pass properties to the Chef invocation in the right syntax. I would say that this is a bit confusing, because I expected to see component definitions instead of a software deployment process definition. I think this is quite a dangerous path, because this example shows that we can use components as installation-step definitions instead of real component definitions. If one continues with this approach and defines more and more granular steps as components, one ends up with a workflow definition composed in terms of components. This approach adds neither simplicity nor clarity to the HOT template. Thanks, Georgy On Tue, Oct 22, 2013 at 10:02 AM, Stan Lagun sla...@mirantis.com wrote: Hello, I've been reading through the thread and the wiki pages and I'm still confused by the terms. Is there a clear definition of what we understand by "component", from the user's and the developer's point of view? If I write component, type: MySQL, what is behind that definition? I mean, how does the system know what exactly MySQL is and how to install it? What MySQL version is it going to be? Will it be x86 or x64?
How does the system understand that I need MySQL for Windows on a Windows VM rather than Linux MySQL? What do I as a developer need to do so that it would be possible to have type: MyCoolComponentType? On Tue, Oct 22, 2013 at 8:35 PM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote: Zane Bitter zbit...@redhat.com wrote on 22.10.2013 17:23:52: On 22/10/13 16:35, Thomas Spatzier wrote: Zane Bitter zbit...@redhat.com wrote on 22.10.2013 15:24:28: On 22/10/13 09:15, Thomas Spatzier wrote: BTW, the convention of properties being input and attributes being output, i.e. that subtle distinction between properties and attributes
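Stan's question above — what actually stands behind "component, type: MySQL" — suggests that a component type is only meaningful together with explicit version and platform constraints. A minimal validation sketch follows; the field names are hypothetical and not part of the hot-software-config proposal.

```python
# Sketch: a component definition made unambiguous by requiring explicit
# version/platform/arch fields. All field names here are hypothetical,
# invented to illustrate the ambiguity raised in the thread.

REQUIRED = ("type", "version", "platform", "arch")

def validate_component(component):
    """Reject component specs that leave Stan's questions unanswered."""
    missing = [f for f in REQUIRED if f not in component]
    if missing:
        raise ValueError("underspecified component, missing: %s"
                         % ", ".join(missing))
    return component

mysql = validate_component({
    "type": "MySQL",
    "version": "5.5",
    "platform": "linux",     # vs. "windows" for a Windows VM
    "arch": "x86_64",        # vs. "x86"
    "automation": {"tool": "chef",
                   "cookbook": "http://www.example.com/mysql.zip"},
})
print(mysql["platform"])  # linux
```

A bare {"type": "MySQL"} would be rejected, forcing the template author to state which MySQL is meant.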
Re: [openstack-dev] [Trove] Replication and Clustering API
Hi, I don't see the replication type in the metadata replication contract. For example, someone can use a MySQL Galera cluster with synchronous replication plus asynchronous master-slave replication for backup to a remote site. MS SQL offers AlwaysOn availability group clustering with a pair of synchronously replicated nodes plus up to 3 nodes with asynchronous replication. There are also other existing mechanisms like data mirroring (synchronous or asynchronous) or log shipping. So my point is that when you say "replication", it is not obvious which type of replication is used. Thanks, Georgy On Tue, Oct 22, 2013 at 12:37 PM, Daniel Salinas imsplit...@gmail.com wrote: We have drawn up a new spec for the clustering API which removes the concept of a /clusters path as well as the need for the /clustertypes path. The spec lives here now: https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API Initially I'd like to get eyes on this and see if we can't generate some discussion. This proposal is far-reaching and will ultimately require a major versioning of the Trove API to support. It is an amalgam of ideas from Vipul, hub_cap and a few others, but we feel like this gets us much closer to having a more intuitive interface for users. Please peruse the document and let's start working through any issues. I would like to discuss the API proposal tomorrow during our weekly meeting, but I would welcome comments/concerns on the mailing list as well.
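The concern above — that "replication" alone is ambiguous — could be addressed by a contract that names the replication type on each link. The schema below is invented purely for illustration; it is not the Trove metadata format under discussion.

```python
# Sketch of a replication contract carrying an explicit replication
# type per link. The schema is hypothetical, invented to illustrate
# the ambiguity raised above; it is not the actual Trove contract.

VALID_TYPES = {"synchronous", "asynchronous", "semi-synchronous"}

def check_contract(contract):
    """Reject contracts whose links don't declare a known type."""
    for link in contract["links"]:
        if link["replication_type"] not in VALID_TYPES:
            raise ValueError("unknown replication type: %r"
                             % link["replication_type"])
    return contract

# Galera-style cluster: synchronous in-cluster replication plus an
# asynchronous master-slave link to a remote site for backup.
galera_with_dr = check_contract({
    "cluster": "mysql-galera",
    "links": [
        {"source": "node1", "target": "node2",
         "replication_type": "synchronous"},
        {"source": "node1", "target": "remote-dr",
         "replication_type": "asynchronous"},
    ],
})
print(len(galera_with_dr["links"]))  # 2
```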
Re: [openstack-dev] [Heat] HOT Software configuration proposal
Hi Clint, Thank you for the detailed analysis. "I'm not sure I know what software orchestration is, but I will take a stab at a succinct definition: Coordination of software configuration across multiple hosts." Given this definition of software orchestration, what will the Heat software orchestration component BP cover? I'm just trying to clarify for myself what Heat's position and view on component-based software orchestration is, and Heat's view on workflows. Right now it is not clear where the separation line between component and workflow lies. I think this blurred line introduced a lot of confusion in this thread, as some people had a workflow-based approach in mind and some had a component-based view. Thanks, Georgy On Tue, Oct 22, 2013 at 3:28 PM, Clint Byrum cl...@fewbar.com wrote: Excerpts from Georgy Okrokvertskhov's message of 2013-10-22 13:32:40 -0700: Hi Thomas, I agree with you on semantics part. At the same time I see a potential question which might appear - if semantics is limited by few states visible for Heat engine, then who actually does software orchestration? Will it be reasonable then to have software orchestration as separate subproject for Heat as a part of Orchestration OpenStack program? Heat engine will then do dependency tracking and will use components as a reference for software orchestration engine which will perform actual deployment and high level software components coordination. This separated software orchestration engine may address all specific requirements proposed by different teams in this thread without affecting existing Heat engine. I'm not sure I know what software orchestration is, but I will take a stab at a succinct definition: Coordination of software configuration across multiple hosts. If that is what you mean, then I believe what you actually want is workflow. And for that, we have the Mistral project which was recently announced [1].
Use that and you will simply need to define your desired workflow and feed it into Mistral using a Mistral Heat resource. We can create a nice bootstrapping resource for Heat instances that shims the Mistral workflow execution agent into machines (or lets us use one already there via custom images). I can imagine it working something like this:

resources:
  mistral_workflow_handle:
    type: OS::Mistral::WorkflowHandle
  web_server:
    type: OS::Nova::Server
    components:
      mistral_agent:
        component_type: mistral
        params:
          workflow_: {ref: mistral_workflow_handle}
  mysql_server:
    type: OS::Nova::Server
    components:
      mistral_agent:
        component_type: mistral
        params:
          workflow_handle: {ref: mistral_workflow_handle}
  mistral_workflow:
    type: OS::Mistral::Workflow
    properties:
      handle: {ref: mistral_workflow_handle}
      workflow_reference: mysql_webapp_workflow
      params:
        mysql_server: {ref: mysql_server}
        webserver: {ref: web_server}

And then the workflow is just defined outside of the Heat template (ok, I'm sure somebody will want to embed it, but I prefer stronger separation). Something like this gets uploaded as mysql_webapp_workflow:

[
  'step1': 'install_stuff',
  'step2': 'wait(step1)',
  'step3': 'allocate_sql_user(server=%mysql_server%)',
  'step4': 'credentials=wait_and_read(step3)',
  'step5': 'write_config_file(server=%webserver%)',
]

Or maybe it is declared as a graph, or whatever, but it is not Heat's problem how to do workflows; it just feeds the necessary data from orchestration into the workflow engine. This also means you can use a non-OpenStack workflow engine without any problems. I think after having talked about this, we should have workflow live in its own program... we can always combine them if we want to, but having a clear line would mean keeping the interfaces clean.
[1] http://lists.openstack.org/pipermail/openstack-dev/2013-October/016605.html
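Clint's sketch above notes that the workflow could instead be "declared as a graph". A minimal, engine-agnostic illustration of executing such a graph in dependency order follows (plain topological traversal; no Mistral or taskflow API is used here, and the task/dependency format is invented for the example).

```python
# Sketch: executing a workflow declared as a dependency graph, in
# topological order. Purely illustrative; no Mistral/taskflow API.

def run_workflow(tasks, deps):
    """tasks: name -> callable; deps: name -> list of prerequisites."""
    done, order = set(), []

    def visit(name, seen=()):
        if name in done:
            return
        if name in seen:
            raise ValueError("dependency cycle at %s" % name)
        for d in deps.get(name, []):
            visit(d, seen + (name,))
        tasks[name]()           # run only after all prerequisites
        done.add(name)
        order.append(name)

    for name in tasks:
        visit(name)
    return order

log = []
tasks = {
    "install_stuff": lambda: log.append("install"),
    "allocate_sql_user": lambda: log.append("sql_user"),
    "write_config_file": lambda: log.append("config"),
}
deps = {"allocate_sql_user": ["install_stuff"],
        "write_config_file": ["allocate_sql_user"]}
order = run_workflow(tasks, deps)
print(order)  # ['install_stuff', 'allocate_sql_user', 'write_config_file']
```

A real engine would add waiting on asynchronous results (the wait()/wait_and_read() steps in Clint's example), but the ordering problem is the same.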
Re: [openstack-dev] [Heat] HOT Software configuration proposal
Hi, "I guess then heat is mainly doing top-level orchestration, and then mistral does the workflow middle-level, and taskflow is (hopefully) at the lowest-level??" You drew the right picture. I cannot say who is top-level and who is low-level orchestration. These are all gear wheels that should work together well to achieve the result, while Heat is probably the driving wheel among them that makes sure everything is working. Thanks, Georgy On Tue, Oct 22, 2013 at 5:14 PM, Joshua Harlow harlo...@yahoo-inc.com wrote: Ah, seems like a reasonable approach then :-) I guess then Heat is mainly doing top-level orchestration, and then Mistral does the workflow middle level, and taskflow is (hopefully) at the lowest level?? Thanks Georgy! From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com Date: Tuesday, October 22, 2013 4:53 PM Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal Hi Joshua, "Sounds like taskflow could be that program (+1 from me, ha)? Mistral to me is a nice authenticated REST api + other goodies ontop of something that reliably executes workflows." I would say that Mistral is the way to do this. My arguments are the following: 1. Mistral decouples code. Heat can use API calls to invoke workflow execution instead of linking with the taskflow library in the code. This is the standard SOA approach, which OpenStack uses a lot. 2. Mistral will expose a DSL to define tasks, while taskflow will require Python code for task definitions. Mistral itself uses the taskflow library to execute workflows, but Mistral in addition does parsing and translation from the DSL task definition to actual Python code. Heat can use taskflow for other purposes, but workflow execution is not a good reason for that.
Because of the nature of deployment workflows, there is no knowledge about a workflow until the end user uploads it, so you cannot use taskflow by itself and code the workflow in Python without preliminary knowledge of it. If Heat used just taskflow, it would have to do all the work of workflow parsing and translation to code, which the Heat team wants to avoid. At least this is my understanding. Thanks, Georgy

On Tue, Oct 22, 2013 at 4:34 PM, Joshua Harlow harlo...@yahoo-inc.com wrote: Sounds like taskflow could be that program (+1 from me, ha)? Mistral to me is a nice authenticated REST api + other goodies ontop of something that reliably executes workflows. But then what I described is also the majority of what openstack does (authenticated REST api + other goodies ontop of VM/volume/network/... workflows).

On 10/22/13 3:28 PM, Clint Byrum cl...@fewbar.com wrote: Excerpts from Georgy Okrokvertskhov's message of 2013-10-22 13:32:40 -0700: Hi Thomas, I agree with you on the semantics part. At the same time I see a potential question which might appear: if semantics is limited to a few states visible to the Heat engine, then who actually does software orchestration? Would it be reasonable then to have software orchestration as a separate subproject of Heat, as part of the Orchestration OpenStack program? The Heat engine would then do dependency tracking and use components as a reference for a software orchestration engine which performs the actual deployment and high-level software component coordination. This separate software orchestration engine could address all the specific requirements proposed by different teams in this thread without affecting the existing Heat engine. I'm not sure I know what software orchestration is, but I will take a stab at a succinct definition: Coordination of software configuration across multiple hosts. If that is what you mean, then I believe what you actually want is workflow. And for that, we have the Mistral project which was recently announced [1].
Use that and you will simply need to define your desired workflow and feed it into Mistral using a Mistral Heat resource. We can create a nice bootstrapping resource for Heat instances that shims the mistral workflow execution agent into machines (or lets us use one already there via custom images). I can imagine it working something like this:

resources:
  mistral_workflow_handle:
    type: OS::Mistral::WorkflowHandle
  web_server:
    type: OS::Nova::Server
    components:
      mistral_agent:
        component_type: mistral
        params:
          workflow_: {ref: mistral_workflow_handle}
  mysql_server:
    type: OS::Nova::Server
    components:
      mistral_agent:
        component_type: mistral
        params:
          workflow_handle: {ref: mistral_workflow_handle}
  mistral_workflow:
    type: OS::Mistral::Workflow
    properties:
      handle: {ref: mistral_workflow_handle}
      workflow_reference: mysql_webapp_workflow
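Earlier in this thread, Georgy contrasts Mistral's DSL-defined tasks with taskflow's Python-coded tasks. The following self-contained Python sketch illustrates that distinction only; it is not the real Mistral DSL or taskflow API, and all names and structures are invented:

```python
# Illustrative sketch only -- not the real Mistral DSL or taskflow API.
# It contrasts a workflow coded directly in Python (the taskflow model,
# known at development time) with one that arrives as data and must be
# parsed and translated at runtime (the Mistral model).

# Taskflow-style: the workflow is Python code.
def install_db():
    return "db installed"

def install_app():
    return "app installed"

coded_workflow = [install_db, install_app]

# Mistral-style: the workflow is a user-supplied definition (e.g. parsed
# from YAML), so the service must translate it into executable steps.
dsl_definition = {
    "tasks": [
        {"name": "install_db", "action": "db installed"},
        {"name": "install_app", "action": "app installed"},
    ]
}

def run_coded(workflow):
    # Each step is already executable Python.
    return [step() for step in workflow]

def run_dsl(definition):
    # "Parsing and translation from DSL task definition to actual code":
    # interpret the data structure instead of calling pre-written functions.
    return [task["action"] for task in definition["tasks"]]

print(run_coded(coded_workflow))
print(run_dsl(dsl_definition))
```

The point of the sketch is the one Georgy makes: a service that accepts workflow definitions as data can execute workflows it has never seen before, while a pure library approach requires the workflow to exist as code beforehand.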
Re: [openstack-dev] [Heat] HOT Software configuration proposal
or private git repo (maybe subversion), handle situations where there is one cookbook per repo or multiple cookbooks per repo, let the user choose a particular branch or label, provide ssh keys if it's a private repo, and so forth. We support all of these scenarios and so we can provide more detailed requirements if needed. Correct me if I'm wrong though: all of those scenarios are just variations on standard inputs into chef. So the chef component really just has to allow a way to feed data to chef. I am not sure adding component relations like 'depends-on' would really help us, since it is the job of configuration management to handle software dependencies. Also, it doesn't address the issue of circular dependencies. Circular dependencies occur in complex software stack deployments. Example: when we set up a Slurm virtual cluster, both the head node and the compute nodes depend on one another to complete their configuration, and so they would wait for each other indefinitely if we were to rely on 'depends-on'. In addition, I think it's critical to distinguish between configuration parameters which are known ahead of time, like a db name or user name and password, versus contextualization parameters which are known after the fact, generally when the instance is created. Typically those contextualization parameters are IP addresses, but not only. The fact that packages x, y, z have been properly installed and services a, b, c successfully started is contextualization information (a.k.a. facts) which may indicate that other components can move on to the next setup stage. The form of contextualization you mention above can be handled by a slightly more capable wait condition mechanism than we have now. I've been suggesting that this is the interface that workflow systems should use. The case of complex deployments, with or without circular dependencies, is typically resolved by making the system converge toward the desirable end-state through running idempotent recipes.
This is our approach. The first configuration phase handles parametrization, which in general brings an instance to the CREATE_COMPLETE state. A second phase follows to handle contextualization at the stack level. As a matter of fact, a new contextualization should be triggered every time an instance enters or leaves the CREATE_COMPLETE state, which may happen at any time with auto-scaling. In that phase, circular dependencies can be resolved because all contextualization data can be compiled globally. Notice that Heat doesn't provide a purpose-built resource or service like Chef's data-bag for the storage and retrieval of metadata. This is a gap which IMO should be addressed in the proposal. Currently, we use a kludge that is to create a fake AWS::AutoScaling::LaunchConfiguration resource to store contextualization data in the metadata section of that resource. That is what we use in TripleO as well: http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/overcloud-source.yaml#n143 We are not doing any updating of that from within our servers though. That is an interesting further use of the capability. Aside from the HOT software configuration proposal(s), there are two critical enhancements to Heat that would make software life-cycle management much easier. In fact, they are actual blockers for us. The first one would be to support asynchronous notifications when an instance is created or deleted as a result of an auto-scaling decision. As stated earlier, contextualization needs to apply in a stack every time an instance enters or leaves the CREATE_COMPLETE state. I am not referring to a Ceilometer notification but a Heat notification that can be consumed by a Heat client. I think this fits into something that I want for optimizing os-collect-config as well (our in-instance Heat-aware agent). That is, a way for us to wait for notification of changes to Metadata without polling.
The second one would be to support a new type of AWS::IAM::User (perhaps OS::IAM::User) resource whereby one could pass Keystone credentials to be able to specify Ceilometer alarms based on application-specific metrics (a.k.a. KPIs). It would likely be OS::Keystone::User, and AFAIK this is on the list of de-AWS-ification things. I hope this is making sense to you and can serve as a basis for further discussions and refinements. Really great feedback Patrick, thanks again for sharing! -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
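The convergence-through-idempotent-recipes approach described earlier in this thread can be sketched in a few lines of Python. Everything here (node names, facts, addresses) is invented for illustration; the point is only that repeated idempotent passes let two mutually dependent nodes finish configuring without any fixed ordering:

```python
# Hypothetical sketch: circular dependencies are resolved not by
# ordering the nodes, but by repeatedly running idempotent steps until
# every node reaches its desired end-state. Each node publishes a fact
# and waits for the peer's fact to complete its own configuration.

facts = {}          # globally compiled contextualization data
state = {"head": "pending", "compute": "pending"}

def converge_head():
    facts.setdefault("head_ip", "10.0.0.1")   # publish own fact (idempotent)
    if "compute_ip" in facts:                 # needs the peer's fact
        state["head"] = "configured"

def converge_compute():
    facts.setdefault("compute_ip", "10.0.0.2")
    if "head_ip" in facts:
        state["compute"] = "configured"

# Each pass is idempotent, so re-running it is always safe; the system
# converges after a couple of passes instead of deadlocking on
# 'depends-on'.
for _ in range(10):
    converge_head()
    converge_compute()
    if all(s == "configured" for s in state.values()):
        break

print(state)
```

With a strict 'depends-on' ordering the two nodes would wait on each other forever; here the first pass publishes both facts and the second pass completes both configurations.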
[openstack-dev] [Solum] Integration with Murano proposal
Hi, I am really excited to see the Solum announcement. This is a fantastic idea, to create a developer-friendly environment for application creation on OpenStack. I believe that this developer-friendly environment will attract a lot of developers who want to write software for the OpenStack platform. I think Solum will bring PaaS features to OpenStack and convert OpenStack from a pure IaaS into a more complete platform. I represent the Murano team, which works on middle-level orchestration for application installation. Murano initially started as Windows services automation, but we recently defined a broader roadmap for the Murano service. Here is our view on the Murano roadmap: https://wiki.openstack.org/wiki/Murano/ApplicationServiceCatalog. Our idea is to bring existing 3rd-party applications and services like Microsoft AD and MS SharePoint to the OpenStack platform by providing an integration layer for both software creators and software users. We want to provide a publishing mechanism for service creators and a self-service catalog for end users. I see the Murano service as complementary to Solum. While you are providing a framework to create a new application, this application may still have dependencies on existing software components which can be provisioned by Murano. It will be beneficial for Solum to be able to request any 3rd-party software listed in the Murano catalog and provision it in the application environment. The application developer can then focus on actual development without spending time solving the problems of 3rd-party component deployment. We already have a working service for OpenStack which allows deploying complex applications over multiple VMs in order to prepare an environment for some specific application. We provide a simple UI which lets you configure application environments easily by adding software services like Active Directory, an MS SQL cluster, or an IIS server farm.
I, and potentially some other Murano team members, would like to participate in Solum design and development. Do you have design sessions scheduled? What would be the best way to discuss integration between these services? -- Georgy Okrokvertskhov Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] Comments on Steve Baker's Proposal on HOT Software Config
Hi Lakshminarayanan, I believe the extensions you proposed will improve the usability of HOT software components. In general I have only one concern, related to component naming. In your examples you have software components like install_mysql (you got it from Steve's example) and configure_app. I would say that these are not software components, but actions on software components. This is a small deviation from the declarative approach: in general, declarative objects are more or less independent, while actions naturally have some ordering. For example, your configure_app should not be executed before install_app, and technically it is the same component at a different stage. I don't know what is inside the puppet manifest in configure_app, but it might contain the app installation phase or might not. From my perspective it would be better to add a concept of actions for components, with some predefined actions like pre_install, install, post_install and possibly some custom actions. This is a clearer approach than declaring an action as a software component, and it is quite typical in PaaS solutions. Thanks, Georgy

On Mon, Oct 28, 2013 at 8:08 AM, Randall Burt randall.b...@rackspace.com wrote: On Oct 28, 2013, at 9:49 AM, Steven Hardy sha...@redhat.com wrote: On Mon, Oct 28, 2013 at 02:33:40PM +0000, Randall Burt wrote: On Oct 28, 2013, at 8:53 AM, Steven Hardy sha...@redhat.com wrote: On Sun, Oct 27, 2013 at 11:23:20PM -0400, Lakshminaraya Renganarayana wrote: A few of us at IBM studied Steve Baker's proposal on HOT Software Configuration. Overall the proposed constructs and syntax are great -- we really like the clean syntax and concise specification of components. We would like to propose a few minor extensions that help with better expression of dependencies among components and resources, and in turn enable cross-vm coordination.
We have captured our thoughts on this on the following Wiki page https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-ibm-response Thanks for putting this together. I'll post inline below with cut/paste from the wiki followed by my response/question: E2: Allow usage of component outputs (similar to resources): There are fundamental differences between components and resources... So... lately I've been thinking this is not actually true, and that components are really just another type of resource. If we can implement the software-config functionality without inventing a new template abstraction, IMO a lot of the issues described in your wiki page no longer exist. Can anyone provide me with a clear argument for what the fundamental differences actually are? My opinion is we could do the following: - Implement software config components as ordinary resources, using the existing interfaces (perhaps with some enhancements to dependency declaration) - Give OS::Nova::Server a components property, which simply takes a list of resources which describe the software configuration(s) to be applied I see the appeal here, but I'm leaning toward having the components define the resources they apply to rather than extending the interfaces of every compute-related resource we have or may have in the future. True, this may make things trickier in some respects with regard to bootstrapping the compute resource, but then again, don't most configuration management systems work on active compute instances? What "every", though? Don't we have exactly one compute resource, OS::Nova::Server? (I'm assuming this functionality won't be available via the AWS-compatible Instance resource) Yes, I suppose it wouldn't do to go extending the AWS compatibility interface with this functionality, so I withdraw my concern.
Steve -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
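Georgy's suggestion in this thread -- predefined lifecycle actions attached to a single component, rather than one pseudo-component per action -- might look something like the following. This is a purely hypothetical HOT-like sketch, not proposed or accepted syntax; the component name and recipe references are invented:

```yaml
# Hypothetical sketch only -- not accepted HOT syntax. One declarative
# component carries named lifecycle actions, instead of modelling
# install_mysql / configure_app as separate "components".
components:
  app:
    type: chef
    actions:
      pre_install: recipe[app::dependencies]   # invented recipe names
      install: recipe[app::install]
      post_install: recipe[app::configure]
```

Under this shape the ordering (pre_install before install before post_install) is implied by the action names, so the template stays declarative while the sequencing lives in the component's lifecycle contract.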
Re: [openstack-dev] [Heat] Comments on Steve Baker's Proposal on HOT Software Config
A component is implemented by a bit of user code (and/or other sorts of instructions) embedded in or referenced by a template, with no fixed API and not invoked with Keystone credentials. We desire the heat engine to invoke operations on resources; we do not desire the heat engine to invoke components (the VMs do that themselves, via whatever bootstrapping mechanism is used). I believe we had a discussion about the difference between the declarative approach and workflows. A component approach is consistent with the declarative format, as all actions/operations are hidden inside the service. If you want to use actions and operations explicitly, you will have to add a workflow-specific language to the HOT format. You will need some conditions and other control structures. I also want to highlight that in most of the examples on the wiki pages there are actions instead of components. Just check the names: install_mysql, configure_app. I think you revealed the major difference between resource and component. While the first has a fixed API and Heat already knows how to work with it, components are not determined and Heat does not know what a component actually does. I remember the first draft for software components had a specific example of yum invocation for package installation. That is a good example of a declarative component. When scripts and recipes appeared, the component definition became blurred. Thanks, Georgy

On Mon, Oct 28, 2013 at 1:48 PM, Mike Spreitzer mspre...@us.ibm.com wrote: Steve Baker sba...@redhat.com wrote on 10/28/2013 04:24:30 PM: On 10/29/2013 02:53 AM, Steven Hardy wrote: ... Can anyone provide me with a clear argument for what the fundamental differences actually are? ... Since writing those proposals my thinking has evolved too. I'm currently thinking it would be best to implement software configuration resources rather than create a new component construct. Please pardon the newbie question, but I do not understand.
A resource type is implemented in OpenStack code --- a part of Heat that calls a fixed service API that expects Keystone credentials. A component is implemented by a bit of user code (and/or other sorts of instructions) embedded in or referenced by a template, with no fixed API and not invoked with Keystone credentials. We desire the heat engine to invoke operations on resources; we do not desire the heat engine to invoke components (the VMs do that themselves, via whatever bootstrapping mechanism is used). So yes, I do see fundamental differences. What am I missing? Thanks, Mike -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [TripleO][Nova][neutron][Heat][Oslo][Ceilometer][Havana]Single Subscription Point for event notification
Hi, I am not sure that there is an existing service or API for event subscription. Event management was listed as part of the Mistral project, which is not implemented yet though. If I am not mistaken, Mistral will allow you to refer to events and alarms coming from different sources and trigger some action/task execution, including calling an external hook. Scheduling is also covered by the Mistral proposal, but that is just an example of timer-based events. Thanks, Georgy

On Mon, Oct 28, 2013 at 3:30 PM, Qing He qing...@radisys.com wrote: All, I found multiple places/components where you can get event alarms, e.g., Heat, Ceilometer, Oslo, Nova notifications. But I fail to find any documents as to how to do it in the respective component documentation. I'm wondering whether there is a single API entry point where you can subscribe to and get event notifications from all components, such as Nova and Neutron. Thanks, Qing -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] Comments on Steve Baker's Proposal on HOT Software Config
Hi Steve, I am sorry for my confusing message. Just for clarification, I am against adding new abstractions to the HOT template. I just wanted to highlight that in Lakshminarayana's proposal there are multiple steps which represent the same component at different stages. This might be confusing, because in your initial proposal you use one component section for the whole component description, if I am not mistaken. Thanks, Georgy

On Tue, Oct 29, 2013 at 12:23 AM, Steven Hardy sha...@redhat.com wrote: On Mon, Oct 28, 2013 at 02:34:44PM -0700, Georgy Okrokvertskhov wrote: I believe we had a discussion about the difference between the declarative approach and workflows. A component approach is consistent with the declarative format, as all actions/operations are hidden inside the service. If you want to use actions and operations explicitly you will have to add a workflow-specific language to the HOT format. You will need some conditions and other control structures. Please don't confuse the component/resource discussion further by adding all these unrelated terms into the mix: - Resources are declarative, components aren't in any way more declarative - The resource/component discussion is unrelated to workflows, we're discussing the template level interfaces. - Adding imperative control-flow interfaces to the template is the opposite of a declarative approach. I also want to highlight that in most of the examples on the wiki pages there are actions instead of components. Just check the names: install_mysql, configure_app. Having descriptions of the actions required to configure an application is not declarative. Having a resource define the properties of the application is. I think you revealed the major difference between resource and component.
While the first has a fixed API and Heat already knows how to work with it, A resource doesn't have a fixed API as such - it has flexible, user-definable interfaces (inputs/properties and outputs/attributes). components are not determined and Heat does not know what this component actually does. Heat doesn't need to know what a resource or component actually does; it needs to know what to do with the inputs/properties, and how to obtain the outputs/attributes. I remember the first draft for software components had specific examples of yum invocation for package installation. This is a good example of a declarative component. When scripts and recipes appeared, the component definition became blurred. This makes no sense; scripts defining platform-specific installation methods are the exact opposite of a declarative component. The blurred component definition you refer to is a very good reason not to add a new abstraction IMO - we should focus on adding the functionality via the existing, well understood interfaces. Steve -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] Locking and ZooKeeper - a space oddysey
Hi Clint, I think you raised a good point here. We implemented a distributed engine in Murano without a locking mechanism, by keeping state consistent on each step. We extracted this engine from Murano and plan to contribute it to the Mistral project for task management and execution. A working Mistral implementation will appear during Icehouse development. We are working closely with the taskflow team, so I think you can expect distributed task execution support in the taskflow library natively or through Mistral. I am not against ZooKeeper, but I think that for an OpenStack service it is better to use an oslo library shared with other projects instead of adding a custom locking mechanism for one project. Thanks, Georgy

On Wed, Oct 30, 2013 at 10:42 AM, Clint Byrum cl...@fewbar.com wrote: So, recently we've had quite a long thread in gerrit regarding locking in Heat: https://review.openstack.org/#/c/49440/ In the patch, there are two distributed lock drivers. One uses SQL, and suffers from all the problems you might imagine a SQL based locking system would. It is extremely hard to detect dead lock holders, so we end up with really long timeouts. The other is ZooKeeper. I'm on record as saying we're not using ZooKeeper. It is a little embarrassing to have taken such a position without really thinking things through. The main reason I feel this way though, is not because ZooKeeper wouldn't work for locking, but because I think locking is a mistake. The current multi-engine paradigm has a race condition. If you have a stack action going on, the state is held in the engine itself, and not in the database, so if another engine starts working on another action, they will conflict. The locking paradigm is meant to prevent this. But I think this is a huge mistake. The engine should store _all_ of its state in a distributed data store of some kind. Any engine should be aware of what is already happening with the stack from this state and act accordingly.
That includes the engine currently working on actions. When viewed through this lens, to me, locking is a poor excuse for serializing the state of the engine scheduler. It feels like TaskFlow is the answer, with an eye for making sure TaskFlow can be made to work with distributed state. I am not well versed on TaskFlow's details though, so I may be wrong. It worries me that TaskFlow has existed a while and doesn't seem to be solving real problems, but maybe I'm wrong and it is actually in use already. Anyway, as a band-aid, we may _have_ to do locking. For that, ZooKeeper has some real advantages over using the database. But there is hesitance because it is not widely supported in OpenStack. What say you, OpenStack community? Should we keep ZooKeeper out of our.. zoo? -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
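The alternative to locking discussed in this thread -- keep all engine state in a shared store and let any engine detect conflicts from that state -- is often implemented as an optimistic compare-and-swap on a version number. A minimal Python sketch of the idea (the in-memory dict stands in for a real database or distributed store; this is not Heat's actual implementation):

```python
# Minimal optimistic-concurrency sketch: instead of holding a lock, a
# writer succeeds only if nobody else has changed the row since it was
# read, so a conflicting engine fails fast rather than waiting on (or
# orphaning) a lock held by a dead process.

store = {"stack-1": {"version": 1, "status": "CREATE_COMPLETE"}}

def update_if_unchanged(stack_id, expected_version, new_status):
    """Apply an update only if the row is still at the version we read."""
    row = store[stack_id]
    if row["version"] != expected_version:
        return False                      # another engine acted first
    row["status"] = new_status
    row["version"] += 1
    return True

# Two engines both read version 1, then both try to act on the stack.
first = update_if_unchanged("stack-1", 1, "UPDATE_IN_PROGRESS")
second = update_if_unchanged("stack-1", 1, "DELETE_IN_PROGRESS")
print(first, second)   # the second writer is rejected
```

In a real database the read-compare-write step would be a single atomic statement (e.g. an UPDATE with a WHERE clause on the version column); the sketch only shows the control flow.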
[openstack-dev] Tuesday Lightning talks @ Expo Breakout Room 1
Dear presenters, All lightning talks will be in Expo Breakout Room 1. Please arrive before 1:20 and test your laptops if you have slides to present. There is only a VGA connector available, so please bring your adapters!!! Thanks, Georgy ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Solum] Gated Source Code Flow (was: Weekly Team Meeting)
Hi Adrian, It looks like the final stage in all the pictures is a Deploy stage. What kind of process do you have in mind for CI/CD? When you use a gate system it is typical to have multiple gates. The usual ones are: code review/approval, smoke/unit tests pass, integration tests pass, performance/scalability tests pass, accepted for production. Each gate might be a quite complex process for a large application, including multiple deployments to different staging environments. It is also typical to have one build and then promote it between different stages. Will the Solum API support flexible CI/CD flows where the user can define specific stages, gates, and actions for each of them? Thanks, Georgy

On Wed, Nov 13, 2013 at 12:27 PM, Adrian Otto adrian.o...@rackspace.com wrote: Clayton, On Nov 13, 2013, at 11:41 AM, Clayton Coleman ccole...@redhat.com wrote: - Original Message - Hello, Solum meets Tuesdays at 1600 UTC in #openstack-meeting-alt (formerly in #solum) Note: Due to the Nov 3rd change in Daylight Savings Time, this now happens at 08:00 US/Pacific (starts in about 45 minutes from now) Agenda: https://wiki.openstack.org/wiki/Meetings/Solum In the meeting yesterday there was a mention of a gated source code flow (where a push might go to an external system, and the gate system github/gerrit/etc would control when the commit goes back to the primary repository). I've added that flow to https://wiki.openstack.org/wiki/File:Solum_r01_flow.jpeg as well as a mention of the DNS abstraction (a deployed assembly may or may not have an assigned DNS identity). Are the two source change notification abstraction flows really different? Could we express this with two lines converging on Notify Solum API … in a single flow with two similar entrances. One key difference that I noticed between those two proposed flows is that the gate type uses the Solum API to test code, and the push one does not.
Perhaps both should run unit tests in the same way with an option to bypass steps for those who don't want them? Adrian -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API
Hi, It would be great if the API specs contained a list of attributes/parameters one can pass during group creation. I believe Zane already asked about LaunchConfig, but I think the new autoscaling API was specifically designed to move from the limited AWS ElasticLB to something with broader features. There is a BP I submitted a while ago: https://blueprints.launchpad.net/heat/+spec/autoscaling-instancse-typization. We discussed it in IRC with the Heat team and came to the conclusion that this will be supported in the new autoscaling API. Probably it is already supported, but it is quite hard to figure this out from the existing API specs without examples. Thanks, Georgy

On Thu, Nov 14, 2013 at 9:56 AM, Zane Bitter zbit...@redhat.com wrote: On 14/11/13 17:19, Christopher Armstrong wrote: http://docs.heatautoscale.apiary.io/ I've thrown together a rough sketch of the proposed API for autoscaling. It's written in API-Blueprint format (which is a simple subset of Markdown) and provides schemas for inputs and outputs using JSON-Schema. The source document is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp Things we still need to figure out: - how to scope projects/domains. put them in the URL? get them from the token? - how webhooks are done (though this shouldn't affect the API too much; they're basically just opaque) My 2c: the way I designed the Heat API was such that extant stacks can be addressed uniquely by name. Humans are pretty good with names, not so much with 128 bit numbers. The consequences of this for the design were: - names must be unique per-tenant - the tenant-id appears in the endpoint URL However, the rest of OpenStack seems to have gone in a direction where the name is really just a comment field, everything is addressed only by UUID. A consequence of this is that it renders the tenant-id in the URL pointless, so many projects are removing it. Unfortunately, one result is that if you create a resource and e.g.
miss the Created response for any reason and thus do not have the UUID, there is now no safe, general automated way to delete it again. (There are obviously heuristics you could try.) To solve this problem, there is a proposal floating about for clients to provide another unique ID when making the request, which would render a retry of the request idempotent. That's insufficient, though, because if you decide to roll back instead of retry you still need a way to delete using only this ID. So basically, that design sucks for both humans (who have to remember UUIDs instead of names) and machines (Heat). However, it appears that I am in a minority of one on this point, so take it with a grain of salt. Please read and comment :) A few comments... #1 thing is that the launch configuration needs to be somehow represented. In general we want the launch configuration to be a provider template, but we'll want to create a shortcut for the obvious case of just scaling servers. Maybe we pass a provider template (or URL) as well as parameters, and the former is optional. Successful creates should return 201 Created, not 200 OK. Responses from creates should include the UUID as well as the URI. (Getting into minor details here.) Policies are scoped within groups, so do they need a unique id or would a name do? I'm not sure I understand the webhooks part... webhook-exec is the thing that e.g. Ceilometer will use to signal an alarm, right? Why is it not called something like /groups/{group_id}/policies/{policy_id}/alarm ? (Maybe because it requires different auth middleware? Or does it?) And the other ones are setting up the notification actions? Can we call them notifications instead of webhooks? (After all, in the future we will probably want to add Marconi support, and maybe even Mistral support.) And why are these attached to the policy? Isn't the notification connected to changes in the group, rather than anything specific to the policy? Am I misunderstanding how this works? 
What is the difference between 'uri' and 'capability_uri'? You need to define PUT/PATCH methods for most of these also, obviously (I assume you just want to get this part nailed down first). cheers, Zane. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Solum/Heat] Is Solum really necessary?
Hi, I think that Heat is mostly focused on deployment, even with the new software configs and convergence. A HOT template is a quite static description of the desired state we want to achieve, and it is up to the Heat engine how to achieve this state. Solum is focused on managing the process of converting source code into some deployable entity (an image or a container). The power of Solum is the ability to fully describe and control the process of building and testing an application. Some stages of the build and testing process might require actual deployment and stack creation, but this is not the ultimate goal of Solum. If someone tries to use just Heat to describe the build process, they will quickly figure out that they need different templates for different build/testing stages. As Heat itself can't modify templates, you will need some external mechanism for template creation, and this is what Solum actually does. Thanks Georgy

On Thu, Nov 14, 2013 at 11:08 AM, Christopher Armstrong chris.armstr...@rackspace.com wrote: On Thu, Nov 14, 2013 at 11:04 AM, Sam Alba sam.a...@gmail.com wrote: Hi Jay, I think Heat is an ingredient for Solum. When you build a PaaS, you need to control the app at different levels: #1 describing your app (basically your stack) #2 pushing your code #3 deploying it #4 controlling the runtime (restart, get logs, scale, changing resource allocation, etc.) I think Heat is a major component for step 3. But I think Heat's job ends at the end of the deployment (the status of the stack is COMPLETED in Heat after processing the template correctly). It's nice, though, to rely on Heat's template generation for describing the stack; it's one more thing to delegate to Heat. In other words, I see Heat as an engine for deployment (at least in the context of Solum) and have something on top to manage the other steps. I'd say that Heat does (or should do) more than just the initial deployment -- especially with recent discussion around healing / convergence.
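For context, the "static description of desired state" refers to HOT templates along these lines (a minimal illustrative sketch, not taken from the thread; the image name is a placeholder):

```yaml
heat_template_version: 2013-05-23
description: Minimal illustrative HOT template (sketch, not from the thread)
parameters:
  flavor:
    type: string
    default: m1.small
resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: app-base-image   # placeholder image name
      flavor: { get_param: flavor }
outputs:
  server_ip:
    value: { get_attr: [app_server, first_address] }
```

The template says only *what* should exist; producing a different template for each build/testing stage is the part Heat leaves to an external tool such as Solum.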
-- IRC: radix Christopher Armstrong Rackspace -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [Solum/Heat] Is Solum really necessary?
Hi,

1) Find all of the applications using PHP 5.4. in their stack and update them to PHP 5.4.+1 a) test that the application still works using its built-in test suites b) if the PHP 5.4.+1 upgrade fails, go back to using PHP 5.4. for only the affected applications

Actually, I think Heat can do this. Software components in HOT templates can use Chef, Puppet, or SaltStack, so as long as these tools can do it, Heat is capable of it as well. As soon as you can use a stack update and supply a new software component with all the necessary automation scripts, you can upgrade the whole stack. That fits Heat perfectly, and I believe Solum is intended to use this Heat feature for upgrades and application roll-out.

2) Allow developers to deploy versions of a heat stack for testing, and then allow a release engineer to easily convert that heat stack to a production version a) Do that 3 times a day for 100 apps b) run the integration test suites on the stack to verify that the production version is not bugged

That is a good example of Solum usage. I think it was mentioned in yesterday's discussion in the Solum IRC chat that there will probably be a concept of promotion. So you can promote an image to different stages and environments, and Solum will have an API to describe these flows. That is where Solum adds huge value, as it introduces concepts which are absent in Heat. The ideas of code, build, test and gates are common in the developer world of CI/CD rather than in the DevOps world, where Heat plays a great role.

HEAT/HOT is orchestration of components - should it attempt to define the *when* and *why* of when stack changes occur? Solum I see as providing a basis for the *when* and *why*, and relying on HEAT for the *how*.

I am not sure that Heat should add why and when into the syntax. I think it would overcomplicate the HOT syntax. Currently there are dependencies and wait conditions available in HOT, and this should be enough to describe deployments.
Thanks Georgy On Thu, Nov 14, 2013 at 11:46 AM, Clayton Coleman ccole...@redhat.comwrote: - Original Message - So while I have been on vacation, I've been thinking about Solum and Heat. And I have some lingering questions in my mind that make me question whether a new server project is actually necessary at all, and whether we really should just be targeting innovation and resources towards the Heat project. What exactly is Solum's API going to control that is not already represented in Heat's API and the HOT templating language? At this point, I'm really not sure, and I'm hoping that we can discuss this important topic before going any further with Solum. Right now, I see so much overlap that I'm questioning where the differences really are. Thoughts? -jay A few interesting scenarios that I assume heat would not cover, for discussion (trying to keep in mind what Georgy and others have said) 1) Find all of the applications using PHP 5.4. in their stack and update them to PHP 5.4.+1 a) test that the application still works using its built in test suites b) if the PHP 5.4.+1 upgrade fails, go back to using PHP 5.4. for only the affected applications 2) Allow developers to deploy versions of a heat stack for testing, and then allow a release engineer to easily convert that heat stack to a production version a) Do that 3 times a day for 100 apps b) run the integration test suites on the stack to verify that the production version is not bugged 3) Generate a deployable glance image automatically and a new heat template when a developer pushes a change to a source repository a) Do that for 10k developers pushing changes 10x a day d) Keep 3 glance images referenced in 90% of stacks, 10 glance images referenced in 9% of stacks, and all glance images referenced by 1% of stacks A lot of Solum's precepts are based on the observation that there are patterns in application development and lifecycle that work well for 90% of developers 90% of the time. 
It is certainly possible to build custom tooling around OpenStack that handles each of these scenarios... but each of those is slightly different in ways that are typically historical rather than technological. HEAT/HOT is orchestration of components - should it attempt to define the *when* and *why* of when stack changes occur? Solum I see as providing a basis for the *when* and *why*, and relying on HEAT for the *how*. -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [Heat] Continue discussing multi-region orchestration
that is not right, please feel free to tell me about it. References: [1] Blueprint: https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat [2] Etherpad: https://etherpad.openstack.org/p/icehouse-summit-heat-multi-region-cloud [3] Patch with POC version: https://review.openstack.org/#/c/53313/ Best, Bartosz Górski NTTi3 -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [Heat] Continue discussing multi-region orchestration
On Fri, Nov 15, 2013 at 10:44 AM, Stephen Gran stephen.g...@theguardian.com wrote: Surely those are local deployment policy decisions that shouldn't affect the development of capabilities in heat itself, right? If a deployer does not want one heat deployment to be able to reach some endpoints, they'll set up a local heat that can reach those endpoints and deploy their stack through that one, right?

You are right. At the same time, a Heat feature should not impose specific deployment requirements on other OpenStack components, especially taking into account different security considerations. I am trying to raise a concern about the possible security implications of that particular approach of using exposed OpenStack APIs, and to bring security requirements to the table for discussion. This will probably help us design a better solution for Heat-to-Heat or DC-to-DC communication, if one exists. I hope that there is room for discussion and that it is possible to influence the final design and implementation. I really want this feature to be flexible and useful for most OpenStack deployments rather than for some specific deployment case.

-- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [Heat] Continue discussing multi-region orchestration
will be specified. If only the default context is used, Heat will create all resources in the same region where it is located.

So, to be clear, this is option (4) from the diagram I put together here: https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat/The_Missing_Diagram

It's got a couple of major problems:
* When a whole region goes down, you can lose access to the Heat instance that was managing still-available resources. This makes it more or less impossible to use Heat to manage a highly-available global application.
* Instances have to communicate back to the Heat instance that owns them (e.g. for WaitConditions), and it's not yet clear that this is feasible in general.

There are also a number of other things I really don't like about this solution (listed on the wiki page), though reasonable people may disagree. cheers, Zane. -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [Solum] Working group on language packs
Hi, Just for clarification on Windows images: I think Windows image creation is closer to the Docker approach. In order to create a special Windows image, we use a KVM/QEMU VM with an initial base image, install all necessary components, configure them, then run the special tool sysprep to remove all machine-specific information like passwords and SIDs, and then create a snapshot. I got the impression that Docker does the same: it installs an application on a running VM and then creates a snapshot. It looks like this can be done using Heat + HOT software orchestration/deployment tools without any additional services. This solution scales very well, as all configuration steps are executed inside a VM. Thanks Georgy

On Sat, Nov 23, 2013 at 4:30 PM, Clayton Coleman ccole...@redhat.com wrote: On Nov 23, 2013, at 6:48 PM, Robert Collins robe...@robertcollins.net wrote: Ok, so no - diskimage-builder builds regular OpenStack full disk images. Translating that to a filesystem is easy; doing a diff against another filesystem version is also doable, and if the container service for Nova understands such partial container contents you could certainly glue it all together, but we don't have any specific glue for that today. I think docker is great, and if the goal of solum is to deploy via docker, I'd suggest using docker - no need to make diskimage-builder into a docker clone. OTOH if you're deploying via heat, I think diskimage-builder is targeted directly at your needs: we wrote it for deploying OpenStack, after all. I think we're targeting all possible deployment paths, rather than just one. Docker simply represents one emerging direction for deployments due to its speed and efficiency (which VMs can't match). The base concept (images and image-like constructs that can be started by Nova) provides a clean abstraction - how those images are created is specific to the ecosystem or organization.
An organization that is heavily invested in a particular image creation technology already can still take advantage of Solum, because all that is necessary for Solum to know about is a thin shim around transforming that base image into a deployable image. The developer and administrative support roles can split responsibilities - one maintains a baseline, and one consumes that baseline. -Rob

On 24 November 2013 12:24, Adrian Otto adrian.o...@rackspace.com wrote: On Nov 23, 2013, at 2:39 PM, Robert Collins robe...@robertcollins.net wrote: On 24 November 2013 05:42, Clayton Coleman ccole...@redhat.com wrote: Containers will work fine in diskimage-builder. One only needs to hack in the ability to save in the container image format rather than qcow2. That's good to know. Will diskimage-builder be able to break those down into multiple layers? What do you mean?

Docker images can be layered. You can have a base image on the bottom, and then an arbitrary number of deltas on top of that. It essentially works like incremental backups do. You can think of it as each layer having a parent image, and if they all collapse together, you get the current state. Keeping track of past layers gives you the potential for rolling back to a particular restore point, or only distributing incremental changes when you know that the previous layer is already on the host.
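The layer-collapse idea described above can be sketched with a chain of dicts. This is purely illustrative: real Docker layers are filesystem diffs, and the dicts here just stand in for file trees:

```python
# Illustrative sketch of layered images collapsing to a current state.
# Real Docker layers are filesystem diffs; dicts stand in for file trees here.
from collections import ChainMap

base   = {"/etc/os-release": "base-os", "/usr/bin/python": "3.3"}
layer1 = {"/app/main.py": "v1"}                   # delta: app added
layer2 = {"/app/main.py": "v2", "/app/cfg": "x"}  # delta: app updated

# Collapse: the newest layer wins, like incremental backups replayed in order.
current = ChainMap(layer2, layer1, base)
assert current["/app/main.py"] == "v2"
assert current["/etc/os-release"] == "base-os"    # untouched files shine through

# "Rolling back to a restore point" = dropping the newest layer from the chain.
rolled_back = ChainMap(layer1, base)
assert rolled_back["/app/main.py"] == "v1"
```

Distributing only `layer2` to a host that already has `layer1` and `base` is the incremental-transfer property mentioned above.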
-Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
[openstack-dev] [Solum] Configuration options placement
Hi, I am working on the user-authentication BP implementation. I need to introduce a new configuration option to enable or disable Keystone authentication for incoming requests, and I am looking for the right place for this option. The current situation is that we have two places for configuration: one is oslo.config and the second is the pecan configuration. My initial intention was to add all parameters to the solum.conf file, as is done for Nova. The Keystone middleware uses oslo.config for Keystone connection parameters anyway. At the same time, there are projects (Ceilometer and Ironic) which have an enable_acl parameter as part of the pecan config. From my perspective it is not reasonable to have authentication options in two different places. I would rather use solum.conf for all parameters and limit pecan config usage to pecan-specific options. I am looking for your input on this. Thanks, Georgy
Re: [openstack-dev] [Solum] Configuration options placement
Ok. So I will keep all authentication parameters in one place: solum.conf. There will be a standard section for Keystone, [keystone_authentication], with all common parameters like the Keystone URL, port, etc. In the default section [DEFAULT] there will be a parameter enable_authentication=True. Actually, this will be a deviation from common practice, as other core OpenStack components use auth_strategy=noauth (or keystone). Thanks Georgy

On Wed, Nov 27, 2013 at 3:39 PM, Adrian Otto adrian.o...@rackspace.com wrote: On Nov 27, 2013, at 2:25 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, I am working on the user-authentication BP implementation. I need to introduce a new configuration option to enable or disable Keystone authentication for incoming requests, and I am looking for the right place for this option. The current situation is that we have two places for configuration: one is oslo.config and the second is the pecan configuration. My initial intention was to add all parameters to the solum.conf file, as is done for Nova. The Keystone middleware uses oslo.config for Keystone connection parameters anyway. At the same time, there are projects (Ceilometer and Ironic) which have an enable_acl parameter as part of the pecan config. From my perspective it is not reasonable to have authentication options in two different places. I would rather use solum.conf for all parameters and limit pecan config usage to pecan-specific options.

I agree that we should not require administrators to edit a bunch of config files to get a working Solum config. I think config options in solum.conf should override ones set elsewhere. If auth is already set up for Keystone in oslo.config, and no equivalent options are set in solum.conf, then we should use the oslo.config settings. I agree that the pecan config should only be used for pecan-specific options. I am looking for your input on this.
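The proposed layout would look something like this in solum.conf. This is a hypothetical sketch of the sections described above; every option name other than enable_authentication is illustrative rather than an actual Solum setting:

```ini
[DEFAULT]
# Proposed toggle; note other core projects use auth_strategy=noauth|keystone
enable_authentication = True

[keystone_authentication]
# Illustrative Keystone connection parameters for the auth middleware
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
```

Keeping both the toggle and the connection parameters in solum.conf, with pecan config reserved for pecan-specific options, is the single-place approach argued for above.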
Thanks, Georgy -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
[openstack-dev] [Keystoneclient] Last released keystoneclient does not work on python33
Hi, I have failing tests in gate-solum-python33 because keystoneclient fails to import xmlrpclib. The exact error is: File /home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/openstack/common/jsonutils.py, line 42, in module 2013-11-28 18:27:12.655 | import xmlrpclib 2013-11-28 18:27:12.655 | ImportError: No module named 'xmlrpclib' Is there any plan to release a new version of keystoneclient with the fix for that issue? As far as I can see, it is fixed in master. If there is no new release of keystoneclient, can you recommend any workaround for this issue? Thanks Georgy
[openstack-dev] [Keystoneclient] [Keystone] Last released version of keystoneclient does not work with python33
Hi, I have failing tests in gate-solum-python33 because keystoneclient fails to import xmlrpclib. The exact error is: File /home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/openstack/common/jsonutils.py, line 42, in module 2013-11-28 18:27:12.655 | import xmlrpclib 2013-11-28 18:27:12.655 | ImportError: No module named 'xmlrpclib' This issue appeared because xmlrpclib was renamed in Python 3. Is there any plan to release a new version of keystoneclient with the fix for that issue? As far as I can see, it is fixed in master. If there is no new release of keystoneclient, can you recommend any workaround for this issue? Thanks Georgy
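A common workaround for the rename is a try/except import shim (six.moves.xmlrpc_client offers the same thing if six is available); a minimal sketch:

```python
# Compatibility shim: xmlrpclib was renamed to xmlrpc.client in Python 3.
# (six.moves.xmlrpc_client provides the equivalent if six is available.)
try:
    import xmlrpclib                      # Python 2 name
except ImportError:
    import xmlrpc.client as xmlrpclib     # Python 3 name

# Either way, the familiar Python 2 module name now works:
assert hasattr(xmlrpclib, "ServerProxy")
assert callable(xmlrpclib.dumps)
```

This is only a local workaround; the real fix is picking up the patched jsonutils from keystoneclient master once it is released.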
[openstack-dev] [Solum] Unicode strings in Python3
Hi, I am working on unit tests for Solum, and as a side effect of the new unit tests I found that we use unicode strings in a way which is not compatible with Python 3. Here is an exception from the python33 gate: Server-side error: global name 'unicode' is not defined. Detail: 2013-12-04 Traceback (most recent call last): File /home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/wsmeext/pecan.py result = f(self, *args, **kwargs) File ./solum/api/controllers/v1/assembly.py, line 59, in get raise wsme.exc.ClientSideError(unicode(error)) NameError: global name 'unicode' is not defined Here is the documentation for Python 3: http://docs.python.org/3.0/whatsnew/3.0.html Quick summary: you can't use the unicode() function and u' ' strings in Python 3. Thanks Georgy
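The usual portable replacement for the unicode() builtin is six.text_type; a dependency-free sketch of the same idea:

```python
# Version-agnostic text type, equivalent in spirit to six.text_type.
import sys

if sys.version_info[0] >= 3:
    text_type = str        # Python 3: str is the unicode string type
else:
    text_type = unicode    # Python 2 only; this branch never runs on py3

# The failing call in assembly.py, rewritten portably
# (the error message below is illustrative, not from the traceback):
error = ValueError("assembly not found")
message = text_type(error)
assert message == "assembly not found"
```

Replacing `unicode(error)` with `text_type(error)` (or `six.text_type(error)`) makes the ClientSideError call work on both interpreters.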
Re: [openstack-dev] [Solum] Unicode strings in Python3
No, this is Solum code: https://github.com/stackforge/solum/blob/master/solum/api/controllers/v1/assembly.py#L59 Thanks Georgy

On Wed, Dec 4, 2013 at 1:41 PM, Adrian Otto adrian.o...@rackspace.com wrote: Am I interpreting this to mean that WSME is calling unicode()? On Dec 4, 2013, at 1:32 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, I am working on unit tests for Solum, and as a side effect of the new unit tests I found that we use unicode strings in a way which is not compatible with Python 3. Here is an exception from the python33 gate: Server-side error: global name 'unicode' is not defined. Detail: 2013-12-04 Traceback (most recent call last): File /home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/wsmeext/pecan.py result = f(self, *args, **kwargs) File ./solum/api/controllers/v1/assembly.py, line 59, in get raise wsme.exc.ClientSideError(unicode(error)) NameError: global name 'unicode' is not defined Here is the documentation for Python 3: http://docs.python.org/3.0/whatsnew/3.0.html Quick summary: you can't use the unicode() function and u' ' strings in Python 3. Thanks Georgy

-- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [Solum] Unicode strings in Python3
I opened a bug https://bugs.launchpad.net/solum/+bug/1257929 for that issue. Ben, thank you for the quick fix proposal. Thanks Georgy

On Wed, Dec 4, 2013 at 1:41 PM, Ben Nemec openst...@nemebean.com wrote: I don't think so. It looks like ./solum/api/controllers/v1/assembly.py is calling unicode(). It will need to be changed to six.text_type() for Python 3 compat. -Ben On 2013-12-04 15:41, Adrian Otto wrote: Am I interpreting this to mean that WSME is calling unicode()? On Dec 4, 2013, at 1:32 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, I am working on unit tests for Solum, and as a side effect of the new unit tests I found that we use unicode strings in a way which is not compatible with Python 3. Here is an exception from the python33 gate: Server-side error: global name 'unicode' is not defined. Detail: 2013-12-04 Traceback (most recent call last): File /home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/wsmeext/pecan.py result = f(self, *args, **kwargs) File ./solum/api/controllers/v1/assembly.py, line 59, in get raise wsme.exc.ClientSideError(unicode(error)) NameError: global name 'unicode' is not defined Here is the documentation for Python 3: http://docs.python.org/3.0/whatsnew/3.0.html Quick summary: you can't use the unicode() function and u' ' strings in Python 3. Thanks Georgy

-- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [Solum] Unicode strings in Python3
On Thu, Dec 5, 2013 at 8:32 AM, Christopher Armstrong chris.armstr...@rackspace.com wrote: On Thu, Dec 5, 2013 at 3:26 AM, Julien Danjou jul...@danjou.info wrote: On Wed, Dec 04 2013, Georgy Okrokvertskhov wrote: Quick summary: you can't use the unicode() function and u' ' strings in Python 3. Not that it's advised, but you can use u' ' back again with Python 3.3. And this is a very useful feature for projects that want to have a single codebase that runs on both python 2 and python 3, so it's worth taking advantage of.

You are right. PEP 414 reintroduces u'' literals in Python 3.3. The unicode() function still does not work, though, and should be avoided in the code. -- IRC: radix Christopher Armstrong Rackspace -- Georgy Okrokvertskhov Technical Program Manager, Cloud and Infrastructure Services, Mirantis http://www.mirantis.com Tel. +1 650 963 9828 Mob. +1 650 996 3284
Re: [openstack-dev] [heat] Heater Proposal
Hi Randall, Thank you for your feedback here. Let me reply to your questions:

Q: Do you mean to imply that a repository of orchestration templates is a bad fit for the orchestration program? A: Yes. It's the same situation as we have with Glance. Glance is a repository of images for Nova, but it is a separate project only slightly coupled with Nova. Here you are trying to build a template storage service, which is reasonable, but it is not necessarily part of the Orchestration program just because it stores something which might be used with Heat. I think it is fair to say that the Heater mission does not fit the Orchestration program mission.

Q: Doesn't that same argument hold for Murano Metadata Repository as well? A: Yes. With the current implementation of the metadata service in Murano, it mostly falls into the same area as Glance and Swift. We plan to rewrite it because we need more functionality than we have right now. If you take a look at the etherpad with the further roadmap, you will see that we plan to move from simple storage to transformation of metadata and related objects. And this is driven by the goals of the Application Catalog.

And I want to highlight again: I like the idea of template storage, but I just had some concerns about its specific implementation and some implications which might appear when you put it inside an existing core program. Thanks Georgy

On Thu, Dec 5, 2013 at 3:04 PM, Randall Burt randall.b...@rackspace.com wrote: On Dec 5, 2013, at 4:08 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote: Hi, I am really glad to see a line of thinking close to what we at Murano see as the right direction for OpenStack development. This is a good initiative which will potentially be useful for other projects. We have a very similar idea about a repository in the Murano project, and we have even implemented the first version of it. We are very open to collaboration and exchanging ideas.
In terms of overlap with Murano, I can see the overlap in the area of the Murano Metadata Repository. We have already done some work in this area, and you can find the detailed description here: https://wiki.openstack.org/wiki/Murano/SimplifiedMetadataRepository. The implementation of the first version is already done, and we plan to include it in the Murano 0.4 release which will go out in a week. For the future roadmap with more advanced functionality we have created an etherpad: https://etherpad.openstack.org/p/MuranoMetadata

My concerns around Heater lie in two areas: - Fit for the OpenStack Orchestration program

Do you mean to imply that a repository of orchestration templates is a bad fit for the orchestration program?

- Too narrow a focus as it is formulated right now, making it hard for other projects like Murano to take advantage of this service as a general-purpose metadata repository

That's what the discussion around using Glance is about, though. The proposal started out as a separate service, but arguments are being made that the use cases fit into Glance. The use cases don't change, as they're focused on templates and not general object cataloging, but that's something to sort if/when we land on an implementation.

I am not sure how a metadata repository is related to the Orchestration program, as it does not orchestrate anything. I would rather consider creating a separate Service Catalog/Metadata Repository program, or consider storage programs like Glance or Swift, as Heater has a similar feature set. If you replace "template" with "object" you will actually propose a new Swift implementation, replacing Swift's existing versioning, ACLs, and object metadata.

Doesn't that same argument hold for Murano Metadata Repository as well? And, as initially proposed, it's not a generic metadata repository but a template cataloging system. The difference may be academic, but I think it's important.
That being said, maybe there's a case for something even more generic (store some meta information about some consumable artifact and a pointer to where to get it), but IMO the arguments for Glance then become more compelling (not that I've bought in to that completely yet).

Murano as an Application Catalog could also be a fit, but I don't insist :)

It sounds to me like conceptually it would suffer from the same scoping issues we're already discussing, though.

At the current moment Heat is not opinionated about template placement, and this provides a lot of flexibility for other projects which use Heat under the hood. With your proposal, you are creating a new metadata repository solution for the specific use case of template storage, making Heat much more prescriptive.

I'm not sure where this impression comes from. The Heat orchestration api/engine would in no way be dependent on the repository. Heat would still accept and orchestrate any template you passed it. At best, Heat would be extended to be aware of catalog urls and template id's, but in no way
Re: [openstack-dev] [heat] [glance] Heater Proposal
type can have its own set of fields that matter to it. This doesn't have to be a minor change to Glance to still have many advantages over writing something from scratch and asking people to deploy another service that is 99% the same as Glance.

> My suggestion for the long-term architecture would be to use Murano for catalog/metadata information (for images/templates/whatever), move the block-streaming drivers into Cinder, and get rid of the Glance project entirely. Murano would then become the catalog/registry of objects in the OpenStack world, Cinder would be the thing that manages and streams blocks of data or block devices, and Glance could go away. Imagine it... OpenStack actually *reducing* the number of projects instead of expanding! :)

I think it is good to mention the idea of shrinking the overall OpenStack code base. The fact that the best code offers a lot of features without a hugely expanded codebase often seems forgotten--perhaps because it is somewhat incompatible with our low-barrier-to-entry model of development.

However, as a mild defense of Glance's place in the OpenStack ecosystem, I'm not sure yet that a general catalog/metadata service would be a proper replacement. There are two key distinctions between Glance and a catalog/metadata service. One is that Glance *owns* the reference to the underlying data--meaning Glance can control the consistency of its references. I.e., you should not be able to delete the image data out from underneath Glance while the Image entry exists, in order to avoid a terrible user experience. Two is that Glance understands and coordinates the meaning and relationships of Image metadata. Without these distinctions, I'm not sure we need any OpenStack project at all--we should probably just publish an LDAP schema for Images/Templates/what-have-you and use OpenLDAP.
To clarify, I think these functions are critical to Glance's role as a gatekeeper and helper, especially in public clouds--but having this role in your deployment is probably something that should ultimately become optional. Perhaps Glance should not be in the required path for all deployments.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
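[Editorial sketch] Two ideas from the message above -- each asset type declaring its own set of fields, and the "gatekeeper" property that Glance owns the reference to the underlying data -- can be combined in one small sketch. Every name here is hypothetical; this is not Glance's actual API or data model, just an illustration of the two distinctions being defended:

```python
# Hypothetical sketch: a single generic catalog where each asset type
# declares its own required fields, plus the consistency guarantee that
# stored data cannot be deleted while a catalog entry still references it.

REQUIRED_FIELDS = {
    "image": {"disk_format", "container_format"},
    "template": {"heat_template_version"},
}

class StillReferenced(Exception):
    """Raised when deleting data that a catalog entry still points at."""

class Catalog:
    def __init__(self):
        self.entries = {}   # entry_id -> {"type", "data_id", ...fields}
        self.blobs = set()  # ids of stored data blobs

    def register(self, asset_type, entry_id, data_id, **fields):
        # Per-type schema: each asset type has its own required fields.
        missing = REQUIRED_FIELDS[asset_type] - fields.keys()
        if missing:
            raise ValueError(f"{asset_type} missing fields: {sorted(missing)}")
        self.blobs.add(data_id)
        self.entries[entry_id] = {"type": asset_type, "data_id": data_id, **fields}

    def delete_blob(self, data_id):
        # The "gatekeeper" guarantee: refuse deletion while referenced,
        # so an entry never dangles over missing data.
        if any(e["data_id"] == data_id for e in self.entries.values()):
            raise StillReferenced(data_id)
        self.blobs.discard(data_id)

catalog = Catalog()
catalog.register("template", "t1", "blob-1", heat_template_version="2013-05-23")
```

A catalog that merely *points* at data (the generic-metadata-service model) would lack the `delete_blob` guard, which is exactly the user-experience concern raised above.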
Re: [openstack-dev] [heat] [glance] Heater Proposal
it better.

+1

Vish
Re: [openstack-dev] [heat] [glance] Heater Proposal
That is great. How will this work be coordinated? I just want to be sure that all assets are covered.

Thanks,
Georgy

On Fri, Dec 6, 2013 at 3:15 PM, Randall Burt randall.b...@rackspace.com wrote:

> On Dec 6, 2013, at 5:04 PM, Clint Byrum cl...@fewbar.com wrote:
>
>> Excerpts from Randall Burt's message of 2013-12-06 14:43:05 -0800:
>>> I too have warmed to this idea but wonder about the actual implementation around it. While I like where Edmund is going with this, I wonder if it wouldn't be valuable in the short-to-mid-term (I/J) to just add /templates to Glance (/assemblies, /applications, etc.) alongside /images. Initially, we could have separate endpoints and data structures for these different asset types, refactoring the easy bits along the way and leveraging the existing data storage and caching bits, but leaving more disruptive changes alone. That can get the functionality going, prove some concepts, and allow all of the interested parties to better plan a more general v3 API.
>>
>> +1 on bolting the different views for things on as new v2 pieces instead of trying to solve the API genericism immediately. I would strive to make this a facade, and start immediately on making Glance more generic under the hood. Otherwise these will just end up as silos inside Glance instead of silos inside OpenStack.
>
> Totally agreed. Where it makes sense to refactor, we should do that rather than implementing essentially different services underneath.
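[Editorial sketch] The short-term plan agreed in this thread -- separate /images and /templates endpoints now, implemented as facades over one shared storage layer so they never become internal silos -- can be sketched as follows. The class and method names are hypothetical stand-ins, not Glance code:

```python
# Hypothetical sketch of the facade plan: per-type endpoints are thin
# views over a single generic storage/caching layer underneath.

class GenericStore:
    """Stand-in for the shared storage and caching layer."""
    def __init__(self):
        self._data = {}

    def put(self, kind, key, value):
        self._data[(kind, key)] = value

    def get(self, kind, key):
        return self._data[(kind, key)]

class CatalogFacade:
    """Per-asset-type endpoints presented to clients."""
    def __init__(self, store):
        self._store = store

    def put_image(self, key, value):       # roughly: POST /images
        self._store.put("image", key, value)

    def put_template(self, key, value):    # roughly: POST /templates
        self._store.put("template", key, value)

    def get_template(self, key):           # roughly: GET /templates/{key}
        return self._store.get("template", key)

store = GenericStore()
api = CatalogFacade(store)
api.put_template("wp", "heat_template_version: 2013-05-23")
```

Because the facade holds no state of its own, refactoring "under the hood" (replacing `GenericStore`) never changes the per-type API surface, which is the anti-silo property Clint argues for above.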
Re: [openstack-dev] [Murano] Nominations to Murano core
+1 for both! Thanks for the great work!

Regards,
Gosha

On Thu, Jun 26, 2014 at 7:05 AM, Timur Sufiev tsuf...@mirantis.com wrote:

> +1 for both.
>
> --
> Timur Sufiev
>
> On Thu, Jun 26, 2014 at 3:29 PM, Stan Lagun sla...@mirantis.com wrote:
>
>> +1 for both (or should I say +2?)
>>
>> Sincerely yours,
>> Stan Lagun
>> Principal Software Engineer @ Mirantis
>>
>> On Thu, Jun 26, 2014 at 1:49 PM, Alexander Tivelkov ativel...@mirantis.com wrote:
>>
>>> +1 on both Serge and Steve
>>>
>>> --
>>> Regards,
>>> Alexander Tivelkov
>>>
>>> On Thu, Jun 26, 2014 at 1:37 PM, Ruslan Kamaldinov rkamaldi...@mirantis.com wrote:
>>>
>>>> I would like to nominate Serg Melikyan and Steve McLellan to Murano core. Serg has been a significant reviewer in the Icehouse and Juno release cycles. Steve has been providing consistent, quality reviews, and they continue to get more frequent and better over time.
>>>>
>>>> Thanks,
>>>> Ruslan
Re: [openstack-dev] [Solum] Core Reviewer Change
+1. Glad to see Pierre in the Solum core team!

On Wed, Jul 9, 2014 at 2:55 AM, Julien Vey vey.jul...@gmail.com wrote:

> +1. Pierre always provides valuable feedback. Glad to see him in the core team.
>
> 2014-07-09 11:26 GMT+02:00 Adrian Otto adrian.o...@rackspace.com:
>
>> Solum Core Reviewer Team,
>>
>> I propose the following change to our core reviewer group:
>>
>> +stannie (Pierre Padrixe)
>>
>> Please let me know your votes (+1, 0, or -1).
>>
>> Thanks,
>> Adrian
Re: [openstack-dev] ova support in glance
us your feedback.

Regards,
Malini