Re: [openstack-dev] [sahara] Nominating new members to Sahara Core
+2 to all! Thank you for your contributions, folks!

Regards,
Alexander Ignatov

> On 13 May 2016, at 19:26, Ethan Gafford <egaff...@redhat.com> wrote:
>
> On Fri, May 13, 2016 at 11:33 AM, Vitaly Gridnev <vgrid...@mirantis.com> wrote:
>> Hello Sahara core folks!
>>
>> I'd like to bring the following folks to Sahara Core:
>>
>> 1. Lu Huichun
>> 2. Nikita Konovalov
>> 3. Chad Roberts
>>
>> Let's vote with +2/-2 for the additions above.
>>
>> [0] http://stackalytics.com/?module=sahara-group
>> [1] http://stackalytics.com/?module=sahara-group&release=mitaka
>>
>> --
>> Best Regards,
>> Vitaly Gridnev,
>> Project Technical Lead of OpenStack Data Processing Program (Sahara)
>> Mirantis, Inc
>
> Lu Huichun: +2
> Nikita Konovalov: +2
> Chad Roberts: +2
>
> All deeply well deserved after a great deal of work. Thanks!
>
> - egafford

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team
+1. Thank you for your contributions, Vitaly!

Regards,
Alexander Ignatov

> On 12 Oct 2015, at 17:06, Ethan Gafford <egaff...@redhat.com> wrote:
>
> I'm a very hearty +1 to this; Vitaly's a critical driver of both reviews and features in Sahara.
>
> Cheers,
> Ethan
>
> ----- Original Message -----
> From: "michael mccune" <m...@redhat.com>
> To: openstack-dev@lists.openstack.org
> Sent: Monday, October 12, 2015 8:49:18 AM
> Subject: Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team
>
> i'm +1 for this, Vitaly has been doing a great job contributing code and reviews to the project.
>
> mike
>
> On 10/12/2015 07:19 AM, Sergey Lukjanov wrote:
>> Hi folks,
>>
>> I'd like to propose Vitaly Gridnev as a member of the Sahara core reviewer team.
>>
>> Vitaly has been contributing to Sahara for a long time and doing a great job of reviewing and improving Sahara. Here are the statistics for reviews [0][1][2] and commits [3].
>>
>> Existing Sahara core reviewers, please vote +1/-1 for the addition of Vitaly to the core reviewer team.
>>
>> Thanks.
>>
>> [0] https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>> [1] http://stackalytics.com/report/contribution/sahara-group/180
>> [2] http://stackalytics.com/?metric=marks&user_id=vgridnev
>> [3] https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
Re: [openstack-dev] [sahara] Proposing Ethan Gafford for the core reviewer team
+1

Regards,
Alexander Ignatov

On 13 Aug 2015, at 18:29, Sergey Reshetnyak <sreshetn...@mirantis.com> wrote:

+2

2015-08-13 18:07 GMT+03:00 Matthew Farrellee <m...@redhat.com>:

On 08/13/2015 10:56 AM, Sergey Lukjanov wrote:

Hi folks,

I'd like to propose Ethan Gafford as a member of the Sahara core reviewer team. Ethan has been contributing to Sahara for a long time and doing a great job of reviewing and improving Sahara. Here are the statistics for reviews [0][1][2] and commits [3]. BTW, Ethan is already a stable-maint team core for Sahara.

Existing Sahara core reviewers, please vote +1/-1 for the addition of Ethan to the core reviewer team.

Thanks.

[0] https://review.openstack.org/#/q/reviewer:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22,n,z
[1] http://stackalytics.com/report/contribution/sahara-group/90
[2] http://stackalytics.com/?user_id=egafford&metric=marks
[3] https://review.openstack.org/#/q/owner:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22+status:merged,n,z

--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
+1. ethan has really taken to sahara, providing valuable input to both development and deployments, as well as taking on the manila integration.
Re: [openstack-dev] [Fuel] Plugins for Fuel: repo, doc, spec - where?
Mike,

I also wanted to add that there is already a PR on adding plugin repos to stackforge: https://review.openstack.org/#/c/147169/

All this looks good, but it's not clear when this patch will be merged and the repos created. So the question is: what should we do with the current specs in fuel-specs [1,2] which are targeted for plugins? And how will the development process look for plugins added to the 6.1 roadmap, especially for plugins that did not come from external vendors and partners? Will we create separate projects on Launchpad and duplicate our … For now, I'm not sure if we need to wait for the new infrastructure to be created in stackforge/Launchpad for each plugin, or follow the common procedure and land the current plugins into the existing repos during the 6.1 milestone.

[1] https://review.openstack.org/#/c/129586/
[2] https://review.openstack.org/#/c/148475/4

Regards,
Alexander Ignatov

On 23 Jan 2015, at 12:43, Nikolay Markov <nmar...@mirantis.com> wrote:

I also wanted to add that there is a PR already on adding plugins repos to stackforge: https://review.openstack.org/#/c/147169/ There is a battle in the comments right now, because some people do not agree that so many repos are needed.

On Fri, Jan 23, 2015 at 1:25 AM, Mike Scherbakov <mscherba...@mirantis.com> wrote:

Hi Fuelers,

we've implemented the pluggable architecture piece in 6.0 and already have a number of plugins. The overall development process for plugins is still not fully defined. We initially thought that having all the plugins in one repo on stackforge was OK; we also put some docs into the existing fuel-docs repo, and specs into fuel-specs. We might need a change here. Plugins are not tied to any particular release date, and they can also be separated from each other in terms of committers and core reviewers. Also, it seems pretty natural to keep all docs and design specs associated with a particular plugin.
With all that said, and following best dev practices, it is suggested to:

- Have a separate stackforge repo per Fuel plugin, in the format fuel-plugin-name, with a separate core-reviewers group which should initially include the plugin contributors
- Have a docs folder in the plugin, and the ability to build docs out of it (do we want Sphinx, or is a simple Github docs format OK, so people can just go to github/stackforge to see the docs?)
- Have the specification in the plugin repo as well (do we need Sphinx here?)
- Have the plugin's tests in the repo

Ideas / suggestions / comments?

Thanks,
--
Mike Scherbakov
#mihgen

--
Best regards,
Nick Markov
Re: [openstack-dev] [sahara] Nominate Michael McCune to sahara-core
+2

Regards,
Alexander Ignatov

On 11 Nov 2014, at 20:47, Trevor McKay <tmc...@redhat.com> wrote:

+2

On 11/11/2014 12:37 PM, Sergey Lukjanov wrote:

Hi folks,

I'd like to propose Michael McCune to sahara-core. He has a good knowledge of the codebase and has implemented important features such as Swift auth using trusts. Mike has been consistently giving us very well thought out and constructive reviews for the Sahara project.

Sahara core team members, please vote +/- 2.

Thanks.
--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [sahara] Nominate Sergey Reshetniak to sahara-core
+2

Regards,
Alexander Ignatov

On 11 Nov 2014, at 20:47, Trevor McKay <tmc...@redhat.com> wrote:

+2

On 11/11/2014 12:35 PM, Sergey Lukjanov wrote:

Hi folks,

I'd like to propose Sergey to sahara-core. He's done a lot of work on different parts of Sahara and he has a very good knowledge of the codebase, especially in the plugins area. Sergey has been consistently giving us very well thought out and constructive reviews for the Sahara project.

Sahara core team members, please vote +/- 2.

Thanks.
--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
[openstack-dev] [sahara] team meeting minutes June 19
Thanks to everyone who joined today's Sahara meeting. Here are the logs from the meeting:

Minutes: http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-06-19-18.04.html
Log: http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-06-19-18.04.log.html

Regards,
Alexander Ignatov
Re: [openstack-dev] [sahara] summit wrap-up: subprojects
On 28 May 2014, at 20:02, Sergey Lukjanov <slukja...@mirantis.com> wrote:

> Hey folks,
>
> here's a small wrap-up for the topic "Sahara subprojects releasing and versioning" that was discussed partially at the summit and requires some more discussion. You can find details in [0].
>
> common
>
> We'll include only one tarball for sahara on the release launchpad pages. All other links will be provided in the docs.

+1. And keep python-saharaclient on the corresponding launchpad page.

> sahara-dashboard
>
> The merge into Horizon is now in progress. We've decided that j1 is the deadline for merging the main code parts, and during j2 all the code should be merged into Horizon. So, if by the time of j2 some of the work of merging sahara-dashboard into Horizon is not done, we'll need to fall back to a separate sahara-dashboard repo release for the Juno cycle and continue merging the code into Horizon, so that we can completely kill the sahara-dashboard repo in the K release.

Where should we keep our UI integration tests? While the sahara-dashboard code is not merged into Horizon, we could keep the integration tests in the same repo. Once the dashboard code is merged, we could keep the tests in the sahara-extra repo. AFAIR we have plans to convert our UI tests to Horizon-capable tests with mocked REST calls, so we could keep the non-converted UI tests in sahara-extra until that is done.

> sahara-image-elements
>
> We agreed that some common parts should be merged into the diskimage-builder repo (like java support, ssh, etc.). The main issue with keeping -image-elements separate is how to release them and provide a mapping from sahara version to elements version. You can find different options in the etherpad [0]; I'll write here about the option that I think will work best for us. So, the idea is that sahara-image-elements is a bunch of scripts and tools for building images for Sahara. It's highly coupled with the plugins' code in Sahara, so we need to align them well. The current default decision is to keep versioning aligned, like 2014.1, etc.
> It'll be discussed at the weekly IRC team meeting on May 29.

I vote to keep sahara-image-elements as a separate repo and release it as you, Sergey, propose. If the two repos were collapsed, I see problems with sahara-ci running the whole bunch of integration tests, checking both the image elements and the core sahara code, on each patch sent to the sahara repo.

> sahara-extra
>
> Keep it as is, no need to stop releasing, because we're not publishing anything to pypi. No real need for tags.

+1. Also, I think we can move our rest-api-samples from the sahara repo to sahara-extra as well.

> open questions
>
> If you have any objections to this model, please share your thoughts before June 3, due to Juno-1 (June 12), to have enough time to apply the selected approach.
>
> [0] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward
>
> Thanks.
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
Re: [openstack-dev] [sahara] summit wrap-up: backward compat
On 28 May 2014, at 17:14, Sergey Lukjanov <slukja...@mirantis.com> wrote:

> 1. How should we handle addition of new functionality to the API, should we bump minor version and just add new endpoints?

Agree with most of the folks: no new versions for adding new endpoints. Semantic changes require a new major version of the REST API.

> 2. For which period of time should we keep deprecated API and client for it?

One release cycle for the deprecation period.

> 3. How to publish all images and/or keep the building of images for plugins stable?

We should keep all images for all (non-deprecated, as Matt mentioned) plugins for each release. In addition, we could keep at least one image which can be downloaded and used with the master branch of Sahara. Plugin vendors could keep their own sets of images, and we can reflect that in the docs.

Regards,
Alexander Ignatov
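The versioning policy agreed above (new endpoints only bump the minor version; breaking semantic changes bump the major version) can be sketched as a tiny compatibility check. This is an illustrative sketch only, not actual Sahara code; the function name and tuple representation are hypothetical.

```python
# Illustrative sketch of the policy above (hypothetical names, not Sahara code):
# a client and server are compatible when the major API versions match and the
# server offers at least the minor version (endpoints) the client expects.

def is_compatible(server_api, client_api):
    """server_api/client_api are (major, minor) tuples."""
    server_major, server_minor = server_api
    client_major, client_minor = client_api
    # Adding endpoints only bumps the minor version, so older clients keep working.
    # Semantic (breaking) changes bump the major version, which breaks compatibility.
    return server_major == client_major and server_minor >= client_minor

print(is_compatible((1, 1), (1, 0)))  # True: new endpoints added, old client fine
print(is_compatible((2, 0), (1, 3)))  # False: breaking change, new major version
```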
Re: [openstack-dev] [sahara] summit wrap-up: backward compat
On 29 May 2014, at 18:43, Matthew Farrellee <m...@redhat.com> wrote:

> i do not think we should release any images that have a root password set (essentially a backdoor). for K we should deprecate the hadoop1 versions and thus significantly cut the size of the new image artifact.

Agreed, let's not publish images with a root password. That option exists for debugging purposes, and users who need it can build their own images.

Regards,
Alexander Ignatov
Re: [openstack-dev] [sahara] Nominate Trevor McKay for sahara-core
+1

Thank you, Trevor, for all your efforts in EDP!

Regards,
Alexander Ignatov

On 12 May 2014, at 17:31, Sergey Lukjanov <slukja...@mirantis.com> wrote:

Hey folks,

I'd like to nominate Trevor McKay (tmckay) for sahara-core. He is among the top reviewers of Sahara subprojects. Trevor has been working on Sahara full time since summer 2013 and is very familiar with the current codebase. His code contributions and reviews have demonstrated a good knowledge of Sahara internals. Trevor has valuable knowledge of the EDP part and Hadoop itself. He's working on both bugs and new feature implementation.

Some links:

http://stackalytics.com/report/contribution/sahara-group/30
http://stackalytics.com/report/contribution/sahara-group/90
http://stackalytics.com/report/contribution/sahara-group/180
https://review.openstack.org/#/q/owner:tmckay+sahara+AND+-status:abandoned,n,z
https://launchpad.net/~tmckay

Sahara cores, please reply with +1/0/-1 votes.

Thanks.
--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.
Re: [openstack-dev] [sahara] bug triage day after summit
++ Let's do it on Monday.

Regards,
Alexander Ignatov

On 06 May 2014, at 12:13, Sergey Lukjanov <slukja...@mirantis.com> wrote:

Hey sahara folks,

let's have a Bug Triage Day after the summit. I'm proposing May 26 for it. Any thoughts/objections?

Thanks.
--
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.
Re: [openstack-dev] [savanna] Nominate Andrew Lazarev for savanna-core
+1

Regards,
Alexander Ignatov

On 20 Feb 2014, at 03:02, John Speidel <jspei...@hortonworks.com> wrote:

+1

Andrew would be a good addition.

-John

On Wed, Feb 19, 2014 at 5:40 PM, Sergey Lukjanov <slukja...@mirantis.com> wrote:

Hey folks,

I'd like to nominate Andrew Lazarev (alazarev) for savanna-core. He is among the top reviewers of Savanna subprojects. Andrew has been working on Savanna full time since September 2013 and is very familiar with the current codebase. His code contributions and reviews have demonstrated a good knowledge of Savanna internals. Andrew has valuable knowledge of both the core and EDP parts, the IDH plugin, and Hadoop itself. He's working on both bugs and new feature implementation.

Some links:

http://stackalytics.com/report/reviews/savanna-group/30
http://stackalytics.com/report/reviews/savanna-group/90
http://stackalytics.com/report/reviews/savanna-group/180
https://review.openstack.org/#/q/owner:alazarev+savanna+AND+-status:abandoned,n,z
https://launchpad.net/~alazarev

Savanna cores, please reply with +1/0/-1 votes.

Thanks.
--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
Re: [openstack-dev] [savanna] plugin version or hadoop version?
Agree with renaming this legacy field to 'version'. Adding to John's words about HDP, the Vanilla plugin is able to run different hadoop versions by doing some manipulations with the DIB scripts :-) So the right name for this field is 'version', as the version of the engine of the concrete plugin.

Regards,
Alexander Ignatov

On 18 Feb 2014, at 01:01, John Speidel <jspei...@hortonworks.com> wrote:

Andrew +1

The HDP plugin also returns the HDP distro version. The version needs to make sense in the context of the plugin. Also, many plugins, including the HDP plugin, will support deployment of several hadoop versions.

-John

On Mon, Feb 17, 2014 at 2:36 PM, Andrew Lazarev <alaza...@mirantis.com> wrote:

IDH uses the version of the IDH distro, and there is no direct mapping between distro version and hadoop version. E.g. IDH 2.5.1 works with apache hadoop 1.0.3. I suggest calling the field just 'version' everywhere and treating this version as a plugin-specific property.

Andrew.

On Mon, Feb 17, 2014 at 5:06 AM, Matthew Farrellee <m...@redhat.com> wrote:

$ savanna plugins-list
+---------+----------+---------------------------+
| name    | versions | title                     |
+---------+----------+---------------------------+
| vanilla | 1.2.1    | Vanilla Apache Hadoop     |
| hdp     | 1.3.2    | Hortonworks Data Platform |
+---------+----------+---------------------------+

above is output from the /plugins endpoint - http://docs.openstack.org/developer/savanna/userdoc/rest_api_v1.0.html#plugins

the question is, should the version be the version of the plugin or the version of hadoop the plugin installs? i ask because it seems like we have version == plugin version for hdp and version == hadoop version for vanilla. the documentation is somewhat vague on the subject, mostly stating version without qualification. however, the json passed to the service references hadoop_version and the arguments in the client are called hadoop_version. fyi, this could be complicated by the idh and spark plugins.
best,
matt
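The proposal in this thread, treating 'version' as an opaque, plugin-specific string rather than a hadoop version, can be sketched as follows. This is a hypothetical illustration only: the names (PluginInfo, PLUGINS, list_plugins) and the IDH title are assumptions, not actual savanna code; only the response shape loosely follows the documented /plugins endpoint.

```python
# Hypothetical sketch of the proposal above: each plugin declares its own
# opaque 'versions' list, and callers never interpret it as a hadoop version.
# All names here are illustrative, not savanna's.

from dataclasses import dataclass

@dataclass
class PluginInfo:
    name: str
    versions: list  # plugin-specific version strings, opaque to the caller
    title: str

PLUGINS = [
    PluginInfo("vanilla", ["1.2.1"], "Vanilla Apache Hadoop"),
    # For IDH this would be the distro version (e.g. 2.5.1), not the
    # hadoop version it bundles (1.0.3), as Andrew notes above.
    PluginInfo("idh", ["2.5.1"], "Intel Distribution for Apache Hadoop"),
]

def list_plugins():
    """Return a dict roughly shaped like the /plugins REST response."""
    return {"plugins": [vars(p) for p in PLUGINS]}

print([p["name"] for p in list_plugins()["plugins"]])  # ['vanilla', 'idh']
```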
Re: [openstack-dev] [savanna] Mission Statement wording
What was proposed by Doug LGTM.

Regards,
Alexander Ignatov

On 13 Feb 2014, at 16:29, Sergey Lukjanov <slukja...@mirantis.com> wrote:

Hi folks,

I'm now working on adding Savanna's mission statement to the governance docs [0]. There are some comments on our current one asking to make it simpler and remove marketing-like stuff. So, the current option is:

  "To provide a scalable data processing stack and associated management interfaces."

(thanks to Doug for proposing it). So, please share your objections (and suggestions too). Additionally, I'd like to talk about it at today's IRC meeting.

Thanks.

[0] https://review.openstack.org/#/c/71045/1/reference/programs.yaml
--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
Re: [openstack-dev] savanna-ci, Re: [savanna] Alembic migrations and absence of DROP column in sqlite
Indeed. We should create a bug for that and move our savanna-ci to mysql.

Regards,
Alexander Ignatov

On 05 Feb 2014, at 01:01, Trevor McKay <tmc...@redhat.com> wrote:

This brings up an interesting problem: in https://review.openstack.org/#/c/70420/ I've added a migration that uses a drop column for an upgrade. But savanna-ci is apparently running against a sqlite database, so the migration can't possibly pass. What do we do here? Shift the savanna-ci tests to non-sqlite?

Trevor

On Sat, 2014-02-01 at 18:17 +0200, Roman Podoliaka wrote:

Hi all,

My two cents.

"2) Extend alembic so that op.drop_column() does the right thing"

We could, but should we? The only reason alembic doesn't support these operations for SQLite yet is that SQLite lacks proper support of the ALTER statement. For sqlalchemy-migrate we've been providing a work-around in the form of recreating the table and copying all existing rows (which is a hack, really). But to be able to recreate a table, we first must have its definition. And we've been relying on SQLAlchemy schema reflection facilities for that. Unfortunately, this approach has a few drawbacks:

1) SQLAlchemy versions prior to 0.8.4 don't support reflection of unique constraints, which means the recreated table won't have them;
2) special care must be taken in 'edge' cases (e.g. when you want to drop a BOOLEAN column, you must also drop the corresponding CHECK (col in (0, 1)) constraint manually, or SQLite will raise an error when the table is recreated without the column being dropped);
3) special care must be taken for 'custom' type columns (it's got better with SQLAlchemy 0.8.x, but e.g. in 0.7.x we had to override definitions of reflected BIGINT columns manually for each column.drop() call);
4) schema reflection can't be performed when alembic migrations are run in 'offline' mode (without connecting to a DB);
(probably something else I've forgotten)

So it's totally doable, but, IMO, there is no real benefit in supporting schema migrations for SQLite.

"...attempts to drop schema generation based on models in favor of migrations"

As long as we have a test that checks that the DB schema obtained by running the migration scripts is equal to the one obtained by calling metadata.create_all(), it's perfectly OK to use model definitions to generate the initial DB schema for running unit tests, as well as for new installations of OpenStack (and this is actually faster than running the migration scripts). And if we have strong objections against doing metadata.create_all(), we can always use migration scripts for both new installations and upgrades for all DB backends except SQLite.

Thanks,
Roman

On Sat, Feb 1, 2014 at 12:09 PM, Eugene Nikanorov <enikano...@mirantis.com> wrote:

Boris,

Sorry for the offtopic. Is switching to model-based schema generation something that has been decided? I see the opposite: attempts to drop schema generation based on models in favor of migrations. Can you point to some discussion threads?

Thanks,
Eugene.

On Sat, Feb 1, 2014 at 2:19 AM, Boris Pavlovic <bpavlo...@mirantis.com> wrote:

Jay,

Yep, we shouldn't use migrations for sqlite at all. The major issue we have now is that we are not able to ensure that the DB schemas created by migrations and by models are the same (actually, they are not the same). So before dropping support for migrations on sqlite and switching to model-based schema creation, we should add tests that check that models and migrations are in sync (we are working on this).

Best regards,
Boris Pavlovic

On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev <alaza...@mirantis.com> wrote:

Trevor,

Such a check could be useful on the alembic side too. A good opportunity for a contribution.

Andrew.

On Fri, Jan 31, 2014 at 6:12 AM, Trevor McKay <tmc...@redhat.com> wrote:

Okay, I can accept that migrations shouldn't be supported on sqlite.
However, if that's the case, then we need to fix up savanna-db-manage so that it checks the db connection info and throws a polite error to the user for attempted migrations on unsupported platforms. For example:

  "Database migrations are not supported for sqlite"

Because, as a developer, when I see a sql error trace as the result of an operation, I assume it's broken :)

Best,
Trevor

On Thu, 2014-01-30 at 15:04 -0500, Jay Pipes wrote:

On Thu, 2014-01-30 at 14:51 -0500, Trevor McKay wrote:

I was playing with alembic migration and discovered that op.drop_column() doesn't work with sqlite. This is because sqlite doesn't support dropping a column (broken imho, but that's another discussion); sqlite throws a syntax error. To make this work with sqlite, you have to copy the table to a temporary one excluding the column(s) you don't want, delete the old one, and then rename the new table. The existing 002 migration uses op.drop_column(), so I'm assuming it's broken, too (I need
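The table-copy workaround described in the thread above can be demonstrated directly with Python's standard-library sqlite3 module. This is an illustrative sketch (table and column names are made up, and it is not savanna or alembic code); later Alembic releases automate this same recreate-and-copy dance via batch migrations.

```python
# Demonstrates the SQLite drop-column workaround discussed above:
# create a new table without the unwanted column, copy the rows,
# drop the old table, and rename. Table/column names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT, extra TEXT);
    INSERT INTO jobs (name, extra) VALUES ('wordcount', 'x'), ('pig-job', 'y');
""")

# SQLite rejects ALTER TABLE ... DROP COLUMN (in old versions), so rebuild:
conn.executescript("""
    CREATE TABLE jobs_tmp (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO jobs_tmp (id, name) SELECT id, name FROM jobs;
    DROP TABLE jobs;
    ALTER TABLE jobs_tmp RENAME TO jobs;
""")

cols = [row[1] for row in conn.execute("PRAGMA table_info(jobs)")]
print(cols)  # ['id', 'name'] -- the 'extra' column is gone
print(conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0])  # 2
```

As Roman points out above, the hard part in a real migration tool is recovering the table definition (reflection), which is exactly what this hand-written sketch sidesteps by spelling the new schema out explicitly.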
[openstack-dev] [savanna] team meeting minutes Jan 30
Thanks to everyone who joined the Savanna meeting. Here are the logs from the meeting:

Minutes: http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-30-18.05.html
Log: http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-30-18.05.log.html

Regards,
Alexander Ignatov
Re: [openstack-dev] [savanna] How to handle diverging EDP job configuration settings
Thank you for bringing this up, Trevor. EDP is getting more diverse and it's time to change its model. I totally agree with your proposal, but one minor comment: instead of the "savanna." prefix in job_configs, wouldn't it be better to make it "edp."? I think "savanna." is too broad a term for this.

And one more bureaucratic thing... I see you already started implementing it [1], and it is named and tracked as a new EDP workflow [2]. I think a new blueprint should be created for this feature to track all code changes as well as docs updates. By docs I mean the public Savanna docs about EDP, the REST API docs, and samples.

[1] https://review.openstack.org/#/c/69712
[2] https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce

Regards,
Alexander Ignatov

On 28 Jan 2014, at 20:47, Trevor McKay <tmc...@redhat.com> wrote:

Hello all,

In our first pass at EDP, the model for job settings was very consistent across all of our job types. The execution-time settings fit into this (superset) structure:

  job_configs = {'configs': {},  # config settings for oozie and hadoop
                 'params': {},   # substitution values for Pig/Hive
                 'args': []}     # script args (Pig and Java actions)

But we have some things that don't fit (and probably more in the future):

1) Java jobs have 'main_class' and 'java_opts' settings. Currently these are handled as additional fields added to the structure above. These were the first to diverge.

2) Streaming MapReduce (anticipated) requires mapper and reducer settings (different than the mapred..class settings for non-streaming MapReduce).

Problems caused by adding fields:

The job_configs structure above is stored in the database. Each time we add a field to the structure above at the level of configs, params, and args, we force a change to the database tables, a migration script, and a change to the JSON validation for the REST api. We also cause a change for python-savannaclient and potentially other clients. This kind of change seems bad.
Proposal: Borrow a page from Oozie and add savanna. configs

I would like to fit divergent job settings into the structure we already have. One way to do this is to leverage the 'configs' dictionary. This dictionary primarily contains settings for hadoop, but there are a number of oozie.xxx settings that are passed to oozie as configs or set by oozie for the benefit of running apps. What if we allow savanna. settings to be added to configs? If we do that, any and all special configuration settings for specific job types or subtypes can be handled with no database changes and no api changes.

Downside: currently, all 'configs' are rendered in the generated oozie workflow. The savanna. settings would be stripped out and processed by Savanna, thereby changing that behavior a bit (maybe not a big deal). We would also be mixing savanna. configs with config_hints for jobs, so users would potentially see savanna. settings mixed with oozie and hadoop settings. Again, maybe not a big deal, but it might blur the lines a little bit. Personally, I'm okay with this.

Slightly different -- we could also add a 'savanna-configs': {} element to job_configs to keep the configuration spaces separate. But now we would have 'savanna-configs' (or another name), 'configs', 'params', and 'args'. Really? Just how many different types of values can we come up with? :) I lean away from this approach.

Related: breaking up the superset. It is also the case that not every job type has every value type:

            Configs  Params  Args
Hive        Y        Y       N
Pig         Y        Y       Y
MapReduce   Y        N       N
Java        Y        N       Y

So do we make that explicit in the docs and enforce it in the api with errors? Thoughts? I'm sure there are some :)

Best, Trevor

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
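The stripping step the proposal mentions (savanna. settings processed by Savanna, everything else rendered into the Oozie workflow) could look roughly like this. The prefix name follows the proposal; the function name and example keys are mine:

```python
SAVANNA_PREFIX = 'savanna.'

def split_configs(configs):
    """Separate savanna.* settings (consumed by Savanna itself) from the
    configs that should be rendered into the generated Oozie workflow."""
    savanna_cfg = {k: v for k, v in configs.items()
                   if k.startswith(SAVANNA_PREFIX)}
    oozie_cfg = {k: v for k, v in configs.items()
                 if not k.startswith(SAVANNA_PREFIX)}
    return savanna_cfg, oozie_cfg

# Example: one ordinary hadoop config plus one hypothetical streaming
# setting of the kind the proposal would enable.
configs = {
    'mapred.reduce.tasks': '2',                  # rendered into the workflow
    'savanna.edp.streaming.mapper': '/bin/cat',  # handled by Savanna
}
savanna_cfg, oozie_cfg = split_configs(configs)
```

Because the split is purely by prefix, new job-type-specific settings need no schema or API change, which is the main selling point of the proposal.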
Re: [openstack-dev] [savanna] Undoing a change in the alembic migrations
Yes, you need to create a new migration script. Btw, we have already started doing this. The first example was when Jon added the 'neutron' param to the 'job_execution' object: https://review.openstack.org/#/c/63517/17/savanna/db/migration/alembic_migrations/versions/002_add_job_exec_extra.py

Regards, Alexander Ignatov

On 30 Jan 2014, at 02:25, Andrew Lazarev alaza...@mirantis.com wrote:

+1 on a new migration script. Just to be consistent. Andrew.

On Wed, Jan 29, 2014 at 2:17 PM, Trevor McKay tmc...@redhat.com wrote:

Hi Sergey, In https://review.openstack.org/#/c/69982/1 we are moving the 'main_class' and 'java_opts' fields for a job execution into the job_configs['configs'] dictionary. This means that 'main_class' and 'java_opts' don't need to be in the database anymore. These fields were just added in the initial version of the migration scripts. The README says that migrations work from icehouse. Since this is the initial script, does that mean we can just remove references to those fields from the db models and the script, or do we need a new migration script to erase them? Thanks, Trevor

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
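A follow-up migration of the kind being discussed might look roughly like the sketch below. This is a fragment that only runs inside an Alembic environment such as savanna/db/migration; the revision identifiers and the table name are placeholders to adapt, not the actual values from the Savanna tree:

```python
"""Drop main_class and java_opts columns (moved into job_configs).

Revision ID: <new-revision-id>   (placeholder)
Revises: <previous-revision-id>  (placeholder)
"""
from alembic import op
import sqlalchemy as sa


def upgrade():
    # The two fields now live inside job_configs['configs'], so the
    # dedicated columns can be dropped from the table.
    op.drop_column('job_executions', 'main_class')
    op.drop_column('job_executions', 'java_opts')


def downgrade():
    # Restore the columns so older code can read them again.
    op.add_column('job_executions', sa.Column('main_class', sa.Text()))
    op.add_column('job_executions', sa.Column('java_opts', sa.Text()))
```

Whether a migration like this is needed at all (versus editing the initial script) is exactly the question Trevor raises; the thread's answer is yes, add a new one.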
[openstack-dev] [savanna] Choosing provisioning engine during cluster launch
Today Savanna has two provisioning engines: heat and the old one known as 'direct'. Users can choose which engine will be used by setting a special parameter in 'savanna.conf'. I have an idea to give users the ability to choose the provisioning engine not only when savanna is started but also when a new cluster is launched. The idea is simple: we just add a new field 'provisioning_engine' to the 'cluster' and 'cluster_template' objects. The benefit is obvious: users can easily switch from one engine to another without restarting the savanna service. Of course, this parameter can be omitted, and the default value from 'savanna.conf' will be applied. Is this viable? What do you think?

Regards, Alexander Ignatov

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
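The lookup order implied by the proposal (a per-cluster value wins, then the cluster template, then the savanna.conf default) could be sketched like this. This is an illustration of the idea, not Savanna code:

```python
DEFAULT_ENGINE = 'direct'  # stands in for the value read from savanna.conf

def resolve_engine(cluster, cluster_template=None, default=DEFAULT_ENGINE):
    """Pick the provisioning engine: the cluster's own field wins, then
    the cluster template's field, then the service-wide default."""
    for obj in (cluster, cluster_template):
        if obj and obj.get('provisioning_engine'):
            return obj['provisioning_engine']
    return default
```

With this fallback chain the new field stays fully optional, so existing clusters and templates keep working unchanged.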
Re: [openstack-dev] [savanna] savannaclient v2 api
Current EDP config-hints are not only plugin specific. Several types of jobs must have certain key/values, and without them the job will fail. For instance, the MapReduce (former Jar) job type requires the Mapper/Reducer class parameters to be set [1]. Moreover, for this kind of job we already have separate configuration defaults [2]. Also, initial versions of the patch implementing config-hints contained plugin-independent defaults for each job type [3]. I remember we postponed the decision about which configs are common for all plugins and agreed to show users all the vanilla-specific defaults. That's why we now have several TODOs in the code saying config-hints should be plugin-specific.

So I propose to keep the config-hints REST call internal to EDP and make it plugin-independent (or job-specific) by removing the parsing of all vanilla-specific defaults and defining a small list of configs which is definitely common for each type of job. The first things that come to mind:

- For MapReduce jobs it's already defined in [1]
- Configs like the number of map and reduce tasks are common for all job types
- At least the user always has the ability to set any key/value(s) as params/arguments for a job

[1] http://docs.openstack.org/developer/savanna/userdoc/edp.html#workflow
[2] https://github.com/openstack/savanna/blob/master/savanna/service/edp/resources/mapred-job-config.xml
[3] https://review.openstack.org/#/c/45419/10

Regards, Alexander Ignatov

On 20 Jan 2014, at 22:04, Matthew Farrellee m...@redhat.com wrote:

On 01/20/2014 12:50 PM, Andrey Lazarev wrote: Inlined.

On Mon, Jan 20, 2014 at 8:15 AM, Matthew Farrellee m...@redhat.com wrote:

(inline, trying to make this readable by a text-only mail client that doesn't use tabs to indicate quoting)

On 01/20/2014 02:50 AM, Andrey Lazarev wrote:

-- FIX - @rest.get('/jobs/config-hints/job_type') - should move to GET /plugins/plugin_name/plugin_version, similar to get_node_processes and get_required_image_tags --

Not sure if it should be plugin specific right now. EDP uses it to show some configs to users in the dashboard; it's just a cosmetic thing. Also, when a user starts defining configs for a job he might not have defined a cluster yet, and thus no plugin to run the job. I think we should leave it as is, keep only abstract configs like the Mapper/Reducer class, and allow users to apply any key/value configs if needed.

FYI, the code contains comments suggesting it should be plugin specific: https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179

IMHO, the EDP should have no plugin specific dependencies. If it currently does, we should look into why and see if we can't eliminate this entirely.

[AL] EDP uses plugins in two ways: 1. for the HDFS user, 2. for config hints. I think both items should not be plugin specific at the EDP API level. But the implementation should go to the plugin and call the plugin API for the result.

In fact they are both plugin specific. The user is forced to click through a plugin selection (when launching a job on a transient cluster) or the plugin selection has already occurred (when launching a job on an existing cluster).

Since the config is plugin specific (you might not have hbase hints from vanilla but you would from hdp), and you already have plugin information whenever you ask for a hint, my view that this should be under the /plugins namespace is growing stronger.

[AL] Disagree. They are plugin specific, but EDP itself could have additional plugin-independent logic inside. Right now config hints return EDP properties (like mapred.input.dir) as well as plugin-specific properties. Placing it under the /plugins namespace would give the impression that it is fully plugin specific. I'd like to see the EDP API fully plugin independent and in one workspace. If the core side needs some information internally, it can easily go to the plugin.

I'm not sure if we're disagreeing. We may, in fact, be in violent agreement. The EDP API is fully plugin independent, and should stay that way
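The small plugin-independent hint list proposed earlier in the thread could be structured along these lines. The property names follow the old Hadoop 1.x convention referenced in the EDP docs, but the structure and descriptions are illustrative, not Savanna's actual hint format:

```python
# Plugin-independent config hints, keyed by EDP job type.  Only settings
# common to every plugin are listed; anything else stays free-form
# key/value pairs supplied by the user.
COMMON_CONFIG_HINTS = {
    'MapReduce': [
        # Required for MapReduce (former Jar) jobs per the EDP workflow docs.
        {'name': 'mapred.mapper.class', 'value': '',
         'description': 'Mapper class'},
        {'name': 'mapred.reducer.class', 'value': '',
         'description': 'Reducer class'},
        {'name': 'mapred.reduce.tasks', 'value': '1',
         'description': 'Number of reduce tasks'},
    ],
    'Pig': [
        {'name': 'mapred.reduce.tasks', 'value': '1',
         'description': 'Number of reduce tasks'},
    ],
}

def get_hints(job_type):
    """Return hints for a job type without consulting any plugin."""
    return COMMON_CONFIG_HINTS.get(job_type, [])
```

A table like this keeps the hints call entirely inside EDP: no plugin lookup is needed, and anything plugin-specific is simply not hinted.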
Re: [openstack-dev] [savanna] savannaclient v2 api
++ for a generic PUT for both 'cancel' and 'refresh-status', Andrew. Thanks!

Regards, Alexander Ignatov

On 17 Jan 2014, at 06:19, Andrey Lazarev alaza...@mirantis.com wrote:

My 5 cents:

-- REMOVE - @rest.put('/node-group-templates/node_group_template_id') - Not Implemented
REMOVE - @rest.put('/cluster-templates/cluster_template_id') - Not Implemented --
Disagree with that. The Samsung people did a great job in both savanna/savanna-dashboard to get this implemented [2], [3]. We should keep and support these calls in savanna.
[AL] Agree with Alexander. The ability to modify templates is a very useful feature.

REMOVE - @rest.get('/job-executions/job_execution_id/refresh-status') - refresh and return status - GET should not side-effect, status is part of details and updated periodically, currently unused
This call goes to Oozie directly to ask it about the job status. It lets clients avoid waiting for the periodic task to update the status of the JobExecution object in Savanna. The current GET asks for the status of the JobExecution from the savanna DB. I think we can keep this call; it might be useful for external clients.
[AL] Agree that GET shouldn't have side effects (or at least documented side effects). I think it could be a generic PUT on '/job-executions/job_execution_id' which can refresh status or cancel a job on the hadoop side.

REMOVE - @rest.get('/job-executions/job_execution_id/cancel') - cancel job-execution - GET should not side-effect, currently unused, use DELETE /job/executions/job_execution_id
Disagree. We have to keep this call. This method stops the job executing on the Hadoop cluster but doesn't remove all its related info from the savanna DB. DELETE removes it completely.
[AL] We need 'cancel'. Vote for a generic PUT (see the previous item).

Thanks, Andrew.

On Thu, Jan 16, 2014 at 5:10 AM, Alexander Ignatov aigna...@mirantis.com wrote:

Matthew, I'm ok with the proposed solution. Some comments/thoughts below:

- FIX - @rest.post_file('/plugins/plugin_name/version/convert-config/name') - this is an RPC call, made only by a client to do input validation, move to POST /validations/plugins/:name/:version/check-config-import -
AFAIR, this REST call was introduced not only for validation. The main idea was to create a method which converts a plugin-specific config for cluster creation into Savanna's cluster template [1]. So maybe we could change this REST call to /plugins/convert-config/name and include all needed fields in the data. Anyway, we need to know the Hortonworks guys' opinion; currently only the HDP plugin implements this method.

-- REMOVE - @rest.put('/node-group-templates/node_group_template_id') - Not Implemented
REMOVE - @rest.put('/cluster-templates/cluster_template_id') - Not Implemented --
Disagree with that. The Samsung people did a great job in both savanna/savanna-dashboard to get this implemented [2], [3]. We should keep and support these calls in savanna.

-- CONSIDER rename /jobs - /job-templates (consistent w/ cluster-templates clusters)
CONSIDER renaming /job-executions to /jobs ---
Good idea!

-- FIX - @rest.get('/jobs/config-hints/job_type') - should move to GET /plugins/plugin_name/plugin_version, similar to get_node_processes and get_required_image_tags --
Not sure if it should be plugin specific right now. EDP uses it to show some configs to users in the dashboard; it's just a cosmetic thing. Also, when a user starts defining configs for a job he might not have defined a cluster yet, and thus no plugin to run the job. I think we should leave it as is, keep only abstract configs like the Mapper/Reducer class, and allow users to apply any key/value configs if needed.

- CONSIDER REMOVING, MUST ALWAYS UPLOAD TO Swift FOR /job-binaries -
Disagree. It was discussed before starting the EDP implementation that there are a lot of OpenStack installations which don't have Swift deployed, and the ability to run jobs using the savanna internal DB is a good option in this case. But yes, Swift is preferred. Waiting for Trevor's and maybe Nadya's comments here under this section.

REMOVE - @rest.get('/job-executions/job_execution_id/refresh-status') - refresh and return status - GET should not side-effect, status is part of details and updated periodically, currently unused
This call goes to Oozie directly to ask it about the job status. It lets clients avoid waiting for the periodic task to update the status of the JobExecution object in Savanna. The current GET asks for the status of the JobExecution from the savanna DB. I think we can keep this call; it might be useful for external clients.

REMOVE - @rest.get('/job-executions/job_execution_id/cancel') - cancel job-execution - GET should not side-effect, currently unused, use DELETE /job/executions/job_execution_id
Disagree. We have to keep this call. This method stops the job executing on the Hadoop
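The generic PUT that Andrey suggests (one endpoint handling both 'cancel' and 'refresh-status' so that GET stays side-effect free) could be dispatched along these lines. The handler names, the request shape, and the status strings are illustrative, not Savanna's actual API:

```python
def cancel_job(job_execution_id):
    # Stops the job on the Hadoop cluster but keeps its record in the DB
    # (DELETE is what removes the record entirely).
    return {'id': job_execution_id, 'status': 'KILLED'}

def refresh_status(job_execution_id):
    # Would ask Oozie directly instead of waiting for the periodic task.
    return {'id': job_execution_id, 'status': 'RUNNING'}

ACTIONS = {'cancel': cancel_job, 'refresh-status': refresh_status}

def put_job_execution(job_execution_id, data):
    """Handle PUT /job-executions/<id> with an 'action' field in the body."""
    action = data.get('action')
    if action not in ACTIONS:
        raise ValueError('Unknown action: %s' % action)
    return ACTIONS[action](job_execution_id)
```

This keeps both state-changing operations on one resource via a verb with update semantics, while GET remains a pure read of the savanna DB.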
Re: [openstack-dev] [savanna] savannaclient v2 api
Matthew, I'm ok with the proposed solution. Some comments/thoughts below:

- FIX - @rest.post_file('/plugins/plugin_name/version/convert-config/name') - this is an RPC call, made only by a client to do input validation, move to POST /validations/plugins/:name/:version/check-config-import -
AFAIR, this REST call was introduced not only for validation. The main idea was to create a method which converts a plugin-specific config for cluster creation into Savanna's cluster template [1]. So maybe we could change this REST call to /plugins/convert-config/name and include all needed fields in the data. Anyway, we need to know the Hortonworks guys' opinion; currently only the HDP plugin implements this method.

-- REMOVE - @rest.put('/node-group-templates/node_group_template_id') - Not Implemented
REMOVE - @rest.put('/cluster-templates/cluster_template_id') - Not Implemented --
Disagree with that. The Samsung people did a great job in both savanna/savanna-dashboard to get this implemented [2], [3]. We should keep and support these calls in savanna.

-- CONSIDER rename /jobs - /job-templates (consistent w/ cluster-templates clusters)
CONSIDER renaming /job-executions to /jobs ---
Good idea!

-- FIX - @rest.get('/jobs/config-hints/job_type') - should move to GET /plugins/plugin_name/plugin_version, similar to get_node_processes and get_required_image_tags --
Not sure if it should be plugin specific right now. EDP uses it to show some configs to users in the dashboard; it's just a cosmetic thing. Also, when a user starts defining configs for a job he might not have defined a cluster yet, and thus no plugin to run the job. I think we should leave it as is, keep only abstract configs like the Mapper/Reducer class, and allow users to apply any key/value configs if needed.

- CONSIDER REMOVING, MUST ALWAYS UPLOAD TO Swift FOR /job-binaries -
Disagree. It was discussed before starting the EDP implementation that there are a lot of OpenStack installations which don't have Swift deployed, and the ability to run jobs using the savanna internal DB is a good option in this case. But yes, Swift is preferred. Waiting for Trevor's and maybe Nadya's comments here under this section.

REMOVE - @rest.get('/job-executions/job_execution_id/refresh-status') - refresh and return status - GET should not side-effect, status is part of details and updated periodically, currently unused
This call goes to Oozie directly to ask it about the job status. It lets clients avoid waiting for the periodic task to update the status of the JobExecution object in Savanna. The current GET asks for the status of the JobExecution from the savanna DB. I think we can keep this call; it might be useful for external clients.

REMOVE - @rest.get('/job-executions/job_execution_id/cancel') - cancel job-execution - GET should not side-effect, currently unused, use DELETE /job/executions/job_execution_id
Disagree. We have to keep this call. This method stops the job executing on the Hadoop cluster but doesn't remove all its related info from the savanna DB. DELETE removes it completely.

[1] http://docs.openstack.org/developer/savanna/devref/plugin.spi.html#convert-config-plugin-name-version-template-name-cluster-template-create
[2] https://blueprints.launchpad.net/savanna/+spec/modifying-cluster-template
[3] https://blueprints.launchpad.net/savanna/+spec/modifying-node-group-template

Regards, Alexander Ignatov

On 14 Jan 2014, at 21:24, Matthew Farrellee m...@redhat.com wrote:

https://blueprints.launchpad.net/savanna/+spec/v2-api

I've finished a review of the v1.0 and v1.1 APIs with an eye to making them more consistent and RESTful. Please use this thread to comment on my suggestions for v1.0 and v1.1, or to make further suggestions.

Best, matt

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [savanna] Anti-affinity
Arindam,

What exact anti-affinity configuration did you use for your cluster? Did you configure the scheduler filters in Nova as described here? http://docs.openstack.org/developer/savanna/userdoc/features.html#anti-affinity

Also, please send Savanna usage related questions to [openstack] openst...@lists.openstack.org, not to openstack-dev.

Regards, Alexander Ignatov

On 05 Dec 2013, at 15:30, Arindam Choudhury arin...@live.com wrote:

Hi, I have 11 compute nodes. I want to create a hadoop cluster with 1 master (namenode+jobtracker) and 20 workers (datanode+tasktracker). How do I configure anti-affinity so that the master runs on one host while each of the other hosts runs two workers? I tried some configurations, but I cannot achieve it.

Regards, Arindam

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
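For reference, the feature docs linked above require the Nova scheduler to have the appropriate filters enabled before Savanna's anti-affinity hints have any effect. A nova.conf fragment along these lines is an assumed example; check the linked docs and your deployment's existing filter list for the authoritative settings:

```ini
# nova.conf (assumed example): Savanna's anti-affinity relies on the
# filter scheduler honoring per-instance scheduler hints, which requires
# DifferentHostFilter to be among the enabled filters.  Keep whatever
# filters your deployment already uses and append DifferentHostFilter.
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,DifferentHostFilter
```

After changing this, the nova-scheduler service must be restarted for the new filter list to take effect.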
Re: [openstack-dev] [savanna] bug day - Nov 19
Sergey,

Thank you. I've edited the information section related to assigning tags in bug titles. Just added more examples.

Regards, Alexander Ignatov

On 19 Nov 2013, at 15:13, Sergey Lukjanov slukja...@mirantis.com wrote:

Reminder: today is the bug triage day for the Savanna project. I've prepared the wiki page for it - https://wiki.openstack.org/wiki/Savanna/BugTriage (based on https://wiki.openstack.org/wiki/BugTriage). See you in the #savanna channel. Thank you.

Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc.

On Nov 15, 2013, at 11:34 AM, Sergey Lukjanov slukja...@mirantis.com wrote:

Hi team, we have some unmanaged bugs in LP for the Savanna project, so we'll have a bug day to triage/clean them up on November 19 (Tuesday), as we discussed at the last IRC team meeting. I'll send a reminder at the start of the day. Thanks.

Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc.

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove][Savanna][Murano] Unified Agent proposal discussion at Summit
Ilya, Igor,

I've created a separate etherpad for the UnifiedAgents approach: https://etherpad.openstack.org/p/UnifiedAgents

I described there Savanna's initial requirements for a guest agent solution, which were discussed at the design summit. The Trove and Murano teams are welcome to describe their needs for guest agents in this document.

Regards, Alexander Ignatov

On 12 Nov 2013, at 19:24, Ilya Sviridov isviri...@mirantis.com wrote:

Igor, it's better to create another one to track the requirements for such an agent framework, since this etherpad is the official result of the design session.

With best regards, Ilya Sviridov

On Tue, Nov 12, 2013 at 5:02 PM, Igor Marnat imar...@mirantis.com wrote:

Ilya, that's cool! Mind if the Murano and Savanna teams join the same etherpad?

Regards, Igor Marnat

On Tue, Nov 12, 2013 at 6:58 PM, Ilya Sviridov isviri...@mirantis.com wrote:

Thinking in that direction, the Trove team had a design session about the current status of the agent in the project. Just take a look: https://etherpad.openstack.org/p/TroveGuestAgents

With best regards, Ilya Sviridov

On Tue, Nov 12, 2013 at 4:29 PM, Igor Marnat imar...@mirantis.com wrote:

Just to summarize, there was interest expressed from the Murano, Trove, Savanna and Heat teams in regards to the implementation of this unified agent. Nothing specific was decided except the suggestion to keep pushing. I'd suggest to keep pushing this way:

- create an etherpad
- each team interested in having a unified agent writes detailed use cases for an agent to this etherpad
- based on these use cases we can generate very specific and detailed requirements for the agent
- based on these requirements we can agree on an architecture and an approach to implementation.

Teams?

Regards, Igor Marnat

On Tue, Nov 5, 2013 at 6:10 AM, Alexander Tivelkov ativel...@mirantis.com wrote:

Hi guys,

Recently we had several discussions about the guest VM agents: lots of projects have similar needs to run some special logic on the side of guest virtual machines. As far as I know, there are such agents in Savanna, Trove, Murano and maybe some other projects as well. The obvious idea is to unite the efforts and have a unified solution which may satisfy everybody's needs. We've discussed this topic before with some of the teams, and got the promising-looking idea to create a kind of unified agent library and put it in Oslo or some other shared project. We've scheduled an unconference session at the Summit, this Friday at 3:10 pm. Let's continue discussing the idea there: we may gather the common requirements, discuss the basic design concepts, etc. See you there!

-- Kind Regards, Alexander Tivelkov Principal Software Engineer OpenStack Platform Product division Mirantis, Inc +7(495) 640 4904, ext 0236 +7-926-267-37-97 (cell) Vorontsovskaya street 35 B, building 3, Moscow, Russia. ativel...@mirantis.com

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Savanna] Definition of template
Hi, Andrew,

Agreed with your opinion. Savanna's templates approach was initially the option 1 you are talking about. This was designed at the start of the Savanna 0.2 release cycle. It was also documented here: https://wiki.openstack.org/wiki/Savanna/Templates . Maybe some points are outdated, but the idea is the same as option 1: a user can create a cluster template and doesn't need to specify all fields, for example the 'node_groups' field. And these fields, both required and optional, can be overridden in the cluster object even if it contains 'cluster_template_id'. I see you raised this question because of patch https://review.openstack.org/#/c/56060/. I think it's just a bug at the validation level, not in the API. I also agree that we should change the UI part accordingly, at least adding the ability for users to override fields set in cluster and node group templates during cluster creation.

Regards, Alexander Ignatov

On 12 Nov 2013, at 23:20, Andrey Lazarev alaza...@mirantis.com wrote:

Hi all,

I want to raise the question of what a template is. The answer to this question could influence the UI, validation, and user experience significantly. I see two possible answers:

1. A template is a simplification for object creation. It allows keeping common params in one place and not specifying them each time.
2. A template is a full description of an object. A user should be able to create an object from a template without specifying any params.

As I see it, the current approach is option 1, but the UI is done mostly for option 2. This leads to situations where a user creates an incomplete template (the backend allows it because of option 1) but can't use it later (the UI doesn't allow working with incomplete templates). Let's define a common vision of how we will treat templates and document it somehow. My opinion is that we should proceed with option 1 and change the UI accordingly.

Thanks, Andrew

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Savanna] Release 0.3 retrospective
Hi,

Here is the wiki page with the Savanna release 0.3 retrospective: https://wiki.openstack.org/wiki/Savanna/Release_0.3_Retrospective

Thanks to everyone who sent their opinions. If someone wants to add more thoughts, you are welcome to edit the above page!

-- Regards, Alexander Ignatov

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Savanna] Weekly team meeting minutes Sep 19, 2013
Thanks to everyone who joined the Savanna meeting. Here are the logs from the meeting:

Minutes: http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-19-18.05.html
Minutes (text): http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-19-18.05.txt
Log: http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-19-18.05.log.html

-- Regards, Alexander Ignatov

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Savanna] Hadoop 1.1.2 replacement to 1.2.1 in vanilla plugin
Hi Savanna folks,

Due to the replacement of the Hadoop distro from 1.1.2 to 1.2.1 in the Vanilla plugin, newly created CRs in the master branch may fail integration tests.

Replacement patch: https://review.openstack.org/#/c/46490/
DIB script changes: https://review.openstack.org/#/c/46720/

These changes were tested manually, all Savanna-related tests worked fine, and eventually savanna-ci set +1. I will retrigger your failed tests manually. Sorry for the inconvenience.

-- Regards, Alexander Ignatov

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Savanna] problem starting namenode
Hi, Arindam,

Savanna's current vanilla plugin pushes two configs directly into hdfs-site.xml for all DataNodes and the NameNode: dfs.name.dir = /lib/hadoop/hdfs/namenode and dfs.data.dir = /lib/hadoop/hdfs/datanode (https://github.com/stackforge/savanna/blob/master/savanna/plugins/vanilla/config_helper.py#L178-L181). All these paths are joined with the /mnt dir, which is the root for mounted ephemeral drives. These configs are responsible for the placement of HDFS data. In particular, /mnt/lib/hadoop/hdfs/namenode should be created before formatting the NameNode. I'm not sure about the behaviour of Hadoop 0.20.203.0, which you are using in your plugin, but in the 1.1.2 version supported by the vanilla plugin /mnt/lib/hadoop/hdfs/namenode is created automatically while formatting the namenode. Maybe in 0.20.203.0 this is not implemented. I'd recommend checking it with a manual cluster deployment, without Savanna cluster provisioning. If that is the case, then your plugin should create these directories before starting the Hadoop services.

Regards, Alexander Ignatov

On 9/16/2013 6:11 PM, Arindam Choudhury wrote:

Hi, I am trying to write a custom plugin to provision hadoop 0.20.203.0 with jdk1.6u45. So I created a custom pre-installed image by tweaking savanna-image-elements and a new plugin called mango. I am having this error on the namenode:

2013-09-16 13:34:27,463 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = test-master-starfish-001/192.168.32.2
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.203.0
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
/
2013-09-16 13:34:27,784 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-09-16 13:34:27,797 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-09-16 13:34:27,799 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-09-16 13:34:27,799 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-09-16 13:34:27,964 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-09-16 13:34:27,966 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-09-16 13:34:27,976 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-09-16 13:34:27,976 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-09-16 13:34:28,047 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2013-09-16 13:34:28,047 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-09-16 13:34:28,047 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-09-16 13:34:28,060 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-09-16 13:34:28,060 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-09-16 13:34:28,306 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-09-16 13:34:28,326 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-09-16 13:34:28,329 INFO org.apache.hadoop.hdfs.server.common.Storage:
Storage directory /mnt/lib/hadoop/hdfs/namenode does not exist.
2013-09-16 13:34:28,330 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /mnt/lib/hadoop/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:353)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize
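The workaround suggested in the reply (create the storage directories yourself before formatting the NameNode, since 0.20.203.0 may not create them automatically) could be scripted roughly as below. To keep the sketch safe to run anywhere, the root defaults to a scratch directory; on a real node you would set HDFS_ROOT=/mnt to match the vanilla plugin's paths:

```shell
# Create the HDFS storage directories before formatting/starting the
# NameNode.  On a real node run with HDFS_ROOT=/mnt (the vanilla
# plugin's mount root); the default here is a local scratch dir.
HDFS_ROOT="${HDFS_ROOT:-/tmp/hdfs-demo}"

mkdir -p "$HDFS_ROOT/lib/hadoop/hdfs/namenode" \
         "$HDFS_ROOT/lib/hadoop/hdfs/datanode"

# On a real node the hadoop user must own these dirs; ignore the failure
# when trying this locally without that user.
chown -R hadoop:hadoop "$HDFS_ROOT/lib/hadoop" 2>/dev/null || true

# After this, formatting should find the directory, e.g.:
#   su hadoop -c 'hadoop namenode -format'
```

In a Savanna plugin this would translate to running the equivalent commands over SSH on each instance before the format/start steps.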
Re: [openstack-dev] [Savanna]Creating new plugin
Hi Arindam,

It seems you forgot to do 'git add' on 'savanna/plugins/mango/resources/core-default.xml'. Please do this for the other xml and resource files you are using in the mango plugin as well.

Regards,
Alexander Ignatov

On 9/13/2013 5:33 PM, Arindam Choudhury wrote:

Hi,

I am trying to provision hadoop 0.20.203.0 with jdk6u45, so I tweaked savanna-image-elements and created a pre-installed VM image. Then I copied the vanilla plugin and edited it to create a new plugin named mango. To include the new plugin, I edited etc/savanna/savanna.conf as follows:

plugins=vanilla,mango

[plugin:vanilla]
plugin_class=savanna.plugins.vanilla.plugin:VanillaProvider

[plugin:mango]
plugin_class=savanna.plugins.mango.plugin:MangoProvider

Then, when I try to start the savanna daemon, I get the following error:

# tools/install_venv
removing /root/savanna/.tox/log
using tox.ini: /root/savanna/tox.ini
using tox-1.6.1 from /usr/lib/python2.6/site-packages/tox/__init__.pyc
GLOB start: packaging
GLOB sdist-make: /root/savanna/setup.py
removing /root/savanna/.tox/dist
/root/savanna$ /usr/bin/python /root/savanna/setup.py sdist --formats=zip --dist-dir /root/savanna/.tox/dist /root/savanna/.tox/log/tox-0.log
GLOB finish: packaging after 3.06 seconds
copying new sdistfile to '/root/.tox/distshare/savanna-0.2.a26.g3a8ddfb.zip'
venv start: getenv /root/savanna/.tox/venv
venv reusing: /root/savanna/.tox/venv
venv finish: getenv after 0.03 seconds
venv start: installpkg /root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip
venv inst-nodeps: /root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip
setting PATH=/root/savanna/.tox/venv/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
/root/savanna$ /root/savanna/.tox/venv/bin/pip install --pre /root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip -U --no-deps /root/savanna/.tox/venv/log/venv-10.log
venv finish: installpkg after 2.85 seconds
venv start: runtests
venv runtests: commands[0] | python --version
setting PATH=/root/savanna/.tox/venv/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
/root/savanna$ /root/savanna/.tox/venv/bin/python --version
Python 2.6.6
venv finish: runtests after 0.00 seconds
_ summary __
venv: commands succeeded
congratulations :)

# tox -evenv -- savanna-api --config-file etc/savanna/savanna.conf -d
GLOB sdist-make: /root/savanna/setup.py
venv inst-nodeps: /root/savanna/.tox/dist/savanna-0.2.a26.g3a8ddfb.zip
venv runtests: commands[0] | savanna-api --config-file etc/savanna/savanna.conf -d
/root/savanna/.tox/venv/lib/python2.6/site-packages/sqlalchemy/engine/strategies.py:117: SADeprecationWarning: The 'listeners' argument to Pool (and create_engine()) is deprecated. Use event.listen(). pool = poolclass(creator, **pool_args)
/root/savanna/.tox/venv/lib/python2.6/site-packages/sqlalchemy/pool.py:160: SADeprecationWarning: Pool.add_listener is deprecated. Use event.listen() self.add_listener(l)
2013-09-13 15:28:23.443 4783 DEBUG savanna.plugins.base [-] List of requested plugins: ['vanilla', 'mango'] _load_all_plugins /root/savanna/.tox/venv/lib/python2.6/site-packages/savanna/plugins/base.py:113
2013-09-13 15:28:23.501 4783 CRITICAL savanna [-] [Errno 2] No such file or directory: '/root/savanna/.tox/venv/lib/python2.6/site-packages/savanna/plugins/mango/resources/core-default.xml'
ERROR: InvocationError: '/root/savanna/.tox/venv/bin/savanna-api --config-file etc/savanna/savanna.conf -d'
_ summary __
ERROR: venv: commands failed

___ OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
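Alexander's diagnosis can be reproduced in miniature: the `setup.py sdist` step in the tox output is git-driven, so a resource file that was copied into the tree but never `git add`-ed simply never reaches the built zip or the installed venv, even though it sits in the working directory. A throwaway-repo sketch of that behavior (the paths mirror the thread; the mango plugin itself is Arindam's local work):

```shell
set -e
repo=$(mktemp -d)                 # throwaway stand-in for the savanna checkout
cd "$repo"
git init -q .
mkdir -p savanna/plugins/mango/resources
touch savanna/plugins/mango/resources/core-default.xml
git ls-files | wc -l | tr -d ' '  # prints 0: the untracked file is invisible
git add savanna/plugins/mango/resources/core-default.xml
git ls-files | wc -l | tr -d ' '  # prints 1: only now can packaging see it
```

So the fix in Arindam's real checkout is a `git add` of every new xml/resource file under savanna/plugins/mango/ before re-running tox.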
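As for why this surfaces as a CRITICAL error at daemon startup rather than later: vanilla-style plugins read their bundled XML defaults while the plugin list is being loaded, so a file missing from the installed package kills savanna-api immediately. A hypothetical helper (not savanna's actual code) illustrating that failure mode:

```python
import os


def load_plugin_resource(plugin_dir, name):
    """Read a resource file bundled with a plugin; fail fast if packaging
    missed it (illustrative sketch, not savanna's real loader)."""
    path = os.path.join(plugin_dir, "resources", name)
    if not os.path.exists(path):
        # Mirrors the CRITICAL line in the log above:
        # [Errno 2] No such file or directory: '.../resources/core-default.xml'
        raise IOError(2, "No such file or directory", path)
    with open(path) as f:
        return f.read()
```

Because the read happens during `_load_all_plugins`, there is no point at which the daemon could come up and fail lazily; the missing file aborts startup, which is exactly what the InvocationError shows.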