Re: [openstack-dev] [sahara] Proposing Vitaly Gridnev to core reviewer team
+1! On 10/12/2015 07:19 AM, Sergey Lukjanov wrote: Hi folks, I'd like to propose Vitaly Gridnev as a member of the Sahara core reviewer team. Vitaly has been contributing to Sahara for a long time and is doing a great job of reviewing and improving Sahara. Here are the statistics for reviews [0][1][2] and commits [3]. Existing Sahara core reviewers, please vote +1/-1 for the addition of Vitaly to the core reviewer team. Thanks. [0] https://review.openstack.org/#/q/reviewer:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z [1] http://stackalytics.com/report/contribution/sahara-group/180 [2] http://stackalytics.com/?metric=marks&user_id=vgridnev [3] https://review.openstack.org/#/q/status:merged+owner:%22Vitaly+Gridnev+%253Cvgridnev%2540mirantis.com%253E%22,n,z -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [sahara] Proposing Ethan Gafford for the core reviewer team
On 08/13/2015 10:56 AM, Sergey Lukjanov wrote: Hi folks, I'd like to propose Ethan Gafford as a member of the Sahara core reviewer team. Ethan has been contributing to Sahara for a long time and is doing a great job of reviewing and improving Sahara. Here are the statistics for reviews [0][1][2] and commits [3]. BTW, Ethan is already a stable-maint team core for Sahara. Existing Sahara core reviewers, please vote +1/-1 for the addition of Ethan to the core reviewer team. Thanks. [0] https://review.openstack.org/#/q/reviewer:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22,n,z [1] http://stackalytics.com/report/contribution/sahara-group/90 [2] http://stackalytics.com/?user_id=egafford&metric=marks [3] https://review.openstack.org/#/q/owner:%22Ethan+Gafford+%253Cegafford%2540redhat.com%253E%22+status:merged,n,z -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. +1 ethan has really taken to sahara, providing valuable input to both development and deployments, as well as taking on the manila integration
Re: [openstack-dev] [sahara] team meeting Nov 27 1800 UTC
On 11/26/2014 01:10 PM, Sergey Lukjanov wrote: Hi folks, We'll be having the Sahara team meeting as usual in the #openstack-meeting-alt channel. Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20141127T18 -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. fyi, it's the Thanksgiving holiday for folks in the US, so we'll be absent. best, matt ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [sahara] Nominate Sergey Reshetniak to sahara-core
On 11/11/2014 12:35 PM, Sergey Lukjanov wrote: Hi folks, I'd like to propose Sergey to sahara-core. He's done a lot of work on different parts of Sahara and has very good knowledge of the codebase, especially in the plugins area. Sergey has been consistently giving us very well thought out and constructive reviews for the Sahara project. Sahara core team members, please vote +/- 2. Thanks. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. +2
Re: [openstack-dev] [sahara] Nominate Michael McCune to sahara-core
On 11/11/2014 12:37 PM, Sergey Lukjanov wrote: Hi folks, I'd like to propose Michael McCune to sahara-core. He has good knowledge of the codebase and has implemented important features such as Swift auth using trusts. Mike has been consistently giving us very well thought out and constructive reviews for the Sahara project. Sahara core team members, please vote +/- 2. Thanks. -- Sincerely yours, Sergey Lukjanov Sahara Technical Lead (OpenStack Data Processing) Principal Software Engineer Mirantis Inc. +2
Re: [openstack-dev] [Sahara] Verbosity of Sahara overview image
On 09/26/2014 02:27 PM, Sharan Kumar M wrote: Hi all, I am trying to modify the diagram in http://docs.openstack.org/developer/sahara/overview.html so that it syncs with the contents. In the diagram, would it be nice to mark the connections between the OpenStack components, like Nova with Cinder, Nova with Swift, components with Keystone, Nova with Neutron, etc.? Or would that be too verbose for this diagram, and should I focus on the links between Sahara and the other components? Thanks, Sharan Kumar M http://docs.openstack.org/developer/sahara/architecture.html has a better diagram imho. i think the diagram should focus on links between sahara and other components only. best, matt
Re: [openstack-dev] [Sahara] Error config?
On 07/09/2014 07:24 AM, Dat Tran wrote: Hi all, I'm installing Sahara following the instructions below: http://docs.openstack.org/developer/sahara/devref/quickstart.html With a Sahara install script as follows: https://docs.google.com/document/d/18j4zR4ENibxA-WBVkryzkMFU9PRiuop3U2pEU6JQrOk/edit But when I get to the step "Register image in Image Registry": http POST $SAHARA_URL/images/$IMAGE_ID X-Auth-Token:$AUTH_TOKEN username=ubuntu I get this response:
HTTP/1.1 401 Unauthorized
Content-Length: 23
Content-Type: text/plain
Date: Wed, 09 Jul 2014 10:28:22 GMT
Www-Authenticate: Keystone uri='http://127.0.0.1:5000/v2.0/'
Authentication required
Do you know where I went wrong? Thank you very much :) use the sahara cli instead: sahara image-register --id $IMAGE_ID --username ubuntu best, matt
Re: [openstack-dev] [Sahara] Error config?
On 07/09/2014 11:26 AM, Dat Tran wrote: Thanks matt! But I have an error: ERROR: Could not find Sahara endpoint in catalog you do have to set up sahara in the keystone service catalog, something like step 4 from docs.openstack.org/developer/sahara/horizon/installation.guide.html#sahara-dashboard-installation:
keystone service-create --name sahara --type data_processing \
  --description "Sahara Data Processing"
keystone endpoint-create --service sahara --region RegionOne \
  --publicurl http://10.0.0.2:8386/v1.1/%(tenant_id)s \
  --adminurl http://10.0.0.2:8386/v1.1/%(tenant_id)s \
  --internalurl http://10.0.0.2:8386/v1.1/%(tenant_id)s
best, matt
Re: [openstack-dev] Mahout-as-a-service [sahara]
On 05/28/2014 12:37 PM, Dat Tran wrote: Hi everyone, I have an idea for a new project: Mahout-as-a-service. The main idea of this project: - Install OpenStack - Deploy the OpenStack Sahara source - Deploy Mahout on the Sahara OpenStack system - Build the API. Through a web or mobile interface, users can: - Enable / disable Mahout on a Hadoop cluster - Run Mahout jobs - Get monitoring information related to Mahout jobs - Get statistics and service costs over time and total resource use. Definitely!!! APIs will be public. Looking forward to your comments. Hopefully this summer we can do something together. Thank you very much! :) dat, since mahout is a great ml library that leverages mapreduce (and now spark and h2o), it may be simpler for you to make sure that mahout is installed by the various sahara plugins. in fact, i bet you could run mahout jobs using edp and the java action right now in sahara. if that's true it's probably a bit clunky and worth the effort to streamline. best, matt
Re: [openstack-dev] [sahara] summit wrap-up: subprojects
On 05/29/2014 07:23 AM, Alexander Ignatov wrote: On 28 May 2014, at 20:02, Sergey Lukjanov slukja...@mirantis.com wrote: sahara-image-elements We're agreed that some common parts should be merged into the diskimage-builder repo (like java support, ssh, etc.). The main issue with keeping -image-elements separate is how to release them and provide a mapping of sahara version to elements version. You can find different options in the etherpad [0]; I'll write here about the option that I think will work best for us. So, the idea is that sahara-image-elements is a bunch of scripts and tools for building images for Sahara. It's highly coupled with the plugins' code in Sahara, so we need to align them well. The current default decision is to keep aligned versioning like 2014.1, etc. It'll be discussed at the weekly irc team meeting May 29. I vote to keep sahara-image-elements as a separate repo and release it as you, Sergey, propose. If the two repos are collapsed, I see problems with sahara-ci running the whole bunch of integration tests, checking image-elements and core sahara code, on each patch sent to the sahara repo. this problem was raised during the design summit and i thought the resolution was that sahara-ci could be smart about which set of itests it ran. for instance, a change in the elements would trigger an image rebuild, a change outside the elements would trigger service itests. a change that covered both elements and the service could trigger all tests. is that still possible? best, matt
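The routing matt describes (elements change triggers an image rebuild, service change triggers service itests, both trigger everything) can be sketched as a small path-based filter. This is only an illustration; the `elements/` prefix and the suite names are assumptions, not sahara-ci's actual configuration:

```python
def suites_for_change(changed_paths):
    """Pick which integration test suites to run for a patch, based on
    which files it touches. The 'elements/' prefix is an assumed repo
    layout for a hypothetical combined sahara + image-elements tree."""
    touches_elements = any(p.startswith("elements/") for p in changed_paths)
    touches_service = any(not p.startswith("elements/") for p in changed_paths)
    suites = set()
    if touches_elements:
        suites.add("image-build")   # rebuild and validate images
    if touches_service:
        suites.add("service-itests")  # run service integration tests
    return suites

# a change touching both areas triggers all suites
print(suites_for_change(["elements/java/install.sh", "sahara/main.py"]))
```

A patch touching only `elements/` would run just the image build, which is the kind of selectivity the summit discussion assumed was possible.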
Re: [openstack-dev] [sahara] summit wrap-up: subprojects
On 05/29/2014 09:59 AM, Trevor McKay wrote: below, sahara-extra sahara-extra Keep it as is, no need to stop releasing, because we're not publishing anything to pypi. No real need for tags. Even if we keep the repo for now, I think we could simplify a little bit. The edp-examples could be moved to the Sahara repo. Some of those examples we use in the integration tests anyway -- why have them duplicated? +1
Re: [openstack-dev] [sahara] summit wrap-up: subprojects
On 05/29/2014 10:15 AM, Michael McCune wrote: - Original Message - Re sahara-image-elements: we found a bunch of issues that we should solve, and that's why I think that keeping the current releasing is still the best option. - we should test it better and depend on a stable diskimage-builder version. The dib is now published to pypi, so we could make sahara-image-elements dib-style and publish it to pypi in the same style. That enables us to add some sanity tests for image checking and add gate jobs for running them (it could be done anyway, but this approach with a separate repo looks more consistent). Developing sahara-image-elements as a pip-installable project, we could add diskimage-builder to its requirements.txt and manage its version; that'll give us good flexibility - for example, we'll be able to specify the latest dib release. - all scripts and dib will not be installed with sahara (50/50) I think if we are going to make sahara-image-elements into a full-fledged pypi package we should refactor diskimage-create.sh into a python script. It will give us better options for argument parsing and, I feel, more control over the flow of operations. mike the image-elements are too unstable to be used by anyone but an expert at this point. imho we should make sure the experts produce working images first, it's what our users will need in the first place, then make the image generation more stable. best, matt
Re: [openstack-dev] [sahara] summit wrap-up: backward compat
On 05/29/2014 10:22 AM, Sergey Lukjanov wrote: So, it looks like we have an agreement on all questions. There is only one technical question - keeping released images means that we need to keep the whole matrix of images: plugin X version X OS [X root-password]. I'll take a look at the total size of them and the ability to publish them on OS infra. that's definitely an upper bound. in practice it will be considerably less. for juno we'd have -
. vanilla hadoop1 fedora
. vanilla hadoop1 ubuntu
. vanilla hadoop1 centos6
. ?vanilla hadoop1 centos7?
. vanilla hadoop2 fedora
. vanilla hadoop2 ubuntu
. vanilla hadoop2 centos6
. ?vanilla hadoop2 centos7?
. hdp hadoop1 centos
. hdp hadoop2 centos
. spark ubuntu
. ?spark fedora?
. ?spark centos?
i do not think we should release any images that have a root password set (essentially a backdoor). for K we should deprecate the hadoop1 versions and thus significantly cut the size of the new image artifact. best, matt
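matt's "upper bound vs. practice" point can be made concrete with a few lines. The plugin/version/OS sets below come from the confirmed (non-"?") entries in the list above, and the deprecation filter mirrors the "deprecate hadoop1 in K" suggestion; this is a back-of-the-envelope sketch, not sahara's actual release tooling:

```python
from itertools import product

# Plugin -> (versions, supported OS targets); taken from the Juno list above,
# omitting the entries marked with "?". Spark carries no hadoop version here.
MATRIX = {
    "vanilla": (["hadoop1", "hadoop2"], ["fedora", "ubuntu", "centos6"]),
    "hdp": (["hadoop1", "hadoop2"], ["centos"]),
    "spark": ([""], ["ubuntu"]),
}
# Versions proposed for deprecation in the K cycle.
DEPRECATED = {("vanilla", "hadoop1"), ("hdp", "hadoop1")}

def images(matrix, deprecated=frozenset()):
    """Enumerate (plugin, version, os) images, skipping deprecated versions."""
    return [(plugin, version, os_name)
            for plugin, (versions, oses) in sorted(matrix.items())
            for version, os_name in product(versions, oses)
            if (plugin, version) not in deprecated]

juno = images(MATRIX)                  # full confirmed matrix: 9 images
k_cycle = images(MATRIX, DEPRECATED)   # hadoop1 dropped: 5 images
```

Dropping hadoop1 roughly halves the matrix, which is the size cut matt anticipates for K.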
Re: [openstack-dev] [sahara] summit wrap-up: subprojects
On 05/28/2014 12:02 PM, Sergey Lukjanov wrote: Hey folks, it's a small wrap-up for the topic Sahara subprojects releasing and versioning that was discussed partially at the summit and requires some more discussion. You can find details in [0]. common We'll include only one tarball for sahara on the release launchpad pages. All other links will be provided in docs. safe to assume this is in addition to the client tarball? sahara-dashboard The merging-into-Horizon process is now in progress. We've decided that j1 is the deadline for merging the main code parts, and during j2 all the code should be merged into Horizon; so, if by j2 some work on merging sahara-dashboard into Horizon is not done, we'll need to fall back to a separate sahara-dashboard repo release for the Juno cycle and continue merging the code into Horizon, to be able to completely kill the sahara-dashboard repo in the K release. we really need to kill sahara-dashboard before the juno release Where should we keep our UI integration tests? ideally w/ the code it tests, so horizon. are there problems w/ that approach? as a fallback they can go into the sahara repo sahara-image-elements We're agreed that some common parts should be merged into the diskimage-builder repo (like java support, ssh, etc.). The main issue with keeping -image-elements separate is how to release them and provide a mapping of sahara version to elements version. You can find different options in the etherpad [0]; I'll write here about the option that I think will work best for us. So, the idea is that sahara-image-elements is a bunch of scripts and tools for building images for Sahara. It's highly coupled with the plugins' code in Sahara, so we need to align them well. The current default decision is to keep aligned versioning like 2014.1, etc. It'll be discussed at the weekly irc team meeting May 29.
i vote for merging sahara-image-elements into the sahara repo and keeping the strategic direction that common-enough elements get pushed to diskimage-builder sahara-extra Keep it as is, no need to stop releasing, because we're not publishing anything to pypi. No real need for tags. we still need to figure out the examples and swift plugin, but it seems reasonable to punt that from the juno cycle if there is no bandwidth open questions If you have any objections to this model, please share your thoughts before June 3, due to Juno-1 (June 12), to have enough time to apply the selected approach. [0] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward so ideal situation imho -
. sahara (includes image elements and possibly ui tests)
. python-saharaclient (as before)
. sahara-extra (handle later)
. horizon (everything that was in sahara-dashboard)
this misses the puppet modules. possibly they should also be merged into the sahara repo. best, matt
Re: [openstack-dev] [sahara] summit wrap-up: backward compat
On 05/28/2014 09:14 AM, Sergey Lukjanov wrote: Hey folks, it's a small wrap-up for the two topics Sahara backward compat and Hadoop cluster backward compatibility; both were discussed at the design summit, and etherpad [0] contains info about them. There are some open questions listed at the end of this email - please don't skip them :) Sahara backward compat Keeping released APIs stable since the Icehouse release. So, for now we have one stable API v1.1 (and v1.0 as a subset of it). Any change to existing semantics requires a new API version; how to handle additions is an open question. As part of the API stability decision, the python client should work with all previous Sahara versions. The API of python-saharaclient should be stable itself, because we aren't limiting the client version for an OpenStack release; so client v123 shouldn't change its own API exposed to a user who is working with stable-release REST API versions. for juno we should just have a v1 api (there can still be a v1.1 endpoint, but it should be deprecated), and maybe a v2 api +1 any semantic changes require a new major version number +1 api should only have a major number (no 1.1 or 2.1) Hadoop cluster backward compat It was decided to keep released versions of cluster (Hadoop) plugins for at least the next release; so if we have vanilla-2.0.1 released as part of Icehouse, then we could remove its support only after releasing it as part of Juno with a note that it's deprecated and will not be available in the next release. Additionally, we've decided to add some docs with upgrade recommendations. we should only be producing images for the currently supported plugin versions. images for deprecated versions can be found with the releases where the version wasn't deprecated. best, matt Open questions 1. How should we handle addition of new functionality to the API - should we bump the minor version and just add new endpoints? 2. For what period of time should we keep a deprecated API and the client for it? 3.
How should we publish all images and/or keep the image-building process stable for plugins? [0] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward Thanks.
Re: [openstack-dev] [sahara] summit wrap-up: backward compat
On 05/28/2014 01:59 PM, Michael McCune wrote: - Original Message - Open questions 1. How should we handle addition of new functionality to the API - should we bump the minor version and just add new endpoints? I think we should not include the minor revision number in the url. Looking at some of the other projects (nova, keystone), it looks like the preference is to make the version endpoint able to return information about the specific version implemented. I think going forward, if we choose to bump the minor version for small features, we can just change what the version endpoint returns. Any client would then be able to decide whether it can use newer features based on the version reported in the return value. If we maintain a consistent version api endpoint, then I don't see an issue with increasing the minor version based on new features being added. But I only endorse this if we decide to solidify the version endpoint (e.g. /v2, not /v2.1). I realize this creates some confusion, as we already have /v1 and /v1.1. I'm guessing we might subsume v1.1 at the point in time when we choose to deprecate. i agree, no minor version number. we should even collapse v1.1 and v1 for juno. i don't think we need a capability discovery step. the api should already properly respond w/ 404 for endpoints that do not exist. the concern about only discovering a function isn't available until a few steps into a call sequence can be addressed with upfront endpoint detection. and i think this is an extremely rare corner case. 2. For what period of time should we keep a deprecated API and the client for it? Not sure what the standard for OpenStack projects is, but I would imagine we keep the deprecated API version for one release to give users time to migrate. i'd say 1-2 cycles. pragmatically, we will probably never be able to remove api versions. 3. How should we publish all images and/or keep the image-building process stable for plugins? This is a good question; I don't have a strong opinion at this time.
My gut feeling is that we should maintain official images somewhere, but I realize this introduces more work in maintenance. for each release we should distribute images for the non-deprecated plugin versions. best, matt
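Mike's "client decides based on the reported version" idea is easy to sketch. The version ids below (v1, v1.1, v2) match the thread's examples, but the idea of a root endpoint returning a flat list of version strings is a hypothetical shape, not sahara's actual response format:

```python
def highest_common_version(server_versions, client_supported):
    """Given version ids reported by a (hypothetical) root version
    endpoint and the ids this client implements, pick the newest
    version both sides support."""
    def key(v):
        # "v1.1" -> (1, 1); "v2" -> (2, 0), so majors compare correctly
        parts = v.lstrip("v").split(".")
        return tuple(int(p) for p in parts) + (0,) * (2 - len(parts))

    common = set(server_versions) & set(client_supported)
    if not common:
        raise RuntimeError("no API version shared with server")
    return max(common, key=key)

# an older client against a newer server quietly keeps working
print(highest_common_version(["v1", "v1.1", "v2"], ["v1", "v1.1"]))
```

With negotiation like this, bumping the minor version for additive features never strands a client; it simply ignores versions it does not know.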
Re: [openstack-dev] [sahara] summit wrap-up: backward compat
On 05/28/2014 03:50 PM, Andrew Lazarev wrote: for juno we should just have a v1 api (there can still be a v1.1 endpoint, but it should be deprecated), and maybe a v2 api +1 any semantic changes require new major version number +1 api should only have a major number (no 1.1 or 2.1) In this case we will end up with a new major number each release, even if no major changes were done. a semantic addition (e.g. adding EDP and v1.1) doesn't warrant a new version. so more specifically: +1 any change in existing semantics requires a new major version number. but maybe i'm missing why we'd end up w/ a new version per release best, matt
Re: [openstack-dev] [sahara] Nominate Trevor McKay for sahara-core
On 05/12/2014 05:31 PM, Sergey Lukjanov wrote: Hey folks, I'd like to nominate Trevor McKay (tmckay) for sahara-core. He is among the top reviewers of Sahara subprojects. Trevor has been working on Sahara full time since summer 2013 and is very familiar with the current codebase. His code contributions and reviews have demonstrated good knowledge of Sahara internals. Trevor has valuable knowledge of the EDP part and Hadoop itself. He's working on both bug fixes and new feature implementation. Some links: http://stackalytics.com/report/contribution/sahara-group/30 http://stackalytics.com/report/contribution/sahara-group/90 http://stackalytics.com/report/contribution/sahara-group/180 https://review.openstack.org/#/q/owner:tmckay+sahara+AND+-status:abandoned,n,z https://launchpad.net/~tmckay Sahara cores, please reply with +1/0/-1 votes. Thanks. +1
Re: [openstack-dev] [sahara] Design Summit Sessions
bummer. it seems to me like having the api discussion on the same day as the other outwardly facing topics would be a good idea. best, matt On 04/28/2014 10:29 AM, Sergey Lukjanov wrote: Matt, I'd like to keep the v2 api discussion at the end of our design sessions track, to have enough input from the other areas. IMO we should discuss first what we need to have, and then how it'll look. On Fri, Apr 25, 2014 at 9:29 PM, Matthew Farrellee m...@redhat.com wrote: On 04/24/2014 10:51 AM, Sergey Lukjanov wrote: Hey folks, I've pushed the draft schedule for Sahara sessions at the ATL design summit. The description isn't fully complete; I'm working on it. I'll finish it by the end of the week and add an etherpad to each session. Sahara folks, please take a look at the schedule and share your thoughts / comments. Thanks. http://junodesignsummit.sched.org/overview/type/sahara+%28ex-savanna%29 will you swap the v2-api and scalable slots? part of it will flow into ux re image-registry. maybe add some error handling / state machine to the ux improvements best, matt
Re: [openstack-dev] [sahara] cancel next team meeting May 1
On 04/25/2014 07:23 AM, Sergey Lukjanov wrote: Hey folks, May 1 is a non-working day in Russia and I'm starting traveling the next day, so I'll not be able to chair it. So, I'm proposing to cancel this meeting. Any thoughts/objections? if folks have topics they'd like to cover, use the mailing list see you all at summit! best, matt
Re: [openstack-dev] [sahara] Design Summit Sessions
On 04/24/2014 10:51 AM, Sergey Lukjanov wrote: Hey folks, I've pushed the draft schedule for Sahara sessions at the ATL design summit. The description isn't fully complete; I'm working on it. I'll finish it by the end of the week and add an etherpad to each session. Sahara folks, please take a look at the schedule and share your thoughts / comments. Thanks. http://junodesignsummit.sched.org/overview/type/sahara+%28ex-savanna%29 will you swap the v2-api and scalable slots? part of it will flow into ux re image-registry. maybe add some error handling / state machine to the ux improvements best, matt
[openstack-dev] [sahara] apache hive no longer distributing 0.11.0 - image-elements don't build
https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-ubuntu/d37fe82/console.html https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-fedora/da83f57/console.html https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-centos/27d14d1/console.html all fail with...
--2014-04-21 22:35:45-- http://www.apache.org/dist/hive/hive-0.11.0/hive-0.11.0-bin.tar.gz
Resolving www.apache.org (www.apache.org)... 192.87.106.229, 140.211.11.131, 2001:610:1:80bc:192:87:106:229
Connecting to www.apache.org (www.apache.org)|192.87.106.229|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2014-04-21 22:35:45 ERROR 404: Not Found.
it looks like the 0.11.0 tarball is no more, http://www.apache.org/dist/hive/ it'll take me a day or two to build up an image w/ 0.12.0 (or better: 0.13.0) and see if it works so we can upgrade the dib scripts. if someone from mirantis wants to cache a copy of 0.11.0 on sahara-files, update the url and file a bug about updating to 0.13.0, i'll fast track a +2/+A. best, matt
Re: [openstack-dev] [sahara] apache hive no longer distributing 0.11.0 - image-elements don't build
On 04/22/2014 08:21 AM, Matthew Farrellee wrote: https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-ubuntu/d37fe82/console.html https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-fedora/da83f57/console.html https://sahara.mirantis.com/logs/97/86997/4/check/diskimage-integration-centos/27d14d1/console.html all fail with...
--2014-04-21 22:35:45-- http://www.apache.org/dist/hive/hive-0.11.0/hive-0.11.0-bin.tar.gz
Resolving www.apache.org (www.apache.org)... 192.87.106.229, 140.211.11.131, 2001:610:1:80bc:192:87:106:229
Connecting to www.apache.org (www.apache.org)|192.87.106.229|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2014-04-21 22:35:45 ERROR 404: Not Found.
it looks like the 0.11.0 tarball is no more, http://www.apache.org/dist/hive/ it'll take me a day or two to build up an image w/ 0.12.0 (or better: 0.13.0) and see if it works so we can upgrade the dib scripts. if someone from mirantis wants to cache a copy of 0.11.0 on sahara-files, update the url and file a bug about updating to 0.13.0, i'll fast track a +2/+A. best, matt fyi - i found a copy of hive 0.11.0 in archive.apache.org and have filed, https://review.openstack.org/#/c/89561/ best, matt
Re: [openstack-dev] [horizon][sahara] Merging Sahara-UI Dashboard code into horizon
On 04/17/2014 03:06 PM, Chad Roberts wrote: Per blueprint https://blueprints.launchpad.net/horizon/+spec/merge-sahara-dashboard we are merging the Sahara Dashboard UI code into the Horizon code base. Over the last week I have been working on making this merge happen, and along the way some interesting questions have come up. Hopefully, together we can make the best possible decisions. Sahara is the Data Processing platform for OpenStack. During incubation, and prior to that, a horizon dashboard plugin was developed to work with the data processing api. Our original implementation was a separate dashboard that we would activate by adding to HORIZON_CONFIG and INSTALLED_APPS. The layout gave us a root of Sahara on the same level as Admin and Project. Under Sahara, we have 9 panels that make up the entirety of the functionality of the Sahara dashboard. Over the past week, at least 2 questions have come up. I'd like to get input from anyone interested. 1) Where should the functionality live within the Horizon UI? So far, 2 options have been presented. a) In a separate dashboard (same level as Admin and Project). This is what we had in the past, but it doesn't seem to fit the flow of Horizon very well. I had a review up for this method at one point, but it was shot down, so it is currently abandoned. b) In a panel group under Project. This is what I have started work on recently. This seems to mimic the way other things have been integrated, but more than one person has disagreed with this approach. c) Any other options? 2) Where should the code actually reside? a) Under openstack_dashboards/dashboards/sahara (or data_processing). This was the initial approach when the target was a separate dashboard. b) Have all 9 panels reside in openstack_dashboards/dashboards/project. To me, this is likely to eventually make a mess of /project if more and more things are integrated there.
c) Place all 9 data_processing panels under openstack_dashboards/dashboards/project/data_processing. This essentially groups the code by panel group and might make for a bit less mess. d) Somewhere else? The current plan is to discuss this at the next Horizon weekly meeting, but even if you can't be there, please do add your thoughts to this thread. Thanks, Chad Roberts (crobertsrh on irc) hopefully (1) can be altered after the merge based on ux evaluation, so i'd say go w/ the most consistent approach to start: (b). best, matt
Re: [openstack-dev] [savanna] Savanna 2014.1.b3 (Icehouse-3) dev milestone available
On 03/06/2014 04:00 PM, Sergey Lukjanov wrote: Hi folks, the third development milestone of the Icehouse cycle is now available for Savanna. Here is a list of new features and fixed bugs: https://launchpad.net/savanna/+milestone/icehouse-3 and here you can find tarballs to download: http://tarballs.openstack.org/savanna/savanna-2014.1.b3.tar.gz http://tarballs.openstack.org/savanna-dashboard/savanna-dashboard-2014.1.b3.tar.gz http://tarballs.openstack.org/savanna-image-elements/savanna-image-elements-2014.1.b3.tar.gz http://tarballs.openstack.org/savanna-extra/savanna-extra-2014.1.b3.tar.gz There were 20 blueprints implemented and 45 bugs fixed during the milestone. It includes the savanna, savanna-dashboard, savanna-image-elements and savanna-extra sub-projects. In addition, python-savannaclient 0.5.0, released earlier this week, supports all new features introduced in this savanna release. Thanks. rdo packages -
f21 - savanna - http://koji.fedoraproject.org/koji/taskinfo?taskID=6634141
el6 - savanna - http://koji.fedoraproject.org/koji/taskinfo?taskID=6634119
f21 - python-django-savanna - http://koji.fedoraproject.org/koji/taskinfo?taskID=6634139
el6 - python-django-savanna - http://koji.fedoraproject.org/koji/taskinfo?taskID=6634116
best, matt
Re: [openstack-dev] [sahara] team meeting minutes March 13 [savanna]
On 03/13/2014 03:24 PM, Jay Pipes wrote: On Thu, 2014-03-13 at 23:13 +0400, Sergey Lukjanov wrote: Thanks everyone who have joined Savanna meeting. You mean Sahara? :P -jay sergey now has to put some bitcoins in the jar... ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [sahara] context module update
andrew, chad, trevor, please take another look at https://review.openstack.org/#/c/78208/ best, matt ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [sahara] Sahara (ex. Savanna) project renaming process [savanna]
On 03/07/2014 04:50 PM, Sergey Lukjanov wrote: Hey folks, we're now starting work on the project renaming. You can find details in the etherpad [0]. We'll move all work items to blueprints, one blueprint per sub-project, to better track progress and work items. The general blueprint is [1]; it'll depend on all other blueprints and currently consists of general renaming tasks. The current plan is to assign each subproject blueprint to a volunteer. Please contact me and Matthew Farrellee if you'd like to take a renaming bp. Please share your ideas/suggestions in the ML or the etherpad. [0] https://etherpad.openstack.org/p/savanna-renaming-process [1] https://blueprints.launchpad.net/openstack?searchtext=savanna-renaming Thanks. P.S. Please prepend email topics with [sahara] and append [savanna] to the end of the topic (like in this email) for the transition period.

savann^wsahara team, i've separated out most of the activities that can happen in parallel, aligned them on repository boundaries, and filed blueprints for the efforts. now we need community members to take ownership (be the assignee) of the blueprints. taking ownership means you'll be responsible for the renaming in the repository, coordinating with other owners and getting feedback from the community about important questions (such as compatibility requirements). to take ownership, just go to the blueprint and assign it to yourself. if there is already an assignee, reach out to that person and offer them assistance.

blueprints up for grabs -

what: savanna^wsahara ci
blueprint: https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-ci
comments: this should be taken by someone already familiar with the ci. i'd nominate skolekonov

what: sahara puppet modules
blueprint: https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-puppet
comments: this should be taken by someone who can validate the changes. i'd nominate sbadia or dizz

what: sahara extras
blueprint: https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-extra
comments: this could be taken by anyone

what: sahara dib image elements
blueprint: https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-image-elements
comments: this could be taken by anyone

what: sahara python client
blueprint: https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-client
comments: this should be done by someone w/ experience in the client. i'd nominate tmckay

what: sahara horizon plugin
blueprint: https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-dashboard
comments: this will require experience and care. i'd nominate croberts

what: sahara guestagent
blueprint: https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-guestagent
comments: i'd nominate dmitrymex

what: sahara section of openstack wiki
blueprint: https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-wiki
comments: this could be taken by anyone

what: sahara service
blueprint: https://blueprints.launchpad.net/savanna/+spec/savanna-renaming-service
comments: this requires experience and care, and is a lot of work. i'd nominate alazarev and aignatov to tag team it
Re: [openstack-dev] [savanna] client 0.5.0 release
builds for fedora rawhide and epel6 - rawhide - http://koji.fedoraproject.org/koji/taskinfo?taskID=6582748 epel6 - http://koji.fedoraproject.org/koji/taskinfo?taskID=6582798 On 02/25/2014 03:50 PM, Sergey Lukjanov wrote: Hi folks, I'm glad to announce that python-savannaclient v0.5.0 is released! pypi: https://pypi.python.org/pypi/python-savannaclient/0.5.0 tarball: http://tarballs.openstack.org/python-savannaclient/python-savannaclient-0.5.0.tar.gz launchpad: https://launchpad.net/python-savannaclient/0.5.x/0.5.0 Notes: * it's the first release with a CLI covering nearly all features; * dev docs moved to the client from the main repo; * support for all new Savanna features introduced in the Icehouse release cycle; * a single common entrypoint, currently savannaclient.client.Client('1.1'); * auth improvements; * base resource class improvements; * 93 commits since the previous release. Thanks. On Thu, Feb 20, 2014 at 3:53 AM, Sergey Lukjanov slukja...@mirantis.com wrote: Additionally, it contains support for the latest EDP features. On Thu, Feb 20, 2014 at 3:52 AM, Sergey Lukjanov slukja...@mirantis.com wrote: Hi folks, I'd like to make a 0.5.0 release of the savanna client soon; please share your thoughts about stuff that should be included in it. Currently we have the following major changes/fixes: * a mostly implemented CLI; * a unified entry point for python bindings like other OpenStack clients; * auth improvements; * base resource class improvements. Full diff: https://github.com/openstack/python-savannaclient/compare/0.4.1...master Thanks. -- Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc.
Re: [openstack-dev] [savanna] Nominate Andrew Lazarew for savanna-core
On 02/19/2014 05:40 PM, Sergey Lukjanov wrote: Hey folks, I'd like to nominate Andrew Lazarew (alazarev) for savanna-core. He is among the top reviewers of Savanna subprojects. Andrew has been working on Savanna full time since September 2013 and is very familiar with the current codebase. His code contributions and reviews have demonstrated a good knowledge of Savanna internals. Andrew has valuable knowledge of both the core and EDP parts, the IDH plugin and Hadoop itself. He works on both bug fixes and new feature implementation. Some links: http://stackalytics.com/report/reviews/savanna-group/30 http://stackalytics.com/report/reviews/savanna-group/90 http://stackalytics.com/report/reviews/savanna-group/180 https://review.openstack.org/#/q/owner:alazarev+savanna+AND+-status:abandoned,n,z https://launchpad.net/~alazarev Savanna cores, please reply with +1/0/-1 votes. Thanks. -- Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc. fyi, some of those links don't work, but these do, http://stackalytics.com/report/contribution/savanna-group/30 http://stackalytics.com/report/contribution/savanna-group/90 http://stackalytics.com/report/contribution/savanna-group/180 i'm very happy to see andrew evolving in the savanna community, making meaningful contributions, demonstrating a reasoned approach to resolving disagreements, and following guidelines such as GitCommitMessages more closely. i expect he will continue his growth as well as influence others to contribute positively. +1 best, matt
[openstack-dev] [savanna] plugin version or hadoop version?
$ savanna plugins-list
+---------+----------+---------------------------+
| name    | versions | title                     |
+---------+----------+---------------------------+
| vanilla | 1.2.1    | Vanilla Apache Hadoop     |
| hdp     | 1.3.2    | Hortonworks Data Platform |
+---------+----------+---------------------------+

above is output from the /plugins endpoint - http://docs.openstack.org/developer/savanna/userdoc/rest_api_v1.0.html#plugins

the question is, should the version be the version of the plugin or the version of hadoop the plugin installs? i ask because it seems like we have version == plugin version for hdp and version == hadoop version for vanilla. the documentation is somewhat vague on the subject, mostly stating version without qualification. however, the json passed to the service references hadoop_version and the arguments in the client are called hadoop_version. fyi, this could be complicated by the idh and spark plugins. best, matt
Re: [openstack-dev] [savanna] plugin version or hadoop version?
ok, i spent a little time looking at what the change impacts and it looks like all the template validations we currently have require hadoop_version. additionally, the client uses the name and the documentation references it. due to the large number of changes and the difficulty in providing backward compatibility, i propose that we leave it as is for the v1 api and client, and change it for the v2 api and client. to that end, i've added 'verifying hadoop_version -> version' as a work item for both v2-api-impl and v2-client. https://blueprints.launchpad.net/savanna/+spec/v2-api-impl and https://blueprints.launchpad.net/python-savannaclient/+spec/v2-client best, matt On 02/17/2014 04:23 PM, Alexander Ignatov wrote: Agree to rename this legacy field to 'version'. Adding to John's words about HDP, the Vanilla plugin is able to run different hadoop versions by doing some manipulations with DIB scripts :-) So the right name of this field should be 'version', as the version of the engine of the concrete plugin. Regards, Alexander Ignatov On 18 Feb 2014, at 01:01, John Speidel jspei...@hortonworks.com wrote: Andrew +1 The HDP plugin also returns the HDP distro version. The version needs to make sense in the context of the plugin. Also, many plugins including the HDP plugin will support deployment of several hadoop versions. -John On Mon, Feb 17, 2014 at 2:36 PM, Andrew Lazarev alaza...@mirantis.com wrote: IDH uses the version of the IDH distro and there is no direct mapping between distro version and hadoop version. E.g. IDH 2.5.1 works with apache hadoop 1.0.3. I suggest calling the field just 'version' everywhere and treating this version as a plugin-specific property. Andrew.
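the v1-vs-v2 field naming being discussed here could look like the following client-side sketch; the function and payload shapes are illustrative, not the actual savannaclient code:

```python
# sketch: accept a plugin-specific 'version' in the caller-facing API while
# still emitting the legacy 'hadoop_version' field for v1 requests.
# build_cluster_payload is a hypothetical helper, not real savannaclient code.

def build_cluster_payload(name, plugin_name, version, api_version="1.1"):
    """Build the JSON body for a cluster-create call."""
    payload = {"name": name, "plugin_name": plugin_name}
    if api_version.startswith("1"):
        # v1 keeps the legacy field name for backward compatibility
        payload["hadoop_version"] = version
    else:
        # v2 proposal: a plugin-specific 'version' (hdp distro version,
        # idh distro version, hadoop version for vanilla, ...)
        payload["version"] = version
    return payload
```

this keeps the rename an internal translation, so callers migrate once while both API versions remain reachable.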
Re: [openstack-dev] [savanna] Choosing provisioning engine during cluster launch
i imagine this is something that can be useful in a development and testing environment, especially during the transition period from direct to heat. so having the ability is not unreasonable, but i wouldn't expose it to users via the dashboard (maybe not even directly in the cli). generally i want to reduce the number of parameters / questions the user is asked best, matt On 01/30/2014 04:42 AM, Dmitry Mescheryakov wrote: I agree with Andrew. I see no value in letting users select how their cluster is provisioned; it will only make the interface a little bit more complex. Dmitry 2014/1/30 Andrew Lazarev alaza...@mirantis.com Alexander, What is the purpose of exposing this on the user side? Both engines must do exactly the same thing and they exist at the same time only for the transition period until the heat engine is stabilized. I don't see any value in the proposed option. Andrew. On Wed, Jan 29, 2014 at 8:44 PM, Alexander Ignatov aigna...@mirantis.com wrote: Today Savanna has two provisioning engines, heat and the old one known as 'direct'. Users can choose which engine will be used by setting a special parameter in 'savanna.conf'. I have an idea to give users the ability to define the provisioning engine not only when savanna is started but also when a new cluster is launched. The idea is simple: we will just add a new field 'provisioning_engine' to the 'cluster' and 'cluster_template' objects. And the profit is obvious: users can easily switch from one engine to another without restarting the savanna service. Of course, this parameter can be omitted and the default value from 'savanna.conf' will be applied. Is this viable? What do you think?
Regards, Alexander Ignatov
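the lookup order Alexander proposes (a value on the cluster or cluster template wins, otherwise the savanna.conf default applies) can be sketched like this; all names here are illustrative stand-ins, not actual savanna code:

```python
# sketch of per-cluster provisioning engine selection with a config fallback.
# CONF_DEFAULT_ENGINE stands in for the value read from savanna.conf.

CONF_DEFAULT_ENGINE = "direct"

def resolve_engine(cluster, cluster_template=None):
    """Return the provisioning engine for a cluster launch.

    Precedence: cluster > cluster_template > savanna.conf default.
    """
    for obj in (cluster, cluster_template or {}):
        engine = obj.get("provisioning_engine")
        if engine:
            if engine not in ("direct", "heat"):
                raise ValueError("unknown provisioning engine: %s" % engine)
            return engine
    return CONF_DEFAULT_ENGINE
```

note this is exactly the extra user-facing knob matt and Dmitry argue against exposing in the dashboard; the fallback behavior is the part everyone agrees on.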
Re: [openstack-dev] [savanna] Savanna 2014.1.b2 (Icehouse-2) dev milestone available
On 01/23/2014 11:59 AM, Sergey Lukjanov wrote: Hi folks, the second development milestone of the Icehouse cycle is now available for Savanna. Here is a list of new features and fixed bugs: https://launchpad.net/savanna/+milestone/icehouse-2 and here you can find tarballs to download it: http://tarballs.openstack.org/savanna/savanna-2014.1.b2.tar.gz http://tarballs.openstack.org/savanna-dashboard/savanna-dashboard-2014.1.b2.tar.gz http://tarballs.openstack.org/savanna-image-elements/savanna-image-elements-2014.1.b2.tar.gz http://tarballs.openstack.org/savanna-extra/savanna-extra-2014.1.b2.tar.gz There were 15 blueprints implemented and 37 bugs fixed during the milestone. It includes the savanna, savanna-dashboard, savanna-image-elements and savanna-extra sub-projects. In addition, python-savannaclient 0.4.1, released earlier this week, supports all new features introduced in this savanna release. Please note that the next milestone, icehouse-3, is scheduled for March 6th. Thanks. -- Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc. rdo packages - el6 - savanna - http://koji.fedoraproject.org/koji/buildinfo?buildID=494307 savanna-dashboard - http://koji.fedoraproject.org/koji/buildinfo?buildID=494286 f20 - savanna - https://admin.fedoraproject.org/updates/openstack-savanna-2014.1.b2-3.fc20 savanna-dashboard - https://admin.fedoraproject.org/updates/python-django-savanna-2014.1.b2-1.fc20 notes - . you need paramiko >= 1.10.1 (http://koji.fedoraproject.org/koji/buildinfo?buildID=492749) . you need stevedore >= 0.13 (http://koji.fedoraproject.org/koji/buildinfo?buildID=494300) (https://bugs.launchpad.net/savanna/+bug/1273459) best, matt
Re: [openstack-dev] [savanna] why swift-internal:// ?
andrew, what about having swift:// which defaults to the configured tenant and auth url for what we now call swift-internal, and we allow for user input to change tenant and auth url for what would be swift-external? in fact, we may need to add the tenant selection in icehouse. it's a pretty big limitation to only allow a single tenant. best, matt On 01/23/2014 11:15 PM, Andrew Lazarev wrote: Matt, For swift-internal we are using the same keystone (and identity protocol version) as for savanna. Also the savanna admin tenant is used. Thanks, Andrew. On Thu, Jan 23, 2014 at 6:17 PM, Matthew Farrellee m...@redhat.com wrote: what makes it internal vs external? swift-internal needs user + pass. swift-external needs user + pass + ?auth url? best, matt On 01/23/2014 08:43 PM, Andrew Lazarev wrote: Matt, I can easily imagine a situation where job binaries are stored in an external HDFS or external SWIFT (like data sources). Internal and external swifts are different since we need additional credentials. Thanks, Andrew. On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee m...@redhat.com wrote: trevor, job binaries are stored in swift or an internal savanna db, represented by swift-internal:// and savanna-db:// respectively. why swift-internal:// and not just swift://?
fyi, i see mention of a potential future version of savanna w/ swift-external:// best, matt
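for illustration, the swift:// consolidation suggested above might look like this; the helper, defaults, and field names are assumptions for the sketch, not savanna's actual binary_retrievers code:

```python
# sketch: one swift:// scheme where tenant/auth url fall back to savanna's
# own configuration (today's "internal" behavior) unless the user overrides
# them (today's proposed "external" behavior). SAVANNA_AUTH is a stand-in
# for values read from savanna.conf.

from urllib.parse import urlparse

SAVANNA_AUTH = {"tenant": "savanna-admin",
                "auth_url": "http://keystone:5000/v2.0"}

def swift_connection_info(url, user, password, tenant=None, auth_url=None):
    parsed = urlparse(url)
    if parsed.scheme != "swift":
        raise ValueError("not a swift url: %s" % url)
    return {
        "container": parsed.netloc,
        "object": parsed.path.lstrip("/"),
        "user": user,
        "password": password,
        # fall back to savanna's configured tenant/keystone when not given
        "tenant": tenant or SAVANNA_AUTH["tenant"],
        "auth_url": auth_url or SAVANNA_AUTH["auth_url"],
    }
```

with this shape there is no swift-internal:// vs swift-external:// split in the URL at all; "external" is simply the presence of user-supplied tenant/auth url fields.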
Re: [openstack-dev] [savanna] why swift-internal:// ?
thanks for all the feedback folks.. i've registered a bp for this... https://blueprints.launchpad.net/savanna/+spec/swift-url-proto-cleanup On 01/24/2014 11:30 AM, Sergey Lukjanov wrote: Looks like we need to review the prefixes and clean them up. After a first look I like the idea of using a common prefix for swift data. On Fri, Jan 24, 2014 at 7:05 PM, Trevor McKay tmc...@redhat.com wrote: Matt et al, Yes, swift-internal was meant as a marker to distinguish it from swift-external someday. I agree, this could be indicated by setting other fields. A little bit of implementation detail for scope: in the current EDP implementation, SWIFT_INTERNAL_PREFIX shows up in essentially two places. One is validation (pretty easy to change). The other is in Savanna's binary_retrievers module where, as others suggested, the auth url (proto, host, port, api) and admin tenant from the savanna configuration are used with the user/password to make a connection through the swift client. Handling of different types of job binaries is done in binary_retrievers/dispatch.py, where the URL determines the treatment. This could easily be extended to look at other indicators. Best, Trev
Re: [openstack-dev] [savanna] savannaclient v2 api
what do you consider EDP internal, and how does it relate to the v1.1 or v2 API? i'm ok with making it plugin independent. i'd just suggest moving it out of /jobs and to something like /extra/config-hints/{type}, maybe along with /extra/validations/config. best, matt On 01/22/2014 06:25 AM, Alexander Ignatov wrote: Current EDP config-hints are not only plugin specific. Several types of jobs must have certain key/values, and without them the job will fail. For instance, the MapReduce (former Jar) job type requires the Mapper/Reducer class parameters to be set [1]. Moreover, for such kinds of jobs we already have separate configuration defaults [2]. Also, initial versions of the patch implementing config-hints contained plugin-independent defaults for each job type [3]. I remember we postponed the decision about which configs are common for all plugins and agreed to show users all vanilla-specific defaults. That's why we now have several TODOs in the code about config-hints being plugin-specific. So I propose to keep the config-hints REST call internal to EDP and make it plugin-independent (or job-specific) by removing the parsing of all vanilla-specific defaults and defining a small list of configs which is definitely common for each type of job. The first things that come to mind: - For MapReduce jobs it's already defined in [1] - Configs like the number of map and reduce tasks are common for all types of jobs - At least the user always has the ability to set any key/value(s) as params/arguments for a job [1] http://docs.openstack.org/developer/savanna/userdoc/edp.html#workflow [2] https://github.com/openstack/savanna/blob/master/savanna/service/edp/resources/mapred-job-config.xml [3] https://review.openstack.org/#/c/45419/10 Regards, Alexander Ignatov On 20 Jan 2014, at 22:04, Matthew Farrellee m...@redhat.com wrote: On 01/20/2014 12:50 PM, Andrey Lazarev wrote: Inlined.
On Mon, Jan 20, 2014 at 8:15 AM, Matthew Farrellee m...@redhat.com wrote: (inline, trying to make this readable by a text-only mail client that doesn't use tabs to indicate quoting) On 01/20/2014 02:50 AM, Andrey Lazarev wrote: -- FIX - @rest.get('/jobs/config-hints/job_type') - should move to GET /plugins/plugin_name/plugin_version, similar to get_node_processes and get_required_image_tags -- Not sure if it should be plugin specific right now. EDP uses it to show some configs to users in the dashboard. it's just a cosmetic thing. Also when user starts define some configs for some job he might not define cluster yet and thus plugin to run this job. I think we should leave it as is and leave only abstract configs like Mapper/Reducer class and allow users to apply any key/value configs if needed. FYI, the code contains comments suggesting it should be plugin specific. https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179 IMHO, the EDP should have no plugin specific dependencies. If it currently does, we should look into why and see if we can't eliminate this entirely. [AL] EDP uses plugins in two ways: 1. for HDFS user 2. for config hints I think both items should not be plugin specific on EDP API level. But implementation should go to plugin and call plugin API for result. In fact they are both plugin specific. The user is forced to click through a plugin selection (when launching a job on transient cluster) or the plugin selection has already occurred (when launching a job on an existing cluster).
Since the config is something that is plugin specific (you might not have hbase hints from vanilla but you would from hdp), and you already have plugin information whenever you ask for a hint, my view that this belongs under the /plugins namespace is growing stronger. [AL] Disagree. They are plugin specific, but EDP itself could have additional plugin-independent logic inside. Currently config hints return EDP properties (like mapred.input.dir) as well as plugin-specific properties. Placing it under the /plugins namespace would give the impression that it is fully plugin specific. I like to see EDP API
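a rough sketch of the compromise being discussed, generic EDP hints plus optional plugin-specific additions; the registries and hint entries here are illustrative placeholders, not the actual savanna plugin API:

```python
# sketch: plugin-independent config hints live in EDP, and plugin-specific
# hints are merged in only when a plugin/version is known. GENERIC_HINTS and
# PLUGIN_HINTS are stand-in registries for illustration.

GENERIC_HINTS = {
    "MapReduce": [
        {"name": "mapred.mapper.class", "required": True},
        {"name": "mapred.reducer.class", "required": True},
    ],
}

PLUGIN_HINTS = {
    ("hdp", "1.3.2"): {
        "MapReduce": [{"name": "hbase.zookeeper.quorum"}],
    },
}

def config_hints(job_type, plugin_name=None, version=None):
    """Return generic hints, plus plugin hints when a plugin is selected."""
    hints = list(GENERIC_HINTS.get(job_type, []))
    if plugin_name:
        plugin = PLUGIN_HINTS.get((plugin_name, version), {})
        hints += plugin.get(job_type, [])
    return hints
```

under this shape the EDP API stays plugin-independent (no plugin argument required), while a /plugins-scoped variant could delegate to the same merge with the plugin filled in.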
Re: [openstack-dev] [savanna] why swift-internal:// ?
what makes it internal vs external? swift-internal needs user + pass. swift-external needs user + pass + ?auth url? best, matt On 01/23/2014 08:43 PM, Andrew Lazarev wrote: Matt, I can easily imagine a situation where job binaries are stored in an external HDFS or external SWIFT (like data sources). Internal and external swifts are different since we need additional credentials. Thanks, Andrew. On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee m...@redhat.com wrote: trevor, job binaries are stored in swift or an internal savanna db, represented by swift-internal:// and savanna-db:// respectively. why swift-internal:// and not just swift://? fyi, i see mention of a potential future version of savanna w/ swift-external:// best, matt
Re: [openstack-dev] [savanna] savannaclient v2 api
(inline, trying to make this readable by a text-only mail client that doesn't use tabs to indicate quoting) On 01/20/2014 02:50 AM, Andrey Lazarev wrote: -- FIX - @rest.get('/jobs/config-hints/job_type') - should move to GET /plugins/plugin_name/plugin_version, similar to get_node_processes and get_required_image_tags -- Not sure if it should be plugin specific right now. EDP uses it to show some configs to users in the dashboard. it's just a cosmetic thing. Also when user starts define some configs for some job he might not define cluster yet and thus plugin to run this job. I think we should leave it as is and leave only abstract configs like Mapper/Reducer class and allow users to apply any key/value configs if needed. FYI, the code contains comments suggesting it should be plugin specific. https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179 IMHO, the EDP should have no plugin specific dependencies. If it currently does, we should look into why and see if we can't eliminate this entirely. [AL] EDP uses plugins in two ways: 1. for HDFS user 2. for config hints I think both items should not be plugin specific on EDP API level. But implementation should go to plugin and call plugin API for result. In fact they are both plugin specific. The user is forced to click through a plugin selection (when launching a job on transient cluster) or the plugin selection has already occurred (when launching a job on an existing cluster). Since the config is something that is plugin specific, you might not have hbase hints from vanilla but you would from hdp, and you already have plugin information whenever you ask for a hint, my view that this be under the /plugins namespace is growing stronger.
Best, matt
Re: [openstack-dev] [savanna] savannaclient v2 api
(inline-ish) On 01/20/2014 02:36 AM, Andrey Lazarev wrote: On Sun, Jan 19, 2014 at 7:53 AM, Matthew Farrellee m...@redhat.com wrote: On 01/16/2014 09:19 PM, Andrey Lazarev wrote: REMOVE - @rest.get('/job-executions/job_execution_id/refresh-status') - refresh and return status - GET should not side-effect, status is part of details and updated periodically, currently unused This call goes to Oozie directly to ask it about job status. It avoids waiting for the periodic task to update the status of the JobExecution object in Savanna. The current GET asks for the status of the JobExecution from savanna-db. I think we can leave this call; it might be useful for external clients. [AL] Agree that GET shouldn't have a side effect (or at least a documented side effect). I think it could be a generic PUT on '/job-executions/job_execution_id' which can refresh status or cancel the job on the hadoop side. From what I can tell, this endpoint is not exposed by the savannaclient or used directly from the horizon plugin. I imagine that having a 'savanna-api, please go faster' call is enticing, but if we're not using it yet, let's make sure we have a well defined need before adding/keeping it. [AL] I like to disable 'periodic' in a dev environment. And this is the only way to update job status without periodic. So, I vote for adding it to savannaclient and to horizon. IMHO, we should not be adding calls to the client or horizon app that would use this command. Instead we should have a well tuned periodic value that meets user expectations. I propose we not expose this as part of the official Savanna API, and we look into other options for developer environments that allow for triggering a refresh of oozie information. Possibly when savanna-api gets a SIGUSR1 it should re-run all periodic tasks? REMOVE - @rest.get('/job-executions/job_execution_id/cancel') - cancel job-execution - GET should not side-effect, currently unused, use DELETE /job-executions/job_execution_id Disagree.
We have to leave this call. This method stops the job executing on the Hadoop cluster but doesn't remove all its related info from savanna-db. DELETE removes it completely. [AL] We need 'cancel'. Vote on generic PUT (see previous item). AFAICT, this is also not used. Where is the need? [AL] I can easily imagine a scenario where canceling is useful. Both features give some benefit, but are not extremely needed. So, it is a question of priorities. My vote is on leaving both of them. I don't disagree that we could come up with scenarios, but we should not add these to the Savanna API until we have concrete scenarios to implement in the horizon app or CLI. Best, matt ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
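The generic PUT Andrey proposes (one endpoint whose body selects refresh vs. cancel, keeping GET side-effect free) could dispatch roughly as follows. This is a hypothetical sketch, not savanna code; the handler behavior and the `{'action': ...}` body shape are assumptions for illustration.

```python
# Hypothetical sketch of a generic PUT on /job-executions/<id>:
# the request body selects the action, so GET can stay side-effect free.
# The 'refresh' handler would ask Oozie for status; 'cancel' would stop the
# job on the Hadoop cluster while keeping the record in savanna-db.

def update_job_execution(job_execution_id, body):
    """Dispatch a PUT body like {'action': 'refresh'} or {'action': 'cancel'}."""
    actions = {
        'refresh': lambda je_id: 'refreshed',   # stand-in for an Oozie status poll
        'cancel': lambda je_id: 'cancelled',    # stand-in for stopping the job
    }
    action = body.get('action')
    if action not in actions:
        raise ValueError("unknown action: %r" % (action,))
    return {'id': job_execution_id, 'status': actions[action](job_execution_id)}
```

Under this shape, DELETE remains the only call that removes the JobExecution record entirely, which matches the distinction Alexander draws between cancel and delete.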
Re: [openstack-dev] [savanna] savannaclient v2 api
On 01/20/2014 12:50 PM, Andrey Lazarev wrote: Inlined. On Mon, Jan 20, 2014 at 8:15 AM, Matthew Farrellee m...@redhat.com wrote: (inline, trying to make this readable by a text-only mail client that doesn't use tabs to indicate quoting) On 01/20/2014 02:50 AM, Andrey Lazarev wrote: -- FIX - @rest.get('/jobs/config-hints/job_type') - should move to GET /plugins/plugin_name/plugin_version, similar to get_node_processes and get_required_image_tags -- Not sure if it should be plugin specific right now. EDP uses it to show some configs to users in the dashboard; it's just a cosmetic thing. Also, when a user starts defining configs for a job, they might not have defined a cluster yet, and thus no plugin to run the job. I think we should leave it as is, keep only abstract configs like Mapper/Reducer class, and allow users to apply any key/value configs if needed. FYI, the code contains comments suggesting it should be plugin specific. https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179 IMHO, the EDP should have no plugin specific dependencies. If it currently does, we should look into why and see if we can't eliminate this entirely. [AL] EDP uses plugins in two ways: 1. for HDFS user 2. for config hints I think both items should not be plugin specific on the EDP API level. But the implementation should go to the plugin and call the plugin API for the result. In fact they are both plugin specific.
The user is forced to click through a plugin selection (when launching a job on a transient cluster) or the plugin selection has already occurred (when launching a job on an existing cluster). Since the config is something that is plugin specific - you might not have hbase hints from vanilla but you would from hdp - and you already have plugin information whenever you ask for a hint, my view that this should be under the /plugins namespace is growing stronger. [AL] Disagree. They are plugin specific, but EDP itself could have additional plugin-independent logic inside. Now config hints return EDP properties (like mapred.input.dir) as well as plugin-specific properties. Placing it under the /plugins namespace will give the impression that it is fully plugin specific. I'd like to see the EDP API fully plugin independent and in one workspace. If the core side needs some information internally, it can easily go into the plugin. I'm not sure if we're disagreeing. We may, in fact, be in violent agreement. The EDP API is fully plugin independent, and should stay that way as a project goal. config-hints is extra data that the horizon app can use to help give users suggestions about what config they may want to optionally add to their job. Those config options are independent of the job and specific to the cluster where the job will run, which is the purview of the plugin. Moving config-hints out of the EDP API will make this even more clear. Best, matt
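The delegation both sides seem to converge on - a plugin-independent EDP entry point that internally merges plugin-specific hints - can be sketched like this. The hint data and the plugin registry here are illustrative assumptions, not the actual savanna structures.

```python
# Sketch of plugin-delegated config hints: the EDP-level call stays plugin
# independent, while plugin specifics live behind it. Hint contents and the
# PLUGIN_HINTS registry are made up for illustration.

EDP_GENERIC_HINTS = [
    {'name': 'mapred.input.dir', 'description': 'Job input path'},
]

PLUGIN_HINTS = {  # stand-in for real plugin objects answering a hints API
    'vanilla': [{'name': 'mapred.reduce.tasks', 'description': 'Reducer count'}],
    'hdp': [{'name': 'hbase.zookeeper.quorum', 'description': 'HBase quorum'}],
}

def get_config_hints(plugin_name, job_type):
    """Plugin-independent entry point; merges EDP and plugin-specific hints."""
    hints = list(EDP_GENERIC_HINTS)
    hints.extend(PLUGIN_HINTS.get(plugin_name, []))
    return hints
```

This shape would let the endpoint live in either namespace: the caller always supplies a plugin name, but the EDP API surface itself carries no plugin-specific types.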
Re: [openstack-dev] [savanna] savannaclient v2 api
On 01/16/2014 08:10 AM, Alexander Ignatov wrote: Matthew, I'm ok with the proposed solution. Some comments/thoughts below: - FIX - @rest.post_file('/plugins/plugin_name/version/convert-config/name') - this is an RPC call, made only by a client to do input validation, move to POST /validations/plugins/:name/:version/check-config-import - AFAIR, this rest call was introduced not only for validation. The main idea was to create a method which converts a plugin specific config for cluster creation to savanna's cluster template [1]. So maybe we may change this rest call to: /plugins/convert-config/name and include all needed fields in the data. Anyway, we have to know the Hortonworks guys' opinion. Currently only the HDP plugin implements this method. The case of converting savanna cluster templates to plugin specific ones can be done internally, i.e. w/o exposing an API call. The Savanna API should talk savanna cluster templates only. AFAICT, that leaves the validation justification for exposing it, so possibly a move to a /validations namespace. -- REMOVE - @rest.put('/node-group-templates/node_group_template_id') - Not Implemented REMOVE - @rest.put('/cluster-templates/cluster_template_id') - Not Implemented -- Disagree with that. The Samsung people did a great job in both savanna/savanna-dashboard to make this implemented [2], [3]. We should leave and support these calls in savanna. Absolutely. Now that they're implemented they should not be removed. -- CONSIDER rename /jobs - /job-templates (consistent w/ cluster-templates & clusters) CONSIDER renaming /job-executions to /jobs --- Good idea! -- FIX - @rest.get('/jobs/config-hints/job_type') - should move to GET /plugins/plugin_name/plugin_version, similar to get_node_processes and get_required_image_tags -- Not sure if it should be plugin specific right now. EDP uses it to show some configs to users in the dashboard; it's just a cosmetic thing.
Also, when a user starts defining configs for a job, they might not have defined a cluster yet, and thus no plugin to run the job. I think we should leave it as is, keep only abstract configs like Mapper/Reducer class, and allow users to apply any key/value configs if needed. FYI, the code contains comments suggesting it should be plugin specific. https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179 IMHO, the EDP should have no plugin specific dependencies. If it currently does, we should look into why and see if we can't eliminate this entirely. - CONSIDER REMOVING, MUST ALWAYS UPLOAD TO Swift FOR /job-binaries - Disagree. It was discussed before starting the EDP implementation that there are a lot of OS installations which don't have Swift deployed, and the ability to run jobs using the savanna internal db is a good option in this case. But yes, Swift is preferred. Waiting for Trevor's and maybe Nadya's comments here under this section. While it's true that you can deploy OS w/o Swift, it's ok for us to start preferring deployments w/ Swift. Best, matt REMOVE - @rest.get('/job-executions/job_execution_id/refresh-status') - refresh and return status - GET should not side-effect, status is part of details and updated periodically, currently unused This call goes to Oozie directly to ask it about job status. It lets clients avoid a long wait for the periodic task to update the JobExecution object in Savanna. The current GET asks for the status of the JobExecution from savanna-db. I think we can leave this call, it might be useful for external clients. REMOVE - @rest.get('/job-executions/job_execution_id/cancel') - cancel job-execution - GET should not side-effect, currently unused, use DELETE /job-executions/job_execution_id Disagree. We have to leave this call. This method stops the job executing on the Hadoop cluster but doesn't remove all its related info from savanna-db. DELETE removes it completely.
[1] http://docs.openstack.org/developer/savanna/devref/plugin.spi.html#convert-config-plugin-name-version-template-name-cluster-template-create [2] https://blueprints.launchpad.net/savanna/+spec/modifying-cluster-template [3] https://blueprints.launchpad.net/savanna/+spec/modifying-node-group-template Regards, Alexander Ignatov On 14 Jan 2014, at 21:24, Matthew Farrellee m...@redhat.com wrote: https://blueprints.launchpad.net/savanna/+spec/v2-api I've finished a review of the v1.0 and v1.1 APIs with an eye to making them more consistent and RESTful. Please use this thread to comment on my suggestions for v1.0 & v1.1, or to make further suggestions. Best, matt
Re: [openstack-dev] [Savanna] Spark plugin status
On 01/10/2014 04:05 AM, Daniele Venzano wrote: On 01/09/14 19:12, Matthew Farrellee wrote: This is definitely great news! +2 to the things Sergey mentioned below. Additionally, will you fill out the blueprint or wiki w/ details that will help others write integration tests for your plugin? We already implemented at least some part of the integration tests for Spark, mimicking the ones that are provided with the Vanilla plugin. The Spark plugin works almost exactly like the Vanilla one; it can install a datanode, namenode, Spark master or Spark worker and resize the cluster. What kind of documentation is needed? That's great. Documentation of how and when to use the plugin would be great. And, did you integrate (or have plans to integrate) Spark into the EDP workflows in Horizon? We would like to have that functionality. Currently we are limited by the lack of a Swift service in our cluster. We will have a test installation in a short while and then we will see. What is the status of the HDFS datasource? We are very interested in that, but I lost track of the development during the holidays. It's coming along well. You could ping tmckay or croberts on #savanna to get specifics. Best, matt
[openstack-dev] [savanna] paramiko requirement of >= 1.9.0?
jon, please confirm a suspicion of mine. the neutron-private-net-provisioning bp impl added a sock= parameter to the ssh.connect call in remote.py (https://github.com/openstack/savanna/commit/9afb5f60). we currently require paramiko >= 1.8.0, but it looks like the sock param was only added in paramiko 1.9.0 (https://github.com/paramiko/paramiko/commit/31ea4f0734a086f2345aaea57fd6fc1c3ea4a87e). do we need paramiko >= 1.9.0 as our requirement? also, what version are you using in your installation? best, matt
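The concern above can be expressed as a version check: the `sock=` keyword on `SSHClient.connect` appeared in paramiko 1.9.0, so a requirement of >= 1.8.0 is too loose for code that passes it. A minimal sketch of such a guard follows; the simplified version parsing assumes plain numeric dotted versions, which holds for paramiko 1.x releases.

```python
# Minimal guard for the paramiko sock= parameter, which was added in 1.9.0.
# Version parsing here is deliberately simple (numeric dotted versions only);
# a real requirements file would just pin paramiko>=1.9.0 instead.

def version_tuple(v):
    """'1.10.1' -> (1, 10, 1); compares correctly as a tuple of ints."""
    return tuple(int(part) for part in v.split('.')[:3])

def supports_sock_param(paramiko_version):
    """True if ssh.connect(..., sock=...) is available in this paramiko."""
    return version_tuple(paramiko_version) >= (1, 9, 0)
```

Note that a naive string comparison would get this wrong ('1.10.0' < '1.9.0' lexically), which is why the versions are compared as integer tuples.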
[openstack-dev] [savanna] savannaclient v2 api
https://blueprints.launchpad.net/savanna/+spec/v2-api I've finished a review of the v1.0 and v1.1 APIs with an eye to making them more consistent and RESTful. Please use this thread to comment on my suggestions for v1.0 & v1.1, or to make further suggestions. Best, matt
Re: [openstack-dev] [savanna] client release 0.4.1
On 01/13/2014 02:27 PM, Sergey Lukjanov wrote: Hi folks, I'm planning to release python-savannaclient Jan 14/15 due to the number of important fixes and improvements including, for example, a basic impl of the CLI. These changes are needed for updating savanna-dashboard and integration tests, and for adding support of scenario tests in tempest, etc. There are several open CLI-related changes and the Java EDP action support patch [1] that should be included in this release. Are there any thoughts about the things that should be done in the 0.4.1 client? Thanks. [1] Allow passing extra args to JobExecutionsManager.create() https://review.openstack.org/#/c/66398/ -- Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc. Nothing comes to mind that should block release of the client today.
Re: [openstack-dev] [Savanna] DiskBuilder / savanna-image-elements
On 11/15/2013 12:14 PM, Clint Byrum wrote: Excerpts from Erik Bergenholtz's message of 2013-11-15 08:20:36 -0800: Team - We'd like to move our disk creation mechanism over to using DiskBuilder so that users can build (and modify) their own VM images. We'd like to piggy back off of the existing mechanism that the vanilla plugin uses. It looks like I should be able to add image-elements to the savanna-image-elements/elements directory for the HDP specific setup. Any concerns with this approach? From an organizational standpoint, it would make sense to separate elements from various plugins by creating a directory hierarchy under the elements directory; i.e. a directory for each plugin to keep them separated. I don't know exactly how Savanna calls diskimage-builder, but it would indeed make sense for there to be directories for each set of concerns. One just needs to add each directory that should be searched to ELEMENTS_PATH (colon separated). This also makes it possible to override an element by putting a directory earlier in ELEMENTS_PATH, as the first found element for each name wins. Clint, Savanna doesn't directly interact with DIB. Instead it uses images that are output from DIB w/ savanna-image-elements elements. Erik, you can think of DIB elements like packages. Each has a set of scripts that handle installation and configuration of something. For instance, you might create an ambari-agent element that makes sure the ambari-agent is installed and configured so when the image starts up there's no need to install/configure before using the instance. Feel free to start throwing reviews at the savanna-image-elements repo and I'll catch them. Best, matt
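Clint's ELEMENTS_PATH point can be sketched as follows: entries are colon separated and searched in order, so placing a per-plugin directory earlier lets it override a same-named element in the shared tree. The directory names are illustrative assumptions, not savanna-image-elements' actual layout.

```python
# Sketch of the ELEMENTS_PATH convention from diskimage-builder: entries are
# colon separated; the first directory containing an element of a given name
# wins, so per-plugin dirs placed earlier can override shared elements.
# Directory names here are illustrative.

import os

def build_elements_path(plugin_dirs, base_dir='savanna-image-elements/elements'):
    """Earlier entries win, so per-plugin dirs come before the shared base."""
    return ':'.join(list(plugin_dirs) + [base_dir])

env = dict(os.environ)
env['ELEMENTS_PATH'] = build_elements_path(['elements/hdp', 'elements/vanilla'])
# disk-image-create would then be invoked (e.g. via subprocess) with this env.
```

This mirrors PATH-style lookup, which is why a plugin can ship its own variant of a shared element simply by shadowing it earlier in the search order.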
Re: [openstack-dev] [Savanna] Release 0.3 retrospective
On 10/31/2013 01:58 PM, Alexander Ignatov wrote: Hi, Here is the wiki page with the Savanna release 0.3 retrospective: https://wiki.openstack.org/wiki/Savanna/Release_0.3_Retrospective Thanks to everyone who sent their opinions. If someone wants to add more thoughts, you are welcome to edit the above page! -- Regards, Alexander Ignatov Thanks for pulling this together. IMHO, we could tackle the bp process and bug scrub during icehouse. As a side benefit, it'll probably also improve team collaboration. Best, matt
[openstack-dev] [savanna] What's the recipe to build Oozie-4.0.0.tar.gz?
Having diskimage-create.sh is a great addition for the Savanna user community. It greatly simplifies the image building process (using DIB, for those of you not familiar), making it repeatable and giving everyone a hope of debugging issues. One thing it does is install oozie. It pulls oozie from http://savanna-files.mirantis.com/oozie-4.0.0.tar.gz What's the recipe to create oozie-4.0.0.tar.gz? Best, matt
Re: [openstack-dev] [savanna] neutron and private networks
On 10/03/2013 11:21 AM, Jon Maron wrote: Hi, I'd like to raise an issue in the hopes of opening some discussion on the IRC chat later today: We see a critical requirement to support the creation of a savanna cluster with neutron networking while leveraging a private network (i.e. without the assignment of public IPs) - at least during the provisioning phase. So the current neutron solution coded in the master branch appears to be insufficient (it is dependent on the assignment of public IPs to launched instances), at least in the context of discussions we've had with users. We've been experimenting and trying to understand the viability of such an approach and have had some success establishing SSH connections over a private network using paramiko etc. So as long as there is a mechanism to ascertain the namespace associated with the given cluster/tenant (configuration? neutron client?), it appears that the modifications to the actual savanna code for the instance remote interface (the SSH client code etc) will be fairly small. The namespace selection could potentially be another field made available in the dashboard's cluster creation interface. -- Jon Last week there was an IRC discussion about this, which is by its very nature rather ephemeral. So thanks for taking this to the list. The outcome of the IRC meeting was that - 0) we don't cover the use case where only the cluster's head node has a public IP (all worker nodes have private IPs), 1) we think it's an important use case, and 2) there are two ways we see to address it: a) do some architectural changes so that the responsibility of configuring the cluster can be delegated to the head node (from savanna-api), or b) make savanna-api netns aware (e.g. ip netns exec) so that it can contact all nodes no matter the visibility of their network. This is a good item for the roadmap and for a design session in Hong Kong.
Best, matt
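Option 2b above (making savanna-api netns aware) boils down to prefixing the remote command with `ip netns exec`. A minimal sketch, assuming a namespace name and target that are purely illustrative; this is not the implementation that savanna later adopted.

```python
# Sketch of option 2b: run the SSH command inside the tenant's network
# namespace via `ip netns exec <ns>`. The namespace name and target address
# below are made-up examples; running this for real requires root privileges
# and an existing namespace, typically on the network node.

def netns_wrap(netns, cmd):
    """Prefix a command list so it runs inside the given network namespace."""
    return ['ip', 'netns', 'exec', netns] + list(cmd)

cmd = netns_wrap('qrouter-1234', ['ssh', 'ubuntu@10.0.0.5', 'hostname'])
# subprocess.check_output(cmd) would then reach the instance's private IP.
```

The alternative (option 2a) avoids the namespace entirely by delegating cluster configuration to a head node that does have a public IP.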
Re: [openstack-dev] [savanna] Program name and Mission statement
IMHO, Big Data is even more nebulous and currently being pulled in many directions. Hadoop-as-a-Service may be too narrow. So, something in between, such as Data Processing, is a good balance. Best, matt On 09/13/2013 08:37 AM, Abhishek Lahiri wrote: IMHO data processing is too broad; it makes more sense to clarify this program as big data as a service or simply openstack-Hadoop-as-a-service. Thanks & Regards Abhishek Lahiri On Sep 12, 2013, at 9:13 PM, Nirmal Ranganathan rnir...@gmail.com wrote: On Wed, Sep 11, 2013 at 8:39 AM, Erik Bergenholtz ebergenho...@hortonworks.com wrote: On Sep 10, 2013, at 8:50 PM, Jon Maron jma...@hortonworks.com wrote: Openstack Big Data Platform On Sep 10, 2013, at 8:39 PM, David Scott david.sc...@cloudscaling.com wrote: I vote for 'Open Stack Data' On Tue, Sep 10, 2013 at 5:30 PM, Zhongyue Luo zhongyue@intel.com wrote: Why not OpenStack MapReduce? I think that pretty much says it all? On Wed, Sep 11, 2013 at 3:54 AM, Glen Campbell g...@glenc.io wrote: performant isn't a word. Or, if it is, it means having performance. I think you mean high-performance. On Tue, Sep 10, 2013 at 8:47 AM, Matthew Farrellee m...@redhat.com wrote: Rough cut - Program: OpenStack Data Processing Mission: To provide the OpenStack community with an open, cutting edge, performant and scalable data processing stack and associated management interfaces. Proposing a slightly different mission: To provide a simple, reliable and repeatable mechanism by which to deploy Hadoop and related Big Data projects, including management, monitoring and processing mechanisms driving further adoption of OpenStack. +1. I liked the data processing aspect as well, since the EDP api directly relates to that; maybe a combination of both.
On 09/10/2013 09:26 AM, Sergey Lukjanov wrote: It sounds too broad IMO. Looks like we need to define the Mission Statement first. Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc. On Sep 10, 2013, at 17:09, Alexander Kuznetsov akuznet...@mirantis.com wrote: My suggestion: OpenStack Data Processing. On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov slukja...@mirantis.com wrote: Hi folks, due to the Incubator Application we should prepare the Program name and Mission statement for Savanna, so I want to start a mailing thread about it. Please provide any ideas here. P.S. List of existing programs: https://wiki.openstack.org/wiki/Programs P.P.S. https://wiki.openstack.org/wiki/Governance/NewPrograms Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc.
Re: [openstack-dev] [savanna] Program name and Mission statement
You caught me trying to be fancy! On 09/10/2013 03:54 PM, Glen Campbell wrote: performant isn't a word. Or, if it is, it means having performance. I think you mean high-performance. On Tue, Sep 10, 2013 at 8:47 AM, Matthew Farrellee m...@redhat.com wrote: Rough cut - Program: OpenStack Data Processing Mission: To provide the OpenStack community with an open, cutting edge, performant and scalable data processing stack and associated management interfaces. On 09/10/2013 09:26 AM, Sergey Lukjanov wrote: It sounds too broad IMO. Looks like we need to define the Mission Statement first. Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc. On Sep 10, 2013, at 17:09, Alexander Kuznetsov akuznet...@mirantis.com wrote: My suggestion: OpenStack Data Processing. On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov slukja...@mirantis.com wrote: Hi folks, due to the Incubator Application we should prepare the Program name and Mission statement for Savanna, so I want to start a mailing thread about it. Please provide any ideas here. P.S. List of existing programs: https://wiki.openstack.org/wiki/Programs P.P.S. https://wiki.openstack.org/wiki/Governance/NewPrograms Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc.
-- Glen Campbell http://glenc.io • @glenc
Re: [openstack-dev] [savanna] Program name and Mission statement
That sounds quite good. Best, matt On 09/11/2013 11:42 AM, Andrei Savu wrote: +1 I guess this will also clarify how Savanna relates to other projects like OpenStack Trove. -- Andrei Savu On Wed, Sep 11, 2013 at 5:16 PM, Mike Spreitzer mspre...@us.ibm.com wrote: To provide a simple, reliable and repeatable mechanism by which to deploy Hadoop and related Big Data projects, including management, monitoring and processing mechanisms driving further adoption of OpenStack. That sounds like it is at about the right level of specificity.
Re: [openstack-dev] [Savanna] Guidance for adding a new plugin (CDH)
On 09/06/2013 03:00 PM, Andrei Savu wrote: On Fri, Sep 6, 2013 at 3:00 PM, Matthew Farrellee m...@redhat.com wrote: Once done, what will the procedure be for me to verify it without becoming a Cloudera customer? What will the limitations be to its use, if any, if I'm not a Cloudera customer? Cloudera Standard is free and includes CDH and a version of Cloudera Manager that has all the features we need to make this plugin useful (except some advanced management features and support). For those advanced features an enterprise license will be required, but I think that's out of scope now. To answer your question: you don't have to be a Cloudera customer to use this plugin - everything should just work out of the box - you will only need OpenStack and a compatible vanilla OS image. [1] http://www.cloudera.com/content/cloudera/en/products/cloudera-standard.html -- Andrei Savu / axemblr.com Great. If you start to need any of the non-free features, send a heads up so we can discuss. As for the compatible vanilla OS image, is that any image from http://cloud.fedoraproject.org or http://cloud-images.ubuntu.com/ or http://docs.openstack.org/trunk/openstack-image/content/centos-image.html with no diskimage-builder (savanna-image-elements) customization? Best, matt
Re: [openstack-dev] [savanna] Program name and Mission statement
Rough cut - Program: OpenStack Data Processing Mission: To provide the OpenStack community with an open, cutting edge, performant and scalable data processing stack and associated management interfaces. On 09/10/2013 09:26 AM, Sergey Lukjanov wrote: It sounds too broad IMO. Looks like we need to define the Mission Statement first. Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc. On Sep 10, 2013, at 17:09, Alexander Kuznetsov akuznet...@mirantis.com wrote: My suggestion: OpenStack Data Processing. On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov slukja...@mirantis.com wrote: Hi folks, due to the Incubator Application we should prepare the Program name and Mission statement for Savanna, so I want to start a mailing thread about it. Please provide any ideas here. P.S. List of existing programs: https://wiki.openstack.org/wiki/Programs P.P.S. https://wiki.openstack.org/wiki/Governance/NewPrograms Sincerely yours, Sergey Lukjanov Savanna Technical Lead Mirantis Inc.
Re: [openstack-dev] [Savanna] Guidance for adding a new plugin (CDH)
That's great. Once done, what will the procedure be for me to verify it without becoming a Cloudera customer? What will the limitations be to its use, if any, if I'm not a Cloudera customer? Best, matt On 09/05/2013 08:13 AM, Andrei Savu wrote: Thanks Matt! I've added the following blueprint (check the full specification for more details): https://blueprints.launchpad.net/savanna/+spec/cdh-plugin I'm now working on some code to get early feedback. Regards, -- Andrei Savu / axemblr.com On Wed, Sep 4, 2013 at 11:35 PM, Matthew Farrellee m...@redhat.com wrote: On 09/04/2013 04:06 PM, Andrei Savu wrote: Hi guys - I have just started to play with Savanna a few days ago - I'm still going through the code. Next week I want to start work on a plugin that will deploy CDH using Cloudera Manager. What process should I follow? I'm new to launchpad / Gerrit. Should I start by creating a blueprint and a bug / improvement request? Savanna is following all OpenStack community practices, so you can check out https://wiki.openstack.org/wiki/How_To_Contribute to get a good idea of what to do. In short, yes, please use launchpad and gerrit and create a blueprint. Is there any public OpenStack deployment that I can use for testing? Should 0.2 work with Grizzly at trystack.org? 0.2 will work with Grizzly. I've not tried trystack so let us know if it works. Best, matt
Re: [openstack-dev] [Savanna] Guidance for adding a new plugin (CDH)
On 09/04/2013 04:06 PM, Andrei Savu wrote: Hi guys - I started playing with Savanna a few days ago - I'm still going through the code. Next week I want to start work on a plugin that will deploy CDH using Cloudera Manager. What process should I follow? I'm new to Launchpad / Gerrit. Should I start by creating a blueprint and a bug / improvement request?

Savanna follows all OpenStack community practices, so you can check out https://wiki.openstack.org/wiki/How_To_Contribute to get a good idea of what to do. In short, yes, please use Launchpad and Gerrit and create a blueprint.

Is there any public OpenStack deployment that I can use for testing? Should 0.2 work with Grizzly at trystack.org?

0.2 will work with Grizzly. I've not tried trystack, so let us know if it works. Best, matt

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [savanna] Fwd: Change in stackforge/savanna-extra[master]: Add diskimage-creating script, elements for mirrors
Long weekend here in the US, so I didn't get a chance to comment before this was merged, so...

Re Oozie - How did you create the oozie-3.3.2.tar.gz?
Re sudo image-cache - That's not the case for me; the wget is run without sudo. How are you running disk-image-create?
Re DIB_work - it's best practice to use /tmp for temporary work, and mktemp. This script running concurrently with itself will result in unknown output.

Best, matt

Original Message
Subject: Change in stackforge/savanna-extra[master]: Add diskimage-creating script, elements for mirrors
Date: Thu, 29 Aug 2013 14:37:36 +
From: Ivan Berezovskiy (Code Review) rev...@openstack.org
Reply-To: iberezovs...@mirantis.com
CC: Sergey Lukjanov slukja...@mirantis.com, Dmitry Mescheryakov dmescherya...@mirantis.com, Nadya Privalova nprival...@mirantis.com, Matthew Farrellee m...@redhat.com

Ivan Berezovskiy has posted comments on this change.
Change subject: Add diskimage-creating script, elements for mirrors
Patch Set 6: (16 inline comments)

File diskimage-create/diskimage-create.sh

Line 11: export OOZIE_DOWNLOAD_URL=http://a8e0dce84b3f00ed7910-a5806ff0396addabb148d230fde09b7b.r31.cf1.rackcdn.com/oozie-3.3.2.tar.gz
We don't use a custom tarball - it is our own tarball. Please show me a link, if you know of one, where I can download oozie with all the binary files.

Line 15: if [ $str = 'NAME=Ubuntu' ]; then
The 'redhat-lsb' package is not preinstalled in some Fedora images, like the cloud image, so we can't use this command. In DIB you can see the script 02-lsb (https://github.com/openstack/diskimage-builder/blob/master/elements/fedora/pre-install.d/02-lsb) that installs this package.

Line 21: fi
Done

Line 24: sudo rm -rf /home/$USER/.cache/image-create/*
Image caching executes under 'sudo'. You can try to delete the images without sudo and you'll see 'permission denied'.

Line 31: cd DIB_work
Why? This directory will be removed after creating the images.

Line 41: export DIB_COMMIT_ID=`git show --format=%H | head -1`
https://github.com/stackforge/savanna-extra/blob/master/elements/savanna-version/install.d/01-savanna-version

Line 42: cd ../
Done

Line 48: export SAVANNA_ELEMENTS_COMMIT_ID=`git show --format=%H | head -1`
https://github.com/stackforge/savanna-extra/blob/master/elements/savanna-version/install.d/01-savanna-version

Line 49: cd ../
Done

Line 64: fi
We can't use 'lsb_release', as I said before.

File diskimage-create/README.rst

Line 7: 1. If you want to change build parameters, you should edit this script at 'export' commands.
Done

Line 9: 2. If you want to use your local mirrors, you can specify urls for Fedora and Ubuntu mirrors using parameters 'FEDORA_MIRROR' and 'UBUNTU_MIRROR' like this:
Done

Line 15: 3. If you want to add your element to this repository, you should edit this script in your commit (you should export variables for your element and add the name of the element to the 'element_sequence' variables).
Done

File elements/apt-mirror/root.d/0-check
Line 2: if [ -z $UBUNTU_MIRROR ]; then
Done

File elements/yum-mirror/root.d/0-check
Line 2: if [ -z $FEDORA_MIRROR ]; then
Done

File README.rst
Line 10: * Script for creating Fedora and Ubuntu cloud images with our elements and default parameters. You should run command only:
Done

--
To view, visit https://review.openstack.org/43916
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I12632b5cee42b1dbfd79b7b7c3a7b26962ace625
Gerrit-PatchSet: 6
Gerrit-Project: stackforge/savanna-extra
Gerrit-Branch: master
Gerrit-Owner: Ivan Berezovskiy iberezovs...@mirantis.com
Gerrit-Reviewer: Dmitry Mescheryakov dmescherya...@mirantis.com
Gerrit-Reviewer: Ivan Berezovskiy iberezovs...@mirantis.com
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: Matthew Farrellee m...@redhat.com
Gerrit-Reviewer: Nadya Privalova nprival...@mirantis.com
Gerrit-Reviewer: Sergey Lukjanov slukja...@mirantis.com

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
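The mktemp point raised in the review above can be sketched as follows. This is a hypothetical skeleton of the work-directory handling, not the actual savanna-extra script; the directory name pattern is illustrative:

```shell
#!/bin/sh
# Hypothetical sketch of the temp-dir practice suggested in the review:
# create the work directory with mktemp instead of a fixed ./DIB_work,
# so concurrent runs of the image build cannot clobber each other's
# output, and clean the directory up on exit.
set -e
WORK_DIR=$(mktemp -d /tmp/dib-work.XXXXXX)
trap 'rm -rf "$WORK_DIR"' EXIT

cd "$WORK_DIR"
echo "building images in $WORK_DIR"
# ... clone the elements, run disk-image-create, etc. ...
```

Because mktemp returns a fresh directory each time, two instances of the script get distinct work areas and their intermediate artifacts never mix.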
Re: [openstack-dev] [savanna] migration to pbr completed
On 08/27/2013 04:46 PM, Sergey Lukjanov wrote: Hi folks, the migration of all Savanna sub-projects to pbr has been completed. Please inform us and/or create bugs for all packaging-related issues. Thanks. Sincerely yours, Sergey Lukjanov, Savanna Technical Lead, Mirantis Inc.

Thanks for pushing this forward, Sergey. I can confirm that pbr 0.5.19 works fine for building savanna, python-savannaclient, and savanna-dashboard, with one minor hiccup in savanna: data_files globbing doesn't work. It's easily worked around in my spec, though (sed -i 's,etc/savanna/\*,etc/savanna/savanna.conf.sample etc/savanna/savanna.conf.sample-full,' setup.cfg), and isn't an issue with 0.5.21.

Best, matt

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
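The sed workaround quoted above can be reproduced on a scratch setup.cfg. The file contents here are a minimal assumed example, not Savanna's full setup.cfg:

```shell
# Demonstrate the data_files workaround from the message above on a scratch
# setup.cfg (contents are a minimal assumed example). With pbr 0.5.19 the
# glob was not expanded at build time, so the sed swaps the glob for
# explicit file names; pbr 0.5.21 made this unnecessary.
cd "$(mktemp -d)"
cat > setup.cfg <<'EOF'
[files]
data_files =
    etc/savanna = etc/savanna/*
EOF

sed -i 's,etc/savanna/\*,etc/savanna/savanna.conf.sample etc/savanna/savanna.conf.sample-full,' setup.cfg
grep conf.sample setup.cfg
```

After the sed, the data_files line names both sample config files explicitly instead of relying on the glob.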
Re: [openstack-dev] [savanna] savanna-extra and hadoop-patch
Oh, good catch. That's some poor UX. We should find out why it isn't targeted for a 1.x release. Best, matt

On 08/27/2013 06:48 AM, Ruslan Kamaldinov wrote: Matt, from the bug description: Affects Version/s: 1.2.0, 2.0.3-alpha; Target Version/s: 3.0.0, 2.3.0. So it seems that the Hadoop folks don't intend to include this patch in Hadoop 1.x. Ruslan

On Tuesday, August 27, 2013 at 2:41 PM, Matthew Farrellee wrote: Howdy Ivan, FYI, https://issues.apache.org/jira/browse/HADOOP-8545 is currently targeting 1.2.0 and 2.0.3-alpha. And the code (HADOOP-8545-034.patch) appears to provide support for Hadoop 1.x HDFS, though I may be missing something. I'd suggest adding a Swift HCFS repo only if the code is not destined to go to Apache Hadoop. +1 discuss at meeting. Best, s/matt/erik/

On 08/27/2013 01:45 AM, Sergey Lukjanov wrote: Hi Erik, first of all, savanna-extra was created exactly for such needs - to store all the stuff that we need but that couldn't be placed in other repos. Initially it contained the elements and a pre-built jar with Swift HCFS. The latter has now been moved to the CDN, and it's a good idea to make a separate project for the elements. As for Swift HCFS, the code attached to HADOOP-8545 is targeted at the Hadoop 2 version and should be patched to work with Hadoop 1.x correctly - that's why we added it to the extra repo. It looks like it's OK to add one more repo for Swift HCFS near savanna at stackforge, like HCFS for Gluster [0]. So, let's discuss both of the migrations at the next IRC team meeting. [0] https://github.com/gluster/hadoop-glusterfs Sincerely yours, Sergey Lukjanov, Savanna Technical Lead, Mirantis Inc.

On Aug 27, 2013, at 5:18, Matthew Farrellee m...@redhat.com wrote: https://review.openstack.org/#/c/42926/ I didn't get back to this on Friday and it got merged this morning, so here's my feedback.

The savanna-extra repository now appears to hold (a) DIB image elements as well as (b) the source for the Swift-backed HCFS (Hadoop Compatible File System) implementation. If I understand this correctly, (b) is actually the patch set that is being proposed to the Apache Hadoop community. That patch set has not been accepted and is being tracked in HADOOP-8545 [0], which appears stalled since July 2013.

Let's break Savanna's DIB elements out of savanna-extra and into savanna-image-elements. It has a clear path forward and a good definition of scope. Let's also leave savanna-extra as a grab bag, whose only occupant is currently the Swift code. Eventually that code will need a proper home, either contributed to Apache Hadoop or broken out as its own project.

Best, matt

[0] https://issues.apache.org/jira/browse/HADOOP-8545

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [savanna] Fwd: Savanna - Hadoop on OpenStack
FYI

Original Message
Subject: Savanna - Hadoop on OpenStack
Date: Mon, 26 Aug 2013 14:29:44 -0400
From: Matthew Farrellee m...@redhat.com
Reply-To: Fedora Big Data SIG bigd...@lists.fedoraproject.org
To: Fedora Big Data SIG bigd...@lists.fedoraproject.org

Hello Big Data SIG folks, if you aren't familiar, Savanna is an OpenStack project that provides Hadoop cluster and workload management. Cluster - construct and manage the lifecycle of Hadoop clusters. Workload - workflow for big data processing with Hadoop (similar to AWS EMR). The project home page is https://launchpad.net/savanna

Savanna is made up of 4 sub-projects -
. savanna, the main services
. savannadashboard, web UI integration with OpenStack Horizon
. python-savannaclient, python bindings for the REST API
. savanna-extra - diskimage-builder elements for...image building

As of today all those are available in F19, F20, and EL6* -
openstack-savanna - https://bugzilla.redhat.com/show_bug.cgi?id=986615
python-django-savanna - https://bugzilla.redhat.com/show_bug.cgi?id=998123
python-savannaclient - https://bugzilla.redhat.com/show_bug.cgi?id=998701
savanna-image-elements - https://bugzilla.redhat.com/show_bug.cgi?id=998702

Thanks for all the community help getting Savanna into Fedora, especially the #rdo folks. With any luck the project will have Fedora based cloud images with its next release. Right now all the images are Ubuntu based.

Best, matt

* openstack-savanna (the package for the savanna sub-project) has a dep on pycrypto and isn't available on EL6 yet; savanna-image-elements depends on diskimage-builder, which isn't included in EL6 yet

- Install -
# yum --enablerepo=updates-testing install openstack-savanna python-django-savanna
(make sure you get python-django-savanna-0.2-2)

- Setup and start the savanna-api service -
# sed -i "s/^#os_admin_password=/os_admin_password=$OS_PASSWORD/" /etc/savanna/savanna.conf
# systemctl start openstack-savanna-api

- Setup and load the Dashboard plugin -
# echo "SAVANNA_URL = 'http://localhost:8386/v1.0'" >> /etc/openstack-dashboard/local_settings

Edit /usr/share/openstack-dashboard/openstack_dashboard/settings.py -
. HORIZON_CONFIG = { 'dashboards': ('project', 'admin', 'settings', 'savanna'),
. INSTALLED_APPS = ( 'savannadashboard', 'openstack_dashboard',

# systemctl reload httpd

___ bigdata mailing list bigd...@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/bigdata
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [savanna] savanna-extra and hadoop-patch
https://review.openstack.org/#/c/42926/ I didn't get back to this on Friday and it got merged this morning, so here's my feedback. The savanna-extra repository now appears to hold (a) DIB image elements as well as (b) the source for the Swift backed HCFS (Hadoop Compatible File System) implementation. If I understand this correctly, (b) is actually the patch set that is being proposed to the Apache Hadoop community. That patch set has not been accepted and is being tracked in HADOOP-8545[0], which appears stalled since July 2013. Let's break Savanna's DIB elements out of savanna-extra and into savanna-image-elements. It has a clear path forward and a good definition of scope. Let's also leave savanna-extra as a grab bag, whose only occupant is currently the Swift code. Eventually that code will need a proper home, either contributed to Apache Hadoop or broken out as its own project. Best, matt [0] https://issues.apache.org/jira/browse/HADOOP-8545 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Savanna] PTL Election result: Sergey Lukjanov wins
Recorded in https://wiki.openstack.org/wiki/Savanna/PTL Results: Sergey Lukjanov (14), None (0) Electorate: 20 voters (70% participation) His term is effective immediately (22 Aug 2013) until the OpenStack Icehouse release. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [savanna] tarballs of savanna-extra
IMHO, the jars should be served from the Apache Hadoop community. I don't know what hoops would have to be jumped through for that, though. It may be far simpler to put them in the Mirantis CDN. Best, matt

On 08/21/2013 02:21 PM, Sergey Lukjanov wrote: Agreed that storing the Hadoop-Swift integration jars in the git repo is not a good practice - any thoughts about where to store them? Currently I have only one option: we can store them on the public CDN (savanna-files.mirantis.com) near the images for the vanilla plugin. As for publishing tarballs with the content of savanna-extra - it looks like there are more pros than cons, so we can do it. Sincerely yours, Sergey Lukjanov, Savanna Technical Lead, Mirantis Inc.

On Aug 20, 2013, at 16:31, Matthew Farrellee m...@redhat.com wrote: Is there a downside to having it? A positive is it gives a snapshot of everything for each release. I'm not a fan of having a snapshot of the Hadoop swift patches compiled into a jar and stored in the repository. I'd prefer that it is hosted elsewhere. Best, matt

On 08/19/2013 04:37 PM, Sergey Lukjanov wrote: Hi Matt, it is not an accident that savanna-extra has no tarballs at tarballs.o.o; this repo is used for storing data that is only needed for things like building images for the vanilla plugin, the Swift support patch for Hadoop, etc. So it looks like we should not package all of them into one heterogeneous tarball. Sincerely yours, Sergey Lukjanov, Savanna Technical Lead, Mirantis Inc.

On Aug 20, 2013, at 0:25, Matthew Farrellee m...@redhat.com wrote: Will someone set up a tarballs.o.o release of savanna-extra's master (https://github.com/stackforge/savanna-extra), and make sure it gets an official release for 0.3?

Best, matt

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [savanna] tarballs of savanna-extra
Is there a downside to having it? A positive is it gives a snapshot of everything for each release. I'm not a fan of having a snapshot of the Hadoop swift patches compiled into a jar and stored in the repository. I'd prefer that it is hosted elsewhere. Best, matt

On 08/19/2013 04:37 PM, Sergey Lukjanov wrote: Hi Matt, it is not an accident that savanna-extra has no tarballs at tarballs.o.o; this repo is used for storing data that is only needed for things like building images for the vanilla plugin, the Swift support patch for Hadoop, etc. So it looks like we should not package all of them into one heterogeneous tarball. Sincerely yours, Sergey Lukjanov, Savanna Technical Lead, Mirantis Inc.

On Aug 20, 2013, at 0:25, Matthew Farrellee m...@redhat.com wrote: Will someone set up a tarballs.o.o release of savanna-extra's master (https://github.com/stackforge/savanna-extra), and make sure it gets an official release for 0.3? Best, matt

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] OpenStack Savanna issue
On 08/04/2013 12:01 PM, Linus Nova wrote: Hi, I installed OpenStack Savanna on the OpenStack Grizzly release. As you can see in savanna.log, savanna-api starts and operates correctly. When I launch the cluster, the VMs start correctly, but soon after they are removed, as shown in the log file. Do you have any ideas on what is happening? Best regards. Linus Nova

Linus, I don't know if your issue has been resolved, but if it hasn't, I invite you to ask it at https://answers.launchpad.net/savanna/+addquestion - Best, matt

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [savanna] Savanna PTL election proposal
This is a request for feedback from the community.

The Savanna project has been operating with a benevolent dictator. It wants to upgrade to an elected PTL. There's no set process for a project that isn't incubating or integrated. Our goal is to mirror the standard election process as closely as possible, but a few options exist for components of the election.

The goal is to agree on election options during this week's (15 Aug) Savanna meeting (https://wiki.openstack.org/wiki/Meetings/SavannaAgenda), start the election after the meeting, and complete the election by the following week's meeting. (This is also open for suggestions.)

The proposal w/ options -

0. System -
   a. http://www.cs.cornell.edu/w8/~andru/civs/

1. Candidates -
   a. members of the electorate (OpenStack standard)

2. Candidate nomination -
   a. anyone can list names in https://etherpad.openstack.org/savanna-ptl-candidates-0
   b. anyone mentioned during this week's IRC meeting
   c. both (a) and (b)
   - Current direction is to be inclusive and thus (c)

3. Electorate -
   a. all AUTHORS on the Savanna repositories
   b. all committers (git log --author) on Savanna repos since the Grizzly release
   c. all committers since Savanna's inception
   d. savanna-core members (currently 2 people)
   e. committers w/ a filter on number or size of commits
   - Current direction is to be broadly inclusive (not (d) or (e)), thus (a); it is believed that (a) ~= (b) ~= (c)

4. Duration of election -
   a. 1 week (from the 15 Aug meeting to the 22 Aug meeting)

5. Term -
   a. effective immediately through the next full OpenStack election cycle (i.e. now until the I release, 6 mo+)
   b. effective immediately until min(6 mo, incubation)
   c. effective immediately until the end of incubation
   - Current direction is any option that aligns with the standard OpenStack election cycle

FYI, Savanna repositories -
. https://github.com/stackforge/savanna - core services
. https://github.com/stackforge/savanna-extra - DIB elements
. https://github.com/stackforge/savanna-dashboard - horizon integration
. https://github.com/stackforge/python-savannaclient - client library

Thanks to hub_cap and other folks on #savanna for the lively discussion and debate in forming this proposal.

Best, matt

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
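Electorate options 3(b)/(c) above amount to deriving committer lists from git history. A sketch of how that could be done — demonstrated on a throwaway repo with made-up authors; in practice the final command would run inside a clone of each Savanna repository, optionally with a revision range to restrict to commits since a release tag:

```shell
# Sketch of building an electorate from git history (options 3(b)/(c)
# above). The scratch repo and author identities are only for demonstration.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=Alice -c user.email=a@example.com commit -q --allow-empty -m one
git -c user.name=Bob   -c user.email=b@example.com commit -q --allow-empty -m two
git -c user.name=Alice -c user.email=a@example.com commit -q --allow-empty -m three

# One line per unique author = one row per potential voter. Add a range
# such as '<grizzly-tag>..HEAD' to count only commits since that release.
git log --format='%aN <%aE>' | sort -u
```

For the real repos, the same pipeline run over savanna, savanna-extra, savanna-dashboard, and python-savannaclient, with the results merged through another sort -u, would yield the combined voter roll.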
Re: [openstack-dev] [savanna] scalable architecture
On 07/23/2013 12:32 PM, Sergey Lukjanov wrote: Hi everyone, we've started working on upgrading the Savanna architecture in version 0.3 to make it horizontally scalable. Most of the information is in the wiki page - https://wiki.openstack.org/wiki/Savanna/NextGenArchitecture. Additionally, there are several blueprints created for this activity - https://blueprints.launchpad.net/savanna?searchtext=ng- We are looking for comments / questions / suggestions.

Some comments on "Why not provision agents to Hadoop clusters to provision all the other stuff?"

Re problems with scaling agents for launching large clusters - launching large clusters may be resource intensive; those resources must be provided by someone. They're either going to be provided by (a) the hardware running the savanna infrastructure or (b) the instance hardware provided to the tenant. If they are provided by (a), then the cost of launching the cluster is incurred by all users of savanna. If (b), then the cost is incurred by the user trying to launch the large cluster. It is true that some instance recommendations may be necessary, e.g. if you want to run a 500 instance cluster, then your head node should be large (vs medium or small). That sizing decision needs to happen for (a) or (b), because enough virtual resources must be present to maintain the large cluster after it is launched. There are accounting and isolation benefits to (b).

Re problems migrating agents while the cluster is scaling - will you expand on this point?

Re unexpected resource consumers - during launch, maybe; during execution the agent should be a minimal consumer of resources. sshd may also be an unexpected resource consumer.

Re security vulnerability - the agents should only communicate within the instance network, primarily w/ the head node. The head node can relay information to the savanna infrastructure outside the instances in the same way savanna-api gets information now. So there should be no difference in the vulnerability assessment.

Re supporting multiple distros - yes, but I'd argue this is at most a small incremental complexity over what already exists today w/ properly creating savanna-plugin-compatible instances.

- Concretely, the architecture of using instance resources for provisioning is no different than spinning up an instance w/ ambari and then telling that instance to provision the rest of the cluster and report back status.

- Re metrics - wherever you gather Hz (# req per sec, # queries per sec, etc), also gather standard summary statistics (mean, median, std dev, quartiles, range).

Best, matt

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
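The last point — pairing raw Hz counters with summary statistics — can be illustrated with a small awk pipeline. The stream of per-interval request rates here is entirely made up:

```shell
# Illustrative only: compute summary statistics (min/max/mean/median) over a
# hypothetical stream of requests-per-second samples, the kind of figures
# the metrics comment above says should accompany raw Hz counters.
printf '%s\n' 12 15 11 14 90 13 |
sort -n |
awk '{ v[NR] = $1; sum += $1 }
     END {
         mean = sum / NR
         # median: middle element, or average of the two middle elements
         median = (NR % 2) ? v[(NR + 1) / 2] : (v[NR / 2] + v[NR / 2 + 1]) / 2
         printf "n=%d min=%s max=%s mean=%.2f median=%s\n", NR, v[1], v[NR], mean, median
     }'
# -> n=6 min=11 max=90 mean=25.83 median=13.5
```

Note how the single 90 Hz outlier drags the mean far above the median — exactly the kind of signal a bare rate counter hides.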
Re: [openstack-dev] [Savanna] Merge Fedora and Ubuntu DIB elements
On 07/15/2013 07:34 AM, Ivan Berezovskiy wrote: Matt, I've sent a comment at https://review.openstack.org/#/c/36690/ .

So, I believe the issue is a hadoop.rpm that is out of spec w/ Fedora. For instance, it claims to own things like /usr. It also doesn't have a proper post-install to handle the library files.

I've not seen the issue. Please file a bug for it.

We decided to merge the elements; I suggest you do it in the following way:
1. the root.d subdirectory doesn't change.
2. the install.d subdirectory should be used to install java on Ubuntu and Fedora.
3. the post-install.d subdirectory should be used to install hadoop and configure ssh on Ubuntu and Fedora.

What's the motivation for this split?

4. your changes in the first-boot.d/99-setup file are OK; we only need to change them for Fedora 19, because the default user in Fedora 19 is 'fedora'.

Agreed. AFAIK we don't have F19 yet, so please file a bug on this. Whoever gets it should start tracking DIB (or help DIB get F19).

Best, matt

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [savanna]error while accessing Savanna UI
On 07/15/2013 08:45 AM, Arindam Choudhury wrote: Hi, I did:

git clone https://github.com/stackforge/savanna-dashboard.git
cd savanna-dashboard
python setup.py install

pip show savannadashboard
---
Name: savannadashboard
Version: 0.2.rc2
Location: /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg
Requires:

then in /usr/share/openstack-dashboard/openstack_dashboard/settings.py

HORIZON_CONFIG = {
    'dashboards': ('project', 'admin', 'settings', 'savanna',),

INSTALLED_APPS = (
    'openstack_dashboard',
    'savannadashboard',

and in /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py

SAVANNA_URL = 'http://localhost:8386/v1.0'

But whenever I try to access the savanna dashboard I get the following error in the httpd error_access log (the repeated '[Mon Jul 15 07:44:35 2013] [error]' prefixes have been trimmed from each line):

ERROR:django.request:Internal Server Error: /dashboard/savanna/
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 111, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 38, in dec
    return view_func(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 54, in dec
    return view_func(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/horizon/decorators.py, line 38, in dec
    return view_func(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/django/views/generic/base.py, line 48, in view
    return self.dispatch(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/django/views/generic/base.py, line 69, in dispatch
    return handler(request, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/horizon/tables/views.py, line 155, in get
    handled = self.construct_tables()
  File /usr/lib/python2.6/site-packages/horizon/tables/views.py, line 146, in construct_tables
    handled = self.handle_table(table)
  File /usr/lib/python2.6/site-packages/horizon/tables/views.py, line 118, in handle_table
    data = self._get_data_dict()
  File /usr/lib/python2.6/site-packages/horizon/tables/views.py, line 182, in _get_data_dict
    self._data = {self.table_class._meta.name: self.get_data()}
  File /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/clusters/views.py, line 40, in get_data
    clusters = savanna.clusters.list()
  File /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/api/clusters.py, line 74, in list
    return self._list('/clusters', 'clusters')
  File /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/api/base.py, line 84, in _list
    resp = self.api.client.get(url)
  File /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/api/httpclient.py, line 28, in get
    headers={'x-auth-token': self.token})
  File /usr/lib/python2.6/site-packages/requests/api.py, line 55, in get
    return request('get', url, **kwargs)
  File /usr/lib/python2.6/site-packages/requests/api.py, line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File /usr/lib/python2.6/site-packages/requests/sessions.py, line 335, in request
    resp = self.send(prep, **send_kwargs)
  File /usr/lib/python2.6/site-packages/requests/sessions.py, line 438, in send
    r = adapter.send(request, **kwargs)
  File /usr/lib/python2.6/site-packages/requests/adapters.py, line 327, in send
    raise ConnectionError(e)
ConnectionError:
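The ConnectionError at the bottom of that traceback means the dashboard could not reach SAVANNA_URL at all. A first diagnostic step (my suggestion, not from the thread) is to probe the endpoint directly; curl exits non-zero on a connection-level failure even though a 401 or other HTTP error would still prove something is listening:

```shell
# Hypothetical diagnostic for the ConnectionError above: before digging into
# Django, check whether anything is answering at the configured SAVANNA_URL
# (i.e. whether the openstack-savanna-api service is running and listening).
SAVANNA_URL='http://localhost:8386/v1.0'
if curl -sS --max-time 5 -o /dev/null "$SAVANNA_URL/" 2>/dev/null; then
    echo "something is listening at $SAVANNA_URL"
else
    echo "nothing listening at $SAVANNA_URL - is openstack-savanna-api started?"
fi
```

If nothing is listening, starting the savanna-api service (and re-checking the port in SAVANNA_URL) is the likely fix.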
Re: [openstack-dev] [Savanna] Savanna 0.2 is released!
Well done all, this release was no small effort! Especially great collaboration and use of the tools available from the OpenStack community. Best, matt

On 07/15/2013 06:14 PM, Sergey Lukjanov wrote: Hello everyone, I'm very happy to announce the immediate release of Savanna 0.2. This release contains 3 components: Savanna core, a plugin for the OpenStack Dashboard, and diskimage-builder elements.

Release Notes (https://wiki.openstack.org/wiki/Savanna/ReleaseNotes/0.2):
* Plugin Provisioning Mechanism implemented
* Vanilla Hadoop plugin implemented with the following features supported:
  * creation of Hadoop clusters with different topologies
  * scaling: resizing existing node groups and adding new ones
  * support of Swift as input and output for Hadoop jobs
* diskimage-builder elements for automation of Hadoop image creation
* Cinder supported as block storage provider
* Anti-affinity supported for Hadoop processes
* OpenStack Dashboard plugin which supports almost all the operations exposed through the Savanna REST API (screencast will be available soon)
* Integration tests for the Vanilla plugin

Savanna wiki: https://wiki.openstack.org/wiki/Savanna
Launchpad project: https://launchpad.net/savanna
Savanna docs: https://savanna.readthedocs.org/en/latest/index.html (quickstart and installation, user and dev guides)

Enjoy!

P.S. Savanna Dashboard isn't available yet, but will be very soon.

Sincerely yours, Sergey Lukjanov, Savanna Technical Lead, Mirantis Inc.

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Savanna-all] Cluster scaling discussion
A comment on how you go about this. I believe you've already run into issues w/ using the start/stop-*.sh scripts as a foundation for this feature.

Long term, I believe that an active cluster need not mean every instance is up and running. The core infrastructure must be up (ambari + jt + nn), plus some % of worker instances (tt + dn). For example, if I want to make a 500 instance cluster, I won't need to wait for all 500 instances before I can reasonably start using the cluster. In fact, I may never have 500 instances at any given time; 98% may be acceptable operating procedure. The start/stop-*.sh scripts are not good for that use case either.

However you go about this, keep the 98% cluster use case in mind.

Best, matt

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
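The 98% idea above amounts to a readiness threshold rather than an all-or-nothing check. A toy sketch — all names and numbers are hypothetical, and a real implementation would derive the up count from the provisioning loop:

```shell
# Toy illustration of the "98% cluster" policy above: declare the cluster
# usable once a threshold fraction of worker instances is up, instead of
# blocking until every last instance is running. All values hypothetical.
wanted=500      # workers requested
up=492          # workers currently reporting in
threshold=98    # percent required before the cluster counts as usable

if [ $(( up * 100 / wanted )) -ge "$threshold" ]; then
    echo "cluster usable: $up/$wanted workers up"
else
    echo "still provisioning: $up/$wanted workers up"
fi
# -> cluster usable: 492/500 workers up
```

With 492 of 500 up (98.4%), the check passes; the remaining instances can keep provisioning, or never arrive, without blocking use of the cluster.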