Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-03 Thread Robert Collins
The team size was a minimum, not a maximum - please add your names.

We're currently waiting on the prerequisite blueprint to land before
work starts in earnest; and for the blueprint to be approved (he says,
without having checked to see if it has been now :))

-Rob

On 3 December 2013 20:48, Wang, Shane shane.w...@intel.com wrote:
 Lianhao Lu, Shuangtai Tian and I are also willing to join the team to 
 contribute because we are also changing the scheduler, but it seems the team is 
 full. You can put us on the backup list.

 Thanks.
 --
 Shane

 -Original Message-
 From: Robert Collins [mailto:robe...@robertcollins.net]
 Sent: Friday, November 22, 2013 4:59 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest 
 proposal for an external scheduler in our lifetime

 https://etherpad.openstack.org/p/icehouse-external-scheduler

 I'm looking for 4-5 folk who have:
  - modest Nova skills
  - time to follow a fairly mechanical plan (careful and detailed work
 needed) to break the status quo around scheduler extraction

 And of course, discussion galore about the idea :)

 Cheers,
 Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-03 Thread Calum Loudon
Hi all

More volunteers for you - myself (Calum Loudon) and Colin Tregenza Dancer from 
Metaswitch (http://metaswitch.com).

We're new to OpenStack development, so a bit of context: we develop software 
for the telecoms space, ranging from low-level network stacks to voice 
applications.  We see enormous interest in the idea of the software Telco, 
with telecoms providers now really understanding cloud and wanting to move to 
it; you may have heard of Network Functions Virtualisation (NFV), a big push by 
the telecoms industry to define how this will work, and which implicitly 
assumes OpenStack as the underlying cloud platform.

NFV needs a few things OpenStack doesn't currently provide, mainly due to the 
extremely high reliability & bandwidth/latency requirements of Telco-grade apps 
compared to typical Enterprise-grade data apps, and we want to contribute code 
to help close those gaps.  From what I learnt in Hong Kong, I think that 
initially means richer placement policies (e.g. more advanced (anti)affinity 
rules; locating VMs close to storage or networks; globally-optimal placement) 
and if I'm following this list correctly then I think this activity is the 
first step towards that goal, enabling in future phases Yathi's vision of 
instance groups with smart resource placement [1] which closely resembles our 
own.

So we'd love to help in whatever way is needed - please count us in.

cheers

Calum

[1] 
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit
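The richer placement policies mentioned above (e.g. anti-affinity between members of an instance group) can be sketched as a toy host filter. This is purely illustrative Python of my own, not Nova's actual scheduler code; all host and instance names are made up:

```python
def anti_affinity_hosts(hosts, group_members, instances_by_host):
    """Keep only hosts that run no instance belonging to the group.

    hosts: candidate host names
    group_members: instance IDs in the anti-affinity group
    instances_by_host: mapping of host name -> instance IDs it runs
    """
    members = set(group_members)
    return [h for h in hosts
            if not members & set(instances_by_host.get(h, []))]

hosts = ["host1", "host2", "host3"]
instances_by_host = {"host1": ["vm-a"], "host2": ["vm-b"], "host3": []}

# host1 already runs a member of the group, so it is filtered out.
print(anti_affinity_hosts(hosts, ["vm-a"], instances_by_host))
# -> ['host2', 'host3']
```

A real filter would also consult host capacity, and "locating VMs close to storage or networks" would add a distance score rather than a hard exclusion.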


Calum Loudon
Director of Architecture
Metaswitch Networks
 
P   +44 (0)208 366 1177
E   calum.lou...@metaswitch.com


-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: Friday, November 22, 2013 4:59 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest 
proposal for an external scheduler in our lifetime

https://etherpad.openstack.org/p/icehouse-external-scheduler

I'm looking for 4-5 folk who have:
 - modest Nova skills
 - time to follow a fairly mechanical plan (careful and detailed work
needed) to break the status quo around scheduler extraction

And of course, discussion galore about the idea :)

Cheers,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [UX] Topics Summary

2013-12-03 Thread Jaromir Coufal

Hey OpenStackers,

based on the latest discussions, it was asked whether we could post 
regular updates of what is happening in our community (mostly on the Askbot 
forum: http://ask-openstackux.rhcloud.com).


In this e-mail, I'd like to summarize ongoing issues; I will try to post 
updates weekly (or every 14 days, depending on content).


Issues by priority:
---
* TripleO UI - Resource Management: 
http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/
- TripleO UI is close to starting implementation - we really need 
to get resources on these topics
- Resource Management is about to start - it was already reviewed 
without any bigger objections; there are just smaller updates
* TripleO UI - Deployment Management: 
http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/

- Deployment Management is a completely new section
- Implementation needs to start ASAP
- At the moment the concept is the most important, so we don't have 
major changes later

- We can get into granular details later
- Review, review, review please
* Horizon - Navigation: 
http://ask-openstackux.rhcloud.com/question/2/openstack-dashboard-navigation-redesign/?answer=99#post-id-99
- After a couple of proposals and discussions, at the summit we decided 
to go for vertical navigation

- Updated vertical navigation proposal (wireframes)
* Horizon - Information Architecture: 
http://ask-openstackux.rhcloud.com/question/1/openstack-ui-information-architecture/?answer=94#post-id-94

- Thanks to David we have an updated IA and need more eyes on it
- Anybody who is adding features to Horizon, please 
have a look; feedback is warmly welcome


Other important topics:
--- (in no specific order)
* OpenStack Personas: 
http://ask-openstackux.rhcloud.com/question/68/openstack-personas/
- A couple of months ago an initiative started to create OpenStack 
Personas
- Based on Dave's document, Ju started the thread to move this 
effort forward (it should help in various areas)
- Matt shared their document for 3 personas, which is very 
helpful

- Any other insights are very welcome
* Horizon - Updating Modals layout: 
http://ask-openstackux.rhcloud.com/question/11/modals-and-supporting-different-screen-sizes/

- Modals are not using the screen efficiently
- Proposal for modal vs. embedded view (agreed on keeping modal)
- We need updated characteristics for improving the layout 
of the modal window

* Horizon - Improve 'Launching Instance' workflow
- Based on Cedric's proposal we have a couple of topics to discuss 
around improving the UX of the instance launching workflow
* Horizon - Improve selection of flavor and image: 
http://ask-openstackux.rhcloud.com/question/12/enhance-the-selection-of-a-flavor-and-an-image/
* Horizon - Improve Boot source selection: 
http://ask-openstackux.rhcloud.com/question/13/improve-boot-source-ux-ephemeral-vs-persistent-disk/
* Horizon - Where to use wizards: 
http://ask-openstackux.rhcloud.com/question/81/wizard-ui-for-workflow/
- Toshi started a very useful discussion about where wizards are 
needed
* Horizon - Mobile UI: 
http://ask-openstackux.rhcloud.com/question/67/horizon-mobile-ui/
- Maxim started to move Mobile UI for Horizon forward; they already 
have an HTML5 prototype - check it out
* TripleO UI - Node / Rack / Resource class details: 
http://ask-openstackux.rhcloud.com/question/66/tuskar-ui-node-rack-and-resource-class-details/
- Liz and I put together a set of wireframes for infrastructure 
detail pages

- However, this one needs to be updated based on the latest direction
- We will re-use node details at the moment (racks and classes are 
postponed)
* Horizon - Overview enhancements: 
http://ask-openstackux.rhcloud.com/question/59/improvements-to-horizon-overview/

- Liz put together a proposal for improving Horizon's Overview page
* Horizon - Icon set for instance actions: 
http://ask-openstackux.rhcloud.com/question/70/use-icon-set-instead-of-instanceaction-button/
- Garry is proposing to use icons instead of a dropdown for instance 
actions

- There is ongoing discussion about how to visualize Instance objects

This first e-mail turned out a little bit long, but I hope it is 
useful for you. If you have any proposal or feedback on this format, 
please share it.


Everybody interested in helping UX efforts move forward is most 
certainly welcome.


Cheers
-- Jarda

--- Jaromir Coufal (jcoufal)
--- OpenStack User Experience
--- IRC: #openstack-ux (at FreeNode)
--- Forum: http://ask-openstackux.rhcloud.com
--- Wiki: https://wiki.openstack.org/wiki/UX


Re: [openstack-dev] [OpenStack-dev] How to modify a bug across multiple repo?

2013-12-03 Thread Robert Collins
No, you need to manually arrange to land your changes first in one
repo, then in the other.

-Rob

On 3 December 2013 19:17, wu jiang win...@gmail.com wrote:
 Hi all,

 Recently, I found a bug at the API layer in Cinder, but the modifications relate
 to CinderClient & Tempest.
 So I'm confused about how to commit it. Can 'git --dependence' cross different
 repos?

 Any help would be much appreciated.

 Regards,
 wingwj








-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [nova][database] Update compute_nodes table

2013-12-03 Thread Abbass MAROUNI
I am aware of this work; in fact I reused a column (pci_stats) in the
compute_nodes table to store a JSON blob.
I track the resource in the resource_tracker, update the column, and then
use the blob in a filter.
Maybe I should reformulate my question: how can I add a column to the table
and use it in the resource_tracker without breaking something?

Best regards,


2013/12/2 openstack-dev-requ...@lists.openstack.org



 --

 Message: 1
 Date: Mon, 02 Dec 2013 12:06:21 -0500
 From: Russell Bryant rbry...@redhat.com
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova][database] Update compute_nodes
 table
 Message-ID: 529cbe0d@redhat.com
 Content-Type: text/plain; charset=ISO-8859-1

 On 12/02/2013 11:47 AM, Abbass MAROUNI wrote:
  Hello,
 
  I'm looking for a way to add a new attribute to the compute nodes by
  adding a column to the compute_nodes table in the nova database in order to
  track a metric on the compute nodes and use it later in nova-scheduler.
 
  I checked the  sqlalchemy/migrate_repo/versions and thought about adding
  my own upgrade then sync using nova-manage db sync.
 
  My question is :
  What is the process of upgrading a table in the database ? Do I have to
  modify or add a new variable in some class in order to associate the
  newly added column with a variable that I can use ?

 Don't add this.  :-)

 There is work in progress to just have a column with a json blob in it
 for additional metadata like this.

 https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
 https://wiki.openstack.org/wiki/ExtensibleResourceTracking

 --
 Russell Bryant
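The JSON-blob approach Russell points to can be sketched with plain sqlite3 standing in for Nova's database layer. This is illustrative only; the table shape and key names below are my assumptions, not the blueprint's actual schema:

```python
import json
import sqlite3

# In-memory stand-in for the compute_nodes table with one generic
# text column holding a JSON blob of extra metadata.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compute_nodes (id INTEGER PRIMARY KEY, stats TEXT)")

# The resource tracker would serialize arbitrary metrics into the blob
# instead of adding one schema migration per new metric...
stats = {"my_metric": 42, "pci_stats": []}
conn.execute("INSERT INTO compute_nodes (stats) VALUES (?)",
             (json.dumps(stats),))

# ...and a scheduler filter would deserialize and inspect it.
row = conn.execute("SELECT stats FROM compute_nodes WHERE id = 1").fetchone()
loaded = json.loads(row[0])
print(loaded["my_metric"])  # -> 42
```

The upside is that no further migrations are needed when a new metric appears; the trade-off is that the blob cannot be queried efficiently in SQL.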



[openstack-dev] [OpenStack-dev][cinder][glance] Should glance be installed on cinder only nodes?

2013-12-03 Thread gans developer
Hi All,

I was performing a Copy Image to Volume operation on my controller node,
which has glance and cinder installed.

If I wish to create a cinder-only node for cinder-volume operations, would
I need to install glance on this node as well to perform a Copy Image to
Volume operation?

Thanks,
Gans.


Re: [openstack-dev] Incubation Request for Barbican

2013-12-03 Thread Thierry Carrez
Jarret Raim wrote:

 The TC is currently working on formalizing requirements for new programs
 and projects [3].  I figured I would give them a try against this
 application.

 First, I'm assuming that the application is for a new program that
 contains the new project.  The application doesn't make that bit clear,
 though.
 
 In looking through the documentation for incubating [1], there doesn't
 seem to be any mention of also having to be associated with a program. Is
 it a requirement that all projects belong to a program at this point? If
 so, I guess we would be asking for a new program as I think that
 encryption and key management is a separate concern from the rest of the
 programs listed here [2].
 
 [1] https://wiki.openstack.org/wiki/Governance/NewProjects
 [2] https://wiki.openstack.org/wiki/Programs

With the introduction of programs (think: official teams), all
incubated/integrated projects must belong to an official program... So
when a project applies for incubation but is not part of an official
program yet, it de-facto also applies to be considered a program.

 [...] 
 ** Team should have a lead, elected by the team contributors

 Was the PTL elected?  I can't seem to find record of that.  If not, I
 would like to see an election held for the PTL.
 
 We're happy to do an election. Is this something we can do as part of the
 next election cycle? Or something that needs to be done out of band?

I'm not 100% sure we'll keep that election requirement. I think the
program application should have an initial PTL named on it. The way
that's determined is up to the team (natural candidate, election...).

 ** Team should have a clear way to grant ATC (voting) status to its
significant contributors

 Related to the above
 
 I thought that the process of becoming an ATC was pretty well set [3]. Is
 there some specific that Barbican would have to do that is different than
 the ATC rules in the Tech Committee documentation?
 
 [3] 
 https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee

No, since the team produces code the ATC designation method is pretty
well established. This rule cares for programs which have weirder
deliverables.

-- 
Thierry Carrez (ttx)




Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-03 Thread Thierry Carrez
Russell Bryant wrote:
 On 12/02/2013 11:41 AM, Thierry Carrez wrote:
 I don't really care that much about deprecation in that case, but I care
 about which release the new project is made part of. Would you make it
 part of the Icehouse common release ? That means fast-tracking through
 incubation *and* integration in less than one cycle... I'm not sure we
 want that.

 I agree it's the same code (at least at the beginning), but the idea
 behind forcing all projects to undergo a full cycle before being made
 part of the release is not really about code stability, it's about
 integration with the other projects and all the various programs. We
 want them to go through a whole cycle to avoid putting unnecessary
 stress on packagers, QA, docs, infrastructure and release management.

 So while I agree that we could play tricks around deprecation, I'm not
 sure we should go from forklifted to part of the common release in less
 than 3 months.

 I'm not sure it would buy us anything, either. Having the scheduler
 usable by the end of the Icehouse cycle and integrated in the J cycle
 lets you have one release where both options are available, remove it
 first thing in J and then anyone running J (be it tracking trunk or
 using the final release) is using the external scheduler. That makes
 more sense to me and technically, you still have the option to use it
 with Icehouse.
 
 Not having to maintain code in 2 places is what it buys us.  However,
 this particular point is a bit moot until we actually had it done and
 working.  Perhaps we should just revisit the deprecation plan once we
 actually have the thing split out and running.

Agreed. My position on this would probably be different if the forklift
had been completed one month ago and we had 5 months of 'integration'.
With the current timing, however, I think we'll have to have the code in
two places by the Icehouse release.

That said, if we mark the nova-scheduler Icehouse code deprecated and
remove it early in J, the dual-maintenance burden is limited. The only
obligation we'd have would be security backports, and the scheduler has
been relatively vulnerability-free so far.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-03 Thread Khanh-Toan Tran
We are also interested in the proposal and would like to contribute
whatever we can.
Currently we're working on nova-scheduler, and we think that an independent
scheduler is a necessity for OpenStack. We've been engaging in several
discussions on this topic on the ML as well as in the Nova meeting, so we
were thrilled to hear your proposal.

PS: I wrote an earlier mail expressing our interest in this topic, but I
feel it's better to have a more official submission to join the team :)

Best regards,

Jerome Gallard  Khanh-Toan Tran

 -Original Message-
 From: Robert Collins [mailto:robe...@robertcollins.net]
 Sent: Tuesday, 3 December 2013 09:18
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest
 proposal for an external scheduler in our lifetime

 The team size was a minimum, not a maximum - please add your names.

 We're currently waiting on the prerequisite blueprint to land before work
 starts in earnest; and for the blueprint to be approved (he says, without
 having checked to see if it has been now :))

 -Rob




[openstack-dev] [Neutron] L3 agent external networks

2013-12-03 Thread Sylvain Afchain
Hi,

I was reviewing this patch (https://review.openstack.org/#/c/52884/) from Oleg 
and I thought that it is a bit tricky to deploy an l3 agent with automation tools 
like Puppet, since you have to specify the UUID of a network that doesn't 
exist yet. It may be better to bind an l3 agent to a network by a CIDR 
instead of a UUID, since when we deploy we know in advance which network address 
will be on which l3 agent.

I also wanted to remove the L3 agent limit on the number of external 
networks; I submitted a patch as WIP (https://review.openstack.org/#/c/59359/) 
for that purpose, and I wanted to have the community's opinion about it :)

Please let me know what you think.

Best regards,

Sylvain
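The CIDR-based binding Sylvain suggests boils down to matching a configured network address against the subnets the agent can see, instead of a pre-provisioned UUID. A minimal sketch using the stdlib ipaddress module (the data and helper below are invented for illustration; this is not the code in either patch):

```python
import ipaddress

# Hypothetical view of networks as an l3 agent might learn them from
# the Neutron server (IDs and subnets are made up for this example).
networks = [
    {"id": "net-a", "subnets": ["10.0.0.0/24"]},
    {"id": "net-b", "subnets": ["192.168.1.0/24"]},
]

def find_network_by_cidr(networks, cidr):
    """Return the ID of the first network owning a subnet equal to cidr."""
    wanted = ipaddress.ip_network(cidr)
    for net in networks:
        if any(ipaddress.ip_network(s) == wanted for s in net["subnets"]):
            return net["id"]
    return None

# The deployer only needs to know the CIDR in advance, not the UUID.
print(find_network_by_cidr(networks, "192.168.1.0/24"))  # -> net-b
```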




[openstack-dev] [TripleO] UI Wireframes - close to implementation start

2013-12-03 Thread Jaromir Coufal

Hey folks,

I opened 2 issues on UX discussion forum with TripleO UI topics:

Resource Management:
http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/
- this section was already reviewed before; there are not many surprises, 
just smaller updates

- we are about to implement this area

Deployment Management:
http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/
- these are completely new views and they need a lot of attention so 
that we don't change direction drastically later

- any feedback here is welcome

We need to get into implementation ASAP. That doesn't mean we will have 
everything perfect from the very beginning, but that we have a direction 
and we move forward by enhancements.


Therefore, implementation of the above mentioned areas should start very soon.

If at all possible, I will try to record a walkthrough with further 
explanations. If you have any questions or feedback, please follow the 
threads on ask-openstackux.


Thanks
-- Jarda


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-03 Thread Daniel P. Berrange
On Mon, Dec 02, 2013 at 07:23:19PM +, Alessandro Pilotti wrote:
 
 On 02 Dec 2013, at 04:52 , Kyle Mestery (kmestery) kmest...@cisco.com wrote:
 
  
  
  This is very cool Alessandro, thanks for sharing! Any plans to try and get 
  this
  nova driver upstreamed?
 
 My personal opinion is that drivers should stay outside of Nova in a separate 
 project.

If drivers were to live in separate projects we would be forced to maintain
Nova internal code as stable APIs to avoid breaking drivers during the dev
cycle. This would place a significant burden on Nova development and have a
negative impact on the overall ease of development. It would also discourage
collaboration and sharing of code between virt drivers, which is already a
significant problem today whereby drivers come up with different ways to do
the same thing. If you want to be isolated from the community in a separate
project then expect your code to be broken periodically during development.
If you don't want that, then put the code in tree and be an active part of
the community effort working together, instead of in isolation.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [OpenStack-dev][cinder][glance] Should glance be installed on cinder only nodes?

2013-12-03 Thread Avishay Traeger
Gans,
No, you don't need to install Glance on Cinder nodes.  Cinder will use the 
Glance client, which must be installed on the Cinder node (see 
python-glanceclient in the requirements.txt file in Cinder's tree).

Thanks,
Avishay



From:   gans developer gans.develo...@gmail.com
To: openstack-dev@lists.openstack.org, 
Date:   12/03/2013 10:58 AM
Subject:[openstack-dev] [OpenStack-dev][cinder][glance] Should 
glance be installed on cinder only nodes?



Hi All,

I was performing Copy Image to Volume operation on my controller node 
which has glance and cinder installed.

If I wish to create a cinder-only node for cinder-volume operations, 
would I need to install glance on this node as well to perform a Copy 
Image to Volume operation?

Thanks,
Gans.





Re: [openstack-dev] [horizon] Javascript testing framework

2013-12-03 Thread Radomir Dopieralski
On 03/12/13 01:26, Maxime Vidori wrote:
 Hi!
 
 In order to improve the javascript quality of Horizon, we have to change the 
 testing framework of the client-side. Qunit is a good tool for simple tests, 
 but the integration of Angular needs some powerful features which are not 
 present in Qunit. So, I have made a little POC with the javascript testing 
 library Jasmine, which is the one given as an example in the Angularjs 
 documentation. I have also rewritten a Qunit test in Jasmine in order to show 
 that the change is quite easy to make.
 
 Feel free to comment in this mailing list on the pros and cons of this new tool, 
 and to check out my code for review. I have also made a helper for 
 quick development of Jasmine tests through Selenium.
 
 To finish, I need your opinion on a new command line in run_tests.sh. I 
 think we should create a run_tests.sh --runserver-test target which will 
 allow developers to see all the javascript test pages. This new command 
 will allow people to avoid the use of the command line for running Selenium 
 tests, and allow them to view their tests in a comfortable HTML interface. It 
 could be interesting for the development of tests; this command 
 will only be used for development purposes.
 
 Waiting for your feedbacks!
 
 Here is a link to the Jasmine POC: https://review.openstack.org/#/c/59580/

Hello Maxime,

thank you for this proof of concept, it looks very interesting. I left a
small question about how it's going to integrate with Selenium there.

But I thought that it would be nice if you could point us here to some
resources that explain why Jasmine is better than QUnit, other than the
fact that it is used in some AngularJS example. I'm sure that would help
convince a lot of people to the idea of switching. A quick search for
QUnit vs Jasmine tells me that the advantages of Jasmine are
tight integration with Ruby on Rails and Behavior Driven Development
style syntax. As we don't use either, I'm not sure we want it.

I'm sure that pointers to resources specific to our use cases would
greatly help everyone make a decision.

Thanks,
-- 
Radomir Dopieralski




Re: [openstack-dev] [TripleO] UI Wireframes - close to implementation start

2013-12-03 Thread Jaromir Coufal
I am sorry for the mistake in the tag - it is fixed in this reply, and I'm 
keeping the original text below.


On 2013/03/12 10:25, Jaromir Coufal wrote:

Hey folks,

I opened 2 issues on UX discussion forum with TripleO UI topics:

Resource Management:
http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/
- this section was already reviewed before; there are not many 
surprises, just smaller updates

- we are about to implement this area

Deployment Management:
http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/
- these are completely new views and they need a lot of attention so 
that we don't change direction drastically later

- any feedback here is welcome

We need to get into implementation ASAP. That doesn't mean we will have 
everything perfect from the very beginning, but that we have a direction 
and we move forward by enhancements.


Therefore, implementation of the above mentioned areas should start very soon.

If at all possible, I will try to record a walkthrough with further 
explanations. If you have any questions or feedback, please follow the 
threads on ask-openstackux.


Thanks
-- Jarda




Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-12-03 Thread Gary Kotton
Hi,
I think that this information should be used as part of the scheduling
decision; that is, hosts should be excluded from selection if they
do not have the necessary resources available. It will be interesting to
know how this is going to fit into the new scheduler that is being
discussed.
Thanks
Gary

On 12/3/13 9:05 AM, Vui Chiap Lam vuich...@vmware.com wrote:

Hi Daniel,

I too found the original bp a little hard to follow, so thanks for
writing up the wiki! I see that the wiki is now linked to the BP,
which is great as well.

The ability to express CPU topology constraints for the guests
has real-world use, and several drivers, including VMware, can definitely
benefit from it.

If I understand correctly, in addition to being an elaboration of the
BP text, the wiki also adds the following:

1. Instead of returning the best matching (num_sockets (S),
   cores_per_socket (C), threads_per_core (T)) tuple, all applicable
   (S,C,T) tuples are returned, sorted by S then C then T.
2. A mandatory topology can be provided in the topology computation.

I like 2. because there are multiple reasons why all of a hypervisor's
CPU resources cannot be allocated to a single virtual machine.
Given that the mandatory (I prefer maximal) topology is probably fixed
per hypervisor, I wonder whether this information should also be used at
scheduling time to eliminate incompatible hosts outright.

As for 1., because of the order of precedence of the fields in the
(S,C,T) tuple, I am not sure how the preferred_topology comes into
play. Is it meant to help favor alternative values of S?

Also it might be good to describe a case where returning a list of
(S,C,T) instead of the best match is necessary. It seems deciding what to
pick other than the first item in the list requires logic similar to
that used to arrive at the list in the first place.

Cheers,
Vui
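The (S,C,T) enumeration under discussion can be read as a constrained factorization of the vCPU count. The sketch below is my own illustration of that reading - the descending-sockets preference order is one plausible choice, not necessarily what the blueprint specifies:

```python
def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
    """All (sockets, cores, threads) tuples whose product equals vcpus
    and which respect the per-field maxima."""
    tuples = [(s, c, t)
              for s in range(1, max_sockets + 1)
              for c in range(1, max_cores + 1)
              for t in range(1, max_threads + 1)
              if s * c * t == vcpus]
    # One plausible preference: more sockets first, then cores, then threads.
    return sorted(tuples, key=lambda sct: (-sct[0], -sct[1], -sct[2]))

# 8 vCPUs with max-sockets=8, max-cores=8, max-threads=2:
print(possible_topologies(8, 8, 8, 2))
# -> [(8, 1, 1), (4, 2, 1), (4, 1, 2), (2, 4, 1), (2, 2, 2), (1, 8, 1), (1, 4, 2)]
```

Applying a mandatory/maximal per-hypervisor topology would then just mean intersecting this list with what the host supports, which is why exposing it at scheduling time, as suggested above, seems natural.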

- Original Message -
| From: Daniel P. Berrange berra...@redhat.com
| To: openstack-dev@lists.openstack.org
| Sent: Monday, December 2, 2013 7:43:58 AM
| Subject: Re: [openstack-dev] [Nova] Blueprint: standard specification
of guest CPU topology
| 
| On Tue, Nov 19, 2013 at 12:15:51PM +, Daniel P. Berrange wrote:
|  For attention of maintainers of Nova virt drivers
| 
| Anyone from Hyper-V or VMWare drivers wish to comment on this
| proposal
| 
| 
|  A while back there was a bug requesting the ability to set the CPU
|  topology (sockets/cores/threads) for guests explicitly
|  
| 
| https://bugs.launchpad.net/nova/+bug/1199019
|  
|  I countered that setting explicit topology doesn't play well with
|  booting images with a variety of flavours with differing vCPU counts.
|  
|  This led to the following change which used an image property to
|  express maximum constraints on CPU topology (max-sockets/max-cores/
|  max-threads) which the libvirt driver will use to figure out the
|  actual topology (sockets/cores/threads)
|  
|
| https://review.openstack.org/#/c/56510/
|  
|  I believe this is a prime example of something we must co-ordinate
|  across virt drivers to maximise happiness of our users.
|  
|  There's a blueprint but I find the description rather hard to
|  follow
|  
|
https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
|  
|  So I've created a standalone wiki page which I hope describes the
|  idea more clearly
|  
|
https://wiki.openstack.org/wiki/VirtDriverGuestCPUTopology
|  
|  Launchpad doesn't let me link the URL to the blueprint since I'm not
|  the blueprint creator :-(
|  
|  Anyway this mail is to solicit input on the proposed standard way to
|  express this which is hypervisor portable and the addition of some
|  shared code for doing the calculations which virt driver impls can
|  just call into rather than re-inventing it
|  
|  I'm looking for buy-in to the idea from the maintainers of each
|  virt driver that this conceptual approach works for them, before
|  we go merging anything with the specific impl for 

Re: [openstack-dev] [ceilometer] [marconi] Notifications brainstorming session tomorrow @ 1500 UTC

2013-12-03 Thread Julien Danjou
On Mon, Dec 02 2013, Kurt Griffiths wrote:

 Following up on some conversations we had at the summit, I’d like to get
 folks together on IRC tomorrow to crystalize the design for a notifications
 project under the Marconi program. The project’s goal is to create a service
 for surfacing events to end users (where a user can be a cloud app
 developer, or a customer using one of those apps). For example, a developer
 may want to be notified when one of their servers is low on disk space.
 Alternatively, a user of MyHipsterApp may want to get a text when one of
 their friends invites them to listen to That Band You’ve Never Heard Of.

 Interested? Please join me and other members of the Marconi team tomorrow,
 Dec. 3rd, for a brainstorming session in #openstack-marconi at 1500 UTC
 (http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&sec=0).
 Your contributions are crucial to making this project awesome.

 I’ve seeded an etherpad for the discussion:

 https://etherpad.openstack.org/p/marconi-notifications-brainstorm

This might (partially) overlap with what Ceilometer is doing with its
alarming feature, and with one of the blueprints on our roadmap for Icehouse:

  https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification

While it doesn't solve the use case at the same level, the technical
mechanism is likely to be similar.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-03 Thread Julien Danjou
On Mon, Dec 02 2013, Joshua Harlow wrote:

 Thanks for writing this up, looking forward to seeing this happen so that
 oslo.messaging can be used outside of the core openstack projects (and be
 used in libraries that do not want to force an oslo.cfg model onto users of
 said libraries).

 Any idea of a timeline as to when this would be reflected in
 https://github.com/openstack/oslo.messaging/ (even rough idea is fine).

As fast as the code can be written in review. I'll start working on this
now. Feel free to subscribe to the blueprint to receive notifications
about upcoming patches so you can review. ;-)
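
To make the layering idea concrete, here is a minimal sketch -- NOT the real oslo.messaging API, all names are illustrative -- of a transport factory that takes its configuration explicitly instead of reading the global oslo.config object, which is what lets a library avoid forcing the oslo.cfg model onto its users:

```python
# Hypothetical sketch of the layering idea, not actual oslo.messaging code.
class Transport:
    def __init__(self, url, options):
        self.url = url
        # Copy so later mutation by the caller cannot change our config.
        self.options = dict(options)


def get_transport(url, options=None):
    """Build a transport from explicit configuration only: a URL plus a
    plain mapping, with no dependency on a process-global config object."""
    return Transport(url, options or {})
```

A consumer then passes configuration in directly, e.g. `get_transport("rabbit://localhost", {"retry": 3})`, and an OpenStack service can still build that mapping from oslo.config in a thin adapter layer.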

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev] How to modify a bug across multiple repo?

2013-12-03 Thread Christopher Yeoh
Hi,

On Tue, Dec 3, 2013 at 7:21 PM, Robert Collins robe...@robertcollins.netwrote:

 No, you need to manually arrange to land your changes in first one
 repo then the other.

 -Rob

 On 3 December 2013 19:17, wu jiang win...@gmail.com wrote:
  Hi all,
 
  Recently, I found a bug at API layer in Cinder, but the modifications
 relate
  to CinderClient & Tempest.
  So, I'm confused how to commit it. Can 'git --dependence' cross different
  Repo?



Just to expand on what Rob mentioned - for situations like this it's pretty
common not to be able to just make the change in cinder first, because the
tempest tests will fail. And at the same time you can't make the final change
in tempest first, because the cinder change hasn't landed yet. If this is your
situation, what I'd suggest you do is:

- Submit the cinder change; it will fail tempest. That's OK - you might want
to leave a comment saying you are waiting on a tempest change to land first.

- Submit a tempest change which disables the tempest tests that fail because
of your change, and in the commit message reference the gerrit URL for the
cinder change. It's very helpful, though not always necessary, if you can get
a cinder core to +1 this patch, so the tempest cores know that the cinder
team is happy with what is going on.

- Once the tempest change has merged, the cinder change should be able to
merge.

- Submit a tempest change enabling the disabled test(s) and modifying it
for the expected behaviour.
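
For the "disable the failing tests" step, such a skip might look roughly like this (a sketch using the stdlib unittest decorator; the test name and reason text are purely illustrative, not a real tempest test):

```python
import unittest


class VolumeApiTest(unittest.TestCase):
    # Hypothetical test name. Put the pending cinder review's gerrit URL
    # in the skip reason so the tempest cores can see why it is disabled.
    @unittest.skip("Waiting on a cinder API change to land first; "
                   "see the corresponding gerrit review")
    def test_volume_create_metadata(self):
        self.fail("not reached while the test is skipped")
```

A later patch then simply removes the decorator and updates the assertions for the new behaviour.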

Yes, it would be nice to be able to have cross project dependent patches :)

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-12-03 Thread Daniel P. Berrange
On Tue, Dec 03, 2013 at 01:47:31AM -0800, Gary Kotton wrote:
 Hi,
 I think that this information should be used as part of the scheduling
 decision, that is hosts that are to be selected should be excluded if they
 do not have the necessary resources available. It will be interesting to
 know how this is going to fit into the new scheduler that is being
 discussed.

The CPU topology support shouldn't have any interactions with, nor
cause any failures post-scheduling, i.e. if the host has declared that
it has sufficient resources to run a VM with the given vCPU count,
then that is sufficient.

This is one of the reasons why the design is such that glance image
properties just declare an upper bound on topology, not an absolute
requirement. This allows the host chosen to run the VM to decide the
guest topology to suit its specific topology / resource availability.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Increase Swift ring partition power

2013-12-03 Thread Christian Schwede
On 02.12.13 17:10, Gregory Holt wrote:
 On Dec 2, 2013, at 9:48 AM, Christian Schwede
 christian.schw...@enovance.com wrote:

 That sounds great! Is someone already working on this (I know about
 the ongoing DiskFile refactoring) or even a blueprint available?

 There is https://blueprints.launchpad.net/swift/+spec/ring-doubling
 though I'm uncertain how up to date it is.

Thanks for the link! I read all the linked entries, reviews and patches
and it seems all of us wanted to use a similar approach.

David put it in a nutshell:

 We can consider this to be the yearly event in which we try to crack
 the part_power problem.
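
For anyone following along, the shared idea behind the ring-doubling approach can be sketched as follows (a simplification, not Swift's actual ring code): when the partition power grows by one, old partition p becomes new partitions 2p and 2p+1, so the old device assignments can simply be duplicated and no data has to move between devices.

```python
def double_partitions(replica2part2dev):
    """Given a ring's replica-to-partition-to-device table (one row per
    replica, one column per partition), return the table for a ring with
    the partition power increased by one: old partition p maps to new
    partitions 2p and 2p+1, both kept on the same device."""
    return [[dev for dev in row for _ in (0, 1)] for row in replica2part2dev]
```

E.g. a row `[3, 7]` (partition 0 on device 3, partition 1 on device 7) becomes `[3, 3, 7, 7]`, so objects only move between partitions on the same device.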

I'm going to write some docs and tests for my tool and will link it as
related project afterwards.

Christian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reg : Security groups implementation using openflows in quantum ovs plugin

2013-12-03 Thread Zang MingJie
On Sat, Nov 30, 2013 at 6:32 PM, Édouard Thuleau thul...@gmail.com wrote:

 And what do you think about the performance issue I talked ?
 Do you have any thought to improve wildcarding to use megaflow feature ?


I have investigated a little further; here is my environment:

X1 (10.0.5.1) --- OVS BR --- X2 (10.0.5.2)

I have set up several flows to make port 5000 open on X2:

$ sudo ovs-ofctl dump-flows br
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=49.672s, table=0, n_packets=7, n_bytes=496,
idle_age=6, priority=256,tcp,nw_src=10.0.5.2,tp_src=5000 actions=NORMAL
 cookie=0x0, duration=29.854s, table=0, n_packets=8, n_bytes=562,
idle_age=6, priority=256,tcp,nw_dst=10.0.5.2,tp_dst=5000 actions=NORMAL
 cookie=0x0, duration=2014.523s, table=0, n_packets=96, n_bytes=4032,
idle_age=35, priority=512,arp actions=NORMAL
 cookie=0x0, duration=2006.462s, table=0, n_packets=51, n_bytes=4283,
idle_age=40, priority=0 actions=drop

and here is the kernel flows after 2 connections created:

$ sudo ovs-dpctl dump-flows
skb_priority(0),in_port(8),eth(src=2e:19:44:50:9d:17,dst=ae:7f:28:4f:14:ec),eth_type(0x0800),ipv4(src=10.0.5.1/255.255.255.255,dst=10.0.5.2/255.255.255.255,proto=6/0xff,tos=0/0,ttl=64/0,frag=no/0xff),tcp(src=35789,dst=5000), packets:1, bytes:66, used:2.892s, flags:., actions:10
skb_priority(0),in_port(8),eth(src=2e:19:44:50:9d:17,dst=ae:7f:28:4f:14:ec),eth_type(0x0800),ipv4(src=10.0.5.1/255.255.255.255,dst=10.0.5.2/255.255.255.255,proto=6/0xff,tos=0/0,ttl=64/0,frag=no/0xff),tcp(src=35775,dst=5000), packets:0, bytes:0, used:never, actions:10
skb_priority(0),in_port(10),eth(src=ae:7f:28:4f:14:ec,dst=2e:19:44:50:9d:17),eth_type(0x0800),ipv4(src=10.0.5.2/255.255.255.255,dst=10.0.5.1/0.0.0.0,proto=6/0xff,tos=0/0,ttl=64/0,frag=no/0xff),tcp(src=5000/0x,dst=35789/0), packets:1, bytes:78, used:1.344s, flags:P., actions:8

Conclusion:
mac-src and mac-dst can't be wildcarded, because they are used by L2 bridging
and MAC learning; ip-src and port-src can't be wildcarded either; only ip-dst
and port-dst can be wildcarded.

I don't know why ip-src and port-src can't be wildcarded; maybe I just hit an
OVS bug.
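
As a rough illustration of the stateless approach, a simple security-group rule could be rendered into an ovs-ofctl flow spec like the ones dumped above (the function name and defaults are made up for illustration, not proposed plugin code):

```python
def sg_rule_to_flow(proto, tp_dst, priority=256):
    """Render a simple ingress allow rule as an ovs-ofctl add-flow spec,
    mirroring the hand-written flows shown above. Stateless: there is no
    connection tracking, so return traffic needs its own matching rule."""
    return "priority=%d,%s,tp_dst=%d actions=NORMAL" % (priority, proto, tp_dst)
```

The driver would emit one such flow per rule (plus the mirror rule for replies), which is exactly where the wildcarding limits above start to matter for flow-table size.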


  Édouard.

 On Fri, Nov 29, 2013 at 1:11 PM, Zang MingJie zealot0...@gmail.com
 wrote:
  On Fri, Nov 29, 2013 at 2:25 PM, Jian Wen jian@canonical.com
 wrote:
  I don't think we can implement a stateful firewall[1] now.
 
  I don't think we need a stateful firewall, a stateless one should work
  well. If the stateful conntrack is completed in the future, we can
  also take benefit from it.
 
 
  Once connection tracking capability[2] is added to the Linux OVS, we
  could start to implement the ovs-firewall-driver blueprint.
 
  [1] http://en.wikipedia.org/wiki/Stateful_firewall
  [2]
 
 http://wiki.xenproject.org/wiki/Xen_Development_Projects#Add_connection_tracking_capability_to_the_Linux_OVS
 
 
  On Tue, Nov 26, 2013 at 2:23 AM, Mike Wilson geekinu...@gmail.com
 wrote:
 
  Adding Jun to this thread since gmail is failing him.
 
 
  On Tue, Nov 19, 2013 at 10:44 AM, Amir Sadoughi
  amir.sadou...@rackspace.com wrote:
 
  Yes, my work has been on ML2 with neutron-openvswitch-agent.  I’m
  interested to see what Jun Park has. I might have something ready
 before he
  is available again, but would like to collaborate regardless.
 
  Amir
 
 
 
  On Nov 19, 2013, at 3:31 AM, Kanthi P pavuluri.kan...@gmail.com
 wrote:
 
  Hi All,
 
  Thanks for the response!
  Amir,Mike: Is your implementation being done according to ML2 plugin
 
  Regards,
  Kanthi
 
 
  On Tue, Nov 19, 2013 at 1:43 AM, Mike Wilson geekinu...@gmail.com
  wrote:
 
  Hi Kanthi,
 
  Just to reiterate what Kyle said, we do have an internal
 implementation
  using flows that looks very similar to security groups. Jun Park was
 the guy
  that wrote this and is looking to get it upstreamed. I think he'll
 be back
  in the office late next week. I'll point him to this thread when
 he's back.
 
  -Mike
 
 
  On Mon, Nov 18, 2013 at 3:39 PM, Kyle Mestery (kmestery)
  kmest...@cisco.com wrote:
 
  On Nov 18, 2013, at 4:26 PM, Kanthi P pavuluri.kan...@gmail.com
  wrote:
   Hi All,
  
   We are planning to implement quantum security groups using
 openflows
   for ovs plugin instead of iptables which is the case now.
  
   Doing so we can avoid the extra linux bridge which is connected
   between the vnet device and the ovs bridge, which is given as a
 work around
   since ovs bridge is not compatible with iptables.
  
   We are planning to create a blueprint and work on it. Could you
   please share your views on this
  
  Hi Kanthi:
 
  Overall, this idea is interesting and removing those extra bridges
  would certainly be nice. Some people at Bluehost gave a talk at the
 Summit
  [1] in which they explained they have done something similar, you
 may want
  to reach out to them since they have code for this internally
 already.
 
  The OVS plugin is in feature freeze during Icehouse, and will be
  deprecated in favor of ML2 [2] at the end of Icehouse. I would
 advise you 

Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-12-03 Thread Daniel P. Berrange
On Mon, Dec 02, 2013 at 11:05:02PM -0800, Vui Chiap Lam wrote:
 Hi Daniel,
 
 I too found the original bp a little hard to follow, so thanks for
 writing up the wiki! I see that the wiki is now linked to the BP, 
 which is great as well.
 
 The ability to express CPU topology constraints for the guests
 has real-world use, and several drivers, including VMware, can definitely 
 benefit from it.
 
 If I understand correctly, in addition to being an elaboration of the
 BP text, the wiki also adds the following:
 
 1. Instead of returning the best matching (num_sockets (S),
cores_per_socket (C), threads_per_core (T)) tuple,  all applicable
(S,C,T) tuples are returned, sorted by S then C then T.
 2. A mandatory topology can be provided in the topology computation.
 
 I like 2. because there are multiple reasons why all of a hypervisor's
 CPU resources cannot be allocated to a single virtual machine. 
 Given that the mandatory (I prefer maximal) topology is probably fixed
 per hypervisor, I wonder whether this information should also be used at
 scheduling time to eliminate incompatible hosts outright.

The host is exposing info about vCPU count it is able to support and the
scheduler picks on that basis. The guest image is just declaring upper
limits on the topology it can support. So if the host is able to support the
guest's vCPU count, then the CPU topology decision should never cause any
boot failure. As such, CPU topology has no bearing on scheduling, which is
good, because taking it into account would significantly complicate the problem.

 As for 1., because of the order of precedence of the fields in the
 (S,C,T) tuple, I am not sure how the preferred_topology comes into
 play. Is it meant to help favor alternative values of S?

 Also it might be good to describe a case where returning a list of
 (S,C,T) instead of best-match is necessary. It seems deciding what to
 pick other than the first item in the list requires logic similar to
 that used to arrive at the list in the first place.

It is really all about considering NUMA implications. If you prefer
cores and your VM's RAM crosses a NUMA node, then you sacrifice performance.
So if you know the VM's RAM will have to cross a NUMA node, then you may
set a lower cores limit to force the return of topologies spanning multiple
sockets. By returning a list of acceptable topologies, the virt driver
then has some flexibility in deciding how to pin guest CPUs / RAM to
host NUMA nodes, and/or expose a guest-visible NUMA topology.

eg if the returned list gives a choice of

   (2 sockets, 2 cores, 1 thread)
   (1 socket, 4 cores, 1 thread)

then the virt driver can now choose whether to place the guest inside
1 single NUMA node, or spread it across nodes, and still expose sane
NUMA topology info to the guest. You could say we should take account
of NUMA straight away at the time we figure out the CPU topology, but
I believe that would complicate this code and make it impractical to
share the code across drivers.

If a virt driver doesn't care to do anything with the list of possible
topologies, though, it can simply ignore it and always take the first
element in the list. This is what we'll do in libvirt initially, but
we want to do intelligent automatic NUMA placement later to improve the
performance and utilization of hosts.
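
To make the selection concrete, here is a sketch of enumerating all valid (sockets, cores, threads) tuples for a given vCPU count under the image's upper bounds, preferring sockets over cores over threads. This is an illustration based on the description above, not the proposed Nova code; the default limits are assumptions.

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Return every (sockets, cores, threads) tuple whose product equals
    the vCPU count and which respects the upper bounds, sorted so that
    more sockets are preferred over more cores over more threads."""
    topologies = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        per_socket = vcpus // sockets
        for cores in range(1, min(per_socket, max_cores) + 1):
            if per_socket % cores:
                continue
            threads = per_socket // cores
            if threads <= max_threads:
                topologies.append((sockets, cores, threads))
    return sorted(topologies, key=lambda t: (-t[0], -t[1], -t[2]))
```

With 4 vCPUs, max_sockets=2 and max_threads=1, this yields [(2, 2, 1), (1, 4, 1)] - exactly the two choices in the example above, and the driver can pick from the list based on NUMA placement or just take the first element.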

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Javascript testing framework

2013-12-03 Thread Maxime Vidori
I wrote a blueprint with a short description of the features, including those
which are not present in qUnit. Tell me if you think it needs more detail, or
which points need more detail.
Here is the link:
https://blueprints.launchpad.net/horizon/+spec/jasmine-integration.

I think this could be a good moment to move, because we do not have many
qUnit tests and I have already almost finished rewriting them.

- Original Message -
From: Radomir Dopieralski openst...@sheep.art.pl
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, December 3, 2013 10:32:17 AM
Subject: Re: [openstack-dev] [horizon] Javascript testing framework

On 03/12/13 01:26, Maxime Vidori wrote:
 Hi!
 
 In order to improve the javascript quality of Horizon, we have to change the 
 testing framework of the client-side. Qunit is a good tool for simple tests, 
 but the integration of Angular need some powerful features which are not 
 present in Qunit. So, I have made a little POC with the javascript testing 
 library Jasmine, which is the one given as an example into the Angularjs 
 documentation. I have also rewritten a Qunit test in Jasmine in order to show 
 that the change is quite easy to make.
 
 Feel free to comment in this mailing list the pros and cons of this new tool, 
 and to checkout my code for reviewing it. I have also made an helper for 
 quick development of Jasmine tests through Selenium.
 
 To finish, I need your opinion for a new command line in run_tests.sh. I 
 think we should create a run_tests.sh --runserver-test target which will 
 allow developers to see all the javascript test page. This new command line 
 will allow people to avoid the use of the command line for running Selenium 
 tests, and allow them to view their tests in a comfortable html interface. It 
 could be interesting for the development of tests, this command line will 
 only be used for development purpose.
 
 Waiting for your feedbacks!
 
 Here is a link to the Jasmine POC: https://review.openstack.org/#/c/59580/

Hello Maxime,

thank you for this proof of concept, it looks very interesting. I left a
small question about how it's going to integrate with Selenium there.

But I thought that it would be nice if you could point us here to some
resources that explain why Jasmine is better than QUnit, other than the
fact that it is used in some AngularJS example. I'm sure that would help
convince a lot of people to the idea of switching. A quick search for
QUnit vs Jasmine tells me that the advantages of Jasmine are
tight integration with Ruby on Rails and Behavior Driven Development
style syntax. As we don't use either, I'm not sure we want it.

I'm sure that pointers to resources specific for our use cases would
greatly help everyone make a decision.

Thanks,
-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-12-03 Thread Gary Kotton


On 12/3/13 12:08 PM, Daniel P. Berrange berra...@redhat.com wrote:

On Tue, Dec 03, 2013 at 01:47:31AM -0800, Gary Kotton wrote:
 Hi,
 I think that this information should be used as part of the scheduling
 decision, that is hosts that are to be selected should be excluded if
they
 do not have the necessary resources available. It will be interesting to
 know how this is going to fit into the new scheduler that is being
 discussed.

The CPU topology support shouldn't have any interactions with, nor
cause any failures post-scheduling, i.e. if the host has declared that
it has sufficient resources to run a VM with the given vCPU count,
then that is sufficient.

Yes, you are correct. I was thinking about another issue altogether - CPU
reservations.


This is one of the reasons why the design is such that glance image
properties just declare an upper bound on topology, not an absolute
requirement. This allows the host chosen to run the VM to decide the
guest topology to suit its specific topology / resource availability.



Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Request for review (glance-multifilesystem-store patch)

2013-12-03 Thread Rangnekar, Aswad
Hi,

We are targeting to complete adding multi file system support for Glance by 
Icehouse-1.
Please review the patch: https://review.openstack.org/#/c/58997/

Aswad Rangnekar
Senior Software Engineer R&D (Cloud Computing) | NTT DATA Global Technology 
Services Pvt. Ltd.
w. +91.20.6604.1500 x 574 | aswad.rangne...@nttdata.com |  Learn more at 
nttdata.com/americas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-03 Thread David Chadwick
I have added a number of comments to this. I have also expanded on the
concept of role scoping for your consideration

regards

David

On 02/12/2013 23:21, Tiwari, Arvind wrote:
 Hi Adam and David,
 
 Thank you so much for all the great comments, seems we are making good 
 progress.
 
 I have replied to your comments and also added some to support my proposal
 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 David, I like your suggestion for role-def scoping which can fit in my Plan B 
 and I think Adam is cool with plan B.
 
 Please let me know if David's proposal for role-def scoping is cool for 
 everybody?
 
 
 Thanks,
 Arvind
 
 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Wednesday, November 27, 2013 8:44 AM
 To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage 
 questions)
 Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 
 
 On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,

 Based on our discussion over IRC, I have updated the below etherpad with 
 proposal for nested role definition
 
 Updated.  I made my changes Green.  It isn't easy being green.
 

 https://etherpad.openstack.org/p/service-scoped-role-definition

 Please take a look @ Proposal (Ayoung) - Nested role definitions, I am 
 sorry if I could not catch your idea.

 Feel free to update the etherpad.

 Regards,
 Arvind


 -Original Message-
 From: Tiwari, Arvind
 Sent: Tuesday, November 26, 2013 4:08 PM
 To: David Chadwick; OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi David,

 Thanks for your time and valuable comments. I have replied to your comments 
 and try to explain why I am advocating to this BP.

 Let me know your thoughts, please feel free to update below etherpad
 https://etherpad.openstack.org/p/service-scoped-role-definition

 Thanks again,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Monday, November 25, 2013 12:12 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List
 Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi Arvind

 I have just added some comments to your blueprint page

 regards

 David


 On 19/11/2013 00:01, Tiwari, Arvind wrote:
 Hi,

   

 Based on our discussion in design summit , I have redone the service_id
 binding with roles BP
 https://blueprints.launchpad.net/keystone/+spec/serviceid-binding-with-role-definition.
 I have added a new BP (link below) along with detailed use case to
 support this BP.

 https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition

 Below etherpad link has some proposals for Role REST representation and
 pros and cons analysis

   

 https://etherpad.openstack.org/p/service-scoped-role-definition

   

 Please take look and let me know your thoughts.

   

 It would be awesome if we can discuss it in tomorrow's meeting.

   

 Thanks,

 Arvind

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Server action using compute V2.0 API: add and remove security group

2013-12-03 Thread GROSZ, Maty (Maty)
Hey,

I would like to know (and how do I find this out from the nova code) whether
these two APIs are synchronous or asynchronous (using the compute v2.0 API):
Server action "add Security Group"?
Server action "remove Security Group"?

I would like to know whether the documented response code 202 (accepted) is
returned even if the whole process itself hasn't finished yet, or whether it
is only returned at the end of the 'add' or 'remove' execution.

Thanks,

Maty.


Maty Grosz
Alcatel-Lucent
APIs Functional Owner, R&D
CLOUDBAND BUSINESS UNIT
16 Atir Yeda St. Kfar-Saba 44643, ISRAEL
T: +972 (0) 9 7933078
F: +972 (0) 9 7933700
maty.gr...@alcatel-lucent.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] UI Wireframes - close to implementation start

2013-12-03 Thread Jaromir Coufal

Wireframes walkthrough: https://www.youtube.com/enhance?v=oRtL3aCuEEc


On 2013/03/12 10:25, Jaromir Coufal wrote:

Hey folks,

I opened 2 issues on UX discussion forum with TripleO UI topics:

Resource Management:
http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/
- this section was already reviewed before; there are not many 
surprises, just smaller updates

- we are about to implement this area

http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/
- these are completely new views and they need a lot of attention so 
that in time we don't change direction drastically

- any feedback here is welcome

We need to get into implementation ASAP. It doesn't mean that we have 
everything perfect from the very beginning, but that we have 
direction and we move forward by enhancements.


Therefore implementation of the above mentioned areas should start very soon.

If at all possible, I will try to record a walkthrough with further 
explanations. If you have any questions or feedback, please follow 
the threads on ask-openstackux.


Thanks
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-12-03 Thread Day, Phil
Hi,

I like the concept of allowing users to request a CPU topology, but I have a
few questions / concerns:

 
 The host is exposing info about vCPU count it is able to support and the
 scheduler picks on that basis. The guest image is just declaring upper limits 
 on
 topology it can support. So if the host is able to support the guest's vCPU
 count, then the CPU topology decision should never cause any boot failure.
 As such, CPU topology has no bearing on scheduling, which is good because I
 think it would significantly complicate the problem.
 

i) Is that always true?  Some configurations (like ours) currently ignore 
vcpu count altogether because what we're actually creating are VMs that are n 
vcpus wide (as defined by the flavour) but each vcpu is only some subset of the 
processing capacity of a physical core (There was a summit session on this: 
http://summit.openstack.org/cfp/details/218).  So if vcpu count isn't being 
used for scheduling, can you still guarantee that all topology selections can 
always be met?

ii) Even if you are counting vcpus and mapping them 1:1 against cores, are 
there not some topologies that are either more inefficient in terms of overall 
host usage and /or incompatible with other topologies (i.e. leave some (spare) 
resource un-used in way that it can't be used for a specific topology that 
would otherwise fit) ? As a provider I don't want users to be able to 
determine how efficiently (even indirectly) the hosts are utilised.   There 
may be some topologies that I'm willing to allow (because they always pack 
efficiently) and others I would never allow.   Putting this into the control of 
the users via image metadata feels wrong in that case. Maybe flavour 
extra-spec (which is in the control of the cloud provider) would be a more 
logical fit for this kind of property ?
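A provider-controlled whitelist of the kind suggested here could live in flavour extra specs. The sketch below is purely illustrative — the "hw:allowed_topologies" key and its format are invented for this example, not an existing Nova interface:

```python
# Hypothetical sketch: the "hw:allowed_topologies" extra-spec key and its
# "SxCxT,SxCxT" value format are invented for illustration only; they are
# not an existing Nova interface.

def parse_topologies(spec):
    """Parse e.g. '2x4x1,1x8x1' into a set of (sockets, cores, threads)."""
    return {tuple(int(n) for n in item.split("x"))
            for item in spec.split(",")}

def topology_allowed(flavour_extra_specs, requested):
    """True if the requested topology is in the provider's whitelist.

    An absent/empty whitelist means the provider allows any topology, so
    efficiency of host packing stays under the cloud provider's control.
    """
    spec = flavour_extra_specs.get("hw:allowed_topologies")
    if not spec:
        return True
    return tuple(requested) in parse_topologies(spec)

extra_specs = {"hw:allowed_topologies": "2x4x1,1x8x1"}
print(topology_allowed(extra_specs, (2, 4, 1)))  # True
print(topology_allowed(extra_specs, (8, 1, 1)))  # False
```

The point of the sketch is only that a whitelist in flavour extra specs keeps the decision with the provider, while image metadata would hand it to the user.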

iii) I can see the logic of associating a topology with an image - but don't 
really understand how that would fit with the image being used with different 
flavours.  What happens if a topology in the image just can't be implemented 
within the constraints of a selected flavour?  It kind of feels as if we 
either need a way to constrain images to specific flavours, or perhaps allow an 
image to express a preferred flavour / topology, but allow the user to override 
these as part of the create request.

Cheers,
Phil



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Splitting up V3 API admin-actions plugin

2013-12-03 Thread Day, Phil
+1 from me - would much prefer to be able to pick this on an individual basis.

Could kind of see a case for keeping reset_network and inject_network_info 
together - but don't have a strong feeling about it (as we don't use them)

 -Original Message-
 From: Andrew Laski [mailto:andrew.la...@rackspace.com]
 Sent: 02 December 2013 14:59
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] Splitting up V3 API admin-actions plugin
 
 On 12/02/13 at 08:38am, Russell Bryant wrote:
 On 12/01/2013 08:39 AM, Christopher Yeoh wrote:
  Hi,
 
  At the summit we agreed to split out lock/unlock, pause/unpause,
  suspend/unsuspend functionality out of the V3 version of admin
  actions into separate extensions to make it easier for deployers to
  only have loaded the functionality that they want.
 
  Remaining in admin_actions we have:
 
  migrate
  live_migrate
  reset_network
  inject_network_info
  create_backup
  reset_state
 
  I think it makes sense to separate out migrate and live_migrate into
  a migrate plugin as well.
 
  What do people think about the others? There is no real overhead to
  having them all in separate plugins and totally removing admin_actions.
  Does anyone have any objections to this being done?
 
  Also in terms of grouping I don't think any of the others remaining
  above really belong together, but welcome any suggestions.
 
 +1 to removing admin_actions and splitting everything out.
 
 +1 from me as well.
 
 
 --
 Russell Bryant
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-03 Thread Boris Pavlovic
Hi all,


Finally found a bit time to write my thoughts.

There are a few blockers that make it really complex to build the scheduler as
a service, or even to move the main part of the scheduler code into a separate
lib. We already have one unsuccessful effort:
https://blueprints.launchpad.net/oslo/+spec/oslo-scheduler .

The major problems that we faced were:
1) Hard connection with project db api layer (e.g. nova.db.api,
cinder.db.api)
2) Hard connection between db.models and host_states
3) Hardcoded host states objects structure
4) There is no namespace support in host states (so we are not able to keep
all filters for all projects in the same place)
5) Different API methods, that can't be effectively generalized.


Main goals of no-db-scheduler effort are:
1) Make scheduling much faster, storing data locally on each scheduler and
just syncing states of them
2) Remove connections between project.db.api and scheduler.db
3) Make host_states just JSON like objects
4) Add namespace support in host_states
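As an illustration of goals 3 and 4, a namespaced, JSON-like host_state might look roughly like this — a sketch with made-up field names, not the actual no-db-scheduler data model:

```python
# Illustrative sketch only -- not the actual no-db-scheduler data model.
# A host_state is a plain JSON-serialisable dict; each consuming project
# (nova, cinder, ...) publishes its attributes under its own namespace,
# so one scheduler can hold filters/state for several projects together.
import json

host_state = {
    "host": "compute-01",
    "updated_at": "2013-12-03T10:00:00Z",
    "nova": {                      # namespace for nova-specific state
        "vcpus_total": 16,
        "vcpus_used": 4,
        "memory_mb_free": 30720,
    },
    "cinder": {                    # namespace for cinder-specific state
        "free_capacity_gb": 800,
    },
}

def get_metric(state, namespace, key):
    """Look up a metric inside a project namespace, or None if absent."""
    return state.get(namespace, {}).get(key)

# Being plain JSON, the state can be synced between scheduler instances
# with an ordinary dump/load round-trip -- no project db.api involved.
wire_copy = json.loads(json.dumps(host_state))
print(get_metric(wire_copy, "nova", "vcpus_used"))  # 4
```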

When this part is finished, we will have only one remaining problem: what to
do with the DB API methods and the business logic of each project. I see two
different ways:

1) Make the scheduler a big lib, then implement RPC methods + a bit of
business logic in each project
2) Move all RPC calls and business logic from nova, cinder, ironic, ... into
one scheduler-as-a-service


Best regards,
Boris Pavlovic



On Tue, Dec 3, 2013 at 1:11 PM, Khanh-Toan Tran 
khanh-toan.t...@cloudwatt.com wrote:

 We are also interested in the proposal and would like to contribute
 whatever we can.
 Currently we're working on nova-scheduler, and we think that an independent
 scheduler
 is a need for OpenStack. We've been engaging in several discussions on
 this topic in
 the ML as well as in Nova meeting, thus we were thrilled to hear your
 proposal.

 PS: I wrote a mail expressing our interest in this topic earlier,
 but I feel it's better
 to have a more official submission to join the team :)

 Best regards,

 Jerome Gallard  Khanh-Toan Tran

  -Message d'origine-
  De : Robert Collins [mailto:robe...@robertcollins.net]
  Envoyé : mardi 3 décembre 2013 09:18
  À : OpenStack Development Mailing List (not for usage questions)
  Objet : Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a
 modest
  proposal for an external scheduler in our lifetime
 
  The team size was a minimum, not a maximum - please add your names.
 
  We're currently waiting on the prerequisite blueprint to land before
 work starts
  in earnest; and for the blueprint to be approved (he says, without
 having
  checked to see if it has been now:))
 
  -Rob
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Team meeting reminder - December 3

2013-12-03 Thread Alexander Tivelkov
Hi!

This is just a reminder about the regular meeting of Murano-team in IRC.
The meeting will be held in #openstack-meeting-alt channel at 10am Pacific.

The complete agenda of the meeting is available here:
https://wiki.openstack.org/wiki/Meetings/MuranoAgenda

--
Regards,
Alexander Tivelkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-03 Thread Flavio Percoco

On 03/12/13 03:12 +, Jarret Raim wrote:

There are two big parts to this, I think.  One is technical - a significant
portion
of OpenStack deployments will not work with this because Celery does not
work with their deployed messaging architecture.
 See another reply in this thread for an example of someone that sees the
inability to use Qpid as a roadblock for an example.  This is solvable, but
not
quickly.

The other is somewhat technical, but also a community issue.  Monty
articulated this well in another reply.  Barbican has made a conflicting
library
choice with what every other project using messaging is using.
With the number of projects we have, it is in our best interest to strive
for
consistency where we can.  Differences should not be arbitrary.  The
differences should only be where an exception is well justified.  I don't
see
that as being the case here.  Should everyone using oslo.messaging (or its
predecessor rpc in oslo-incubator) be using Celery?  Maybe.  I don't know,
but that's the question at hand.  Ideally this would have come up with a
more
broad audience sooner.  If it did, I'm sorry I missed it.


I understand the concern here and I'm happy to have Barbican look at using
oslo.messaging during the Icehouse cycle.

I am a bit surprised at the somewhat strong reactions to our choice. When we
created Barbican, we looked at the messaging frameworks out there for use. At
the time, oslo.messaging was not packaged, not documented, not tested, had no
track record and an unknown level of community support.


But there was oslo-incubator/rpc which all projects were already
using.


Celery is a battle-tested library that is widely deployed with a good track
record, strong community and decent documentation. We made our choice based on
those factors, just as the same as we would for any library inclusion.

As celery has met our needs up to this point, we saw no reason to revisit the
decision until now. In that time oslo.messaging  has moved to a separate repo.
It still has little to no documentation, but the packaging and maintenance
issues seem to be on the way to being sorted.

So in short, in celery we get a reliable library with good docs that is battle
tested, but is limited to the transports supported by Kombu. Both celery and
Kombu are extendable and have many backends including AMQP, Redis, Beanstalk,
Amazon SQS, CouchDB, MongoDB, ZeroMQ, ZooKeeper, SoftLayer MQ and Pyro.

Oslo.messaging seems to have good support in OpenStack, but still lacks
documentation and packaging (though some of that is being sorted out now). It
offers support for qpid which celery seems to lack. It also offers a common
place for message signing and some other nice to have features for OpenStack.


I think there's something else you should take into consideration.
oslo.messaging is not just another library; it's the RPC library
that all projects rely on, and one of the strong goals we have
in OpenStack is to reduce code and effort duplication. We'd love to
have more people testing and contributing to oslo.messaging in order
to make it as battle-tested as celery is.

Please, don't get me wrong. I don't mean to say you didn't consider
it; I just want to add another reason why we should always try to
re-use the libraries that other projects are using - unless there's a
strong technical reason ;).


Based on the commonality in OpenStack (and the lack of anyone else using
Celery), I think looking to move to oslo.messaging is a good goal. This will
take some time, but I think doing it by Icehouse seems reasonable. I think
that is what you and Monty are asking for?




I have added the task to our list on
https://wiki.openstack.org/wiki/Barbican/Incubation.



Thanks a lot for this, really!

Cheers,
FF

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Request for review (glance-multifilesystem-store patch)

2013-12-03 Thread Flavio Percoco

On 03/12/13 10:29 +, Rangnekar, Aswad wrote:

We are targeting to complete adding multi file system support for Glance by
Icehouse-1.

Please review the patch: https://review.openstack.org/#/c/58997/


Please, do not send review requests to this list. Feel free to join
#openstack-glance and get some feedback there.

Thanks!
FF


--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-03 Thread Roshan Agrawal


 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: Monday, December 02, 2013 8:17 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum] CLI minimal implementation
 
 On 12/02/2013 07:03 PM, Roshan Agrawal wrote:
  I have created a child blueprint to define scope for the minimal
 implementation of the CLI to consider for milestone 1.
  https://blueprints.launchpad.net/solum/+spec/cli-minimal-implementatio
  n
 
  Spec for the minimal CLI @
  https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-im
  plementation Etherpad for discussion notes:
  https://etherpad.openstack.org/p/MinimalCLI
 
  Would look for feedback on the ML, etherpad and discuss more in the
 weekly IRC meeting tomorrow.
 
 What is this R1.N syntax?  How does it relate to development milestones?
  Does R1 mean a requirement for milestone-1?

These do not relate to development milestones. R1 is a unique identifier for 
the given requirement. R1.x is a unique requirement ID for something that is a 
sub-item of the top-level requirement R1.
Is there a more OpenStack-standard way of generating requirement IDs?  
 
 For consistency, I would use commands like:
 
solum app-create
solum app-delete
solum assembly-create
solum assembly-delete
 
 instead of adding a space in between:
 
solum app create
 
 to be more consistent with other clients, like:
 
nova flavor-create
nova flavor-delete
glance image-create
glance image-delete

The current proposal is an attempt to be consistent with the direction for the 
single 'openstack' CLI. Adrian's addressed it in his other reply.

 
 I would make required arguments positional arguments.  So, instead of:
 
solum app-create --plan=planname
 
 do:
 
solum app-create planname

I will make this change unless I hear objections. 
 
 Lastly, everywhere you have a name, I would use a UUID.  Names shouldn't
 have to be globally unique (because of multi-tenancy).  UUIDs should always
 work, but you can support a name in the client code as a friendly shortcut,
 but it should fail if a unique result can not be resolved from the name.


Names do not have to be globally unique; just unique within the tenant 
namespace. The Name+tenant combination should map to a unique uuid. 
The CLI is a client tool, where working with names is easier for the user. We 
will support both, but start with names (the friendly shortcut), and map them 
to UUIDs behind the scenes.
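A sketch of how that name-to-UUID mapping could fail loudly on ambiguity, as Russell suggests — the helper names here are made up for illustration, not actual Solum client code:

```python
# Illustrative sketch of client-side name resolution; not Solum code.
import uuid

class NotFound(Exception):
    pass

class AmbiguousName(Exception):
    pass

def resolve(name_or_uuid, resources):
    """Resolve a user-supplied name or UUID to a single resource UUID.

    `resources` is a list of (uuid, name) pairs as returned by the API
    for the current tenant. A UUID is accepted directly; a name must
    match exactly one resource, otherwise we fail loudly.
    """
    try:
        return str(uuid.UUID(name_or_uuid))  # already a UUID
    except ValueError:
        pass
    matches = [rid for rid, rname in resources if rname == name_or_uuid]
    if not matches:
        raise NotFound(name_or_uuid)
    if len(matches) > 1:
        raise AmbiguousName(name_or_uuid)
    return matches[0]

resources = [
    ("7b36d4af-3b62-4f19-9a9b-6d52a94d41e7", "myapp"),
    ("3d7a1a64-22a6-44f3-8f6c-09f1e2a04d3a", "otherapp"),
]
print(resolve("myapp", resources))  # 7b36d4af-3b62-4f19-9a9b-6d52a94d41e7
```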


 --
 Russell Bryant
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-03 Thread Doug Hellmann
On Mon, Dec 2, 2013 at 10:12 PM, Jarret Raim jarret.r...@rackspace.comwrote:

  There are two big parts to this, I think.  One is technical - a
 significant
  portion
  of OpenStack deployments will not work with this because Celery does not
  work with their deployed messaging architecture.
   See another reply in this thread for an example of someone that sees the
  inability to use Qpid as a roadblock for an example.  This is solvable,
 but
  not
  quickly.
 
  The other is somewhat technical, but also a community issue.  Monty
  articulated this well in another reply.  Barbican has made a conflicting
  library
  choice with what every other project using messaging is using.
  With the number of projects we have, it is in our best interest to strive
  for
  consistency where we can.  Differences should not be arbitrary.  The
  differences should only be where an exception is well justified.  I don't
  see
  that as being the case here.  Should everyone using oslo.messaging (or
 its
  predecessor rpc in oslo-incubator) be using Celery?  Maybe.  I don't
 know,
  but that's the question at hand.  Ideally this would have come up with a
  more
  broad audience sooner.  If it did, I'm sorry I missed it.

 I understand the concern here and I'm happy to have Barbican look at using
 oslo.messaging during the Icehouse cycle.

 I am a bit surprised at the somewhat strong reactions to our choice. When
 we
 created Barbican, we looked at the messaging frameworks out there for use.
 At
 the time, oslo.messaging was not packaged, not documented, not tested, had
 no
 track record and an unknown level of community support.


The API and developer documentation is at
http://docs.openstack.org/developer/oslo.messaging/

Doug



 Celery is a battle-tested library that is widely deployed with a good track
 record, strong community and decent documentation. We made our choice
 based on
 those factors, just as the same as we would for any library inclusion.

 As celery has met our needs up to this point, we saw no reason to revisit
 the
 decision until now. In that time oslo.messaging  has moved to a separate
 repo.
 It still has little to no documentation, but the packaging and maintenance
 issues seem to be on the way to being sorted.

 So in short, in celery we get a reliable library with good docs that is
 battle
 tested, but is limited to the transports supported by Kombu. Both celery
 and
 Kombu are extendable and have many backends including AMQP, Redis,
 Beanstalk,
 Amazon SQS, CouchDB, MongoDB, ZeroMQ, ZooKeeper, SoftLayer MQ and Pyro.

 Oslo.messaging seems to have good support in OpenStack, but still lacks
 documentation and packaging (though some of that is being sorted out now).
 It
 offers support for qpid which celery seems to lack. It also offers a common
 place for message signing and some other nice to have features for
 OpenStack.

 Based on the commonality in OpenStack (and the lack of anyone else using
 Celery), I think looking to move to oslo.messaging is a good goal. This
 will
 take some time, but I think doing it by Icehouse seems reasonable. I think
 that is what you and Monty are asking for?

 I have added the task to our list on
 https://wiki.openstack.org/wiki/Barbican/Incubation.


 Thanks again for all the eyeballs on our application.


 Jarret




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-03 Thread Arati Mahimane


On 12/3/13 7:51 AM, Roshan Agrawal roshan.agra...@rackspace.com wrote:



 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: Monday, December 02, 2013 8:17 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum] CLI minimal implementation
 
 On 12/02/2013 07:03 PM, Roshan Agrawal wrote:
  I have created a child blueprint to define scope for the minimal
 implementation of the CLI to consider for milestone 1.
  https://blueprints.launchpad.net/solum/+spec/cli-minimal-implementatio
  n
 
  Spec for the minimal CLI @
  https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-im
  plementation Etherpad for discussion notes:
  https://etherpad.openstack.org/p/MinimalCLI
 
  Would look for feedback on the ML, etherpad and discuss more in the
 weekly IRC meeting tomorrow.
 
 What is this R1.N syntax?  How does it relate to development milestones?
  Does R1 mean a requirement for milestone-1?

These do not relate to development milestones. R1 is a unique identifier
for the given requirement. R1.x is a unique requirement ID for something
that is a sub-item of the top-level requirement R1.
Is there a more OpenStack-standard way of generating requirement IDs?
 
 For consistency, I would use commands like:
 
solum app-create
solum app-delete
solum assembly-create
solum assembly-delete
 
 instead of adding a space in between:
 
solum app create
 
 to be more consistent with other clients, like:
 
nova flavor-create
nova flavor-delete
glance image-create
glance image-delete

The current proposal is an attempt to be consistent with the direction
for the single 'openstack' CLI. Adrian's addressed it in his other reply.

 
 I would make required arguments positional arguments.  So, instead of:
 
solum app-create --plan=planname
 
 do:
 
solum app-create planname

I will make this change unless I hear objections.

In my opinion, since most of the parameters (listed here
https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/ApplicationDeployme
ntAndManagement#Solum-R1.12_app_create:_CLI) are optional,
it would be easier to specify the parameters as param_name=value
instead of having positional parameters.


 
 
 Lastly, everywhere you have a name, I would use a UUID.  Names shouldn't
 have to be globally unique (because of multi-tenancy).  UUIDs should
always
 work, but you can support a name in the client code as a friendly
shortcut,
 but it should fail if a unique result can not be resolved from the name.


Names do not have to be globally unique; just unique within the tenant
namespace. The Name+tenant combination should map to a unique uuid.
The CLI is a client tool, where working with names is easier for the user.
We will support both, but start with names (the friendly shortcut), and
map them to UUIDs behind the scenes.


 --
 Russell Bryant
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-12-03 Thread Day, Phil
Hi Daniel,

I spent some more time reading your write up on the wiki (and it is a great 
write up BTW), and had a couple of further questions (I think my original ones 
are also still valid, but do let me know if / where I'm missing the point):

iv) In the worked example, where do the preferred_topology and 
mandatory_topology come from?  (For example, are these per-host configuration 
values?)

v) You give an example where it's possible to get a situation where the 
combination of image_hw_cpu_topology and flavour means the instance can't be 
created (vcpus=2048), but that looks more like a flavour misconfiguration 
(unless there is some node that does have that many vcpus).   The case that 
worries me more is where, for example, an image says it needs max-sockets=1 and 
the flavour says it needs more vcpus than can be provided from a single socket. 
  In this case the flavour is still valid, just not with this particular image 
- and that feels like a case that should fail validation at the API layer, not 
down on the compute node where the only option is to reschedule or go into an 
Error state.
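The API-layer validation suggested above could be as simple as the following sketch. It is purely illustrative — the function and parameter names are assumptions, not part of the actual blueprint:

```python
# Illustrative sketch of an API-layer sanity check; names are invented,
# not part of the guest-CPU-topology blueprint code.

def topology_satisfiable(flavour_vcpus, max_sockets, max_cores, max_threads):
    """Return True if some (sockets, cores, threads) split within the
    image's stated maxima can provide the flavour's vCPU count.

    Checking this when the create request arrives lets the API reject
    the flavour/image combination up front, instead of erroring out
    later on the compute node with only reschedule/Error as options.
    """
    return flavour_vcpus <= max_sockets * max_cores * max_threads

# Phil's example: image demands max-sockets=1 and the flavour wants more
# vCPUs than one socket can supply -> reject at the API, not on the host.
print(topology_satisfiable(8, 1, 4, 1))   # False: 8 vCPUs > 1*4*1
print(topology_satisfiable(4, 1, 4, 1))   # True
```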

Phil  


 -Original Message-
 From: Day, Phil
 Sent: 03 December 2013 12:03
 To: 'Daniel P. Berrange'; OpenStack Development Mailing List (not for usage
 questions)
 Subject: RE: [openstack-dev] [Nova] Blueprint: standard specification of
 guest CPU topology
 
 Hi,
 
 I think the concept of allowing users to request a cpu topology, but have a
 few questions / concerns:
 
 
  The host is exposing info about vCPU count it is able to support and
  the scheduler picks on that basis. The guest image is just declaring
  upper limits on topology it can support. So if the host is able to
  support the guest's vCPU count, then the CPU topology decision should
  never cause any boot failure. As such, CPU topology has no bearing on
  scheduling, which is good because I think it would significantly complicate
 the problem.
 
 
 i) Is that always true?  Some configurations (like ours) currently ignore 
 vcpu
 count altogether because what we're actually creating are VMs that are n
 vcpus wide (as defined by the flavour) but each vcpu is only some subset of
 the processing capacity of a physical core (There was a summit session on
 this: http://summit.openstack.org/cfp/details/218).  So if vcpu count isn't
 being used for scheduling, can you still guarantee that all topology 
 selections
 can always be met ?
 
 ii) Even if you are counting vcpus and mapping them 1:1 against cores, are
 there not some topologies that are either more inefficient in terms of overall
 host usage and /or incompatible with other topologies (i.e. leave some
 (spare) resource un-used in way that it can't be used for a specific topology
 that would otherwise fit) ? As a provider I don't want users to be able to
 determine how efficiently (even indirectly) the hosts are utilised.   There
  may be some topologies that I'm willing to allow (because they always pack
 efficiently) and others I would never allow.   Putting this into the control 
 of
 the users via image metadata feels wrong in that case. Maybe flavour
 extra-spec (which is in the control of the cloud provider) would be a more
 logical fit for this kind of property ?
 
 iii) I can see the logic of associating a topology with an image - but don't 
 really
 understand how that would fit with the image being used with different
 flavours.  What happens if a topology in the image just can't be implemented
  within the constraints of a selected flavour?  It kind of feels as if we 
 either
 need a way to constrain images to specific flavours, or perhaps allow an
 image to express a preferred flavour / topology, but allow the user to
 override these as part of the create request.
 
 Cheers,
 Phil
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread Joe Gordon
HI all,

Recently I have seen a few patches fixing a few typos.  I would like to
point out a really nifty tool to detect commonly misspelled words.  So next
time you want to fix a typo, instead of just fixing a single one you can go
ahead and fix a whole bunch.

https://github.com/lyda/misspell-check

To install it:
  $ pip install misspellings

To use it in your favorite openstack repo:
 $ git ls-files | grep -v locale | misspellings -f -


Sample output:

http://paste.openstack.org/show/54354


best,
Joe
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][qa] Punting ceilometer from whitelist

2013-12-03 Thread Eoghan Glynn


- Original Message -
 On 12/02/2013 10:24 AM, Julien Danjou wrote:
  On Fri, Nov 29 2013, David Kranz wrote:
 
  In preparing to fail builds with log errors I have been trying to make
  things easier for projects by maintaining a whitelist. But these bugs in
  ceilometer are coming in so fast that I can't keep up. So I am  just
  putting
  .* in the white list for any cases I find before gate failing is turned
  on, hopefully early this week.
  Following the chat on IRC and the bug reports, it seems this might come
   From the tempest tests that are under reviews, as currently I don't
  think Ceilometer generates any error as it's not tested.
 
  So I'm not sure we want to whitelist anything?
 So I tested this with https://review.openstack.org/#/c/59443/. There are
 flaky log errors coming from ceilometer. You
 can see that the build at 12:27 passed, but the last build failed twice,
 each with a different set of errors. So the whitelist needs to remain
 and the ceilometer team should remove each entry when it is believed to
 be unnecessary.

Hi David,

Just looking into this issue.

So when you say the build failed, do you mean that errors were detected
in the ceilometer log files? (as opposed to a specific Tempest testcase
having reported a failure)

If that interpretation of build failure is correct, I think there's a simple
explanation for the compute agent ERRORs seen in the log file for the CI
build related to your patch referenced above, specifically:

  ERROR ceilometer.compute.pollsters.disk [-] Requested operation is not valid: 
domain is not running

The problem I suspect is a side-effect of a nova test that suspends the
instance in question, followed by a race between the ceilometer logic that
discovers the local instances via the nova-api followed by the individual
pollsters that call into the libvirt daemon to gather the disk stats etc.
It appears that the libvirt virDomainBlockStats() call fails with 'domain
is not running' for suspended instances.

This would only occur intermittently as it requires the instance to
remain in the suspended state across a polling interval boundary. 

So we need tighten up our logic there to avoid spewing needless errors
when a very normal event occurs (i.e. instance suspension).
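The kind of guard described above might look roughly like this — a sketch using a stand-in exception class and fake domain dicts in place of the real libvirt bindings and pollster classes:

```python
# Illustrative sketch only: DomainNotRunning stands in for the real
# libvirt error, and domains are plain dicts rather than libvirt handles.
import logging

LOG = logging.getLogger(__name__)

class DomainNotRunning(Exception):
    """Stand-in for libvirt's 'domain is not running' error."""

def get_disk_stats(domain):
    # Placeholder for the real virDomainBlockStats() call, which fails
    # for suspended instances.
    if not domain["running"]:
        raise DomainNotRunning(domain["name"])
    return domain["disk_stats"]

def poll_samples(domains):
    """Collect disk samples, treating a stopped/suspended domain as a
    normal condition (debug log) rather than spewing ERRORs."""
    samples = []
    for dom in domains:
        try:
            samples.append(get_disk_stats(dom))
        except DomainNotRunning:
            LOG.debug("skipping %s: not running (e.g. suspended)",
                      dom["name"])
    return samples

domains = [
    {"name": "vm1", "running": True, "disk_stats": {"rd_bytes": 1024}},
    {"name": "vm2", "running": False, "disk_stats": None},  # suspended
]
print(len(poll_samples(domains)))  # 1
```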

I've filed a bug[1] with some ideas for addressing the issue - this
will require a bit of discussion before agreeing a way forward, but I'll
prioritize getting this knocked on the head asap.

Cheers,
Eoghan

[1] https://bugs.launchpad.net/ceilometer/+bug/1257302



  The tricky part is going to be for us to fix Ceilometer on one side and
  re-run Tempest reviews on the other side once a potential fix is merged.
 This is another use case for the promised
 dependent-patch-between-projects thing.
 
   -David
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] UI Wireframes - close to implementation start

2013-12-03 Thread Tzu-Mainn Chen
 Hey folks,

 I opened 2 issues on UX discussion forum with TripleO UI topics:

 Resource Management:
 http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/
 - this section was already reviewed before; there are not many surprises, just
 smaller updates
 - we are about to implement this area

 http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/
 - these are completely new views and they need a lot of attention so that in
 time we don't change direction drastically
 - any feedback here is welcome

 We need to get into implementation ASAP. It doesn't mean that we have
 everything perfect from the very beginning, but that we have direction and
 we move forward by enhancements.

 Therefore, implementation of the above-mentioned areas should start very soon.

 If at all possible, I will try to record a walkthrough with further explanations.
 If you have any questions or feedback, please follow the threads on
 ask-openstackux.

 Thanks
 -- Jarda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
These wireframes look really good! However, would it be possible to get the 
list of requirements driving them? For example, something on the level of: 

1) removal of resource classes and racks 
2) what happens behind the scenes when deployment occurs 
3) the purpose of compute class 
4) etc 

I think it'd be easier to understand the big picture that way. Thanks! 

Mainn 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread Sylvain Bauza
Great tool!
Just discovered that openstack.common.rpc does have typos, another good
reason to migrate to oslo.messaging.rpc :-)

-Sylvain


2013/12/3 Joe Gordon joe.gord...@gmail.com

 HI all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So next
 time you want to fix a typo, instead of just fixing a single one you can go
 ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354
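For the curious, a checker like this is essentially a lookup table of known
misspellings. A rough Python sketch (the word list here is a tiny illustrative
sample, not the actual data the tool ships with):

```python
import re

# Tiny illustrative sample of a common-misspellings table; the real
# tool ships a much larger list.
MISSPELLINGS = {
    "recieve": "receive",
    "seperate": "separate",
    "occured": "occurred",
}

def check_text(text):
    """Yield (word, suggestion) pairs for known misspellings in text."""
    for word in re.findall(r"[A-Za-z]+", text):
        suggestion = MISSPELLINGS.get(word.lower())
        if suggestion:
            yield word, suggestion

hits = list(check_text("The event occured before we could recieve the log."))
```

Piping `git ls-files` output through something like this is all the real tool
does, plus a far bigger dictionary and nicer reporting.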


 best,
 Joe

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][qa] Punting ceilometer from whitelist

2013-12-03 Thread Sean Dague
On 12/03/2013 09:30 AM, Eoghan Glynn wrote:
 
 
 - Original Message -
 On 12/02/2013 10:24 AM, Julien Danjou wrote:
 On Fri, Nov 29 2013, David Kranz wrote:

 In preparing to fail builds with log errors I have been trying to make
 things easier for projects by maintaining a whitelist. But these bugs in
 ceilometer are coming in so fast that I can't keep up. So I am  just
 putting
 .* in the white list for any cases I find before gate failing is turned
 on, hopefully early this week.
 Following the chat on IRC and the bug reports, it seems this might come
 from the tempest tests that are under review, as currently I don't
 think Ceilometer generates any errors as it's not tested.

 So I'm not sure we want to whitelist anything?
 So I tested this with https://review.openstack.org/#/c/59443/. There are
 flaky log errors coming from ceilometer. You
 can see that the build at 12:27 passed, but the last build failed twice,
 each with a different set of errors. So the whitelist needs to remain
 and the ceilometer team should remove each entry when it is believed to
 be unnecessary.
 
 Hi David,
 
 Just looking into this issue.
 
 So when you say the build failed, do you mean that errors were detected
 in the ceilometer log files? (as opposed to a specific Tempest testcase
 having reported a failure)
 
 If that interpretation of build failure is correct, I think there's a simple
 explanation for the compute agent ERRORs seen in the log file for the CI
 build related to your patch referenced above, specifically:
 
   ERROR ceilometer.compute.pollsters.disk [-] Requested operation is not 
 valid: domain is not running
 
 The problem I suspect is a side-effect of a nova test that suspends the
 instance in question, followed by a race between the ceilometer logic that
 discovers the local instances via the nova-api followed by the individual
 pollsters that call into the libvirt daemon to gather the disk stats etc.
 It appears that the libvirt virDomainBlockStats() call fails with "domain
 is not running" for suspended instances.
 
 This would only occur intermittently as it requires the instance to
 remain in the suspended state across a polling interval boundary. 
 
 So we need to tighten up our logic there to avoid spewing needless errors
 when a very normal event occurs (i.e. instance suspension).
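A sketch of what that tightening could look like on the pollster side (the
exception and function names here are invented for illustration, not
ceilometer's actual classes): treat "domain is not running" as an expected
condition and log it below ERROR, since it isn't operator-actionable:

```python
import logging

LOG = logging.getLogger("ceilometer.compute.pollsters.disk")

class InstanceShutOffError(Exception):
    """Hypothetical marker for 'domain is not running' libvirt failures."""

def get_disk_stats(instance):
    # Stand-in for the libvirt virDomainBlockStats() call, which fails
    # for suspended/shut-off domains.
    raise InstanceShutOffError("domain is not running")

def poll_instance(instance):
    try:
        return get_disk_stats(instance)
    except InstanceShutOffError:
        # Expected race: the instance was suspended between discovery
        # and polling. Not operator-actionable, so don't log at ERROR.
        LOG.debug("skipping disk stats for %s: not running", instance)
        return None

result = poll_instance("instance-0001")
```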

Definitely need to tighten things up.

As a developer think about the fact that when you log something as
ERROR, you are expecting a cloud operator to be woken up in the middle
of the night with an email alert to go fix the cloud immediately. You
are intentionally ruining someone's weekend to fix this issue - RIGHT NOW!

That is why we are going to start failing jobs that add new ERRORs. We
have a whitelist for the cases where an ERROR is legitimately expected,
but assume that's not the normal path.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 agent external networks

2013-12-03 Thread Robert Kukura
On 12/03/2013 04:23 AM, Sylvain Afchain wrote:
 Hi,
 
 I was reviewing this patch (https://review.openstack.org/#/c/52884/) from 
 Oleg, and I thought that it is a bit tricky to deploy an l3 agent with automation 
 tools like Puppet, since you have to specify the uuid of a network that 
 doesn't exist yet. It may be better to bind an l3 agent to a network by a 
 CIDR instead of a uuid, since when we deploy we know in advance which network 
 address will be on which l3 agent.
 
 I also wanted to remove the L3 agent limit on the number of external 
 networks. I submitted a WIP patch 
 (https://review.openstack.org/#/c/59359/) for that purpose, and I wanted to 
 get the community's opinion on it :)

I really like this idea - there is no need to limit an agent to a single
external network unless the external bridge is being used. See my
comments on the patch.

-Bob
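To make the CIDR-binding idea concrete, it could be as simple as a containment
check against deploy-time configuration (a sketch only; the config name and
helper are invented, using Python's ipaddress module):

```python
import ipaddress

# Hypothetical agent config: the external networks this l3 agent should
# handle, expressed as CIDRs that are known at deployment time.
AGENT_EXTERNAL_CIDRS = [ipaddress.ip_network("172.24.4.0/24")]

def agent_handles_network(subnet_cidr):
    """True if the network's subnet falls within a configured CIDR."""
    subnet = ipaddress.ip_network(subnet_cidr)
    return any(subnet.subnet_of(cidr) for cidr in AGENT_EXTERNAL_CIDRS)

handled = agent_handles_network("172.24.4.0/28")
```

The point is that Puppet (or any deployment tool) can write the CIDR into the
agent config without needing a network uuid that doesn't exist yet.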

 
 Please let me know what you think.
 
 Best regards,
 
 Sylvain
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-03 Thread Oleg Gelbukh
Chmouel,

We reviewed the design of this feature at the summit with CERN and HP
teams. Centralized quota storage in Keystone is an anticipated feature, but
there are concerns about adding quota enforcement logic for every service
to Keystone. The agreed solution is to add quota number storage to
Keystone, and a mechanism that will notify services about changes to their
quotas. Each service, in turn, will update its quota cache and apply the new
quota value according to its own enforcement rules.

More detailed capture of the discussion on etherpad:
https://etherpad.openstack.org/p/CentralizedQuotas

Re this particular change, we plan to reuse this API extension code, but
extended to support domain-level quota as well.
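The service-side half of that flow might look roughly like this (a sketch
assuming a simple dict-shaped notification; the real payload format and class
names were still to be defined):

```python
class QuotaCache:
    """Per-service cache of quota values pushed from Keystone (sketch)."""

    def __init__(self, defaults):
        self._values = dict(defaults)

    def on_quota_update(self, notification):
        # Called when Keystone notifies the service that a quota changed;
        # the service then enforces the new value under its own rules.
        self._values[notification["resource"]] = notification["limit"]

    def check(self, resource, requested, used):
        """Apply the cached limit with a simple enforcement rule."""
        return used + requested <= self._values.get(resource, 0)

cache = QuotaCache({"instances": 10})
cache.on_quota_update({"resource": "instances", "limit": 20})
allowed = cache.check("instances", requested=5, used=12)
```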

--
Best regards,
Oleg Gelbukh
Mirantis Labs


On Mon, Dec 2, 2013 at 5:39 PM, Chmouel Boudjnah chmo...@enovance.com wrote:

 Hello,

 I was wondering what was the status of Keystone being the central place
 across all OpenStack projects for quotas.

 There is already an implementation from Dmitry here :

 https://review.openstack.org/#/c/40568/

 but it hasn't seen activity since October, waiting for Icehouse development
 to start and for a few bits to be cleaned up and added (i.e. the sqlite
 migration).

 It would be great if we can get this rekicked to get that for icehouse-2.

 Thanks,
 Chmouel.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-03 Thread John Dickinson
How are you proposing that this integrate with Swift's account and container 
quotas (especially since there may be hundreds of thousands of accounts and 
millions (billions?) of containers in a single Swift cluster)? A centralized 
lookup for quotas doesn't really seem to be a scalable solution.

--John


On Dec 3, 2013, at 6:53 AM, Oleg Gelbukh ogelb...@mirantis.com wrote:

 Chmouel,
 
 We reviewed the design of this feature at the summit with CERN and HP teams. 
 Centralized quota storage in Keystone is an anticipated feature, but there 
 are concerns about adding quota enforcement logic for every service to 
 Keystone. The agreed solution is to add quota number storage to Keystone, 
 and a mechanism that will notify services about changes to their quotas. 
 Each service, in turn, will update its quota cache and apply the new quota 
 value according to its own enforcement rules.
 
 More detailed capture of the discussion on etherpad:
 https://etherpad.openstack.org/p/CentralizedQuotas
 
 Re this particular change, we plan to reuse this API extension code, but 
 extended to support domain-level quota as well.
 
 --
 Best regards,
 Oleg Gelbukh
 Mirantis Labs
 
 
 On Mon, Dec 2, 2013 at 5:39 PM, Chmouel Boudjnah chmo...@enovance.com wrote:
 Hello,
 
 I was wondering what was the status of Keystone being the central place 
 across all OpenStack projects for quotas.
 
 There is already an implementation from Dmitry here :
 
 https://review.openstack.org/#/c/40568/
 
 but it hasn't seen activity since October, waiting for Icehouse development to 
 start and for a few bits to be cleaned up and added (i.e. the sqlite migration).
 
 It would be great if we can get this rekicked to get that for icehouse-2.
 
 Thanks,
 Chmouel.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] UI Wireframes - close to implementation start

2013-12-03 Thread Jaromir Coufal
Please, instead of 'enhance', use 'watch': 
http://www.youtube.com/watch?v=oRtL3aCuEEc (this link is correct)


Thanks
-- Jarda

On 2013/03/12 12:53, Jaromir Coufal wrote:

Wireframes walkthrough: https://www.youtube.com/enhance?v=oRtL3aCuEEc


On 2013/03/12 10:25, Jaromir Coufal wrote:

Hey folks,

I opened 2 issues on UX discussion forum with TripleO UI topics:

Resource Management:
http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/
- this section was already reviewed before; there are not many 
surprises, just smaller updates

- we are about to implement this area

http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/
- these are completely new views and they need a lot of attention so 
that we don't change direction drastically later on

- any feedback here is welcome

We need to get into implementation ASAP. It doesn't mean that we 
have everything perfect from the very beginning, but it does mean that 
we have a direction and move forward through incremental enhancements.


Therefore, implementation of the above-mentioned areas should start very soon.

If at all possible, I will try to record a walkthrough with further 
explanations. If you have any questions or feedback, please follow 
the threads on ask-openstackux.


Thanks
-- Jarda



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][qa] Punting ceilometer from whitelist

2013-12-03 Thread David Kranz

On 12/03/2013 09:30 AM, Eoghan Glynn wrote:


- Original Message -

On 12/02/2013 10:24 AM, Julien Danjou wrote:

On Fri, Nov 29 2013, David Kranz wrote:


In preparing to fail builds with log errors I have been trying to make
things easier for projects by maintaining a whitelist. But these bugs in
ceilometer are coming in so fast that I can't keep up. So I am  just
putting
.* in the white list for any cases I find before gate failing is turned
on, hopefully early this week.

Following the chat on IRC and the bug reports, it seems this might come
from the tempest tests that are under review, as currently I don't
think Ceilometer generates any errors as it's not tested.

So I'm not sure we want to whitelist anything?

So I tested this with https://review.openstack.org/#/c/59443/. There are
flaky log errors coming from ceilometer. You
can see that the build at 12:27 passed, but the last build failed twice,
each with a different set of errors. So the whitelist needs to remain
and the ceilometer team should remove each entry when it is believed to
be unnecessary.

Hi David,

Just looking into this issue.

So when you say the build failed, do you mean that errors were detected
in the ceilometer log files? (as opposed to a specific Tempest testcase
having reported a failure)
Yes, exactly. This patch removed the whitelist entries for ceilometer 
and so those errors then failed the build.


If that interpretation of build failure is correct, I think there's a simple
explanation for the compute agent ERRORs seen in the log file for the CI
build related to your patch referenced above, specifically:

   ERROR ceilometer.compute.pollsters.disk [-] Requested operation is not 
valid: domain is not running

The problem I suspect is a side-effect of a nova test that suspends the
instance in question, followed by a race between the ceilometer logic that
discovers the local instances via the nova-api followed by the individual
pollsters that call into the libvirt daemon to gather the disk stats etc.
It appears that the libvirt virDomainBlockStats() call fails with "domain
is not running" for suspended instances.

This would only occur intermittently as it requires the instance to
remain in the suspended state across a polling interval boundary.

So we need to tighten up our logic there to avoid spewing needless errors
when a very normal event occurs (i.e. instance suspension).

I've filed a bug[1] with some ideas for addressing the issue - this
will require a bit of discussion before agreeing on a way forward, but I'll
prioritize getting this knocked on the head asap.
Great! Thanks. The change I pushed yesterday should help prevent this 
sort of thing from creeping in across all projects. But as Julien 
observed, the process of removing entries from the whitelist that are no 
longer needed due to bug fixes is not so easy or automatic. I'm trying 
to put together a script that will check the whitelist entries against 
the last two weeks of builds using logstash, but that is not so simple 
since general regexps cannot be used with logstash.



 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint: standard specification of guest CPU topology

2013-12-03 Thread Chris Friesen

On 12/03/2013 04:08 AM, Daniel P. Berrange wrote:

On Tue, Dec 03, 2013 at 01:47:31AM -0800, Gary Kotton wrote:

Hi,
I think that this information should be used as part of the scheduling
decision, that is hosts that are to be selected should be excluded if they
do not have the necessary resources available. It will be interesting to
know how this is going to fit into the new scheduler that is being
discussed.


The CPU topology support shouldn't have any interactions with, nor
cause any failures post-scheduling. ie If the host has declared that
it has sufficient resources to run a VM with the given vCPU count,
then that is sufficient.


What if we want to do more than just specify a number of vCPUs?  What if 
we want to specify that they need to all come from a single NUMA node? 
Or all from different NUMA nodes?  Or that we want (or don't want) them 
to come from hyperthread siblings, or from different physical sockets.


This sort of thing is less common in the typical cloud space, but for 
private clouds, where the overcommit ratio might be far smaller and 
performance is more of an issue, it might be more desirable.
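As an illustration of why this goes beyond a simple vCPU count, a
scheduler-side feasibility check for such constraints might look like the
following (a sketch only; none of these names are Nova's actual API):

```python
def can_place(host_numa_free, vcpus, policy):
    """Check whether a topology request fits on a host (illustrative).

    host_numa_free: free vCPUs per NUMA node, e.g. [4, 2].
    policy 'single_node': all vCPUs must come from one NUMA node.
    policy 'spread': each vCPU must come from a different NUMA node.
    """
    if policy == "single_node":
        return any(free >= vcpus for free in host_numa_free)
    if policy == "spread":
        return sum(1 for free in host_numa_free if free >= 1) >= vcpus
    # No topology constraint: only the total count matters.
    return sum(host_numa_free) >= vcpus

ok_single = can_place([4, 2], vcpus=4, policy="single_node")
ok_spread = can_place([4, 2], vcpus=3, policy="spread")
```

With constraints like these, a host can have "enough" vCPUs overall and still
be unable to satisfy the request, which is exactly the post-scheduling failure
mode being discussed.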


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thursday subteam meeting

2013-12-03 Thread Eugene Nikanorov
Hi,

Sure, they must have the same provider.
A loadbalancer instance could be created in two ways:
- implicitly, with pool creation; the provider for the pool then becomes
the provider for the instance.
- explicitly, with the pool created later on and attached to the instance.
In that case the provider attribute will be validated against the instance's
provider.

Thanks,
Eugene.
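In code, the two creation paths and the validation could look roughly like
this (class and field names are illustrative, not the actual Neutron models):

```python
class ProviderMismatch(Exception):
    pass

class LoadBalancerInstance:
    """Sketch of instance/pool provider handling (illustrative)."""

    def __init__(self, provider=None):
        self.provider = provider
        self.pools = []

    def attach_pool(self, pool):
        if self.provider is None:
            # Implicit creation: the first pool's provider becomes the
            # instance's provider.
            self.provider = pool["provider"]
        elif pool["provider"] != self.provider:
            # Explicit attach: the pool's provider must match.
            raise ProviderMismatch(pool["provider"])
        self.pools.append(pool)

lb = LoadBalancerInstance()
lb.attach_pool({"name": "pool1", "provider": "haproxy"})
```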


On Tue, Dec 3, 2013 at 2:29 AM, Itsuro ODA o...@valinux.co.jp wrote:

 Hi Eugene, Iwamoto

  You are correct. Provider attribute will remain in the pool due to API
  compatibility reasons.
 I agree with you.

 I just wanted to make sure pools in a loadblancer can have
 different providers or not. (I think it should be same.)

 Thanks
 Itsuro Oda

 On Mon, 2 Dec 2013 12:48:30 +0400
 Eugene Nikanorov enikano...@mirantis.com wrote:

  Hi Iwamoto,
 
  You are correct. Provider attribute will remain in the pool due to API
  compatibility reasons.
 
  Thanks,
  Eugene.
 
 
  On Mon, Dec 2, 2013 at 9:35 AM, IWAMOTO Toshihiro iwam...@valinux.co.jp
 wrote:
 
   At Fri, 29 Nov 2013 07:25:54 +0900,
   Itsuro ODA wrote:
   
Hi Eugene,
   
Thank you for the response.
   
I have a comment.
I think 'provider' attribute should be added to loadbalance resource
and used rather than pool's 'provider' since I think using multiple
driver within a loadbalancer does not make sense.
  
    There can be a 'provider' attribute in a loadbalancer resource, but,
    to maintain API compatibility, the 'provider' attribute in pools should
    remain the same.
   Is there any other attribute planned for the loadbalancer resource?
  
What do you think ?
   
I'm looking forward to your code up !
   
Thanks.
Itsuro Oda
   
On Thu, 28 Nov 2013 16:58:40 +0400
Eugene Nikanorov enikano...@mirantis.com wrote:
   
 Hi Itsuro,

 I've updated the wiki with some examples of cli workflow that
   illustrate
 proposed API.
 Please see the updated page:

  
 https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance#API_change

 Thanks,
 Eugene.
  
   --
   IWAMOTO Toshihiro
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

 --
 Itsuro ODA o...@valinux.co.jp


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-03 Thread Shawn Hartsock
Sorry to jump into this late and all, but I am curious.

Why not borrow the concept of flavors from Nova and apply them to quotas? 

While it is open to interpretation and I most certainly could be wrong, the 
why of flavors is that you want to plan. If you know that your flavors are 
1/4, 1/8, 1/16, 1/32, and so on of the size of your standard host node, then 
you can plan for each node: at most 4 of the 1/4-size flavor tenants in 
storage and CPU, and 32 times 4 (or 128) possible IP addresses ... then you 
know your worst-case infrastructure requirements. As a cloud operator, you know 
the maximum number of IPs you need and how many compute nodes you need.

So, in terms of quotas, why not borrow the same concept and allow for the 
creation of a quota system that likewise lets a cloud operator plan? That 
should limit the total number of distinct quotas and make the problem space 
smaller and easier to deal with, right?
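The planning arithmetic above can be sketched in a few lines (the numbers are
purely illustrative):

```python
from fractions import Fraction

# Flavors sized as fractions of a standard host node.
flavors = [Fraction(1, 4), Fraction(1, 8), Fraction(1, 16), Fraction(1, 32)]

# Worst case per flavor: how many tenants of that size fit on one node.
tenants_per_node = {f: int(1 / f) for f in flavors}

# A node packed with 1/32-size tenants holds 32 tenants; across four such
# nodes that is 32 * 4 = 128 possible IP addresses to plan for.
nodes = 4
worst_case_ips = tenants_per_node[Fraction(1, 32)] * nodes
```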

Or have I missed a lot of the conversation and should I run out and do some 
reading? Pointers would be welcome.

# Shawn Hartsock


- Original Message -
 From: John Dickinson m...@not.mn
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Cc: Chmouel Boudjnah chmo...@chmouel.com
 Sent: Tuesday, December 3, 2013 10:04:47 AM
 Subject: Re: [openstack-dev] [Keystone] Store quotas in Keystone
 
 How are you proposing that this integrate with Swift's account and container
 quotas (especially since there may be hundreds of thousands of accounts and
 millions (billions?) of containers in a single Swift cluster)? A centralized
 lookup for quotas doesn't really seem to be a scalable solution.
 
 --John
 
 
 On Dec 3, 2013, at 6:53 AM, Oleg Gelbukh ogelb...@mirantis.com wrote:
 
  Chmouel,
  
  We reviewed the design of this feature at the summit with CERN and HP
  teams. Centralized quota storage in Keystone is an anticipated feature,
  but there are concerns about adding quota enforcement logic for every
  service to Keystone. The agreed solution is to add quota number storage
  to Keystone, and a mechanism that will notify services about changes to
  their quotas. Each service, in turn, will update its quota cache and
  apply the new quota value according to its own enforcement rules.
  
  More detailed capture of the discussion on etherpad:
  https://etherpad.openstack.org/p/CentralizedQuotas
  
  Re this particular change, we plan to reuse this API extension code, but
  extended to support domain-level quota as well.
  
  --
  Best regards,
  Oleg Gelbukh
  Mirantis Labs
  
  
  On Mon, Dec 2, 2013 at 5:39 PM, Chmouel Boudjnah chmo...@enovance.com
  wrote:
  Hello,
  
  I was wondering what was the status of Keystone being the central place
  across all OpenStack projects for quotas.
  
  There is already an implementation from Dmitry here :
  
  https://review.openstack.org/#/c/40568/
  
  but it hasn't seen activity since October, waiting for Icehouse development
  to start and for a few bits to be cleaned up and added (i.e. the sqlite
  migration).
  
  It would be great if we can get this rekicked to get that for icehouse-2.
  
  Thanks,
  Chmouel.
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


[openstack-dev] [Ironic] functional (aka integration) gate testing

2013-12-03 Thread Vladimir Kozhukalov
We are going to set up an integration-testing gate scheme for Ironic, and we've
investigated several approaches which are currently in use for TripleO.

1) https://etherpad.openstack.org/p/tripleo-test-cluster
This is the newest and most advanced initiative. It is something like a test
environment on demand. It is still not ready to use.

2) https://github.com/openstack-infra/tripleo-ci
This project seems not to be actively used at the moment. It contains
toci_gate_test.sh, but this script is empty and is used as a gate hook. The
idea is that it will eventually implement the whole gate-testing logic using
the test-env-on-demand approach (see the previous point).
This project also has some shell code which is used to manage emulated
bare metal environments: it prepares the libvirt VM XML and launches the
VM using virsh (nothing special).

3) https://github.com/openstack/tripleo-incubator/blob/master/scripts (aka
devtest)
This is a set of shell scripts which are intended to reproduce the whole
TripleO flow (seed, undercloud, overcloud). It is supposed to be used to
perform testing actions (including gate tests).
Documentation is available
http://docs.openstack.org/developer/tripleo-incubator/devtest.html

So, the situation looks like there is no fully working and mature scheme at
the moment.

My suggestion is to start by creating an empty gate test flow (like in
tripleo-ci). Then we can write some code implementing the testing logic.
This is possible even before the conductor manager is ready: we can directly
import driver modules and test them in a functional (aka integration)
manner. As for managing emulated bare metal environments, we can write
(or copy from tripleo) some scripts for that (shell or python). What we
actually need to be able to do is launch one VM, install ironic on it,
and then launch another VM and boot it via PXE from the first one.
In the future we can use the environment-on-demand scheme, when it is ready,
and so follow the same scenario as is used in TripleO.
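The "prepare libvirt VM xml" part of such scripts is straightforward to sketch
in Python (the element layout and the 'brbm' network name are illustrative,
not exactly what the tripleo scripts generate):

```python
import xml.etree.ElementTree as ET

def baremetal_domain_xml(name, mac):
    """Build libvirt domain XML for an emulated bare metal node that
    boots from the network (PXE) instead of a local disk."""
    domain = ET.Element("domain", type="kvm")
    ET.SubElement(domain, "name").text = name
    os_el = ET.SubElement(domain, "os")
    ET.SubElement(os_el, "type").text = "hvm"
    # Boot via PXE from the network interface.
    ET.SubElement(os_el, "boot", dev="network")
    devices = ET.SubElement(domain, "devices")
    iface = ET.SubElement(devices, "interface", type="network")
    ET.SubElement(iface, "mac", address=mac)
    ET.SubElement(iface, "source", network="brbm")
    return ET.tostring(domain, encoding="unicode")

xml = baremetal_domain_xml("baremetal_0", "52:54:00:12:34:56")
```

The resulting XML would then be fed to virsh define/start, with the second VM
acting as the PXE server on the same libvirt network.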

Besides, there is an idea about managing the test environment using
OpenStack itself. Right now nova can create VMs and it has advanced
functionality for that. What it can NOT do is boot them via PXE. There
is a blueprint for that:
https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe.


-- 
Vladimir Kozhukalov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] git-integration working group meeting reminder

2013-12-03 Thread Krishna Raman
Hi,

We will hold our first Git Integration working group meeting on Wednesday, 
December 4, 2013 1700 UTC / 0900 PST [1].

Since we have about 13 people who wish to participate, Google hangout is no 
longer an option. Instead we will fall back
to IRC and hold the meeting on #solum. I have updated the Solum wiki to 
indicate the time.

Agenda for tomorrow's meeting:
* Administrative:
* Decide if we can reserve this time every week for a recurring 
meeting of the working group.
* Topics:
* Git Pull workflow (Required for milestone-1)
* 
https://blueprints.launchpad.net/solum/+spec/solum-git-pull
* Integration with existing OpenStack tools and workflow
* Integration with GitHub or other external git 
repositories
* Integration with lang-pack workflow
* General discussion

Please find me on #solum or email the list if you would like other topics added 
to this discussion.

Thanks
—Krishna

[1] 
http://www.worldtimebuddy.com/?qm=1&lid=8,524901,2158177,100&h=8&date=2013-12-04&sln=9-10


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-03 Thread Jay Pipes

On 12/03/2013 10:04 AM, John Dickinson wrote:

How are you proposing that this integrate with Swift's account and container 
quotas (especially since there may be hundreds of thousands of accounts and 
millions (billions?) of containers in a single Swift cluster)? A centralized 
lookup for quotas doesn't really seem to be a scalable solution.


From reading below, it does not look like the design calls for a centralized 
lookup. A push-change strategy is what is described, where the 
quota numbers themselves are stored in a canonical location in Keystone, 
but when those numbers are changed, Keystone would send a notification 
of that change to subscribing services such as Swift, which would 
presumably have one or more levels of caching for things like account 
and container quotas...


Best,
-jay


--John


On Dec 3, 2013, at 6:53 AM, Oleg Gelbukh ogelb...@mirantis.com wrote:


Chmouel,

We reviewed the design of this feature at the summit with CERN and HP teams. 
Centralized quota storage in Keystone is an anticipated feature, but there are 
concerns about adding quota enforcement logic for every service to Keystone. 
The agreed solution is to add quota number storage to Keystone, and a 
mechanism that will notify services about changes to their quotas. Each 
service, in turn, will update its quota cache and apply the new quota value 
according to its own enforcement rules.

More detailed capture of the discussion on etherpad:
https://etherpad.openstack.org/p/CentralizedQuotas

Re this particular change, we plan to reuse this API extension code, but 
extended to support domain-level quota as well.

--
Best regards,
Oleg Gelbukh
Mirantis Labs


On Mon, Dec 2, 2013 at 5:39 PM, Chmouel Boudjnah chmo...@enovance.com wrote:
Hello,

I was wondering what was the status of Keystone being the central place across 
all OpenStack projects for quotas.

There is already an implementation from Dmitry here :

https://review.openstack.org/#/c/40568/

but it hasn't seen activity since October, waiting for Icehouse development to 
start and for a few bits to be cleaned up and added (i.e. the sqlite migration).

It would be great if we can get this rekicked to get that for icehouse-2.

Thanks,
Chmouel.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-03 Thread Maru Newby
I've been investigating a bug that is preventing VM's from receiving IP 
addresses when a Neutron service is under high load:

https://bugs.launchpad.net/neutron/+bug/1192381

High load causes the DHCP agent's status updates to be delayed, causing the 
Neutron service to assume that the agent is down.  This results in the Neutron 
service not sending notifications of port addition to the DHCP agent.  At 
present, the notifications are simply dropped.  A simple fix is to send 
notifications regardless of agent status.  Does anybody have any objections to 
this stop-gap approach?  I'm not clear on the implications of sending 
notifications to agents that are down, but I'm hoping for a simple fix that can 
be backported to both havana and grizzly (yes, this bug has been with us that 
long).
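The stop-gap amounts to removing the liveness check from the notification
path. A minimal illustration (the function and field names here are invented,
not Neutron's actual code):

```python
def notify_dhcp_agent(agent, notification, outbox):
    """Deliver a port notification to a DHCP agent (sketch).

    Stop-gap: send even when the agent's heartbeat looks stale, instead
    of silently dropping the notification.
    """
    # Buggy behaviour being removed: drop the cast when the agent's
    # delayed status updates make it look down.
    #
    # if not agent["alive"]:
    #     return
    outbox.append((agent["host"], notification))

sent = []
notify_dhcp_agent({"host": "net-node-1", "alive": False},
                  "port_create_end", sent)
```

If the agent really is down, the notification is wasted; if it is merely slow
(the case in the bug), the VM gets its address once the agent catches up.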

Fixing this problem for real, though, will likely be more involved.  The 
proposal to replace the current wsgi framework with Pecan may increase the 
Neutron service's scalability, but should we continue to use a 'fire and 
forget' approach to notification?  Being able to track the success or failure 
of a given action outside of the logs would seem pretty important, and allow 
for more effective coordination with Nova than is currently possible.


Maru
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [scheduler] External scheduler design doc + review

2013-12-03 Thread Debojyoti Dutta
Hi

I think we should do a review of the design doc and reach rough
consensus (that it should work), followed by running code.

As of now all the design stuff is supposedly in (as per the scheduler
meeting today)
https://etherpad.openstack.org/p/icehouse-external-scheduler

Hope the core devs who decide to shepherd will review this and say yay or
nay.

thx
debo
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Plugin Blueprint and ML2 Plugin

2013-12-03 Thread NAVEEN R K REDDY
Hi All,

I have a couple of questions regarding the plugin blueprint and the ML2 plugin:

1.  We are planning to submit a plugin for the OpenStack Icehouse release.
The question we have is: is there any deadline for plugin blueprint
submission?
2.  Is it mandatory for everyone to implement an ML2 plugin instead of a
monolithic plugin?


Thanks in advance.


Regards,
Naveen.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-03 Thread Paul Montgomery
I agree.  With many optional parameters possible, positional parameters
would seem to complicate things a bit (even for end users).


On 12/3/13 8:14 AM, Arati Mahimane arati.mahim...@rackspace.com wrote:



On 12/3/13 7:51 AM, Roshan Agrawal roshan.agra...@rackspace.com wrote:



 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: Monday, December 02, 2013 8:17 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum] CLI minimal implementation
 
 On 12/02/2013 07:03 PM, Roshan Agrawal wrote:
  I have created a child blueprint to define scope for the minimal
 implementation of the CLI to consider for milestone 1.
  
https://blueprints.launchpad.net/solum/+spec/cli-minimal-implementatio
  n
 
  Spec for the minimal CLI @
  
https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-im
  plementation Etherpad for discussion notes:
  https://etherpad.openstack.org/p/MinimalCLI
 
  Would look for feedback on the ML, etherpad and discuss more in the
 weekly IRC meeting tomorrow.
 
 What is this R1.N syntax?  How does it relate to development
milestones?
  Does R1 mean a requirement for milestone-1?

These do not relate to development milestones. R1 is a unique identifier
for the given requirement. R1.x is a unique requirement ID for something
that is a sub-item of the top-level requirement R1.
Is there a more OpenStack-standard way of generating requirement IDs?
 
 For consistency, I would use commands like:
 
solum app-create
solum app-delete
solum assembly-create
solum assembly-delete
 
 instead of adding a space in between:
 
solum app create
 
 to be more consistent with other clients, like:
 
nova flavor-create
nova flavor-delete
glance image-create
glance image-delete

The current proposal is an attempt to be consistent with the direction
for the unified openstack CLI. Adrian addressed it in his other reply.

 
 I would make required arguments positional arguments.  So, instead of:
 
solum app-create --plan=planname
 
 do:
 
solum app-create planname

I will make this change unless I hear objections.

In my opinion, since most of the parameters (listed here
https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/ApplicationDeploym
e
ntAndManagement#Solum-R1.12_app_create:_CLI) are optional,
it would be easier to specify the parameters as param_name=value
instead of having positional parameters.
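As a sketch of the two styles under discussion (the parameter names are illustrative, not Solum's actual CLI), a required positional argument combined with optional flags would look like this with argparse:

```python
import argparse

# Hypothetical "solum app-create" signature: the required plan name is
# positional, while the many optional parameters stay as named flags.
parser = argparse.ArgumentParser(prog="solum app-create")
parser.add_argument("plan", help="plan name (required, positional)")
parser.add_argument("--description", help="optional app description")
parser.add_argument("--repo", help="optional source repository URL")

args = parser.parse_args(["myplan", "--repo=https://example.com/app.git"])
print(args.plan, args.repo)
```

This keeps the signature self-documenting (you cannot run the command without a plan) while optional parameters remain order-independent, which addresses both sides of the thread.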


 
 
 Lastly, everywhere you have a name, I would use a UUID.  Names
shouldn't
 have to be globally unique (because of multi-tenancy).  UUIDs should
always
 work, but you can support a name in the client code as a friendly
shortcut,
 but it should fail if a unique result can not be resolved from the
name.


Names do not have to be globally unique; just unique within the tenant
namespace. The name+tenant combination should map to a unique UUID.
The CLI is a client tool, where working with names is easier for the user.
We will support both, but start with names (the friendly shortcut) and
map them to UUIDs behind the scenes.
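A hypothetical sketch of that name-or-UUID resolution (none of these names are real Solum code): accept a UUID directly, otherwise resolve the name within the tenant, and fail unless exactly one match is found:

```python
import uuid

# Toy in-memory catalogue; in reality this would be an API lookup.
APPS = [
    {"id": str(uuid.uuid4()), "name": "web", "tenant": "t1"},
    {"id": str(uuid.uuid4()), "name": "web", "tenant": "t2"},
    {"id": str(uuid.uuid4()), "name": "db",  "tenant": "t1"},
]

def find_app(tenant, name_or_id):
    """Resolve a name or UUID to exactly one app within a tenant."""
    try:
        uuid.UUID(name_or_id)             # looks like a UUID: use directly
        matches = [a for a in APPS if a["id"] == name_or_id]
    except ValueError:                    # otherwise treat it as a name
        matches = [a for a in APPS
                   if a["tenant"] == tenant and a["name"] == name_or_id]
    if len(matches) != 1:
        raise LookupError("'%s' is missing or ambiguous" % name_or_id)
    return matches[0]

print(find_app("t1", "web")["name"])   # unique within tenant t1, so OK
```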


 --
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Openstack] Plugin Blueprint and ML2 Plugin

2013-12-03 Thread Kyle Mestery (kmestery)
Hi Naveen:

The sooner you submit your blueprint the better, as Neutron core devs
can comment on it and help answer questions for you. You can implement
an ML2 MechanismDriver or a monolithic plugin, but if you file the BP
we can help you decide which may be better for your environment.

Keep in mind there are new 3rd party requirements in neutron for plugins
now [1], the key one being external Tempest testing.

Thanks,
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/019219.html
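For a sense of the difference: an ML2 mechanism driver implements a handful of precommit/postcommit hooks rather than a whole plugin API. The skeleton below uses a stand-in base class so it runs standalone; a real driver would subclass MechanismDriver from neutron.plugins.ml2.driver_api instead (method names here follow that API, but treat the details as an assumption and check the Neutron source).

```python
# Stand-in mimicking the shape of the ML2 MechanismDriver base class;
# a real driver imports Neutron's class rather than defining this.
class MechanismDriver(object):
    def initialize(self):
        pass
    def create_port_precommit(self, context):
        pass
    def create_port_postcommit(self, context):
        pass

class ExampleMechanismDriver(MechanismDriver):
    """Skeleton vendor driver: push committed ports to a backend."""
    def initialize(self):
        self.backend = []  # a real driver would connect to its controller

    def create_port_postcommit(self, context):
        # Called after the port is committed to the Neutron DB;
        # backend failures here must not roll back that transaction.
        self.backend.append(context["port_id"])

driver = ExampleMechanismDriver()
driver.initialize()
driver.create_port_postcommit({"port_id": "p1"})
```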

On Dec 3, 2013, at 9:50 AM, John Smith lbalba...@gmail.com wrote:
 Hi,
 
 
 Im not sure, but this may be more appropriate for the developers
 mailing list: openstack-dev@lists.openstack.org
 
 
 Regards,
 
 
 John Smith
 
 
 On Tue, Dec 3, 2013 at 3:36 PM, NAVEEN R K REDDY
 naveen.kunare...@gmail.com wrote:
 Hi All,
 
 I have couple of questions wrt to plugin blueprint and ml2 plugin,
 
 1.  We are planning to submit the plugin into Openstack Icehouse release.
 The question we have is there any deadline for the plugin blueprint
 submission ?
 2.  Is there anything like mandatory for everyone need to implement ML2
 plugin instead of monolithic plugin ?
 
 
 Thanks in advance.
 
 
 Regards,
 Naveen.
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Tuning QueuePool parameters?

2013-12-03 Thread Maru Newby
I recently ran into this bug while trying to concurrently boot a large number 
(75) of VMs: 

https://bugs.launchpad.net/neutron/+bug/1160442

I see that the fix for the bug added configuration of SQLAlchemy QueuePool 
parameters that should prevent the boot failures I was seeing.  However, I 
don't see a good explanation on the bug as to what values to set the 
configuration to or why the defaults weren't updated to something sane.  If 
that information is available somewhere, please share!  I'm not sure why this 
bug is considered fixed if it's still possible to trigger it with no clear path 
to resolution.
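For background, SQLAlchemy's QueuePool keeps pool_size persistent connections plus up to max_overflow temporary ones; once both are exhausted, a checkout blocks for pool_timeout seconds and then raises. The toy model below (simplified and non-blocking, with SQLAlchemy's parameter names but none of its mechanics) shows why the defaults of 5 + 10 fall over under 75 concurrent boots:

```python
class TinyPool:
    """Toy model of QueuePool's capacity arithmetic only."""
    def __init__(self, pool_size=5, max_overflow=10):
        self.capacity = pool_size + max_overflow
        self.checked_out = 0

    def connect(self):
        if self.checked_out >= self.capacity:
            # the real pool waits pool_timeout seconds first, then raises
            raise TimeoutError("QueuePool limit reached")
        self.checked_out += 1
        return object()   # stand-in for a DB connection

pool = TinyPool()         # defaults: 5 + 10 = 15 concurrent connections
for _ in range(15):
    pool.connect()        # fine up to the limit
try:
    pool.connect()        # the 16th concurrent checkout fails
    overflowed = False
except TimeoutError:
    overflowed = True
```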


Maru


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-03 Thread Randall Burt
I disagree. If a param is required and has no meaningful default, it should be 
positional IMO. I think this actually reduces confusion as you can tell from 
the signature alone that this is a value the user must supply to have any 
meaningful thing happen.

On Dec 3, 2013, at 10:13 AM, Paul Montgomery paul.montgom...@rackspace.com
 wrote:

 I agree.  With many optional parameters possible, positional parameters
 would seem to complicate things a bit (even for end users).
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-03 Thread Jarret Raim

With the introduction of programs (think: official teams), all
incubated/integrated projects must belong to an official program... So
when a project applies for incubation but is not part of an official
program yet, it de-facto also applies to be considered a program.

Ahh, understood. So I guess we'd be asking for a new program then.

I'm not 100% sure we'll keep that election requirement. I think the
program application should have an initial PTL named on it. The way
that's determined is up to the team (natural candidate, election...).

It sounds like if we have a PTL election as part of the normal Icehouse
schedule, we should be covered here?

No, since the team produces code the ATC designation method is pretty
well established. This rule cares for programs which have weirder
deliverables.

Easy enough, thanks for the clarification.



Jarret


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-03 Thread Arati Mahimane
Randall, I think you are talking about required parameters and we are
talking about optional ones.
Please correct me if I am wrong.

-Arati

On 12/3/13 10:27 AM, Randall Burt randall.b...@rackspace.com wrote:

I disagree. If a param is required and has no meaningful default, it
should be positional IMO. I think this actually reduces confusion as you
can tell from the signature alone that this is a value the user must
supply to have any meaningful thing happen.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-03 Thread Jarret Raim

I think there's something else you should take under consideration.
Oslo messaging is not just an OpenStack library. It's the RPC library
that all projects are relying on and one of the strong goals we have
in OpenStack is to reduce code and efforts duplications. We'd love to
have more people testing and contributing to oslo.messaging in order
to make it as battle tested as celery is.

Please, don't get me wrong. I don't mean to say you didn't considered
it, I just want to add another reason why we should always try to
re-use the libraries that other projects are using - unless there's a
strong technical reason ;).

As I've said, we are willing to look at the library for Icehouse. As lots
of projects have implemented it, I hope that the switchover will be
reasonably easy.

I think this conversation has gotten away from our incubation request and
into an argument about what makes a good library and when and how projects
should choose between oslo and other options. I'm happy to have the second
one in another thread, but that seems like a longer conversation that is
separate from our request.

It seems like the comments are slowing down now. Does everyone feel our
list (https://wiki.openstack.org/wiki/Barbican/Incubation) accurately
captures the comments that have been brought up?

I filled out the Scope section of our request and I think we've cleared up
the PTL election issue. Is there anything else I missed or have we covered
most of the issues?



Jarret


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-03 Thread John Dickinson

On Dec 3, 2013, at 8:05 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 12/03/2013 10:04 AM, John Dickinson wrote:
 How are you proposing that this integrate with Swift's account and container 
 quotas (especially since there may be hundreds of thousands of accounts and 
 millions (billions?) of containers in a single Swift cluster)? A centralized 
 lookup for quotas doesn't really seem to be a scalable solution.
 
 From reading below, it does not look like a centralized lookup is what the 
 design is. A push-change strategy is what is described, where the quota 
 numbers themselves are stored in a canonical location in Keystone, but when 
 those numbers are changed, Keystone would send a notification of that change 
 to subscribing services such as Swift, which would presumably have one or 
 more levels of caching for things like account and container quotas...

Yes, I get that, and there are already methods in Swift to support that. The 
trick, though, is either (1) storing all the canonical info in Keystone and 
scaling that or (2) storing some boiled down version, if possible, and 
fanning that out to all of the resources in Swift. Both are difficult and 
require storing the information in the central Keystone store.
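A toy sketch of that push-change flow (all names invented for illustration): Keystone holds the canonical quota numbers and notifies subscribers on change, while each service maintains a local cache it applies with its own enforcement rules:

```python
class QuotaStore:
    """Stand-in for Keystone's canonical quota storage."""
    def __init__(self):
        self.quotas = {}
        self.subscribers = []

    def set_quota(self, tenant, resource, limit):
        self.quotas[(tenant, resource)] = limit
        for callback in self.subscribers:   # push the change out
            callback(tenant, resource, limit)

class ServiceQuotaCache:
    """Stand-in for a subscribing service such as Swift."""
    def __init__(self, store):
        self.cache = {}
        store.subscribers.append(self.on_quota_changed)

    def on_quota_changed(self, tenant, resource, limit):
        self.cache[(tenant, resource)] = limit   # enforcement stays local

keystone = QuotaStore()
swift = ServiceQuotaCache(keystone)
keystone.set_quota("t1", "containers", 100)
```

The scaling concern in the thread is exactly the fan-out step: with millions of containers, the subscriber side has to absorb and distribute those updates itself.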

 
 Best,
 -jay
 
 --John
 
 
 On Dec 3, 2013, at 6:53 AM, Oleg Gelbukh ogelb...@mirantis.com wrote:
 
 Chmouel,
 
 We reviewed the design of this feature at the summit with CERN and HP 
 teams. Centralized quota storage in Keystone is an anticipated feature, but 
 there are concerns about adding quota enforcement logic for every service 
 to Keystone. The agreed solution is to add quota numbers storage to 
 Keystone, and add mechanism that will notify services about change to the 
 quota. Service, in turn, will update quota cache and apply the new quota 
 value according to its own enforcement rules.
 
 More detailed capture of the discussion on etherpad:
 https://etherpad.openstack.org/p/CentralizedQuotas
 
 Re this particular change, we plan to reuse this API extension code, but 
 extended to support domain-level quota as well.
 
 --
 Best regards,
 Oleg Gelbukh
 Mirantis Labs
 
 
 On Mon, Dec 2, 2013 at 5:39 PM, Chmouel Boudjnah chmo...@enovance.com 
 wrote:
 Hello,
 
 I was wondering what was the status of Keystone being the central place 
 across all OpenStack projects for quotas.
 
 There is already an implementation from Dmitry here :
 
 https://review.openstack.org/#/c/40568/
 
 but hasn't seen activities since october waiting for icehouse development 
 to be started and a few bits to be cleaned and added (i.e: the sqlite 
 migration).
 
 It would be great if we can get this rekicked to get that for icehouse-2.
 
 Thanks,
 Chmouel.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-03 Thread Jarret Raim

 The API and developer documentation is at
http://docs.openstack.org/developer/oslo.messaging/

This is great, thanks for the link. Would there be any objections to
adding this to the github repo and the openstack wiki pages? I spent a
bunch of time looking and wasn't able to turn this up.


Additionally, where is this documentation generated from? I looked at the
doc/ dir in the repo [1] and most of the files in there were empty.


[1] https://github.com/openstack/oslo.messaging/tree/master/doc/source




Jarret


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-03 Thread Arati Mahimane
Roshan, I have added some comments to the Etherpad -
https://etherpad.openstack.org/p/MinimalCLI

-Arati

On 12/3/13 10:27 AM, Randall Burt randall.b...@rackspace.com wrote:

I disagree. If a param is required and has no meaningful default, it
should be positional IMO. I think this actually reduces confusion as you
can tell from the signature alone that this is a value the user must
supply to have any meaningful thing happen.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-03 Thread Jay Pipes

On 12/03/2013 11:39 AM, Arati Mahimane wrote:

Randall, I think you are talking about required parameters and we are
talking about optional ones.
Please correct me if I am wrong.


Russell was specifically talking about required parameters being 
positional arguments.


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-03 Thread Stephen Gran

On 03/12/13 16:08, Maru Newby wrote:

I've been investigating a bug that is preventing VM's from receiving IP 
addresses when a Neutron service is under high load:

https://bugs.launchpad.net/neutron/+bug/1192381

High load causes the DHCP agent's status updates to be delayed, causing the 
Neutron service to assume that the agent is down.  This results in the Neutron 
service not sending notifications of port addition to the DHCP agent.  At 
present, the notifications are simply dropped.  A simple fix is to send 
notifications regardless of agent status.  Does anybody have any objections to 
this stop-gap approach?  I'm not clear on the implications of sending 
notifications to agents that are down, but I'm hoping for a simple fix that can 
be backported to both havana and grizzly (yes, this bug has been with us that 
long).

Fixing this problem for real, though, will likely be more involved.  The 
proposal to replace the current wsgi framework with Pecan may increase the 
Neutron service's scalability, but should we continue to use a 'fire and 
forget' approach to notification?  Being able to track the success or failure 
of a given action outside of the logs would seem pretty important, and allow 
for more effective coordination with Nova than is currently possible.
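In oslo.messaging terms this is the difference between cast() (fire and forget, no result) and call() (the caller blocks for a reply, so failures surface). A stdlib-only caricature of that trade-off, with no oslo.messaging code involved:

```python
def cast(handler, msg):
    """Fire-and-forget: errors in the handler never reach the caller."""
    try:
        handler(msg)
    except Exception:
        pass                    # lost -- only the agent's log would know

def call(handler, msg):
    """Request/response: the caller sees success or failure."""
    return handler(msg)         # exceptions propagate back

def flaky_agent(msg):
    raise RuntimeError("port wiring failed")

cast(flaky_agent, "port-create")          # failure silently swallowed
try:
    call(flaky_agent, "port-create")
    tracked = False
except RuntimeError:
    tracked = True              # failure is visible to the caller
```

The cost, of course, is that a blocking call ties up the server while it waits, which is part of why the fire-and-forget style was chosen in the first place.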


It strikes me that we ask an awful lot of a single neutron-server 
instance - it has to take state updates from all the agents, it has to 
do scheduling, it has to respond to API requests, and it has to 
communicate about actual changes with the agents.


Maybe breaking some of these out the way nova has a scheduler and a 
conductor and so on might be a good model (I know there are things 
people are unhappy about with nova-scheduler, but imagine how much worse 
it would be if it was built into the API).


Doing all of those tasks, and doing it largely single-threaded, is just 
asking for overload.


Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Enhance UX of Launch Instance Form

2013-12-03 Thread Gabriel pettier
Hi there

I read the proposal and related documentation, and intend to start 
implementing it into horizon.

Regards

On Wed Nov 20 15:09:05 UTC 2013, Cédric Soulas wrote:


Thanks for all the feedback on the Enhance UX of launch instance form 
subject and its prototype.

Try the latest version of the prototype:
http://cedricss.github.io/openstack-dashboard-ux-blueprints/launch-instance

This update was made after several discussion on those different channels:

- openstack ux google group
- launchpad horizon (and now launchpad openstack ux)
- mailing list and IRC
- the new ask bots for openstack UX

We tried to write back most of the discussions on ask bot, and are now
focusing on this tool.

Below a digest of those discussions, with links to ask bot (on each subject, 
there are links to related blueprints, google doc drafts, etc)

= General topics =

- Modals and supporting different screen sizes [2]
  Current modal doesn't work well on the top 8 screen resolutions [2]
  = Responsive and full screen modal added on the prototype [1]

- Wizard mode for some modals [3]
  = try the wizard [1]

= Specific to launch instance =

- Improve boot source options [4]
  * first choose to boot from an ephemeral or a persistent disk
  * if no ephemeral flavors are available, hide the selector
  * group by "public", "project", "shared with me"
  * warning message added for the "delete on terminate" option (when booting 
 from a persistent disk)

- Scaling the flavor list [5]
  * sort the columns of the table. In particular: by name.
  * grouping of the flavor list (for example: performance, standard, ...)?

- Scaling the image list [5]
  * a scrollbar on the image list
  * limit the number of list items and add an "x more instance snapshots - See 
 more" line
  * a search / filter feature would be great, like discussed at the scaling 
 horizon design session

- Step 1 / Step 2 workflow: when the user clicks "select" on one boot 
source item, it goes directly to step 2.
  If the user goes back from step 2 to step 1:
  * the text "Please select a boot source" would be replaced with a "Next" 
 button
  * the "select" button on the selected boot source item would be replaced 
 with a check-mark (or equivalent)
  * the user would still have the possibility to select another boot source

- flavor depending on image requirements and quotas available: 
   * this is a very good point; there are lots of things to discuss
   = we should open a separate thread on this
 
- Network: still a work in progress
  * if there is a single choice, make it the default choice

- Several wording updates (cancel, ephemeral boot source, ...)

[1] http://cedricss.github.io/openstack-dashboard-ux-blueprints/launch-instance
[2] 
http://ask-openstackux.rhcloud.com/question/11/modals-and-supporting-different-screen-sizes/
[3] http://ask-openstackux.rhcloud.com/question/81/wizard-ui-for-workflow
[4] 
http://ask-openstackux.rhcloud.com/question/13/improve-boot-source-ux-ephemeral-vs-persistent-disk/
[5] 
http://ask-openstackux.rhcloud.com/question/12/enhance-the-selection-of-a-flavor-and-an-image/

Best,

Cédric
-- 
Gabriel Pettier
Software Engineer at CloudWatt.com 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [policy] Neutron Policy IRC meeting

2013-12-03 Thread Mohammad Banikazemi

Hello everybody,

Following up the action items from our last meeting and in preparation for
our next IRC meeting on Dec 5th (see below), I have started updating the
google document [1].
I have added the tables describing the attributes of new Neutron objects. I
will be also working on adding a few possible action types for policy
rules.

Please visit the document and make suggestions and provide feedback.

Thanks,

-Mohammad

[1]
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit#



From:   Kyle Mestery (kmestery) kmest...@cisco.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   11/21/2013 12:03 PM
Subject:[openstack-dev] [neutron] [policy] Logs and notes from first
Neutron Policy IRC meeting



HI all!

The Neutron Policy sub-team had its first IRC meeting today [1].
Relevant logs from the meeting are here [2]. We're hoping to
continue the discussion going forward. I've noted action items
in both the meeting logs and on the wiki page. We'll cover those
for the next meeting we have.

Note: We'll not meet next week due to the Thanksgiving holiday
in the US.

Hope to see everyone on #openstack-meeting-alt at 1600 UTC
on Thursday December 5th! In the meantime, please continue
the discussion in IRC on #openstack-neutron and on the
openstack-dev mailing list.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy
[2] http://eavesdrop.openstack.org/meetings/networking_policy/2013/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Incubation Request for Barbican

2013-12-03 Thread Joe Gordon
On Dec 3, 2013 6:45 PM, Jarret Raim jarret.r...@rackspace.com wrote:


 With the introduction of programs (think: official teams), all
 incubated/integrated projects must belong to an official program... So
 when a project applies for incubation but is not part of an official
 program yet, it de-facto also applies to be considered a program.

 Ahh, understood. So I guess we'd be asking for a new program then.

While I am all for adding a new program, I think we should only add one if
we rule out all existing programs as a home.

With that in mind, why not add this to the Keystone program? Perhaps that
may require a tweak to Keystone's mission statement, but that is doable. I
saw a partial answer to this somewhere but not a full one.


 I'm not 100% sure we'll keep that election requirement. I think the
 program application should have an initial PTL named on it. The way
 that's determined is up to the team (natural candidate, election...).

 It sounds like if we have a PTL election as part of the normal Icehouse
 schedule, we should be covered here?

 No, since the team produces code the ATC designation method is pretty
 well established. This rule cares for programs which have weirder
 deliverables.

 Easy enough, thanks for the clarification.



 Jarret


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Plugin Blueprint and ML2 Plugin

2013-12-03 Thread Kyle Mestery (kmestery)
On Dec 3, 2013, at 10:13 AM, NAVEEN R K REDDY naveen.kunare...@gmail.com 
wrote:
 
 Hi All,
 
 I have a couple of questions with regard to the plugin blueprint and the ML2 plugin,
 
 1.  We are planning to submit the plugin into Openstack Icehouse release. The 
 question we have is there any deadline for the plugin blueprint submission ?
 2.  Is there anything like mandatory for everyone need to implement ML2 
 plugin instead of monolithic plugin ? 
 
 
 Thanks in advance.
 
Hi Naveen:

Here is my reply on the other thread, copied here in case people
want to continue the discussion on this thread:

The sooner you submit your blueprint the better, as Neutron core devs
can comment on it and help answer questions for you. You can implement
an ML2 MechanismDriver or a monolithic plugin, but if you file the BP
we can help you decide which may be better for your environment.

Keep in mind there are new 3rd party requirements in neutron for plugins
now [1], the key one being external Tempest testing.
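For readers weighing the two options Kyle mentions, the rough shape of an ML2 mechanism driver looks like the sketch below. In real code you would subclass neutron's `MechanismDriver` from the ML2 `driver_api` module; here a stub base class stands in so the sketch is self-contained, and the method set is only indicative of the Icehouse-era interface, not a complete or authoritative one:

```python
# Rough, self-contained skeleton of an ML2 mechanism driver.
# The base class below is a stand-in for neutron's real
# ml2 driver_api.MechanismDriver; method names are indicative only.

class MechanismDriver:  # stand-in for neutron's ML2 MechanismDriver
    def initialize(self):
        pass


class ExampleMechanismDriver(MechanismDriver):
    def initialize(self):
        # One-time setup: read config, connect to the backend controller.
        self.backend_calls = []

    def create_port_precommit(self, context):
        # Runs inside the DB transaction: validate, raise to roll back.
        pass

    def create_port_postcommit(self, context):
        # Runs after the DB commit: push the port to the backend device.
        self.backend_calls.append(("create_port", context))


driver = ExampleMechanismDriver()
driver.initialize()
driver.create_port_postcommit({"port": {"id": "p1"}})
print(len(driver.backend_calls))  # -> 1
```

Compared with a monolithic plugin, the driver only implements these hooks and inherits the rest of the plugin machinery from ML2, which is why the core team often steers new backends this way.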

Thanks,
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/019219.html

On Dec 3, 2013, at 9:50 AM, John Smith lbalba...@gmail.com wrote:
 Hi,
 
 
 Im not sure, but this may be more appropriate for the developers
 mailing list: openstack-dev@lists.openstack.org
 
 
 Regards,
 
 
 John Smith
 
 
 On Tue, Dec 3, 2013 at 3:36 PM, NAVEEN R K REDDY
 naveen.kunare...@gmail.com wrote:
 Hi All,
 
 I have couple of questions wrt to plugin blueprint and ml2 plugin,
 
 1.  We are planning to submit the plugin into Openstack Icehouse release.
 The question we have is there any deadline for the plugin blueprint
 submission ?
 2.  Is there anything like mandatory for everyone need to implement ML2
 plugin instead of monolithic plugin ?
 
 
 Thanks in advance.
 
 
 Regards,
 Naveen.
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

 Regards,
 Naveen.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-03 Thread Jay Pipes

On 12/03/2013 11:40 AM, John Dickinson wrote:


On Dec 3, 2013, at 8:05 AM, Jay Pipes jaypi...@gmail.com wrote:


On 12/03/2013 10:04 AM, John Dickinson wrote:

How are you proposing that this integrate with Swift's account and container 
quotas (especially since there may be hundreds of thousands of accounts and 
millions (billions?) of containers in a single Swift cluster)? A centralized 
lookup for quotas doesn't really seem to be a scalable solution.


 From reading below, it does not look like a centralized lookup is what the 
design is. A push-change strategy is what is described, where the quota numbers 
themselves are stored in a canonical location in Keystone, but when those 
numbers are changed, Keystone would send a notification of that change to 
subscribing services such as Swift, which would presumably have one or more 
levels of caching for things like account and container quotas...


Yes, I get that, and there are already methods in Swift to support that. The trick, 
though, is either (1) storing all the canonical info in Keystone and scaling that, or (2) 
storing some boiled-down version, if possible, and fanning that out to all of 
the resources in Swift. Both are difficult and require storing the information in the 
central Keystone store.


The storage driver for quotas in Keystone could use something like 
Cassandra as its data store, leaving the Keystone endpoint stateless and 
only responsible for relaying the update message to subscribers.


Each type of thing Keystone manages -- identity, token, catalog, etc 
-- can have a different storage driver. Adding a new storage driver for 
Cassandra and its ilk would be pretty trivial. That way Keystone folks 
can focus on the job at hand (notifying subscribers of updates to 
quotas) and Cassandra developers can focus on scaling data storage and 
retrieval.
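The push-change strategy under discussion can be sketched in a few lines. This is a hypothetical illustration only — the class and method names are invented for this example, not Keystone's or Swift's actual APIs: Keystone holds the canonical numbers and pushes changes to subscribers; each service answers enforcement questions from its local cache.

```python
# Minimal sketch of the push-change quota strategy discussed above.
# All names here are illustrative, not real Keystone/Swift APIs.

class QuotaStore:
    """Canonical quota storage (stands in for a Keystone backend driver)."""
    def __init__(self):
        self._quotas = {}       # (project_id, resource) -> limit
        self._subscribers = []  # services notified on every change

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set_quota(self, project_id, resource, limit):
        self._quotas[(project_id, resource)] = limit
        # Push the change out instead of having services poll centrally.
        for notify in self._subscribers:
            notify(project_id, resource, limit)


class ServiceQuotaCache:
    """Per-service cache; the service enforces quotas with local reads only."""
    def __init__(self, store, default=10):
        self._cache = {}
        self._default = default
        store.subscribe(self._on_change)

    def _on_change(self, project_id, resource, limit):
        self._cache[(project_id, resource)] = limit

    def limit_for(self, project_id, resource):
        return self._cache.get((project_id, resource), self._default)


store = QuotaStore()
swift_cache = ServiceQuotaCache(store)
store.set_quota("proj-a", "containers", 500)
print(swift_cache.limit_for("proj-a", "containers"))  # -> 500
print(swift_cache.limit_for("proj-b", "containers"))  # -> 10 (default)
```

The point of the design is visible in `limit_for`: enforcement never does a central lookup, so Swift-scale account counts only cost cache memory, not Keystone round-trips.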


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as first-class citizen - SSL Termination (Revised)

2013-12-03 Thread Samuel Bercovici
Hi,

The primary reason for the simple proposal is the difficulty of reaching 
consensus on how SSL certificates can be stored in OpenStack.
As there is currently no trusted storage in OpenStack, the simple proposal 
overcomes this by pushing the SSL certificates into the load balancers, which 
are considered trusted.
If there is agreement that storing the SSL certificates and similar 
information in the OpenStack database is fine, then having the feature modeled 
with SSL certificates and SSL policies as first-class citizens is preferable.

As Vijay mentioned, both options will support well the common use cases.

Hopefully, we can get other people to vote on this and drive a decision.

Regards,
-Sam.


-Original Message-
From: Vijay Venkatachalam [mailto:vijay.venkatacha...@citrix.com] 
Sent: Monday, December 02, 2013 11:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as 
first-class citizen - SSL Termination (Revised)


LBaaS enthusiasts: Your vote on the revised model for SSL Termination?

Here is a comparison between the original and revised model for SSL Termination:

***
Original Basic Model that was proposed in summit
***
* Certificate parameters introduced as part of VIP resource.
* This model is for basic config; a more detailed model will be introduced in 
the future for advanced use cases.
* Each certificate is created for one and only one VIP.
* Certificate params are not stored in the DB and are sent directly to the load balancer. 
* In case of failures, there is no way to restart the operation from the details 
stored in the DB.
***
Revised New Model
***
* Certificate parameters will be part of an independent certificate resource: a 
first-class citizen handled by the LBaaS plugin.
* It is a forward-looking model and aligns with AWS's approach for uploading server 
certificates.
* A certificate can be reused by many VIPs.
* Certificate params are stored in the DB. 
* In case of failures, the parameters stored in the DB will be used to restore the 
system.
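The essential difference between the two models is whether a certificate is an attribute of a VIP or a resource referenced by VIPs. A toy data-model sketch (names invented for illustration, not the actual Neutron LBaaS schema) of the revised model:

```python
# Illustrative sketch of the revised model: certificates as first-class,
# reusable resources referenced by VIPs. Names are invented for this
# example, not the real Neutron LBaaS data model.
import uuid


class Certificate:
    def __init__(self, name, cert_pem, private_key):
        self.id = str(uuid.uuid4())
        self.name = name
        self.cert_pem = cert_pem        # persisted in the DB in this model,
        self.private_key = private_key  # so a failed push can be retried


class Vip:
    def __init__(self, name, certificate):
        self.id = str(uuid.uuid4())
        self.name = name
        self.certificate_id = certificate.id  # a reference, not a copy


# One wildcard cert reused by several VIPs -- the common pattern of a
# self-signed dev wildcard cert shared across many dev load balancers:
wildcard = Certificate("dev-wildcard", "-----BEGIN CERTIFICATE-----...", "...")
vips = [Vip("app-%d" % i, wildcard) for i in range(3)]
assert all(v.certificate_id == wildcard.id for v in vips)
```

In the original model the PEM data would instead live inline on each `Vip`, which is why reuse and recovery after failure are harder there.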

A more detailed comparison can be viewed in the following link  
https://docs.google.com/document/d/1fFHbg3beRtmlyiryHiXlpWpRo1oWj8FqVeZISh07iGs/edit?usp=sharing

Thanks,
Vijay V.


 -Original Message-
 From: Vijay Venkatachalam
 Sent: Friday, November 29, 2013 2:18 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] Vote required for 
 certificate as first level citizen - SSL Termination
 
 
 To summarize:
 Certificate will be a first level citizen which can be reused and For 
 certificate management nothing sophisticated is required.
 
 Can you please Vote (+1, -1)?
 
 We can move on if there is consensus around this.
 
  -Original Message-
  From: Stephen Gran [mailto:stephen.g...@guardian.co.uk]
  Sent: Wednesday, November 20, 2013 3:01 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL Termination 
  write-up
 
  Hi,
 
  On Wed, 2013-11-20 at 08:24 +, Samuel Bercovici wrote:
   Hi,
  
  
  
   Evgeny has outlined the wiki for the proposed change at:
   https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL which is in line 
   with what was discussed during the summit.
  
   The
  
 
 https://docs.google.com/document/d/1tFOrIa10lKr0xQyLVGsVfXr29NQBq2n
  YTvMkMJ_inbo/edit discuss in addition Certificate Chains.
  
  
  
   What would be the benefit of having a certificate that must be 
   connected to VIP vs. embedding it in the VIP?
 
  You could reuse the same certificate for multiple loadbalancer VIPs.
  This is a fairly common pattern - we have a dev wildcard cert that 
  is
  self- signed, and is used for lots of VIPs.
 
   When we get a system that can store certificates (ex: Barbican), 
   we will add support to it in the LBaaS model.
 
  It probably doesn't need anything that complicated, does it?
 
  Cheers,
  --
  Stephen Gran
  Senior Systems Integrator - The Guardian
 

Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-03 Thread Dean Troyer
On Mon, Dec 2, 2013 at 10:09 PM, Adrian Otto adrian.o...@rackspace.com wrote:

  Sorry, I changed the link. We originally started with hyphenated
 noun-verbs but switched to the current proposal upon receipt of advice that
 it would be more compatible with the next version of the cliff based CLI
 for OpenStack. If I remember correctly this advice came from Doug Hellman.


That may have been me, I had the same conversation three or four times
during and right after the summit.  Either way, I'm the one to blame for
that command format...


  Etherpad for discussion notes:
 https://etherpad.openstack.org/p/MinimalCLI


I left some comments about how I would format the commands to be consistent
with OSC.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][heat][[keystone] RFC: introducing request identification

2013-12-03 Thread Adam Young

On 11/27/2013 12:45 AM, Takahiro Shida wrote:


Hi all,

I'm also interested in this issue.

 Create a unified request identifier
 https://blueprints.launchpad.net/nova/+spec/cross-service-request-id

I checked this BP and the following review.
https://review.openstack.org/#/c/29480/

There are many comments. In the end, this review looks to have been rejected 
because a user-specified correlation-id was considered useless and insecure.


 3. Enable keystone to generate request identification (we can 
call it 'request-token', for example).



Let's not use the term "Token".  "Request Identifier" is fine.


 -2

So, this idea will be helpful for solving the cross-service-request-id 
problem, because the correlation-id would be specified by Keystone.

Can we make this request envelope so that we can put more information 
into the request than just an identifier?  Specifically, we are going to 
potentially want to put a set of trust Identifiers into a portion of the 
message to allow for secure delegation.
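A request envelope along those lines might look like the following. This is purely a sketch of the idea — the field names are invented here, not an agreed format — showing an identifier plus room for trust identifiers used in delegation:

```python
# Sketch of the "request envelope" idea: instead of a bare request
# identifier, carry a small structure that can also hold trust
# identifiers for secure delegation. Field names are invented here.
import uuid


def make_envelope(trust_ids=None):
    """Build a per-request envelope with a fresh identifier."""
    return {
        "request_id": str(uuid.uuid4()),
        # Room for additional delegation data, per the suggestion above:
        "trust_ids": list(trust_ids or []),
    }


env = make_envelope(trust_ids=["trust-123"])
assert "request_id" in env
assert env["trust_ids"] == ["trust-123"]
```

Whether such an envelope is minted by Keystone and how it is signed or verified is exactly the open question in this thread; the sketch only shows the shape.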




How about nova guys and keystone guys ?




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-03 Thread Jeremy Stanley
On 2013-12-03 16:43:32 + (+), Jarret Raim wrote:
 This is great, thanks for the link. Would there be any objections to
 adding this to the github repo

I think you meant the git repo. What's a gi-thub?

URL: http://git.openstack.org/cgit/openstack/oslo.messaging/tree/doc/source/ 

 and the openstack wiki pages? I spent a bunch of time looking and
 wasn't able to turn this up.

If you haven't worked on official OpenStack projects before, I can
see how it might be easy to overlook that source for the
Sphinx-built developer documentation lives in a standardized
location within each git repository (for convenience). Possibly
another pattern we should suggest following as part of any
application for incubation.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-03 Thread Joe Gordon
On Dec 3, 2013 6:49 PM, John Dickinson m...@not.mn wrote:


 On Dec 3, 2013, at 8:05 AM, Jay Pipes jaypi...@gmail.com wrote:

  On 12/03/2013 10:04 AM, John Dickinson wrote:
  How are you proposing that this integrate with Swift's account and
container quotas (especially since there may be hundreds of thousands of
accounts and millions (billions?) of containers in a single Swift cluster)?
A centralized lookup for quotas doesn't really seem to be a scalable
solution.
 
  From reading below, it does not look like a centralized lookup is what
the design is. A push-change strategy is what is described, where the quota
numbers themselves are stored in a canonical location in Keystone, but when
those numbers are changed, Keystone would send a notification of that
change to subscribing services such as Swift, which would presumably have
one or more levels of caching for things like account and container
quotas...

 Yes, I get that, and there are already methods in Swift to support that.
The trick, though, is either (1) storing all the canonical info in Keystone
and scaling that or (2) storing some boiled down version, if possible,
and fanning that out to all of the resources in Swift. Both are difficult
and require storing the information in the central Keystone store.

If I remember correctly the motivation for using keystone for quotas is so
there is one easy place to set quotas across all projects.  Why not hide
this complexity with the unified client instead?  That has been the answer
we have been using for pulling out assorted proxy APIs in nova (nova
image-list volume-list) etc.


 
  Best,
  -jay
 
  --John
 
 
  On Dec 3, 2013, at 6:53 AM, Oleg Gelbukh ogelb...@mirantis.com wrote:
 
  Chmouel,
 
  We reviewed the design of this feature at the summit with CERN and HP
teams. Centralized quota storage in Keystone is an anticipated feature, but
there are concerns about adding quota enforcement logic for every service
to Keystone. The agreed solution is to add quota numbers storage to
Keystone, and add mechanism that will notify services about change to the
quota. Service, in turn, will update quota cache and apply the new quota
value according to its own enforcement rules.
 
  More detailed capture of the discussion on etherpad:
  https://etherpad.openstack.org/p/CentralizedQuotas
 
  Re this particular change, we plan to reuse this API extension code,
but extended to support domain-level quota as well.
 
  --
  Best regards,
  Oleg Gelbukh
  Mirantis Labs
 
 
  On Mon, Dec 2, 2013 at 5:39 PM, Chmouel Boudjnah chmo...@enovance.com
wrote:
  Hello,
 
  I was wondering what was the status of Keystone being the central
place across all OpenStack projects for quotas.
 
  There is already an implementation from Dmitry here :
 
  https://review.openstack.org/#/c/40568/
 
  but hasn't seen activities since october waiting for icehouse
development to be started and a few bits to be cleaned and added (i.e: the
sqlite migration).
 
  It would be great if we can get this rekicked to get that for
icehouse-2.
 
  Thanks,
  Chmouel.
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-03 Thread Russell Bryant
On 12/03/2013 07:22 AM, Boris Pavlovic wrote:
 Hi all,
 
 
 Finally found a bit time to write my thoughts.
 
 There are few blockers that make really complex to build scheduler as a
 services or even to move main part of scheduler code to separated lib.
 We already have one unsuccessfully effort
 https://blueprints.launchpad.net/oslo/+spec/oslo-scheduler .
 
 Major problems that we faced were next:
 1) Hard connection with project db api layer (e.g. nova.db.api,
 cinder.db.api)
 2) Hard connection between db.models and host_states 
 3) Hardcoded host states objects structure
 4) There is no namespace support in host states (so we are not able to
 keep all filters for all projects in the same place)
 5) Different API methods, that can't be effectively generalized. 
 
 
 Main goals of no-db-scheduler effort are: 
 1) Make scheduling much faster, storing data locally on each scheduler
 and just syncing states of them
 2) Remove connections between project.db.api and scheduler.db
 3) Make host_states just JSON like objects
 4) Add namespace support in host_states 
 
 When this part is finished, we will actually have only one problem:
 what to do with the DB API methods and the business logic of each project. What
 I see is that there are two different ways:

If the new project is just a forklift of the existing code that still
imports nova's db API and accesses Nova's DB, I don't think the initial
forklift necessarily has to be blocked on completing no-db-scheduler.
That can happen after just as easily (depending on which effort is ready
first).

 1) Make scheduler as a big lib, then implement RPC methods + bit of
 business logic in each project
 2) Move all RPC calls from nova,cinder,ironic,... and business logic in
 1 scheduler as a service 

Right now I think #2 is the approach we should take.  This is mainly
because there is common information that is needed for the scheduling
logic for resources in multiple projects.
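Boris's no-db-scheduler goals above (state stored locally on each scheduler and synced, JSON-like host_states, namespace support) can be sketched roughly as follows. All names are invented for this illustration; the real design lives in the no-db-scheduler blueprint:

```python
# Rough sketch of goals 1, 3 and 4 of the no-db-scheduler effort:
# host states as plain JSON-like dicts, namespaced per project, held in
# a local in-memory cache on each scheduler and updated by sync
# messages. Names here are illustrative only.
import json


class HostStateCache:
    def __init__(self):
        self._states = {}  # hostname -> {namespace -> JSON-like state}

    def apply_update(self, hostname, namespace, state):
        """Merge a state update received from a compute/volume/etc. node."""
        host = self._states.setdefault(hostname, {})
        host.setdefault(namespace, {}).update(state)

    def get(self, hostname, namespace):
        return self._states.get(hostname, {}).get(namespace, {})


cache = HostStateCache()
# Namespaces let one scheduler hold state for several projects at once,
# instead of importing each project's db.api:
cache.apply_update("node-1", "nova", {"free_ram_mb": 2048, "vcpus": 8})
cache.apply_update("node-1", "cinder", {"free_gb": 500})
print(json.dumps(cache.get("node-1", "nova")))
```

Because the states are plain dicts rather than ORM models, filters for any project can operate on them without a hard dependency on that project's database layer, which is exactly problems 1-4 from Boris's list.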

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] welcoming new committers

2013-12-03 Thread Stefano Maffulli
On 10/31/2013 11:49 AM, Stefano Maffulli wrote:
 Another idea that Tom suggested is to use gerrit automation to send back
 to first time committers something in addition to the normal 'your patch
 is waiting for review' message. The message could be something like:
[...]

Tom sent a patch for review:

https://review.openstack.org/#/c/58900/

If you're interested in the topic of 'welcoming new committers' please
chime in.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-03 Thread Russell Bryant
On 12/03/2013 11:40 AM, Jarret Raim wrote:
 
 I think there's something else you should take under consideration.
 Oslo messaging is not just an OpenStack library. It's the RPC library
 that all projects are relying on and one of the strong goals we have
 in OpenStack is to reduce code and efforts duplications. We'd love to
 have more people testing and contributing to oslo.messaging in order
 to make it as battle tested as celery is.

 Please, don't get me wrong. I don't mean to say you didn't considered
 it, I just want to add another reason why we should always try to
 re-use the libraries that other projects are using - unless there's a
 strong technical reason ;).
 
 As I've said, we are willing to look at the library for Icehouse. As lots
 of projects have implemented it, I hope that the switchover will be
 reasonably easy. 
 
 I think this conversation has gotten away from our incubation request and
 into an argument about what makes a good library and when and how projects
 should choose between oslo and other options. I¹m happy to have the second
 one in another thread, but that seems like a longer conversation that is
 separate from our request.

It's absolutely about your incubation request.  Part of this process is
looking at the technical fit with the rest of OpenStack, and this is
well within the scope of that discussion.

 It seems like the comments are slowing down now. Does everyone feel our
 list (https://wiki.openstack.org/wiki/Barbican/Incubation) accurately
 captures the comments that have been brought up?
 
 I filled out the Scope section of our request and I think we've cleared up
 the PTL election issue. Is there anything else I missed or have we covered
 most of the issues?

I need to make another pass over the added info and go deeper into the
technical bits, so I suspect there will be more feedback still.  It's
only been a day.  :-)

Also, note that most of the things brought up so far are based on the
proposed requirements for becoming incubated, not for things to be
worked on during incubation.  Unless the requirements change, so far it
looks like this request should be deferred a bit longer.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-03 Thread Russell Bryant
On 12/03/2013 11:45 AM, Jay Pipes wrote:
 On 12/03/2013 11:39 AM, Arati Mahimane wrote:
 Randall, I think you are talking about required parameters and we are
 talking about optional ones.
 Please correct me if I am wrong.
 
 Russell was specifically talking about required parameters being
 positional arguments.

Correct.

Optional still in the form --foo=bar

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-03 Thread Russell Bryant
On 12/03/2013 01:26 PM, Russell Bryant wrote:
 Unless the requirements change, so far it
 looks like this request should be deferred a bit longer.

And note that this is just my opinion, and not a statement of position
on behalf of the entire TC.  We can still officially consider the
request at an upcoming TC meeting.  I was just giving an indication of
where I stand right now.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread Nachi Ueno
Great tool especially for non-native guys such as me!

Thanks Joe

Best
Nachi

2013/12/3 Sylvain Bauza sylvain.ba...@gmail.com:
 Great tool !
 Just discovered that openstack.common.rpc does have typos, another good
 reason to migrate to oslo.messaging.rpc :-)

 -Sylvain


 2013/12/3 Joe Gordon joe.gord...@gmail.com

 HI all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So next
 time you want to fix a typo, instead of just fixing a single one you can go
 ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354


 best,
 Joe

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][heat][[keystone] RFC: introducing request identification

2013-12-03 Thread Andrew Laski

On 11/29/13 at 03:56pm, haruka tanizawa wrote:

Thank you for your reply.
I completely misunderstood.


You're correct on request_id and task_id.
What I'm planning is a string field that a user can pass in with the

request and it will be part of the task representation.

That field will have no meaning to Nova, but a client like Heat could use

it to ensure that they don't send requests twice

by checking if there's a task with that field set.

I see.
Especially, this point is so good.
'Heat could use it to ensure that they don't send requests twice by
checking if there's a task with that field set.'

Moreover, I want to ask some questions about instance-tasks-api.
(I'm sorry it's a little bit long...)

* Is instance-tasks-api process outside of Nova? Is it standalone?


This is something that's entirely contained within Nova.  It's just 
adding a different representation of what is already occurring with 
task_states on an instance.



* About 'user can pass in with the request'
 When user specifies task_id, task_id would be which user specified.
 And if the user doesn't specify a task_id, is the task_id generated automatically
by Nova?
 (like correlation_id, which oslo generates automatically when no one
specifies it.)


I think it's better to think of it as a 'tag' field, not task_id.  
task_id is something that would be generated within Nova, but a tag 
field would allow a client to specify a small amount of data to attach 
to the task.  Like a token that could be used to identify requests that 
have been made.  So if nothing is specified the field will remain blank.
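The tag idea above could be sketched roughly as follows; this is a hypothetical illustration of the idempotency pattern, and the client method names (`list_tasks`, `reboot`) are invented, not real Nova API calls:

```python
# Hypothetical sketch: how a client such as Heat might use a
# client-supplied tag on instance tasks to avoid sending the same
# request twice.  The API shown here is illustrative only.

def ensure_reboot(client, server_id, tag):
    """Request a reboot only if no task carrying this tag exists yet."""
    # Look for an existing task with our tag.
    existing = [t for t in client.list_tasks(server_id)
                if t.get("tag") == tag]
    if existing:
        # A task with this tag was already created; don't resend.
        return existing[0]
    # No matching task: safe to issue the request, attaching the tag
    # so a later retry can detect it.
    return client.reboot(server_id, tag=tag)
```

The field has no meaning to Nova itself; it is purely a token the client checks before retrying.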



* About the management states of the API
 Which is correct: 'Queued, Active, Error, Complete' or 'pending, in
progress, and completed'?


The implementation hasn't reached this point yet so it's up for 
discussion, but 'Queued, Active, Error, Complete' is the current plan.
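Those four states could be modelled as a simple transition map. The state names come from this thread; the particular transitions allowed between them are an assumption for illustration only:

```python
# Illustrative only: the proposed task states and a plausible set of
# transitions between them.  'Queued, Active, Error, Complete' are the
# names from the thread; the transition map itself is an assumption.

ALLOWED = {
    "Queued": {"Active"},
    "Active": {"Error", "Complete"},
    "Error": set(),     # terminal
    "Complete": set(),  # terminal
}

def advance(state, new_state):
    """Move a task to new_state, rejecting invalid transitions."""
    if new_state not in ALLOWED[state]:
        raise ValueError("invalid transition %s -> %s" % (state, new_state))
    return new_state
```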



 And for example 'live migration', there are 'pre migration',
'migration (migrateToURI)' and 'post migration'.
 Do you care about each detailed task, or about 'live migrating' as a whole?
 Does 'in progress' (for example) refer to the progress of 'pre migration'
or of 'live migration'?


I think it makes sense for live migration to be a task, and any 
associated steps would be sub resources under that task.  When we start 
to look at cancelling tasks it makes sense to cancel a live migration 
rather than cancelling a pre migration.
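That parent/sub-task relationship might look roughly like this; the class and field names are hypothetical, intended only to show cancellation cascading from the migration task to its incomplete steps:

```python
# Hedged sketch: a live migration as one parent task whose steps are
# sub-resources, where cancellation targets the parent and cascades
# down.  The structure and names here are invented for illustration.

class Task:
    def __init__(self, name, steps=()):
        self.name = name
        self.state = "Queued"
        self.steps = [Task(s) for s in steps]

    def cancel(self):
        # Cancelling the parent cancels any not-yet-complete steps.
        self.state = "Cancelled"
        for step in self.steps:
            if step.state != "Complete":
                step.cancel()

migration = Task("live-migration",
                 steps=["pre-migration", "migration", "post-migration"])
```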



* About the relation to 'TaskFlow'.
 Nova has not adopted TaskFlow yet.
 However, I think TaskFlow's persistence of flow state would be a good
helper for cancelling tasks.
 (I think cancelling is not in scope for i-2.)
 What do you think of this relation and the future?


I think this is something to consider in the future.  For now I'm more 
focused on the user visibility into tasks than how they're implemented 
within Nova.  But there is a lot of implementation improvement that can 
happen later.




I would appreciate it if you could update the etherpad or blueprint with
more detail or the data flow of instance-tasks-api.

Sincerely, Haruka Tanizawa


2013/11/28 Andrew Laski andrew.la...@rackspace.com


On 11/22/13 at 10:14am, haruka tanizawa wrote:


Thanks for your reply.

 I'm working on the implementation of instance-tasks-api[0] in Nova and



this is what I've been moving towards so far.
Yes, I know. I think that is good idea.

 The API will accept a string to be a part of the task but it will have



meaning only to the client, not to Nova.  Then if tasks can be searched
or
filtered by that field I think that would meet the requirements you laid
out above, or is something missing?
Hmmm, as far as I understand, keystone(keystone work plan blueprint)
generate request_id to each request.
(I think that is a good idea.)
And task_id is generated by instance-tasks-api.
Is my understanding of this correct?
Or if I miss something, thanks for telling me anything.



You're correct on request_id and task_id.  What I'm planning is a string
field that a user can pass in with the request and it will be part of the
task representation.  That field will have no meaning to Nova, but a client
like Heat could use it to ensure that they don't send requests twice by
checking if there's a task with that field set.



Haruka Tanizawa



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread Russell Bryant
On 12/03/2013 09:22 AM, Joe Gordon wrote:
 Hi all,
 
 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So
 next time you want to fix a typo, instead of just fixing a single one
 you can go ahead and fix a whole bunch.
 
 https://github.com/lyda/misspell-check
 
 To install it:
   $ pip install misspellings
 
 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -
 
 
 Sample output:
 
 http://paste.openstack.org/show/54354

Are we going to start gating on spellcheck of code and commit messages?  :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread John Griffith
On Tue, Dec 3, 2013 at 11:38 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/03/2013 09:22 AM, Joe Gordon wrote:
 HI all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So
 next time you want to fix a typo, instead of just fixing a single one
 you can go ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354

 Are we going to start gating on spellcheck of code and commit messages?  :-)

NO please (please please please).  We have enough grammar reviewers
at this point already IMO and I honestly think I might puke if jenkins
fails my patch because I didn't put a '.' at the end of my comment
line in the code.  I'd much rather see us focus on things like... I
dunno... maybe having the code actually work?


 --
 Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread Nachi Ueno
2013/12/3 John Griffith john.griff...@solidfire.com:
 On Tue, Dec 3, 2013 at 11:38 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/03/2013 09:22 AM, Joe Gordon wrote:
 HI all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So
 next time you want to fix a typo, instead of just fixing a single one
 you can go ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354

 Are we going to start gating on spellcheck of code and commit messages?  :-)

 NO please (please please please).  We have enough grammar reviewers
 at this point already IMO and I honestly think I might puke if jenkins
 fails my patch because I didn't put a '.' at the end of my comment
 line in the code.  I'd much rather see us focus on things like... I
 dunno... maybe having the code actually work?

yeah, but maybe non-voting reviews by this tool would be helpful


 --
 Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Enhance UX of Launch Instance Form

2013-12-03 Thread Gabriel pettier
(Previous mail went out a bit fast)

These features could be developed iteratively to improve upon the 
existing code base:
 - First allow the modal view system to expand for better usage of screen 
   real-estate combined with responsiveness of the whole popin
 - Then rework existing menus to simplify user flow:
   - ephemeral/persistent switch
   - images/flavors choice list instead of combobox

I saw work had been started for the wizard-navigation in [1]

As for implementation details, we obviously need to discuss them; for example,
since AngularJS was recently added, should we use it for the view
implementation?

Feedback/directions?

[1] http://ask-openstackux.rhcloud.com/question/81/wizard-ui-for-workflow/  
 
On Tue, Dec 03, 2013 at 05:49:29PM +0100, Gabriel pettier wrote:
 Hi there
 
 I read the proposal and related documentation, and intend to start 
 implementing it into horizon.
 
 Regards
 
 on Wed Nov 20 15:09:05 UTC 2013 Cédric Soulas wrote
 
 
 Thanks for all the feedback on the Enhance UX of launch instance form 
 subject and its prototype.
 
 Try the latest version of the prototype:
 http://cedricss.github.io/openstack-dashboard-ux-blueprints/launch-instance
 
 This update was made after several discussions on these different channels:
 
 - openstack ux google group
 - launchpad horizon (and now launchpad openstack ux)
 - mailing list and IRC
 - the new ask bots for openstack UX
 
 We tried to write back most of the discussions on ask bot, and are now focusing 
 on this tool.
 
 Below a digest of those discussions, with links to ask bot (on each 
 subject, there are links to related blueprints, google doc drafts, etc)
 
 = General topics =
 
 - Modals and supporting different screen sizes [2]
   Current modal doesn't work well on the top 8 screen resolutions [2]
   => Responsive and full screen modal added on the prototype [1]
 
 - Wizard mode for some modals [3]
   => try the wizard [1]
 
 = Specific to launch instance =
 
 - Improve boot source options [4]
   * first choose to boot from ephemeral or persistent disk
   * if no ephemeral flavor are available, hide the selector
   * group by public, project, shared with me
   * warning message added for delete on terminate option (when boot from 
  persistent)
 
 - Scaling the flavor list [5]
   * sort the columns of the table. In particular: by name.
   * group of flavor list (for example: performance, standard...)?
 
 - Scaling the image list [5]
   * a scrollbar on the image list
   * limit the number of list items and add an 'x more instance snapshots -
  See more' line
   * a search / filter feature would be great, like discussed at the scaling 
  horizon design session
 
 - Step 1 / Step 2 workflow: when the user clicks select on one boot
 source item, it goes directly to step 2.
   If it goes back from step 2 to step 1:
   * the text Please select a boot source would be replaced with a Next 
  button
   * the button select on the selected boot source item would be replaced 
  with a check-mark (or equivalent).
   * the user would still have the possibility to select another boot source
 
 - flavor depending on image requirements and quotas available: 
* this is a very good point; there are lots of things to discuss
=> we should open a separate thread on this
  
 - Network: still a work in progress
   * if there is only a single choice, make it the default
 
 - Several wording updates (cancel, ephemeral boot source, ...)
 
 [1] 
 http://cedricss.github.io/openstack-dashboard-ux-blueprints/launch-instance
 [2] 
 http://ask-openstackux.rhcloud.com/question/11/modals-and-supporting-different-screen-sizes/
 [3] http://ask-openstackux.rhcloud.com/question/81/wizard-ui-for-workflow
 [4] 
 http://ask-openstackux.rhcloud.com/question/13/improve-boot-source-ux-ephemeral-vs-persistent-disk/
 [5] 
 http://ask-openstackux.rhcloud.com/question/12/enhance-the-selection-of-a-flavor-and-an-image/
 
 Best,
 
 Cédric
 
   Oct 11 17:11:26 UTC 2013, Jesse Pretorius jesse.pretorius at gmail.com  
   wrote:
  
   +1
   
   A few comments:
   
   1. Bear in mind that sometimes a user may not have access to any Ephemeral
   flavors, so the tabbing should ideally be adaptive. An alternative would
   not to bother with the tabs and just show a flavor list. In our deployment
   we have no flavors with ephemeral disk space larger than 0.
   2. Whenever there's a selection, but only one choice, make it a default
   choice. It's tedious to choose the only selection only because you have 
   to.
   It's common for our users to have one network/subnet defined, but the
   current UI requires them to switch tabs and select the network which is
   rather tedious.
   3. The selection of the flavor is divorced from the quota available and
   from the image requirements. Ideally those two items should somehow be
   incorporated. A user needs to know up-front that the server will build 
   based on both their quota and the image minimum requirements.
   4. We'd like to 


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread John Griffith
On Tue, Dec 3, 2013 at 11:54 AM, Nachi Ueno na...@ntti3.com wrote:
 2013/12/3 John Griffith john.griff...@solidfire.com:
 On Tue, Dec 3, 2013 at 11:38 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/03/2013 09:22 AM, Joe Gordon wrote:
 HI all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So
 next time you want to fix a typo, instead of just fixing a single one
 you can go ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354

 Are we going to start gating on spellcheck of code and commit messages?  :-)

 NO please (please please please).  We have enough grammar reviewers
 at this point already IMO and I honestly think I might puke if jenkins
 fails my patch because I didn't put a '.' at the end of my comment
 line in the code.  I'd much rather see us focus on things like... I
 dunno... maybe having the code actually work?

 yeah, but may be non-voting reviews by this tool is helpful

Fair enough... don't get me wrong I'm all for support of non-english
contributors etc.  I just think that the emphasis on grammar and
punctuation in reviews has gotten a bit out of hand as of late.  FWIW
I've never -1'd a patch (and never would) because somebody used its
rather than it's in a comment.  Or they didn't end a comment (NOT a
docstring) with a period.  I think it's the wrong place to spend
effort quite honestly.

That being said, I realize people will continue to do this sort of thing
(it's very important to get your -1 counts in the review stats) and
admittedly there is some value to spelling and grammar.  I just feel
that there are *real* issues and bugs that people could spend this
time that would actually have some significant and real benefit.

I'm obviously in the minority on this topic so I should probably just
yield at this point and get on board the grammar train.





 --
 Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-03 Thread Yathiraj Udupi (yudupi)
I totally agree on this meta-level scheduler aspect. It should separate the 
placement decision-making logic (for resources of any type, though it can 
start with Nova resources) from their actual creation, say VM creation.



This way the placement decisions can be relayed to each component's 
allocator, or whatever component ends up handling this after the 
separation.



So in this effort of separating the scheduler, I hope some clean interfaces 
will be created that separate these concerns. At a minimum, we should 
attempt to design clean, global meta-scheduling interfaces for the effort 
to follow.



Yathi.





-- Original message--

From: Debojyoti Dutta

Date: Tue, 12/3/2013 10:50 AM

To: OpenStack Development Mailing List (not for usage questions);

Cc: Boris Pavlovic;

Subject:Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest 
proposal for an external scheduler in our lifetime



I agree with RussellB on this … if the forklift's goal is just to separate 
the scheduler, there should be no new features etc. until the forklift is 
done, and it should work as is with very minor config changes.

A scheduler has several features, such as placing resources correctly. 
Ideally, this should be a simple service that can allocate any resource to 
any available bucket - balls in bins, VMs on hosts, blocks/blobs on 
disks/SSDs, etc. Maybe the scheduler should operate on meta-level resource 
maps for each type and delegate the precise decisions to the allocator for 
that type.
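The "balls in bins" idea could be sketched as a type-agnostic placement function that only picks a bucket, leaving the actual allocation to a per-type allocator. Everything below is invented for illustration, not an actual scheduler interface:

```python
# Rough sketch of a resource-type-agnostic placement service.
# `fits` and `score` are the per-type knowledge the caller supplies;
# all names and structures here are hypothetical.

def place(item, buckets, fits, score):
    """Return the best bucket for item, or None if nothing fits."""
    candidates = [b for b in buckets if fits(item, b)]
    if not candidates:
        return None
    return max(candidates, key=lambda b: score(item, b))

# Example: VMs in hosts, judged purely on free RAM.
hosts = [{"name": "h1", "free_ram": 2048},
         {"name": "h2", "free_ram": 8192}]
vm = {"ram": 4096}
best = place(vm, hosts,
             fits=lambda i, b: b["free_ram"] >= i["ram"],
             score=lambda i, b: b["free_ram"])
```

The same `place` call could operate on disks or any other bucket type, since it knows nothing about what the items are.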

debo


On Tue, Dec 3, 2013 at 9:58 AM, Russell Bryant 
rbry...@redhat.commailto:rbry...@redhat.com wrote:
On 12/03/2013 07:22 AM, Boris Pavlovic wrote:
 Hi all,


 Finally found a bit time to write my thoughts.

 There are few blockers that make really complex to build scheduler as a
 services or even to move main part of scheduler code to separated lib.
 We already have one unsuccessfully effort
 https://blueprints.launchpad.net/oslo/+spec/oslo-scheduler .

 Major problems that we faced were next:
 1) Hard connection with project db api layer (e.g. nova.db.api,
 cinder.db.api)
 2) Hard connection between db.models and host_states
 3) Hardcoded host states objects structure
 4) There is no namespace support in host states (so we are not able to
 keep all filters for all projects in the same place)
 5) Different API methods, that can't be effectively generalized.


 Main goals of no-db-scheduler effort are:
 1) Make scheduling much faster, storing data locally on each scheduler
 and just syncing states of them
 2) Remove connections between project.db.api and scheduler.db
 3) Make host_states just JSON like objects
 4) Add namespace support in host_states

 When this part will be finished, we will have actually only 1 problem
 what to do with DB API methods, and business logic of each project. What
 I see is that there are 2 different ways:

If the new project is just a forklift of the existing code that still
imports nova's db API and accesses Nova's DB, I don't think the initial
forklift necessarily has to be blocked on completing no-db-scheduler.
That can happen after just as easily (depending on which effort is ready
first).

 1) Make scheduler as a big lib, then implement RPC methods + bit of
 business logic in each project
 2) Move all RPC calls from nova,cinder,ironic,... and business logic in
 1 scheduler as a service

Right now I think #2 is the approach we should take.  This is mainly
because there is common information that is needed for the scheduling
logic for resources in multiple projects.

--
Russell Bryant




--
-Debo~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread Joe Gordon
On Tue, Dec 3, 2013 at 10:46 AM, John Griffith
john.griff...@solidfire.comwrote:

 On Tue, Dec 3, 2013 at 11:38 AM, Russell Bryant rbry...@redhat.com
 wrote:
  On 12/03/2013 09:22 AM, Joe Gordon wrote:
  HI all,
 
  Recently I have seen a few patches fixing a few typos.  I would like to
  point out a really nifty tool to detect commonly misspelled words.  So
  next time you want to fix a typo, instead of just fixing a single one
  you can go ahead and fix a whole bunch.
 
  https://github.com/lyda/misspell-check
 
  To install it:
$ pip install misspellings
 
  To use it in your favorite openstack repo:
   $ git ls-files | grep -v locale | misspellings -f -
 
 
  Sample output:
 
  http://paste.openstack.org/show/54354
 
  Are we going to start gating on spellcheck of code and commit messages?
  :-)

 NO please (please please please).  We have enough grammar reviewers
 at this point already IMO and I honestly think I might puke if jenkins
 fails my patch because I didn't put a '.' at the end of my comment
 line in the code.  I'd much rather see us focus on things like... I
 dunno... maybe having the code actually work?



That is explicitly not what this tool does. See the readme here:
https://github.com/lyda/misspell-check

And no, after a few IRC discussions there are no plans to gate on this.



 
  --
  Russell Bryant
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread Nachi Ueno
2013/12/3 John Griffith john.griff...@solidfire.com:
 On Tue, Dec 3, 2013 at 11:54 AM, Nachi Ueno na...@ntti3.com wrote:
 2013/12/3 John Griffith john.griff...@solidfire.com:
 On Tue, Dec 3, 2013 at 11:38 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/03/2013 09:22 AM, Joe Gordon wrote:
 HI all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So
 next time you want to fix a typo, instead of just fixing a single one
 you can go ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354

 Are we going to start gating on spellcheck of code and commit messages?  
 :-)

 NO please (please please please).  We have enough grammar reviewers
 at this point already IMO and I honestly think I might puke if jenkins
 fails my patch because I didn't put a '.' at the end of my comment
 line in the code.  I'd much rather see us focus on things like... I
 dunno... maybe having the code actually work?

 yeah, but may be non-voting reviews by this tool is helpful

 Fair enough... don't get me wrong I'm all for support of non-english
 contributors etc.  I just think that the emphasis on grammar and
 punctuation in reviews has gotten a bit out of hand as of late.  FWIW
 I've never -1'd a patch (and never would) because somebody used its
 rather than it's in a comment.  Or they didn't end a comment (NOT a
 docstring) with a period.  I think it's the wrong place to spend
 effort quite honestly.

 That being said, I realize people will continue to this sort of thing
 (it's very important to get your -1 counts in the review stats) and
 admittedly there is some value to spelling and grammar.  I just feel
 that there are *real* issues and bugs that people could spend this
 time that would actually have some significant and real benefit.

 I'm obviously in the minority on this topic so I should probably just
 yield at this point and get on board the grammar train.

Maybe this is off topic.
First of all, I agree that the importance of such grammar errors is not high.
We should focus on real issues.

However, IMO we should -1 even for such cases (using its)

I just sent a patch fixing misspellings in neutron:
https://review.openstack.org/#/c/59809/

There were 50 misspellings, so it may be only small mistakes per patch,
but the number will keep growing..






 --
 Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tool for detecting commonly misspelled words

2013-12-03 Thread John Griffith
On Tue, Dec 3, 2013 at 12:18 PM, Nachi Ueno na...@ntti3.com wrote:
 2013/12/3 John Griffith john.griff...@solidfire.com:
 On Tue, Dec 3, 2013 at 11:54 AM, Nachi Ueno na...@ntti3.com wrote:
 2013/12/3 John Griffith john.griff...@solidfire.com:
 On Tue, Dec 3, 2013 at 11:38 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/03/2013 09:22 AM, Joe Gordon wrote:
 HI all,

 Recently I have seen a few patches fixing a few typos.  I would like to
 point out a really nifty tool to detect commonly misspelled words.  So
 next time you want to fix a typo, instead of just fixing a single one
 you can go ahead and fix a whole bunch.

 https://github.com/lyda/misspell-check

 To install it:
   $ pip install misspellings

 To use it in your favorite openstack repo:
  $ git ls-files | grep -v locale | misspellings -f -


 Sample output:

 http://paste.openstack.org/show/54354

 Are we going to start gating on spellcheck of code and commit messages?  
 :-)

 NO please (please please please).  We have enough grammar reviewers
 at this point already IMO and I honestly think I might puke if jenkins
 fails my patch because I didn't put a '.' at the end of my comment
 line in the code.  I'd much rather see us focus on things like... I
 dunno... maybe having the code actually work?

 yeah, but may be non-voting reviews by this tool is helpful

 Fair enough... don't get me wrong I'm all for support of non-english
 contributors etc.  I just think that the emphasis on grammar and
 punctuation in reviews has gotten a bit out of hand as of late.  FWIW
 I've never -1'd a patch (and never would) because somebody used its
 rather than it's in a comment.  Or they didn't end a comment (NOT a
 docstring) with a period.  I think it's the wrong place to spend
 effort quite honestly.

 That being said, I realize people will continue to this sort of thing
 (it's very important to get your -1 counts in the review stats) and
 admittedly there is some value to spelling and grammar.  I just feel
 that there are *real* issues and bugs that people could spend this
 time that would actually have some significant and real benefit.

 I'm obviously in the minority on this topic so I should probably just
 yield at this point and get on board the grammar train.

 Maybe this is off topic.
 First, I agree that the importance of such grammar errors is not high;
 we should focus on real issues.
 
 However, IMO we should -1 even such cases (using its for it's).
 
 I just sent a patch fixing misspellings in neutron:
 https://review.openstack.org/#/c/59809/
 
 There were 50 misspellings. Each one may be a small mistake in a single
 patch, but they will keep accumulating.

Ok, point taken... I'll be quiet on the subject now :)






 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [savanna] neutron floating IP assignment unexpected

2013-12-03 Thread Jon Maron
Hi,

  I have the following configuration in savanna.conf:

# If set to True, Savanna will use floating IPs to communicate
# with instances. To make sure that all instances have
# floating IPs assigned in Nova Network set
# auto_assign_floating_ip=True in nova.conf. If Neutron is
# used for networking, make sure that all Node Groups have
# floating_ip_pool parameter defined. (boolean value)
use_floating_ips=false

# Use Neutron or Nova Network (boolean value)
use_neutron=true

# Use network namespaces for communication (only valid to use in conjunction
# with use_neutron=True)
use_namespaces=true

  My nova.conf file DOES NOT have auto_assign_floating_ip set to True.

  My dashboard local settings file explicitly sets AUTO_ASSIGNMENT_ENABLED = 
False

  Yet, the spawned VMs are generated with a floating IP:

[root@cn082 savanna(keystone_demo)]# nova list
+--------------------------------------+----------------+--------+---------------------------------+
| ID                                   | Name           | Status | Networks                        |
+--------------------------------------+----------------+--------+---------------------------------+
| e32572ae-397b-4a61-9562-7a52fe6cd738 | dc1-master-001 | ACTIVE | private=10.0.0.14, 172.24.4.232 |
| da50a103-0f64-4b33-9bd1-586e8b1c981c | dc1-slave-001  | ACTIVE | private=10.0.0.15, 172.24.4.233 |
+--------------------------------------+----------------+--------+---------------------------------+
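  When scripting around output like this, one way to tell fixed from floating
  addresses is to split the Networks column. A hedged sketch follows — it
  assumes the usual `label=fixed_ip, floating_ip` format nova prints when a
  floating IP is associated, which is an assumption based on the listing above:

```python
def split_networks(networks_field):
    """Split a nova-list Networks value such as
    'private=10.0.0.14, 172.24.4.232' into
    (network_label, fixed_ip, floating_ips)."""
    # Everything before '=' is the network label; the rest is the IP list.
    label, _, addrs = networks_field.partition("=")
    ips = [ip.strip() for ip in addrs.split(",") if ip.strip()]
    # First address is the fixed IP; any further addresses are floating IPs.
    return label, ips[0], ips[1:]

label, fixed, floating = split_networks("private=10.0.0.14, 172.24.4.232")
print(label, fixed, floating)
print("has floating IP:", bool(floating))
```

  With no floating IP associated, e.g. `split_networks("private=10.0.0.5")`,
  the floating list comes back empty, which makes the unexpected assignment
  above easy to detect in a loop over `nova list` rows.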

  Any idea why this is happening?

-- Jon




Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-03 Thread Russell Bryant
On 12/03/2013 03:17 AM, Robert Collins wrote:
 The team size was a minimum, not a maximum - please add your names.
 
 We're currently waiting on the prerequisite blueprint to land before
 work starts in earnest; and for the blueprint to be approved (he says,
 without having checked to see if it has been now:))

I approved it.

https://blueprints.launchpad.net/nova/+spec/forklift-scheduler-breakout

Once this is moving, please keep me in the loop on progress.

-- 
Russell Bryant



  1   2   >