Re: [openstack-dev] [horizon] Javascript development improvement

2013-11-21 Thread Ladislav Smola

Hello,

as long as Node won't be a production dependency, it shouldn't be a 
problem, right? I give a +1 to that.


Regards
Ladislav

On 11/20/2013 05:01 PM, Maxime Vidori wrote:

Hi all, I know it is pretty annoying but I have to resurrect this subject.

With the integration of AngularJS into Horizon we will encounter a lot of 
issues with JavaScript. I ask you to reconsider bringing back Node.js as a 
development platform. I am not talking about production; we all agree that 
Node is not ready for production, and we do not want it as a backend. But the 
fact is that we need a lot of its features, which will improve testing and 
development. Currently, we do not have any JavaScript code quality checks: 
jslint is a great tool and can be used easily with Node. AngularJS also 
provides end-to-end testing based on Node.js; testing is important, especially 
if we start to put more logic into JS. Selenium is currently used just to run 
the qUnit tests; we could bring these tests into Node and have a clean, 
unified testing platform. Tests will be easier to perform.

Finally (do not punch me in the face), lessc, which is used for Bootstrap, is 
completely integrated into it. I am afraid that modern JavaScript development 
cannot be performed without this tool.

Regards

Maxime Vidori


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Climate] How we agree to determine that a user has admin rights?

2013-11-21 Thread Sylvain Bauza

Hi Yuriy, Dolph et al.

I'm implementing a climate.policy.check_is_admin(ctx) which will look at 
the policy.json entry 'context_is_admin' to know which roles have 
elevated rights for Climate.


This check must be called when creating a context, to know whether we can 
allow extra rights. The is_admin flag is pretty handy because it can 
be set based on that check.
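
For illustration, here is a rough sketch of what I have in mind, assuming 
the oslo-incubator policy module synced into Climate (the wiring and rule 
name are of course still up for discussion):

# Sketch only: assumes climate.openstack.common.policy (oslo-incubator)
# and a policy.json rule such as "context_is_admin": "role:admin".
from climate.openstack.common import policy

_enforcer = policy.Enforcer()


def check_is_admin(ctx):
    # Return True if the context's credentials match 'context_is_admin'.
    credentials = ctx.to_dict()
    return _enforcer.enforce('context_is_admin', credentials, credentials)

# The context constructor could then do:  self.is_admin = check_is_admin(self)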


If we say that approach is bad, how should we manage this?

-Sylvain



On 21/11/2013 06:18, Yuriy Taraday wrote:
On Wed, Nov 20, 2013 at 9:57 PM, Dolph Mathews dolph.math...@gmail.com wrote:


On Wed, Nov 20, 2013 at 10:52 AM, Yuriy Taraday yorik@gmail.com wrote:

On Wed, Nov 20, 2013 at 8:42 PM, Dolph Mathews dolph.math...@gmail.com wrote:

is_admin is short-sighted and not at all granular -- it
needs to die, so avoid imitating it.


 I suggest keeping it in case we need to elevate privileges
from code.


Can you expand on this point? It sounds like you want to ignore
the deployer-specified authorization configuration...


No, we're not ignoring it. In Keystone we have two options to become 
an admin: either have an 'admin'-like role (set in policy.json by the 
deployer) or have 'is_admin' set (the only way to get that in Keystone is 
to pass the configured admin_token). We don't have a bootstrap problem in 
any other service, so we don't need an admin_token. But we might need to 
run code that requires admin privileges on behalf of a user who doesn't 
have them. Other projects use get_admin_context() or something like that 
for this.
I suggest we keep the option to have such 'in-code sudo' using 
is_admin, which will be mentioned in policy.json, but limit is_admin 
usage to just that.


--

Kind regards, Yuriy.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Nova] Does Nova really need an SQL database?

2013-11-21 Thread Soren Hansen
2013/11/20 Chris Friesen chris.frie...@windriver.com:
 What about a hybrid solution?
 There is data that is only used by the scheduler--for performance reasons
 maybe it would make sense to store that information in RAM as described at

 https://blueprints.launchpad.net/nova/+spec/no-db-scheduler

 For the rest of the data, perhaps it could be persisted using some alternate
 backend.

What would that solve?

-- 
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer  | http://www.openstack.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-21 Thread Flavio Percoco

On 20/11/13 09:37 -0600, Dolph Mathews wrote:




On Wed, Nov 20, 2013 at 9:09 AM, Thierry Carrez thie...@openstack.org wrote:

   Hi everyone,

   How should we proceed to make sure UX (user experience) is properly
   taken into account in OpenStack development? Historically it was hard
   for UX sessions (especially the ones that affect multiple projects, like
   CLI / API experience) to get session time at our design summits. This
   visibility issue prompted the recent request by UX-minded folks to make
   UX an official OpenStack program.

   However, as was apparent in the Technical Committee meeting discussion
   about it yesterday, most of us are not convinced that establishing and
   blessing a separate team is the most efficient way to give UX the
   attention it deserves. Ideally, UX-minded folks would get active
   *within* existing project teams rather than form some sort of
   counter-power as a separate team. In the same way we want scalability
   and security mindset to be present in every project, we want UX to be
   present in every project. It's more of an advocacy group than a
   program imho.

   So my recommendation would be to encourage UX folks to get involved
   within projects and during project-specific weekly meetings to
   efficiently drive better UX there, as a direct project contributor. If
   all the UX-minded folks need a forum to coordinate, I think [UX] ML
   threads and, maybe, a UX weekly meeting would be an interesting first step.


++

UX is an issue at nearly every layer. OpenStack has a huge variety of
interfaces, all of which deserve consistent, top tier UX attention and
community-wide HIG's-- CLIs, client libraries / language bindings, HTTP APIs,
web UIs, messaging and even pluggable driver interfaces. Each type of interface
generally caters to a different audience, each with slightly different
expectations.


As already mentioned in other emails on this thread, I think it'd be
valuable to have a member of each project coordinate with the UX
team. I think this is something we all want to have in the projects
we're working on, and also something that every core member should
keep in mind when reviewing patches.

I like the idea of having a security-like team for UX. We could also
tag bugs - this came up in the last TC meeting - when we think they
need the UX team's attention.

Also, as part of the review process, when a patch affects the UX,
reviewers could add one of the UX core members to the review and
request their feedback.

The above should guarantee cross-project UX enforcement to some
extent.


From my point of view, UX is not just something we need to have
experts on, but something we all need to care about. Having a UX team
will definitely help with this matter.


   There would still be an issue with UX session space at the Design
   Summit... but that's a well known issue that affects more than just UX:
   the way our design summits were historically organized (around programs
   only) made it difficult to discuss cross-project and cross-program
   issues. To address that, the plan is to carve cross-project space into
   the next design summit, even if that means a little less topical
   sessions for everyone else.


I'd be happy to contribute a design session to focus on improving UX across
the community, and I would certainly attend!


We also discussed having a cross-project session at the summit.
I think this is becoming more and more important.

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] How we agree to determine that a user has admin rights?

2013-11-21 Thread Yuriy Taraday
On Thu, Nov 21, 2013 at 12:37 PM, Sylvain Bauza sylvain.ba...@bull.net wrote:

  Hi Yuriy, Dolph et al.

 I'm implementing a climate.policy.check_is_admin(ctx) which will look at
 policy.json entry 'context_is_admin' for knowing which roles do have
 elevated rights for Climate.

 This check must be called when creating a context for knowing if we can
 allow extra rights. The is_admin flag is pretty handsome because it can be
 triggered upon that check.

 If we say that one is bad, how should we manage that ?

 -Sylvain


There should be no need for is_admin and some special policy rule like
context_is_admin.
Every action that might require granular access control (for controllers it
should be every action at all, I guess) should call enforce() from
openstack.common.policy to check the appropriate rule in policy.json.
Rules for actions that require the user to be an admin should contain a
reference to some basic rule like admin_required in Keystone (see
https://github.com/openstack/keystone/blob/master/etc/policy.json).

We should not check from code if the user is an admin. We should always ask
openstack.common.policy whether the user has access to the action.
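
For example, a controller action could look roughly like this (just a
sketch, assuming the oslo-incubator Enforcer API and a hypothetical
'climate:lease:create' rule wired to admin_required in policy.json):

# Sketch only: assumes openstack.common.policy is synced into the project
# and policy.json contains e.g.
#   "admin_required": "role:admin",
#   "climate:lease:create": "rule:admin_required"
from climate.openstack.common import policy

enforcer = policy.Enforcer()


def create_lease(context, lease_values):
    # Raises PolicyNotAuthorized if the rule is not satisfied.
    enforcer.enforce('climate:lease:create',
                     {'project_id': context.project_id},
                     context.to_dict(),
                     do_raise=True)
    # ... proceed with the actual lease creation ...

This way the code never asks "is this user an admin?"; it only asks whether
this specific action is allowed.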

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Static IPAddress configuration in nova.conf file

2013-11-21 Thread Balaji P
Hi,
Please let us know if anybody has a similar requirement, as described below.
Also any suggestions/comments on this approach will be helpful.
Regards,
Balaji.P
From: P Balaji-B37839
Sent: Tuesday, November 19, 2013 4:09 PM
To: OpenStack Development Mailing List
Cc: Mannidi Purandhar Sairam-B39209; Lingala Srikanth Kumar-B37208; Somanchi 
Trinath-B39208; B Veera-B37207; Addepalli Srini-B22160
Subject: [openstack-dev] [nova] Static IPAddress configuration in nova.conf file

Hi,
Nova-compute on compute nodes sends a fanout_cast to the scheduler on the 
controller node once every 60 seconds. The nova.conf configuration file on a 
compute node has to be configured with the management network IP address, and 
there is no provision to configure the data network IP address in the 
configuration file. If there is any change in the IP address of the management 
network interface or the data network interface, we have to update them 
manually in the compute node's configuration file.
We would like to create a blueprint to address this static configuration of 
IP addresses for the management and data network interfaces of a compute node 
by providing the interface names in the nova.conf file instead, so that any 
change in the IP addresses of these interfaces is picked up dynamically, 
reflected in the fanout_cast message to the controller, and used to update 
the DB.
We understand that current deployments use scripts to handle this static IP 
address configuration in the nova.conf file.
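
To illustrate the idea (a rough sketch only; the option names and helpers are 
hypothetical, not existing Nova code), the compute service could resolve the 
configured interface names to their current IP addresses at report time:

# Rough sketch: resolve a configured interface name (e.g. 'eth1') to its
# current IPv4 address before each fanout_cast, so nova.conf does not need
# a hard-coded IP address.
import netifaces


def ip_for_interface(ifname):
    # Return the first IPv4 address bound to the given interface.
    addrs = netifaces.ifaddresses(ifname).get(netifaces.AF_INET, [])
    if not addrs:
        raise ValueError('No IPv4 address on interface %s' % ifname)
    return addrs[0]['addr']


# Hypothetical nova.conf options: mgmt_interface / data_interface.
def build_host_state(mgmt_interface, data_interface):
    return {
        'management_ip': ip_for_interface(mgmt_interface),
        'data_ip': ip_for_interface(data_interface),
    }
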
Any comments/suggestions will be useful.
Regards,
Balaji.P







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Propose project story wiki idea

2013-11-21 Thread Clint Byrum
Excerpts from Boris Pavlovic's message of 2013-11-21 00:16:04 -0800:
 Clint,
 
 The main idea is to have processed by human history of project.
 
 It is really impossible to automatically aggregate all the data from the
 different sources -- IRC (main project chat/dev chat/meetings), mailing
 lists, code, reviews, summit discussions -- while using project-specific
 knowledge and the history of the project to produce short messages like
 https://wiki.openstack.org/wiki/Rally/Updates
 
 So the idea is that in each project we should have people who will
 aggregate all these sources for others and present a really short,
 high-level view of the situation. And these messages should be in one
 place (a wiki or other platform, not mailing lists) per project, so we
 will be able to quickly see what has happened in the project over the
 last few months and what the current goals are. This will also be very
 useful for new contributors.
 
 So aggregation of data is good (and should be done), but it is not enough.
 

I did not suggest aggregation of data. We have TONS of that, and we
don't need more.

I suggested a very simple way for project leaders and members to maintain
the current story during the meetings.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] How we agree to determine that a user has admin rights?

2013-11-21 Thread Sylvain Bauza

On 21/11/2013 10:04, Yuriy Taraday wrote:
On Thu, Nov 21, 2013 at 12:37 PM, Sylvain Bauza sylvain.ba...@bull.net wrote:


Hi Yuriy, Dolph et al.

I'm implementing a climate.policy.check_is_admin(ctx) which will
look at policy.json entry 'context_is_admin' for knowing which
roles do have elevated rights for Climate.

This check must be called when creating a context for knowing if
we can allow extra rights. The is_admin flag is pretty handsome
because it can be triggered upon that check.

If we say that one is bad, how should we manage that ?

-Sylvain


There should be no need for is_admin and some special policy rule like 
context_is_admin.
Every action that might require granular access control (for 
controllers it should be every action at all, I guess) should call 
enforce() from openstack.common.policy to check appropriate rule in 
policy.json.
Rules for actions that require user to be admin should contain a 
reference to some basic rule like admin_required in Keystone (see 
https://github.com/openstack/keystone/blob/master/etc/policy.json).


We should not check from code if the user is an admin. We should 
always ask openstack.common.policy if the user have access to the action.


--

Kind regards, Yuriy.



Thanks for all your thoughts, really appreciated. OK, I will discuss 
with Swann and see what needs to be modified accordingly.


I'll deliver a new patchset for https://review.openstack.org/#/c/57200/ 
(policies) based on the context patch from Swann and keeping is_admin, and 
then I'll iterate on removing the necessary parts.


-Sylvain
(Btw, it's a shame I spent a few days implementing policies without clear 
guidelines, copying Nova's approach with the latest Oslo policies; we 
definitely need developer documentation for that...)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Propose project story wiki idea

2013-11-21 Thread Thierry Carrez
Stefano Maffulli wrote:
 On 11/19/2013 09:33 PM, Boris Pavlovic wrote:
  The idea of this proposal is that every OpenStack project should have a
  story wiki page. It means publishing one short message every week that
  contains the most interesting updates from the last week and a high-level
  roadmap for the coming week. So by reading this for 10-15 minutes you can
  see what changed in the project and get a better understanding of its
  high-level roadmap.
 
 I like the idea.
 
 I have received requests to include high level summaries from all
 projects in the weekly newsletter but it's quite impossible for me to do
 that as I don't have enough understanding of each project to extrapolate
 the significant news from the noise. [...]

This is an interesting point. From various discussions I had with people
over the last year, the thing the development community is really really
after is weekly technical news that would cover updates from major
projects as well as deep dives into new features, tech conference CFPs,
etc. The reference in the area (and only example I have) is LWN
(lwn.net) and their awesome weekly coverage of what happens in Linux
kernel development and beyond.

The trick is, such coverage requires editors with deep technical
knowledge, both to distinguish significant news from marketing
noise *and* to be able to deep-dive into a new feature and make an
article out of it that makes a good read for developers or OpenStack
deployers. It's also a full-time job, even if some of those deep-dive
articles could just be contributed by their developers.

LWN is an exception rather than the rule in the tech press. It would be
absolutely awesome if we managed to build something like it to cover
OpenStack, but finding the right people (the right skill set + the will
and the time to do it) will be, I fear, extremely difficult.

Thoughts ? Volunteers ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] meaning of resource_id in a meter

2013-11-21 Thread Julien Danjou
On Wed, Nov 20 2013, Gordon Chung wrote:

 came across a question when reviewing 
 https://review.openstack.org/#/c/56019... basically, in Samples, user_id 
 and project_id attributes are pretty self-explanatory and map to Keystone 
 concepts pretty well but what is the criteria for setting resource_id? 
 maybe the ambiguity is that resource_id in a Sample is not the resource_id 
 from Keystone... so what is it? is it just any UUID that is accessible 
 from notification/response and if it is, is there a better (possibly more 
 consistent) alternative?

In all cases, these are free string fields. `user_id' and `project_id'
map to Keystone _most of the time_, especially with samples emitted by
Ceilometer itself. That can be false as soon as you send samples from
external systems to Ceilometer.

I don't have the feeling we should legislate on what a `resource_id` is.
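
To make that concrete, a sample sent from an external system could look
something like this (just an illustration of the fields being free-form
strings, not a schema):

# Illustration only: resource_id is whatever uniquely identifies the
# metered resource for the sender, not necessarily a Keystone/Nova UUID.
sample = {
    'counter_name': 'hardware.temperature',
    'counter_type': 'gauge',
    'counter_unit': 'C',
    'counter_volume': 42.0,
    'user_id': None,                        # no Keystone user involved
    'project_id': 'external-monitoring',    # free-form string
    'resource_id': 'rack3-chassis7-node1',  # free-form string
    'resource_metadata': {'sensor': 'ambient'},
}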

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to stage client major releases in Gerrit?

2013-11-21 Thread Thierry Carrez
Mark Washenberger wrote:
 [...]
 In order to mitigate that risk, I think it would make a lot of sense to
 have a place to stage and carefully consider all the breaking changes we
 want to make. I also would like to have that place be somewhere in
 Gerrit so that it fits in with our current submission and review
 process. But if that place is the 'master' branch and we take a long
 time, then we can't really release any bug fixes to the v0 series in the
 meantime.
 
 I can think of a few workarounds, but they all seem kinda bad. For
 example, we could put all the breaking changes together in one commit,
 or we could do all this prep in github.
 
 My question is, is there a correct way to stage breaking changes in
 Gerrit? Has some other team already dealt with this problem?
 [...]

It sounds like a case where we could use a feature branch. There have
been a number of them in the past when people wanted to incrementally
work on new features without affecting master, and at first glance
(haha) it sounds appropriate here. Infra team, thoughts ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Race condition between DB layer and plugin back-end implementation

2013-11-21 Thread Isaku Yamahata
On Wed, Nov 20, 2013 at 10:16:46PM -0800,
Gary Duan gd...@varmour.com wrote:

 Hi, Isaku and Edgar,

Hi.


 As part of the effort to implement L3 router service type framework, I have
 reworked L3 plugin to introduce a 2-step process, precommit and postcommit,
 similar to ML2. If you plan to work on L3 code, we can collaborate.

Sure, let's collaborate. This is in the discussion phase at the moment.
I envisage that our plan will be:
- 1st step: introduce the 2-step transition to the ML2 plugin
  (and hope other simple plugins will follow)
- 2nd step: introduce a locking protocol or some other mechanism, like
  async update similar to NVP, or taskflow...
  (design and implementation)
  ...
- Nth step: introduce a debugging/test framework,
  e.g. insert hooks to trigger an artificial sleep or context switch
  in debug mode in order to make races more likely to happen


 https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type-framework

Is there any code publicly available?


 Also, for advanced services such as FW and LBaas, there already is a state
 transition logic in the plugin. For example, a firewall instance can have
 CREATE, UPDATE and DELETE_PENDING states.

Oh great! Advanced services have more complex state than the core plugin,
I suppose. Are you aware of further issues?
Do they require further synchronization in addition to the 2-step transition?
Like locks, serialization, async updates...
Probably we can learn from other projects: nova, cinder...

thanks,
-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Scalable HAProxy Resources

2013-11-21 Thread Eugene Nikanorov
Hi,

Yes, it's briefly covered here:
http://docs.openstack.org/network-admin/admin/content/install_neutron-lbaas-agent.html
So basically you install LBaaS agents on network controllers or compute
nodes (not VMs!). Upon startup those agents register themselves in neutron
and become available for handling requests for LBaaS resource creation.
Of course, haproxy should be installed on the nodes that run LBaaS agents.

Thanks,
Eugene.



On Thu, Nov 21, 2013 at 10:56 AM, Mellquist, Peter
peter.mellqu...@hp.com wrote:

  Eugene,



 At the Hong Kong Summit, a new LBaaS feature was discussed that allows
 multiple HAProxy instances to be utilized in a horizontally scaling manner,
 allowing deployment of many HAProxy resources. Does any documentation exist
 on how to deploy multiple HAProxies? Does it matter how these are deployed
 (in the cloud on Nova instances, or on bare metal)?



 Thank You,

 Peter.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Subteam meeting on Thursday, 21

2013-11-21 Thread Eugene Nikanorov
Meeting reminder!
Today, 14-00 UTC, #openstack-meeting

Agenda for the meeting:
1) Announcements
2) Progress with qa and third-party testing
3) Feature design discussions

Thanks,
Eugene.


On Tue, Nov 19, 2013 at 12:30 PM, Eugene Nikanorov
enikano...@mirantis.com wrote:

 Hi folks,

 Let's meet on #openstack-meeting on Thursday, 21, at 14-00 UTC
 We'll discuss current progress and design of some of proposed features.

 Thanks,
 Eugene.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Zane Bitter

On 20/11/13 23:49, Christopher Armstrong wrote:

On Wed, Nov 20, 2013 at 2:07 PM, Zane Bitter zbit...@redhat.com wrote:

On 20/11/13 16:07, Christopher Armstrong wrote:

On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter zbit...@redhat.com wrote:

 On 19/11/13 19:14, Christopher Armstrong wrote:

thought we had a workable solution with the LoadBalancerMember
idea,
which you would use in a way somewhat similar to
CinderVolumeAttachment
in the above example, to hook servers up to load balancers.


I haven't seen this proposal at all. Do you have a link? How does it
handle the problem of wanting to notify an arbitrary service (i.e.
not necessarily a load balancer)?


It's been described in the autoscaling wiki page for a while, and I
thought the LBMember idea was discussed at the summit, but I wasn't
there to verify that :)

https://wiki.openstack.org/wiki/Heat/AutoScaling#LBMember.3F

Basically, the LoadBalancerMember resource (which is very similar to the
CinderVolumeAttachment) would be responsible for removing and adding IPs
from/to the load balancer (which is actually a direct mapping to the way
the various LB APIs work). Since this resource lives with the server
resource inside the scaling unit, we don't really need to get anything
_out_ of that stack, only pass _in_ the load balancer ID.


I see a couple of problems with this approach:

1) It makes the default case hard. There's no way to just specify a 
server and hook it up to a load balancer like you can at the moment. 
Instead, you _have_ to create a template (or template snippet - not 
really any better) to add this extra resource in, even for what should 
be the most basic, default case (scale servers behind a load balancer).


2) It relies on a plugin being present for any type of thing you might 
want to notify.


At summit and - to the best of my recollection - before, we talked about 
scaling a generic group of resources and passing notifications to a 
generic controller, with the types of both defined by the user. I was 
expecting you to propose something based on webhooks, which is why I was 
surprised not to see anything about it in the API. (I'm not prejudging 
that that is the way to go... I'm actually wondering if Marconi has a 
role to play here.)


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Extended query filtering feature

2013-11-21 Thread Ildikó Váncsa
Hi All,

As a follow-up to the 'Improving Ceilometer API query filtering' session at 
the HK Design Summit, I created a document about supporting complex query 
filters in Ceilometer. The document contains a brief summary of the previously 
suggested idea. The second part of the etherpad discusses the details of 
supporting complex filtering expressions in queries, following the discussion 
at the design summit session.
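
To give a flavor of what is being discussed (a purely hypothetical example; 
the exact operators and syntax are what the etherpad and the blueprint are 
meant to settle), a complex filter expression could look like:

# Hypothetical example only -- the operator names and nesting are
# illustrative, not a final API definition.
complex_filter = {
    "and": [
        {"=": {"meter": "cpu_util"}},
        {">": {"timestamp": "2013-11-01T00:00:00"}},
        {"or": [
            {"=": {"resource_id": "instance-0001"}},
            {"=": {"resource_id": "instance-0002"}},
        ]},
    ],
}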

A new blueprint will also be submitted this week about the improved idea.

The link to the etherpad: 
https://etherpad.openstack.org/p/Ceilometer_extended_API_query_filtering

Please feel free to comment on the document above. If you choose to write your 
comments in the etherpad doc, please use the 'Authorship colors' option, or you 
can also send your comments via email.

Thanks and Best Regards,
Ildiko Vancsa
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [heat] Custom Flavor creation through Heat

2013-11-21 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
-Original Message-
From: ext Steven Hardy [mailto:sha...@redhat.com] 
Sent: Thursday, November 14, 2013 2:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [heat] Custom Flavor creation through 
Heat

On Thu, Nov 14, 2013 at 08:22:57AM +, Kodam, Vijayakumar (EXT-Tata 
Consultancy Ser - FI/Espoo) wrote:
snip
 Thanks Steve Baker for the information. I am also waiting to hear from 
Steve Hardy whether the keystone trust system will fix the nova flavors admin 
privileges issue.

So, basically, no.  Trusts only allow you to delegate roles you already
have, so if nova requires admin to create a flavor, and the user creating
the heat stack doesn't have admin, then they can't create a flavor.  Trusts
won't solve this problem, they won't allow users to gain roles they don't
already have.

As Clint has pointed out, if you control the OpenStack deployment, you are
free to modify the policy for any API to suit your requirements - the
policy provided by projects is hopefully a sane set of defaults, but the
whole point of policy.json is that it's configurable.

 One option to control the proliferation of nova flavors is to make them 
private to the tenant (using flavor-access?) who created them. 
 This provides the needed privacy so that others tenants cannot view them.

This is the first step IMO - the nova flavors aren't scoped per tenant atm,
which will be a big problem if you start creating loads of non-public
flavors via stack templates.

At the moment, you can specify --is-public false when creating a flavor,
but this doesn't really mean that the flavor is private to the user, or
tenant, it just means non-admin users can't see it AFAICT.

So right now, if User1 in Tenant1 does:

nova flavor-create User1Flavor auto 128 10 1 --is-public false

Every user in every tenant will see it via tenant-list --all, if they have
the admin role.

This lack of proper role-based request scoping is an issue throughout
OpenStack AFAICS, Heat included (I'm working on fixing it).

Probably what we need is something like:
- Normal user : Can create a private flavor in a tenant where they
  have the Member role (invisible to any other users)
- Tenant Admin user : Can create public flavors in the tenants where they
  have the admin role (visible to all users in the tenant)
- Domain admin user : Can create public flavors in the domains where they
  have the admin role (visible to all users in all tenants in that domain)

Note the current admin user scope is like the last case, only for the
default domain.

So for now, I'm -1 on adding a heat resource to create flavors, we should
fix the flavor scoping in Nova first IMO.

Steve
___



Can we expect role-based request scoping for Heat in icehouse-1 or the near 
future?

VijayKumar


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-21 Thread Chuck Short
Hi

Has a decision been made on when this meeting is going to take place,
assuming it is still happening tomorrow?

Regards
chuck


On Mon, Nov 18, 2013 at 7:58 PM, Krishna Raman kra...@gmail.com wrote:


 On Nov 18, 2013, at 4:30 PM, Russell Bryant rbry...@redhat.com wrote:

 On 11/18/2013 06:30 PM, Dan Smith wrote:

 Not having been at the summit (maybe the next one), could somebody
 give a really short explanation as to why it needs to be a separate
 service? It sounds like it should fit within the Nova area. It is,
 after all, just another hypervisor type, or so it seems.


 But it's not just another hypervisor. If all you want from your
 containers is lightweight VMs, then nova is a reasonable place to put
 that (and it's there right now). If, however, you want to expose the
 complex and flexible attributes of a container, such as being able to
 overlap filesystems, have fine-grained control over what is shared with
 the host OS, look at the processes within a container, etc, then nova
 ends up needing quite a bit of change to support that.

 I think the overwhelming majority of folks in the room, after discussing
 it, agreed that Nova is infrastructure and containers is more of a
 platform thing. Making it a separate service lets us define a mechanism
 to manage these that makes much more sense than treating them like VMs.
 Using Nova to deploy VMs that run this service is the right approach,
 IMHO. Clayton put it very well, I think:

  If the thing you want to deploy has a kernel, then you need Nova. If
  your thing runs on a kernel, you want $new_service_name.

 I agree.

 Note that this is just another service under the compute project (or
 program, or whatever the correct terminology is this week).


 The Compute program is correct.  That is established terminology as
 defined by the TC in the last cycle.

 So while
 distinct from Nova in terms of code, development should be tightly
 integrated until (and if at some point) it doesn't make sense.


 And it may share a whole bunch of the code.

 Another way to put this:  The API requirements people have for
 containers include a number of features considered outside of the
 current scope of Nova (short version: Nova's scope stops before going
 *inside* the servers it creates, except file injection, which we plan to
 remove anyway).  That presents a problem.  A new service is one possible
 solution.

 My view of the outcome of the session was not it *will* be a new
 service.  Instead, it was, we *think* it should be a new service, but
 let's do some more investigation to decide for sure.

 The action item from the session was to go off and come up with a
 proposal for what a new service would look like.  In particular, we
 needed a proposal for what the API would look like.  With that in hand,
 we need to come back and ask the question again of whether a new service
 is the right answer.

 I see 3 possible solutions here:

 1) Expand the scope of Nova to include all of the things people want to
 be able to do with containers.

 This is my least favorite option.  Nova is already really big.  We've
 worked to split things out (Networking, Block Storage, Images) to keep
 it under control.  I don't think a significant increase in scope is a
 smart move for Nova's future.

 2) Declare containers as explicitly out of scope and start a new project
 with its own API.

 That is what is being proposed here.

 3) Some middle ground that is a variation of #2.  Consider Ironic.  The
 idea is that Nova's API will still be used for basic provisioning, which
 Nova will implement by talking to Ironic.  However, there are a lot of
 baremetal management things that don't fit in Nova at all, and those
 only exist in Ironic's API.

 I wanted to mention this option for completeness, but I don't actually
 think it's the right choice here.  With Ironic you have a physical
 resource (managed by Ironic), and then instances of an image running on
 these physical resources (managed by Nova).

 With containers, there's a similar line.  You have instances of
 containers (managed either by Nova or the new service) running on
 servers (managed by Nova).  I think there is a good line for separating
 concerns, with a container service on top of Nova.


 Let's ask ourselves:  How much overlap is there between the current
 compute API and a proposed containers API?  Effectively, what's the
 diff?  How much do we expect this diff to change in the coming years?

 The current diff demonstrates a significant clash with the current scope
 of Nova.  I also expect a lot of innovation around containers in the
 next few years, which will result in wanting to do new cool things in
 the API.  I feel that all of this justifies a new API service to best
 position ourselves for the long term.


 +1

  We need to come up with the API first before we decide if this is a new
  service or just something that needs to be added to Nova.

  How about we have all interested parties meet on IRC or a conf. call 

[openstack-dev] [Nova][Scheduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-21 Thread Robert Collins
https://etherpad.openstack.org/p/icehouse-external-scheduler

I'm looking for 4-5 folks who have:
 - modest Nova skills
 - time to follow a fairly mechanical (but careful and detailed) plan to
   break the status quo around scheduler extraction

And of course, discussion galore about the idea :)

Cheers,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Does cinder support HDS AMS 2300 storage now?

2013-11-21 Thread Steven Sonnenberg
On Thu Nov 21 03:32:10 UTC 2013, Lei asked:
 I just found that the HUS is supported. But I have an old AMS storage
 machine and want to use it, so I want to make sure: is it possible?
The answer is that both AMS and HUS arrays are supported.

Steve Sonnenberg
Master Solutions Consultant
Hitachi Data Systems
Cell: 443-929-6543

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Race condition between DB layer and plugin back-end implementation

2013-11-21 Thread Gary Duan
See inline,


On Thu, Nov 21, 2013 at 2:19 AM, Isaku Yamahata isaku.yamah...@gmail.com wrote:

 On Wed, Nov 20, 2013 at 10:16:46PM -0800,
 Gary Duan gd...@varmour.com wrote:

  Hi, Isaku and Edgar,

 Hi.


  As part of the effort to implement L3 router service type framework, I
 have
  reworked L3 plugin to introduce a 2-step process, precommit and
 postcommit,
  similar to ML2. If you plan to work on L3 code, we can collaborate.

 Sure, let's collaborate. This is discussion phase at this moment.
 I envisage that our plan will be
 - 1st step: introduce 2-step transition to ML2 plugin
 (and hope other simple plugin will follow)
 - 2nd step: introduce locking protocol or any other mechanism like
 async update similar NVP, or taskflow...
 (design and implementation)
   ...
 - Nth step: introduce debugging/test framework
 e.g. insert hooks to trigger artificial sleep or context switch
  in debug mode in order to make race more likely to happen


 
 https://blueprints.launchpad.net/neutron/+spec/l3-router-service-type-framework

 Is there any code publicly available?



I will do some clean up and post the patch for discussion.



  Also, for advanced services such as FW and LBaas, there already is a
 state
  transition logic in the plugin. For example, a firewall instance can have
  CREATE, UPDATE and DELETE_PENDING states.

 Oh great! Advanced services have more complex state than core plugin,
 I suppose. Are you aware of further issues?
 Does they require further synchronization in addition to 2-step transition?
 Like lock, serialization, async update...
 Probably we can learn from other projects, nova, cinder...


Advanced service plugins don't have a two-step transition today. IMO, if
vendor plugins/drivers don't maintain their own databases for these
services, it might not be urgent to add these steps to the plugin. How to
keep the database and the back-end implementation in sync needs more thought.
As configuring a backend device can be an async process, rolling back
database tables can be cumbersome.
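
To sketch the kind of two-step flow being discussed (illustrative pseudocode
in the ML2 style, not the actual patch; the driver and helper names are made
up):

# Rough sketch of the precommit/postcommit pattern under discussion.
def create_firewall(self, context, firewall):
    with context.session.begin(subtransactions=True):
        fw_db = self._create_firewall_db(context, firewall)  # DB row
        # precommit: validate/reserve while still inside the transaction,
        # so a failure here rolls the DB change back automatically.
        self.driver.create_firewall_precommit(context, fw_db)

    try:
        # postcommit: talk to the backend outside the DB transaction
        # (this may be asynchronous on the device side).
        self.driver.create_firewall_postcommit(context, fw_db)
    except Exception:
        # The backend failed after the DB commit: a plain rollback is no
        # longer possible, so mark the resource as errored (or clean it up).
        self._set_firewall_status(context, fw_db['id'], 'ERROR')
        raise
    return fw_db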

Thanks,
Gary


 thanks,
 --
 Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The recent gate performance and how it affects you

2013-11-21 Thread Michael Still
On Thu, Nov 21, 2013 at 9:10 AM, Michael Still mi...@stillhq.com wrote:
 On Thu, Nov 21, 2013 at 7:44 AM, Clark Boylan clark.boy...@gmail.com wrote:

 How do we avoid this in the future? Step one is reviewers that are
 approving changes (or reverifying them) should keep an eye on the gate
 queue.

 Talking on the -infra IRC channel just now, it has become clear to me
 that we need to stop approving _any_ change for now until we have the
 gate fixed. All we're doing at the moment is rechecking over and over
 because the gate is too unreliable to actually pass changes. This is
 making debugging the gate significantly harder.

 Could cores please refrain from approving code until the gate issues
 are resolved?

I am pleased to say that people much smarter than me seem to have now
resolved the gate issues. It is now safe to approve code once again.

Expect a long merge queue as the backlog clears, so perhaps start by
approving patches which were approved before we downed tools?

Cheers,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Diagnostic] Diagnostic API: summit follow-up

2013-11-21 Thread Matt Riedemann



On 11/20/2013 9:35 PM, Lingxian Kong wrote:

Hi Matt,

I noticed there is no consensus there [1]; any progress outside the ML?

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016385.html



2013/11/21 Oleg Gelbukh ogelb...@mirantis.com

Matt,

Thank you for bringing this up. I've been following this thread and
the idea is somewhat aligned with our approach, but we'd like to
take it one step further.

In this Diagnostic API, we want to collect information about system
state from sources outside OpenStack. We probably should
extract this call from the Nova API and use it in our implementation to
get hypervisor-specific information about the virtual machines which
exist on the node. But the idea is to get a view of the system
state that is an alternative to the one provided by the OpenStack APIs.

Maybe we should reconsider our naming to avoid confusion and call
this Instrumentation API or something like that?

--
Best regards,
Oleg Gelbukh


On Wed, Nov 20, 2013 at 6:45 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:



On Wednesday, November 20, 2013 7:52:39 AM, Oleg Gelbukh wrote:

Hi, fellow stackers,

There was a conversation during 'Enhance debugability'
session at the
summit about Diagnostic API which allows gate to get 'state
of world'
of OpenStack installation. 'State of world' includes
hardware- and
operating system-level configurations of servers in cluster.

This info would help to compare the expected effect of tests
on a
system with its actual state, thus providing Tempest with
ability to
see into it (whitebox tests) as one of possible use cases.
Another use
case is to provide input for validation of OpenStack
configuration files.

We're putting together an initial version of data model of
API with
example values in the following etherpad:
https://etherpad.openstack.org/p/icehouse-diagnostic-api-spec

This version covers most hardware and system-level
configurations
managed by OpenStack in Linux system. What is missing from
there? What
information you'd like to see in such an API? Please, feel
free to
share your thoughts in ML, or in the etherpad directly.


--
Best regards,
Oleg Gelbukh
Mirantis Labs


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi Oleg,

There has been some discussion over the nova virtapi's
get_diagnostics method.  The background is in a thread from
October [1].  The timing is pertinent since the VMware team is
working on implementing that API for their nova virt driver [2].
The main issue is that there is no consistency between the nova
virt drivers in how they would implement the get_diagnostics
API; they only return information that is hypervisor-specific.
The API docs and the current Tempest test cover the libvirt
driver's implementation, but wouldn't work for, say, the xen, vmware
or powervm drivers.

I think the solution right now is to namespace the keys in the
dict that is returned from the API so a caller could at least
check for that and know how to handle processing the result, but
it's not ideal.

Does your solution take into account the nova virtapi's
get_diagnostics method?

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016385.html
[2] https://review.openstack.org/#/c/51404/

--

Thanks,

Matt Riedemann



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com

Re: [openstack-dev] How to stage client major releases in Gerrit?

2013-11-21 Thread Robert Collins
On 22 November 2013 02:57, Monty Taylor mord...@inaugust.com wrote:




 This is a really complex one because of the gate. It's not just about
 the semver major version bump. I agree with earlier sentiment - the way
 to handle breaking changes is to bump the major version, and on the
 surface I don't have a problem with us doing that, since there is
 already a mechanism to deal with that.

 HOWEVER - it's more complex than that with us, because the client libs
 are part of our integration.

 We've already agreed on and have been operating on the assumption that
 client libs do not break rest api backwards compat. We're 3 seconds away
 from landing gating tests to ensure this is the case. The reasoning here
 is that an end user of OpenStack should not need to know what version of
 OpenStack a vendor is running - the latest python-glanceclient should
 work with diablo and it should work with icehouse. Nothing in this
 thread breaks that - I just bring it up because it's one of the overall
 design points that we'll be rubbing against.

I don't understand why branches would be needed here *if* the breaking
changes don't impact any supported release of OpenStack.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Proposals for Tempest core

2013-11-21 Thread Attila Fazekas
+1 for both!



- Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, November 15, 2013 2:38:27 PM
 Subject: [openstack-dev] [qa] Proposals for Tempest core
 
 It's post summit time, so time to evaluate our current core group for
 Tempest. There are a few community members that I'd like to nominate for
 Tempest core, as I've found their review feedback over the last few
 months to be invaluable. Tempest core folks, please +1 or -1 as you feel
 appropriate:
 
 Masayuki Igawa
 
 His review history is here -
 https://review.openstack.org/#/q/reviewer:masayuki.igawa%2540gmail.com+project:openstack/tempest,n,z
 
 Ken'ichi Ohmichi
 
 His review history is here -
 https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com+project:openstack/tempest,n,z
 
 They have both been actively engaged in the Tempest community, and have
 been actively contributing to both Tempest and OpenStack integrated
 projects, working hard to both enhance test coverage, and fix the issues
 found in the projects themselves. This has been hugely beneficial to
 OpenStack as a whole.
 
 At the same time, it's also time, I think, to remove Jay Pipes from
 tempest-core. Jay's not had much time for reviews of late, and it's
 important that the core review team is a working title about actively
 reviewing code.
 
 With this change Tempest core would end up no longer being majority
 north american, or even majority english as first language (that kind of
 excites me). Adjusting to both there will be another mailing list thread
 about changing our weekly meeting time to make it more friendly to our
 APAC contributors.
 
   -Sean
 
 --
 Sean Dague
 http://dague.net
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Search Project - summit follow up

2013-11-21 Thread Lyle, David

 On Wednesday, November 20, 2013 3:12 PM
 Dolph Mathews dolph.math...@gmail.com wrote:
 
 
 On Wed, Nov 20, 2013 at 1:06 PM, Dmitri Zimin(e) | StackStorm
 d...@stackstorm.com wrote:
 Thanks Terry for highlighting this:
 
 Yes, tenant isolation is the must. It's not reflected in the prototype - it
 queries Solr directly; but the proper implementation will go through the
 query API service, where ACL will be applied.
 
 UX folks are welcome to comment on expected queries.
 
 I think the key benefit of cross-resource index over querying DBs is that it
 saves the clients from implementing complex queries case by case, leaving
 flexibility to the user.
 
 I question the need for this service, as this service **should** very much be
 dependent on the clients for this functionality. Expecting to query backends
 directly must be a misunderstanding somewhere... Start with a specification
 for filtering across all services and advocate for it on both existing and new
 APIs.

First, I am all in favor of extensive and common filtering across services in 
OpenStack. Any improvements there would be extremely useful.

The benefit of a specific search service is an exhaustive index of all 
resources in the stack where any field in the data is queryable: names, ids, 
IPs, etc. So as an admin, knowing one piece of data, I can get results across 
services that may match. It returns data I may have known about, but also data 
I may have overlooked. When purging a project, I can find all resources tied 
to that project; same for a user or an IP. It's the difference between needing 
to know where to look for every piece of data vs. knowing a piece of data and 
being able to do something useful with it. The performance and usability 
difference is huge.

 
 
 -- Dmitri.
 
 
  
 
 --
 
 -Dolph

As mentioned before, this has to be guarded by role-based access controls, 
which is not a trivial addition: knowing which piece of data is available to 
whom.

And maintaining up-to-date data is also a lingering question.

-David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-21 Thread Ben Nemec

On 2013-11-21 10:20, Jesse Noller wrote:
On Nov 20, 2013, at 9:09 AM, Thierry Carrez thie...@openstack.org 
wrote:



Hi everyone,

How should we proceed to make sure UX (user experience) is properly
taken into account into OpenStack development ? Historically it was 
hard
for UX sessions (especially the ones that affect multiple projects, 
like

CLI / API experience) to get session time at our design summits. This
visibility issue prompted the recent request by UX-minded folks to 
make

UX an official OpenStack program.

However, as was apparent in the Technical Committee meeting discussion
about it yesterday, most of us are not convinced that establishing and
blessing a separate team is the most efficient way to give UX the
attention it deserves. Ideally, UX-minded folks would get active
*within* existing project teams rather than form some sort of
counter-power as a separate team. In the same way we want scalability
and security mindset to be present in every project, we want UX to be
present in every project. It's more of an advocacy group than a
program imho.

So my recommendation would be to encourage UX folks to get involved
within projects and during project-specific weekly meetings to
efficiently drive better UX there, as a direct project contributor. If
all the UX-minded folks need a forum to coordinate, I think [UX] ML
threads and, maybe, a UX weekly meeting would be an interesting first 
step.


There would still be an issue with UX session space at the Design
Summit... but that's a well known issue that affects more than just 
UX:
the way our design summits were historically organized (around 
programs

only) made it difficult to discuss cross-project and cross-program
issues. To address that, the plan is to carve cross-project space into
the next design summit, even if that means a little less topical
sessions for everyone else.

Thoughts ?


Hello again everyone - let me turn this around a little bit, I’m
working on proposing something based on the Oslo work and
openstack-client, and overall looking at the *Developer Experience*
focused around application developers and end-users more so than the
individual UX issues (configuration, UI, IxD, etc).

I’ve spoken to Everett and others about discussions had at the summit
around ideas like developer.openstack.org - and I think the idea is a
good start towards improving the lives of downstream application
developers. However, one of the problems (as I and others see it) is
that there’s a series of disconnects between the needs of the
individual projects to have a command line client for administrative /
basic usage and the needs of application developers and end-users (not
Openstack admins, just end users).

What I’d like to propose is a team that’s not focused on the
overarching UX (from horizon to **) but rather a team / group focused
on some key areas:

1: Creating an *application developer* focused SDK for OpenStack services
2: Unifying the back-end code and common tools for the command line
clients into one
3: Providing extension points for downstream vendors to add custom
extensions as needed
4: Based on 1, make deriving project-specific CLIs a matter of
importing/subclassing and extending

This is a bit of a hybrid between what the awesome openstackclient
team has done to make a unified CLI, but takes a step back to focus on
a unified back end with clean APIs that can not only power CLIs, but
also act as an SDK. This would allow many vendors (Rackspace, for
example) to willingly drop their SDKs and leverage this unified back
end.

In my “perfect world” you’d be able to, as an application developer
targeting Openstack providers, do something close to (code sketch):

from openstack.api.auth import AuthClass
from openstack.api.nova import NovaClient
from openstack.api.nova import NovaAdmin

auth = AuthClass(…)

nova = NovaClient(auth)
nova.server.create(… block=True)

nova_admin = NovaAdmin(auth)
nova_admin.delete_flavor(…)

Downstream vendors could further extend each of these and either
create very thin shims or meta packages that add provider specific
services, e.g:

from openstack.vendor.rackspace.api.auth import AuthClass

…

The end goals being:

1: Provide a common REST client back end for all the things (a rough
sketch of what I mean follows the list below)
2: Collapse all common functions (such as error retries) into a common lib

3: DO NOT DICTATE a concurrency system: no eventlet, no greenlet. Just
Python; allow application developers to use what they need to.
4: Provide a cliff based extension system for vendors
5: Document everything.
6: Python 3 & 2 compatible code base
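
To make goals 1-3 a bit more concrete, here is a rough sketch (the module
and class names are invented for illustration; nothing like this exists
yet) of the kind of shared back end I mean:

# Sketch only: one shared HTTP layer that project clients subclass.
import json

import requests  # plain requests: no eventlet/greenlet dictated


class BaseHTTPClient(object):
    # Shared REST back end: auth header injection, retries, JSON handling.

    def __init__(self, auth, retries=3):
        self.auth = auth          # object exposing .endpoint and .token
        self.retries = retries
        self.session = requests.Session()

    def request(self, method, path, body=None, **kwargs):
        url = self.auth.endpoint.rstrip('/') + path
        headers = {'X-Auth-Token': self.auth.token,
                   'Content-Type': 'application/json'}
        data = json.dumps(body) if body is not None else None
        last_exc = None
        for _ in range(self.retries):
            try:
                resp = self.session.request(method, url, headers=headers,
                                            data=data, **kwargs)
                resp.raise_for_status()
                return resp.json() if resp.content else None
            except requests.RequestException as exc:
                # Retry on failure; a real implementation would distinguish
                # transient errors from permanent ones.
                last_exc = exc
        raise last_exc


class NovaClient(BaseHTTPClient):
    def server_create(self, server_spec):
        return self.request('POST', '/servers', body={'server': server_spec})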

As I said earlier, this would build on work already in flight within
OpenStack, and would additionally allow vendors such as Rackspace to
contribute to this effort directly and reduce the proliferation of
SDKs/CLIs/etc. Existing SDKs could be end-of-lifed. The team working
on this would be comprised of people focused on working across the
OpenStack projects, not just as dictators of supreme design, but
actually implementing a 

Re: [openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer

2013-11-21 Thread Devananda van der Veen
On Thu, Nov 21, 2013 at 12:08 AM, Ladislav Smola lsm...@redhat.com wrote:

  Responses inline.


 On 11/20/2013 07:14 PM, Devananda van der Veen wrote:

 Responses inline.

  On Wed, Nov 20, 2013 at 2:19 AM, Ladislav Smola lsm...@redhat.com wrote:

 Ok, I'll try to summarize what will be done in the near future for
 Undercloud monitoring.

 1. There will be Central agent running on the same host(hosts once the
 central agent horizontal scaling is finished) as Ironic


  Ironic is meant to be run with more than one conductor service. By the
 i-2 milestone we should be able to do this, and running at least 2
 conductors will be recommended. When will Ceilometer be able to run with
 multiple agents?


 Here it is described and tracked:
 https://blueprints.launchpad.net/ceilometer/+spec/central-agent-improvement


Thanks - I've subscribed to it.


On a side note, it is a bit confusing to call something a central
 agent if it is meant to be horizontally scaled. The ironic-conductor
 service has been designed to scale out in a similar way to nova-conductor;
 that is, there may be many of them in an AZ. I'm not sure that there is a
 need for Ceilometer's agent to scale in exactly a 1:1 relationship with
 ironic-conductor?


 Yeah we have already talked about that. Maybe some renaming will be in
 place later. :-) I don't think it has to be a 1:1 mapping. The only
 requirement was to have the Hardware agent on hosts with ironic-conductor,
 so it has access to the management network, right?


Correct.

 2. It will have an SNMP pollster. The SNMP pollster will be able to get the
 list of hosts and their IPs from Nova (last time I
 checked it was in Nova) so it can poll them for stats. Hosts to poll
 can also be defined statically in a config file.


  Assuming all the undercloud images have an SNMP daemon baked in, which
 they should, then this is fine. And yes, Nova can give you the IP addresses
 for instances provisioned via Ironic.



 Yes.


 3. It will have an IPMI pollster that will poll the Ironic API, getting the
 list of hosts and a fixed set of stats (basically everything
 that we can get :-))


  No -- I thought we just agreed that Ironic will not expose an API for
 IPMI data. You can poll Nova to get a list of instances (that are on bare
 metal) and you can poll Ironic to get a list of nodes (either nodes that
 have an instance associated, or nodes that are unprovisioned) but this will
 only give you basic information about the node (such as the MAC addresses
 of its network ports, and whether it is on/off, etc).


 Ok sorry I have misunderstood the:
 If there is a fixed set of information (eg, temp, fan speed, etc) that
 ceilometer will want,let's make a list of that and add a driver interface
 within Ironic to abstract the collection of that information from physical
 nodes. Then, each driver will be able to implement it as necessary for that
 vendor. Eg., an iLO driver may poll its nodes differently than a generic
 IPMI driver, but the resulting data exported to Ceilometer should have the
 same structure.

 I thought I'd read the data will be exposed, but it will be just an internal
 Ironic abstraction that will be polled by Ironic and sent directly to the
 Ceilometer collector. So same as point 4, right? Yeah I guess this
 will be easier to implement.


Yes -- you are correct. I was referring to an internal abstraction around
different hardware drivers.
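
To illustrate what such an internal abstraction might look like, here is a
rough sketch; the class and method names are hypothetical and not the actual
Ironic interface, the point is only that every driver returns the same
structure to Ceilometer:

# Hypothetical sketch of an internal per-driver hardware metrics
# abstraction -- illustrative names only, not the real Ironic interface.
import abc

import six


@six.add_metaclass(abc.ABCMeta)
class HardwareMetricsInterface(object):
    """Interface each hardware driver would implement."""

    @abc.abstractmethod
    def get_sensor_data(self, node):
        """Return sensor readings for a node as a nested dict,
        e.g. {'temperature': {...}, 'fan_speed': {...}, 'voltage': {...}}.
        """


class GenericIPMIMetrics(HardwareMetricsInterface):
    def get_sensor_data(self, node):
        # A real driver would call out to IPMI here; the stub only shows
        # the common structure every driver must return.
        return {'temperature': {'ambient': 24.0},
                'fan_speed': {'fan1': 4200},
                'voltage': {'12V': 12.1}}


class ILOMetrics(HardwareMetricsInterface):
    def get_sensor_data(self, node):
        # An iLO driver may poll its nodes differently, but the data
        # exported to Ceilometer keeps the same shape.
        return {'temperature': {'ambient': 23.5},
                'fan_speed': {'fan1': 3900},
                'voltage': {'12V': 12.0}}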





 4. Ironic will also emit messages (basically all events regarding the
 hardware) and send them directly to Ceilometer collector


  Correct. I've updated the BP:

  https://blueprints.launchpad.net/ironic/+spec/add-ceilometer-agent

  Let me know if that looks like a good description.


 Yeah, seems great. I would maybe remove the word 'Agent', since Ironic
 will send it directly to the Ceilometer collector, so Ironic acts as the agent,
 right?


Fair point - I have updated the BP and renamed it to

https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer







 -Devananda



 Does it seem correct? I think that is the basic set we must have in order to
 have the Undercloud monitored. We can then build on that.

 Kind regards,
 Ladislav



 On 11/20/2013 09:22 AM, Julien Danjou wrote:

 On Tue, Nov 19 2013, Devananda van der Veen wrote:

 If there is a fixed set of information (eg, temp, fan speed, etc) that
 ceilometer will want,

 Sure, we want everything.

 let's make a list of that and add a driver interface
 within Ironic to abstract the collection of that information from
 physical
 nodes. Then, each driver will be able to implement it as necessary for
 that
 vendor. Eg., an iLO driver may poll its nodes differently than a generic
 IPMI driver, but the resulting data exported to Ceilometer should have
 the
 same structure.

 I like the idea.

 An SNMP agent doesn't fit within the scope of Ironic, as far as I see, so
 this would need to be implemented by Ceilometer.

 We're working on adding pollster for that indeed.

 As far as where the SNMP agent would need to run, it 

[openstack-dev] [Neutron][IPv6] Meeting logs from the first IRC meeting

2013-11-21 Thread Collins, Sean (Contractor)
Meeting minutes and the logs for the Neutron IPv6 meeting have been
posted.

We will not meet next week, due to the Thanksgiving holiday in the US.

Our next meeting will be Thursday Dec 5th - 2100 UTC, where we will
review the goals from this week's meeting and look to create actionable
items for I-2.

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam
[2] http://eavesdrop.openstack.org/meetings/neutron_ipv6/2013/
-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ALL] Wheel-enabling patches

2013-11-21 Thread Flavio Percoco

Greetings,

There are some patches that add support for building wheels. The patch
adds a `[wheel]` section with `universal = True`

`universal=True` means the application supports py2/py3, which is not
the case for most (all?) openstack projects. So, please, do not
approve those patches.

Glance case: https://review.openstack.org/#/c/57132/

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-21 Thread Chuck Short
On Thu, Nov 21, 2013 at 12:58 PM, Sam Alba sam.a...@gmail.com wrote:

 On Thu, Nov 21, 2013 at 9:39 AM, Krishna Raman kra...@gmail.com wrote:
 
  On Thu, Nov 21, 2013 at 8:57 AM, Sam Alba sam.a...@gmail.com wrote:
 
  I wish we can make a decision during this meeting. Is it confirmed for
  Friday 9am pacific?
 
 
  Friday 9am Pacific seems to be the best time for this meeting. Can we use
  the #openstack-meeting channel for this?
  If not, then I can find another channel.
 
  For the agenda, I propose
   - going through https://etherpad.openstack.org/p/containers-service-api and
  understand capabilities of all container technologies
   + would like the experts on each of those technologies to fill us in
   - go over the API proposal and see what we need to change.

 I think it's too early to go through the API. Let's first go through
 all options discussed before to support containers in openstack
 compute:
 #1 Have this new compute service for containers (other than Nova)
 #2 Extend Nova virt API to support containers
 #3 Support containers API as a third API for Nova

 Depending how it goes, then it makes sense to do an overview of the API I
 think.

 What do you guys think?


+1 for me




  On Thu, Nov 21, 2013 at 8:24 AM, Chuck Short chuck.sh...@canonical.com
 
  wrote:
   Hi
  
   Has a decision happened when this meeting is going to take place,
   assuming
   it is still taking place tomorrow.
  
   Regards
   chuck
  
  
   On Mon, Nov 18, 2013 at 7:58 PM, Krishna Raman kra...@gmail.com
 wrote:
  
  
   On Nov 18, 2013, at 4:30 PM, Russell Bryant rbry...@redhat.com
 wrote:
  
   On 11/18/2013 06:30 PM, Dan Smith wrote:
  
   Not having been at the summit (maybe the next one), could somebody
   give a really short explanation as to why it needs to be a separate
   service? It sounds like it should fit within the Nova area. It is,
   after all, just another hypervisor type, or so it seems.
  
  
   But it's not just another hypervisor. If all you want from your
   containers is lightweight VMs, then nova is a reasonable place to put
   that (and it's there right now). If, however, you want to expose the
   complex and flexible attributes of a container, such as being able to
   overlap filesystems, have fine-grained control over what is shared
 with
   the host OS, look at the processes within a container, etc, then nova
   ends up needing quite a bit of change to support that.
  
   I think the overwhelming majority of folks in the room, after
   discussing
   it, agreed that Nova is infrastructure and containers is more of a
   platform thing. Making it a separate service lets us define a
 mechanism
   to manage these that makes much more sense than treating them like
 VMs.
   Using Nova to deploy VMs that run this service is the right approach,
   IMHO. Clayton put it very well, I think:
  
If the thing you want to deploy has a kernel, then you need Nova. If
your thing runs on a kernel, you want $new_service_name.
  
   I agree.
  
   Note that this is just another service under the compute project (or
   program, or whatever the correct terminology is this week).
  
  
   The Compute program is correct.  That is established terminology as
   defined by the TC in the last cycle.
  
   So while
   distinct from Nova in terms of code, development should be tightly
   integrated until (and if at some point) it doesn't make sense.
  
  
   And it may share a whole bunch of the code.
  
   Another way to put this:  The API requirements people have for
   containers include a number of features considered outside of the
   current scope of Nova (short version: Nova's scope stops before going
   *inside* the servers it creates, except file injection, which we plan
   to
   remove anyway).  That presents a problem.  A new service is one
   possible
   solution.
  
   My view of the outcome of the session was not it *will* be a new
   service.  Instead, it was, we *think* it should be a new service,
 but
   let's do some more investigation to decide for sure.
  
   The action item from the session was to go off and come up with a
   proposal for what a new service would look like.  In particular, we
   needed a proposal for what the API would look like.  With that in
 hand,
   we need to come back and ask the question again of whether a new
   service
   is the right answer.
  
   I see 3 possible solutions here:
  
   1) Expand the scope of Nova to include all of the things people want
 to
   be able to do with containers.
  
   This is my least favorite option.  Nova is already really big.  We've
   worked to split things out (Networking, Block Storage, Images) to
 keep
   it under control.  I don't think a significant increase in scope is a
   smart move for Nova's future.
  
   2) Declare containers as explicitly out of scope and start a new
   project
   with its own API.
  
   That is what is being proposed here.
  
   3) Some middle ground that is a variation of #2.  Consider Ironic.
  

[openstack-dev] [neutron] Group-based Policy language

2013-11-21 Thread Tim Hinrichs
At the Neutron group-based policy proposal meeting today, we discussed whether 
or not the proposal should include a concrete policy language.  We decided to 
send a note to the list to get additional feedback.

The proposed API extension includes the ability to insert/delete policy 
statements.  But we do not say which policy statements are valid.  The benefit 
of leaving the policy language unspecified is that each plugin can support a 
custom policy language, leading to maximum flexibility in terms of writing 
plugins.  The drawback of leaving the policy language unspecified is that 
there's no way for any person or other OS component to know which API calls are 
valid, unless we know which plugin is being used.  Said another way, the 
current proposal says there are API calls like insert-policy-statement and 
delete-policy-statement, but does not say which arguments are valid to give to 
those calls (and the valid arguments can differ from plugin to plugin).

The thought experiment we went through was to imagine writing a super 
stripped-down version of Heat that only builds applications with a DB tier and 
a Web tier, and the template for the app only specifies how many DB servers and 
how many Web servers we want.  We should be able to implement a function that 
takes the number of DB servers and the number of web servers as input and 
executes a sequence of Nova/Neutron API calls that deploys that app.  But 
without a concrete policy language, we can't use the Neutron policy API  b/c we 
don't know what arguments to give the insert-policy-statement call.
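
To make the thought experiment concrete, the deployment function would look
roughly like the sketch below. The Nova/Neutron client calls are simplified,
and the final (commented-out) policy call is exactly the part that cannot be
written without a concrete policy language:

# Sketch of the two-tier thought experiment. The Nova/Neutron client calls
# are simplified; the commented-out policy call at the end is the part that
# cannot be written today, because its valid arguments are unspecified.
def deploy_two_tier_app(nova, neutron, num_db, num_web, image, flavor):
    db_net = neutron.create_network({'network': {'name': 'db-tier'}})
    web_net = neutron.create_network({'network': {'name': 'web-tier'}})

    for i in range(num_db):
        nova.servers.create('db-%d' % i, image, flavor,
                            nics=[{'net-id': db_net['network']['id']}])
    for i in range(num_web):
        nova.servers.create('web-%d' % i, image, flavor,
                            nics=[{'net-id': web_net['network']['id']}])

    # Intended policy: "web tier may reach db tier on tcp/3306, nothing
    # else". With the API as currently proposed there is no defined way
    # to express it:
    # neutron.insert_policy_statement(???)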

In the end, we discussed adding a concrete language to the proposal.  Does 
anyone see a better alternative?

Thanks,
Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-11-21 Thread Carl Baldwin
Hello,

Please tell me if your experience is similar to what I experienced:

1.  I would see *at most one* MySQL server has gone away error for
each process that was spawned as an API worker.  I saw them within a
minute of spawning the workers and then I did not see these errors
anymore until I restarted the server and spawned new processes.

2.  I noted in patch set 7 the line of code that completely fixed this
for me.  Please confirm that you have applied a patch that includes
this fix.

https://review.openstack.org/#/c/37131/7/neutron/wsgi.py

3.  I did not change anything with pool_recycle or idle_interval in my
config files.  All I did was set api_workers to the number of workers
that I wanted to spawn.  The line of code with my comment in it above
was sufficient for me.
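
For anyone debugging the same symptom, the general shape of the fix being
discussed in this thread -- each forked worker discarding the SQLAlchemy
connection pool inherited from the parent -- is sketched below. This is an
illustration of the approach, not the literal contents of the patch:

# Sketch: each forked API worker throws away the SQLAlchemy connection
# pool it inherited from the parent, so it never reuses a connection
# another process is also using (the usual cause of "gone away" errors).
import os

from sqlalchemy import create_engine

engine = create_engine('mysql://user:pass@127.0.0.1/neutron',
                       pool_recycle=3600)


def spawn_api_worker(run_wsgi_server):
    pid = os.fork()
    if pid == 0:
        # Child process: dispose of inherited connections; the pool will
        # open fresh ones on first use in this process.
        engine.pool.dispose()
        run_wsgi_server()
        os._exit(0)
    return pid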

It could be that there is another cause for the errors that you're
seeing.  For example, is there a max connections setting in mysql that
might be exceeded when you spawn multiple workers?  More detail would
be helpful.

Cheers,
Carl

On Wed, Nov 20, 2013 at 7:40 PM, Zhongyue Luo zhongyue@intel.com wrote:
 Carl,

 By 2006 I mean the MySQL server has gone away error code.

 The error message was still appearing when idle_timeout is set to 1 and the
 quantum API server did not work in my case.

 Could you perhaps share your conf file when applying this patch?

 Thanks.



 On Thu, Nov 21, 2013 at 3:34 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Hi, sorry for the delay in response.  I'm glad to look at it.

 Can you be more specific about the error?  Maybe paste the error your
 seeing in paste.openstack.org?  I don't find any reference to 2006.
 Maybe I'm missing something.

 Also, is the patch that you applied the most recent?  With the final
 version of the patch it was no longer necessary for me to set
 pool_recycle or idle_interval.

 Thanks,
 Carl

 On Tue, Nov 19, 2013 at 7:14 PM, Zhongyue Luo zhongyue@intel.com
 wrote:
  Carl, Yingjun,
 
  I'm still getting the 2006 error even by configuring idle_interval to 1.
 
  I applied the patch to the RDO havana dist on centos 6.4.
 
  Are there any other options I should be considering such as min/max pool
  size or use_tpool?
 
  Thanks.
 
 
 
  On Sat, Sep 7, 2013 at 3:33 AM, Baldwin, Carl (HPCS Neutron)
  carl.bald...@hp.com wrote:
 
  This pool_recycle parameter is already configurable using the
  idle_timeout
  configuration variable in neutron.conf.  I tested this with a value of
  1
  as suggested and it did get rid of the mysql server gone away messages.
 
  This is a great clue but I think I would like a long-term solution that
  allows the end-user to still configure this like they were before.
 
  I'm currently thinking along the lines of calling something like
  pool.dispose() in each child immediately after it is spawned.  I think
  this should invalidate all of the existing connections so that when a
  connection is checked out of the pool a new one will be created fresh.
 
  Thoughts?  I'll be testing.  Hopefully, I'll have a fixed patch up
  soon.
 
  Cheers,
  Carl
 
  From:  Yingjun Li liyingjun1...@gmail.com
  Reply-To:  OpenStack Development Mailing List
  openstack-dev@lists.openstack.org
  Date:  Thursday, September 5, 2013 8:28 PM
  To:  OpenStack Development Mailing List
  openstack-dev@lists.openstack.org
  Subject:  Re: [openstack-dev] [Neutron] The three API server
  multi-worker
  process patches.
 
 
  +1 for Carl's patch, and i have abandoned my patch..
 
   About the `MySQL server gone away` problem, I fixed it by setting
   'pool_recycle' to 1 in db/api.py.
 
   On Friday, September 6, 2013, Nachi Ueno wrote:
 
  Hi Folks
 
  We choose https://review.openstack.org/#/c/37131/ -- This patch to go
  on.
  We are also discussing in this patch.
 
  Best
  Nachi
 
 
 
  2013/9/5 Baldwin, Carl (HPCS Neutron) carl.bald...@hp.com:
   Brian,
  
   As far as I know, no consensus was reached.
  
   A problem was discovered that happens when spawning multiple
   processes.
   The mysql connection seems to go away after between 10-60 seconds
   in
   my
   testing causing a seemingly random API call to fail.  After that, it
   is
   okay.  This must be due to some interaction between forking the
   process
   and the mysql connection pool.  This needs to be solved but I haven't
   had
   the time to look in to it this week.
  
   I'm not sure if the other proposal suffers from this problem.
  
   Carl
  
   On 9/4/13 3:34 PM, Brian Cline bcl...@softlayer.com wrote:
  
  Was any consensus on this ever reached? It appears both reviews are
   still
  open. I'm partial to review 37131 as it attacks the problem more
  concisely, and, as mentioned, combined the efforts of the two more
  effective patches. I would echo Carl's sentiments that it's an easy
  review minus the few minor behaviors discussed on the review thread
  today.
  
  We feel very strongly about these making it into Havana -- being
   confined
  to a single neutron-server instance per cluster or region is a huge
  

Re: [openstack-dev] Propose project story wiki idea

2013-11-21 Thread Nicholas Chase

On 11/21/2013 4:43 AM, Thierry Carrez wrote:


The trick is, such coverage requires editors with a deep technical
knowledge, both to be able to determine significant news from marketing
noise *and* to be able to deep dive into a new feature and make an
article out of it that makes a good read for developers or OpenStack
deployers. It's also a full-time job, even if some of those deep-dive
articles could just be contributed by their developers.

LWN is an exception rather than the rule in the tech press. It would be
absolutely awesome if we managed to build something like it to cover
OpenStack, but finding the right people (the right skill set + the will
and the time to do it) will be, I fear, extremely difficult.

Thoughts ? Volunteers ?


(raises hand)

As it happens, according to my job description, doing a "deep dive into
a new feature and make an article out of it that makes a good read for
developers or OpenStack deployers" IS my full-time job, and as of 
yesterday, so is keeping up with weekly technical news that would cover 
updates from major projects. :)  The information site's just emerging 
from beta (when I get back after Thanksgiving, likely) but I'm sure we 
can work something out.


So I'm happy to head this up, if nobody else has time.

  Nick




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for external connectivity

2013-11-21 Thread Jeremy Stanley
On 2013-11-21 13:59:16 +0100 (+0100), Salvatore Orlando wrote:
[...]
 In its default configuration the traffic from the OS instance is
 SNATed and the SRC IP will be rewritten to an address in the
 neutron's public network range (172.24.4.224/28 by default). If
 the OS instance is trying to reach a public server like
 www.google.com, then, assuming ip_forward is enabled on the
 devstack-gate VM,  the traffic should be forwarded via the
 default route with a src IP of 172.24.4.224/28.
 
 If the above is correct, will it be possible for the IP traffic
 to be correctly routed back to the Openstack instance?

We would probably need similar L4 NAT configuration on the
devstack-gate node to re-rewrite that outbound source address to the
global address of the interface (and then it will hit yet another
NAT egressing some providers, for example HPCloud).
-- 
{ PGP( 48F9961143495829 ); FINGER( fu...@cthulhu.yuggoth.org );
WWW( http://fungi.yuggoth.org/ ); IRC( fu...@irc.yuggoth.org#ccl );
WHOIS( STANL3-ARIN ); MUD( kin...@katarsis.mudpy.org:6669 ); }

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Oslo] Future of Key Distribution Server, Trusted Messaging

2013-11-21 Thread Adam Young

On 11/21/2013 01:55 AM, Russell Bryant wrote:

Greetings,

I'd like to check in on the status of this API addition:

 https://review.openstack.org/#/c/40692/

The last comment is:

propose against stackforge as discussed at summit?


Yes, it was discussed in a small group, and not officially.  That 
comment is just a place holder.


Instead of running it in Keystone, it will run in its own service. There 
really is nothing in Keystone that relates to KDS, nor the other way 
around.  KDS is Undercloud-specific functionality (for now) and not 
really appropriate to expose via the Service catalog.


The current thinking is that Pecan (and maybe WSME) and the current code 
base are the correct way to launch it.
Like all our web services, I suggest the production version run via 
mod_wsgi in Apache HTTPD to allow for TLS and X509 Certificate support.


The service/project will still be under the Keystone program (for now, 
we can discuss where it will live long term).  It should be a relatively 
short ramp up to get it deployed.


I know the Barbican folks are interested as well, and I expect they will 
be contributing to make this happen.




I don't see a session about this and from a quick look, don't see notes
related to it in other session etherpads.

When was this discussed?  Can you summarize it?

Last I heard, this was just being deferred to be merged early in
Icehouse [1].

This is blocking one of the most important security features for
OpenStack, IMO (trusted messaging) [2].  We've been talking about it for
years.  Someone has finally made some real progress on it and I feel
like it has been given little to no attention.

I'm not thrilled about the prospect of this going into a new project for
multiple reasons.

  - Given the priority and how long this has been dragging out, having to
wait for a new project to make its way into OpenStack is not very appealing.

  - A new project needs to be able to stand on its own legs.  It needs to
have a reasonably sized development team to make it sustainable.  Is
this big enough for that?

What's the thinking on this?

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-August/013992.html
[2] https://review.openstack.org/#/c/37913/




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing myself - looking for a IPv6 topology

2013-11-21 Thread Martinx - ジェームズ
Cool!

Thank you Kyle! I'll try to join the today's IPv6 meeting...

Cheers!
Thiago


On 20 November 2013 01:08, Kyle Mestery (kmestery) kmest...@cisco.com wrote:

 On Nov 19, 2013, at 7:23 PM, Martinx - ジェームズ thiagocmarti...@gmail.com
 wrote:
 
  One more thing...
 
  I'm thinking about the use case for Floating IPs in a NAT-less IPv6
 OpenStack environment.
 
 
  I can think in two use cases in a IPv6-Only Tenant Subnet:
 
  1- the Floating IP might be used to allocate more IPv6 addresses for an
 Instance (since there are no plans for NAT66, I believe it is not desired)
 but, instead of allocating it from the allocation pool, get it from the
 tenant subnet directly. This way, the IPv6 Floating IP will appear within
 the Instance itself, not at the L3 Namespace Router as it is with IPv4
 today.
 
  2- we can provide an IPv4 Floating IP address (for an IPv6-Only tenant)
 and the L3 Namespace Router will do the NAT46. This way, the old Internet
 will be able to seamlessly reach an IPv6-Only network.
 
 
  What do you guys have in mind / roadmap?!
 
  Cheers!
  Thiago
 
 Hi Thiago:

 An IPV6 subteam in Neutron was formed for the Icehouse release. The
 team will have weekly meetings in #openstack-meeting-alt on freenode
  Thursdays at 2100 UTC. See the meeting page here [1]. If you're planning
 to work on IPV6 in any form, it would be great to participate in these and
 help shape the IPV6 direction for Neutron.

 Thanks, and welcome aboard!
 Kyle

 [1] https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam

 
 
  On 19 November 2013 22:57, Martinx - ジェームズ thiagocmarti...@gmail.com
 wrote:
  Hello Stackers!
 
  I'm Thiago and I'm here on dev list mostly to watch you guys...
 
  Nevertheless, I want to say that I would love to test in deep, the IPv6
 support in OpenStack IceHouse.
 
  At a glance, what I'm looking for is more or less specified here, as
 follows:




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Glance] OSLO update

2013-11-21 Thread Doug Hellmann
On Tue, Nov 19, 2013 at 9:12 PM, John Griffith
john.griff...@solidfire.com wrote:

 On Mon, Nov 18, 2013 at 3:53 PM, Mark McLoughlin mar...@redhat.com
 wrote:
  On Mon, 2013-11-18 at 17:24 +, Duncan Thomas wrote:
  Random OSLO updates with no list of what changed, what got fixed etc
  are unlikely to get review attention - doing such a review is
  extremely difficult. I was -2ing them and asking for more info, but
  they keep popping up. I'm really not sure what the best way of
  updating from OSLO is, but this isn't it.
 
  Best practice is to include a list of changes being synced, for example:
 
https://review.openstack.org/54660
 
  Every so often, we throw around ideas for automating the generation of
  this changes list - e.g. cinder would have the oslo-incubator commit ID
  for each module stored in a file in git to tell us when it was last
  synced.
 
  Mark.
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Been away on vacation so I'm afraid I'm a bit late on this... but;

 I think the point Duncan is bringing up here is that there are some
 VERY large and significant patches coming from OSLO pulls.  The DB
 patch in particular being over 1K lines of code to a critical portion
 of the code is a bit unnerving to try and do a review on.  I realize
 that there's a level of trust that goes with the work that's done in
 OSLO and synchronizing those changes across the projects, but I think
 a few key concerns here are:

 1. Doing huge pulls from OSLO like the DB patch here are nearly
 impossible to thoroughly review and test.  Over time we learn a lot
 about real usage scenarios and the database and tweak things as we go,
 so seeing a patch set like this show up is always a bit unnerving and
 frankly nobody is overly excited to review it.

 2. Given a certain level of *trust* for the work that folks do on the
 OSLO side in submitting these patches and new additions, I think some
 of the responsibility on the review of the code falls on the OSLO
 team.  That being said there is still the issue of how these changes
 will impact projects *other* than Nova which I think is sometimes
 neglected.  There have been a number of OSLO synchs pushed to Cinder
 that fail gating jobs, some get fixed, some get abandoned, but in
 either case it shows that there wasn't any testing done with projects
 other than Nova (PLEASE note, I'm not referring to this particular
 round of patches or calling any patch set out, just stating a
 historical fact).

 3. We need better documentation in commit messages explaining why the
 changes are necessary and what they do for us.  I'm sorry but in my
  opinion the answer "it's the latest in OSLO and Nova already has it"
  is not enough of an answer.  The patches mentioned in
 this thread in my opinion met the minimum requirements because they at
 least reference the OSLO commit which is great.  In addition I'd like
 to see something to address any discovered issues or testing done with
 the specific projects these changes are being synced to.

 I'm in no way saying I don't want Cinder to play nice with the common
 code or to get in line with the way other projects do things but I am
 saying that I think we have a ways to go in terms of better
 communication here and in terms of OSLO code actually keeping in mind
 the entire OpenStack eco-system as opposed to just changes that were
 needed/updated in Nova.  Cinder in particular went through some pretty
 massive DB re-factoring and changes during Havana and there was a lot
 of really good work there but it didn't come without a cost and the
 benefits were examined and weighed pretty heavily.  I also think that
 some times the indirection introduced by adding some of the
 openstack.common code is unnecessary and in some cases makes things
 more difficult than they should be.

 I'm just not sure that we always do a very good ROI investigation or
 risk assessment on changes, and that opinion applies to ALL changes in
 OpenStack projects, not OSLO specific or anything else.

 All of that being said, a couple of those syncs on the list are
 outdated.  We should start by doing a fresh pull for these and if
 possible add some better documentation in the commit messages as to
 the justification for the patches that would be great.  We can take a
 closer look at the changes and the history behind them and try to get
 some review progress made here.  Mark mentioned some good ideas
 regarding capturing commit ID's from synchronization pulls and I'd
 like to look into that a bit as well.


+1 to all of this. We'll work on improving the documentation in commit
messages.
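
As an aside, one low-tech way to automate that list of changes, assuming
each project stores the last-synced oslo-incubator commit ID in a file (the
paths and filenames below are placeholders):

# Sketch: build the "list of changes" section of a sync commit message
# from a stored last-synced commit ID. Paths and layout are assumptions,
# not an existing convention.
import subprocess


def oslo_changes(incubator_dir, last_synced_commit, module_path):
    """List oslo-incubator commits touching module_path since the last sync."""
    out = subprocess.check_output(
        ['git', 'log', '--oneline',
         '%s..HEAD' % last_synced_commit, '--', module_path],
        cwd=incubator_dir)
    return out.decode('utf-8').splitlines()


if __name__ == '__main__':
    for line in oslo_changes('/path/to/oslo-incubator', 'abc1234',
                             'openstack/common/db'):
        print(line)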

At the same time, it would be nice to have some of the tweaks and
improvements you've made pushed back into Oslo to be shared. The db code in
particular is slated to come out of the incubator and become its own
library during 

Re: [openstack-dev] How to stage client major releases in Gerrit?

2013-11-21 Thread Mark Washenberger
On Thu, Nov 21, 2013 at 1:58 AM, Thierry Carrez thie...@openstack.org wrote:

 Mark Washenberger wrote:
  [...]
  In order to mitigate that risk, I think it would make a lot of sense to
  have a place to stage and carefully consider all the breaking changes we
  want to make. I also would like to have that place be somewhere in
  Gerrit so that it fits in with our current submission and review
  process. But if that place is the 'master' branch and we take a long
  time, then we can't really release any bug fixes to the v0 series in the
  meantime.
 
  I can think of a few workarounds, but they all seem kinda bad. For
  example, we could put all the breaking changes together in one commit,
  or we could do all this prep in github.
 
  My question is, is there a correct way to stage breaking changes in
  Gerrit? Has some other team already dealt with this problem?
  [...]

 It sounds like a case where we could use a feature branch. There have
 been a number of them in the past when people wanted to incrementally
 work on new features without affecting master, and at first glance
 (haha) it sounds appropriate here.


As a quick sidebar, what does a feature branch entail in today's parlance?


 Infra team, thoughts ?

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-21 Thread Sam Alba
On Thu, Nov 21, 2013 at 9:39 AM, Krishna Raman kra...@gmail.com wrote:

 On Thu, Nov 21, 2013 at 8:57 AM, Sam Alba sam.a...@gmail.com wrote:

 I wish we can make a decision during this meeting. Is it confirmed for
 Friday 9am pacific?


 Friday 9am Pacific seems to be the best time for this meeting. Can we use
 the #openstack-meeting channel for this?
 If not, then I can find another channel.

 For the agenda, I propose
  - going through https://etherpad.openstack.org/p/containers-service-api and
 understand capabilities of all container technologies
  + would like the experts on each of those technologies to fill us in
  - go over the API proposal and see what we need to change.

I think it's too early to go through the API. Let's first go through
all options discussed before to support containers in openstack
compute:
#1 Have this new compute service for containers (other than Nova)
#2 Extend Nova virt API to support containers
#3 Support containers API as a third API for Nova

Depending how it goes, then it makes sense to do an overview of the API I think.

What do you guys think?


 On Thu, Nov 21, 2013 at 8:24 AM, Chuck Short chuck.sh...@canonical.com
 wrote:
  Hi
 
  Has a decision happened when this meeting is going to take place,
  assuming
  it is still taking place tomorrow.
 
  Regards
  chuck
 
 
  On Mon, Nov 18, 2013 at 7:58 PM, Krishna Raman kra...@gmail.com wrote:
 
 
  On Nov 18, 2013, at 4:30 PM, Russell Bryant rbry...@redhat.com wrote:
 
  On 11/18/2013 06:30 PM, Dan Smith wrote:
 
  Not having been at the summit (maybe the next one), could somebody
  give a really short explanation as to why it needs to be a separate
  service? It sounds like it should fit within the Nova area. It is,
  after all, just another hypervisor type, or so it seems.
 
 
  But it's not just another hypervisor. If all you want from your
  containers is lightweight VMs, then nova is a reasonable place to put
  that (and it's there right now). If, however, you want to expose the
  complex and flexible attributes of a container, such as being able to
  overlap filesystems, have fine-grained control over what is shared with
  the host OS, look at the processes within a container, etc, then nova
  ends up needing quite a bit of change to support that.
 
  I think the overwhelming majority of folks in the room, after
  discussing
  it, agreed that Nova is infrastructure and containers is more of a
  platform thing. Making it a separate service lets us define a mechanism
  to manage these that makes much more sense than treating them like VMs.
  Using Nova to deploy VMs that run this service is the right approach,
  IMHO. Clayton put it very well, I think:
 
   If the thing you want to deploy has a kernel, then you need Nova. If
   your thing runs on a kernel, you want $new_service_name.
 
  I agree.
 
  Note that this is just another service under the compute project (or
  program, or whatever the correct terminology is this week).
 
 
  The Compute program is correct.  That is established terminology as
  defined by the TC in the last cycle.
 
  So while
  distinct from Nova in terms of code, development should be tightly
  integrated until (and if at some point) it doesn't make sense.
 
 
  And it may share a whole bunch of the code.
 
  Another way to put this:  The API requirements people have for
  containers include a number of features considered outside of the
  current scope of Nova (short version: Nova's scope stops before going
  *inside* the servers it creates, except file injection, which we plan
  to
  remove anyway).  That presents a problem.  A new service is one
  possible
  solution.
 
  My view of the outcome of the session was not it *will* be a new
  service.  Instead, it was, we *think* it should be a new service, but
  let's do some more investigation to decide for sure.
 
  The action item from the session was to go off and come up with a
  proposal for what a new service would look like.  In particular, we
  needed a proposal for what the API would look like.  With that in hand,
  we need to come back and ask the question again of whether a new
  service
  is the right answer.
 
  I see 3 possible solutions here:
 
  1) Expand the scope of Nova to include all of the things people want to
  be able to do with containers.
 
  This is my least favorite option.  Nova is already really big.  We've
  worked to split things out (Networking, Block Storage, Images) to keep
  it under control.  I don't think a significant increase in scope is a
  smart move for Nova's future.
 
  2) Declare containers as explicitly out of scope and start a new
  project
  with its own API.
 
  That is what is being proposed here.
 
  3) Some middle ground that is a variation of #2.  Consider Ironic.  The
  idea is that Nova's API will still be used for basic provisioning,
  which
  Nova will implement by talking to Ironic.  However, there are a lot of
  baremetal management things that don't fit in Nova at all, and 

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-21 Thread Steve Baker
On 11/21/2013 08:48 PM, Thomas Spatzier wrote:
 Excerpts from Steve Baker's message on 21.11.2013 00:00:47:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 21.11.2013 00:04
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/21/2013 11:41 AM, Clint Byrum wrote:
 Excerpts from Mike Spreitzer's message of 2013-11-20 13:46:25 -0800:
 Clint Byrum cl...@fewbar.com wrote on 11/19/2013 04:28:31 PM:
 snip

 I am worried about the explosion of possibilities that comes from
 trying
 to deal with all of the diff's possible inside an instance. If there is
 an
 actual REST interface for a thing, then yes, let's use that. For
 instance,
 if we are using docker, there is in fact a very straight forward way to
 say remove entity X. If we are using packages we have the same thing.
 However, if we are just trying to write chef configurations, we have to
 write reverse chef configurations.

 What I meant to convey is let's give this piece of the interface a lot
 of
 thought. Not this is wrong to even have. Given a couple of days now,
 I think we do need apply and remove. We should also provide really
 solid example templates for this concept.
 You're right, I'm already starting to see issues with my current
 approach.
 This smells like a new blueprint. I'll remove it from the scope of the
 current software config work and raise a blueprint to track
 remove-config.

 So I read thru those recent discussions and in parallel also started to
 update the design wiki. BTW, nanjj renamed the wiki to [1] (but also made a
 redirect from the previous ...-WIP page) and linked it as spec to BP [2].

 I'll leave out the remove-config thing for now. While thinking about the
 overall picture, I came up with some other comments:

 I thought about the name SoftwareApplier some more and while it is clear
 what it does (it applies a software config to a server), the naming is not
 really consistent with all the other resources in Heat. Every other
 resource type is called after the thing that you get when the template gets
 instantiated (a Server, a FloatingIP, a VolumeAttachment etc). In
 case of SoftwareApplier what you actually get from a user perspective is a
 deployed instance of the piece of software described be a SoftwareConfig.
 Therefore, I was calling it SoftwareDeployment orignally, because you get a
 software deployment (according to a config). Any comments on that name?
SoftwareDeployment is a better name, apart from those 3 extra letters.
I'll rename my POC.  Sorry nannj, you'll need to rename them back ;)

 If we think this thru with respect to remove-config (even though this
 needs more thought), a SoftwareApplier (that thing itself) would not really
 go to state DELETE_IN_PROGRESS during an update. It is always there on the
 VM but the software it deploys gets deleted and then reapplied or
 whatever ...

 Now thinking more about update scenarios (which we can leave for an
 iteration after the initial deployment is working), in my mental model it
 would be more consistent to have information for handle_create,
 handle_delete, handle_update kinds of events all defined in the
  SoftwareConfig resource. SoftwareConfig would represent configuration
 information for one specific piece of software, e.g. a web server. So it
 could provide all the information you need to install it, to uninstall it,
 or to update its config. By updating the SoftwareApplier's (or
 SoftwareDeployment's - my preferred name) state at runtime, the in-instance
  tools would grab the respective script or whatever and run it.

 So SoftwareConfig could look like:

  resources:
    my_webserver_config:
      type: OS::Heat::SoftwareConfig
      properties:
        http_port:
          type: number
        # some more config props

        config_create: http://www.example.com/my_scripts/webserver/install.sh
        config_delete: http://www.example.com/my_scripts/webserver/uninstall.sh
        config_update: http://www.example.com/my_scripts/webserver/applyupdate.sh


 At runtime, when a SoftwareApplier gets created, it looks for the
 'config_create' hook and triggers that automation. When it gets deleted, it
 looks for the 'config_delete' hook and so on. Only config_create is
 mandatory.
 I think that would also give us nice extensibility for future use cases.
 For example, Heat today does not support something like stop-stack or
 start-stack which would be pretty useful though. If we have it one day, we
 would just add a 'config_start' hook to the SoftwareConfig.


 [1]
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec
 [2] https://blueprints.launchpad.net/heat/+spec/hot-software-config

With the caveat that what we're discussing here is a future enhancement...

The problem I see with config_create/config_update/config_delete in a
single SoftwareConfig is that we probably can't assume these 3 scripts
consume the same inputs and produce the same outputs.
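
For readers skimming the thread, the in-instance side of the hook idea
discussed above would amount to something like the sketch below; everything
here is hypothetical and only meant to show the dispatch on lifecycle
actions:

# Sketch of an in-instance tool dispatching on the deployment lifecycle
# action, per the config_create/config_delete/config_update idea above.
# Everything here is hypothetical.
import subprocess
import tempfile

import requests


def run_hook(software_config, action):
    """Fetch and run the script registered for a lifecycle action.

    software_config is the dict of properties delivered to the instance,
    e.g. {'config_create': 'http://.../install.sh', ...}. Only
    config_create would be mandatory.
    """
    url = software_config.get('config_%s' % action)
    if url is None:
        return  # nothing registered for this action (e.g. no uninstall)
    script = requests.get(url).text
    with tempfile.NamedTemporaryFile('w', suffix='.sh', delete=False) as f:
        f.write(script)
        path = f.name
    subprocess.check_call(['/bin/sh', path])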

Re: [openstack-dev] Top Gate Bugs

2013-11-21 Thread Ken'ichi Ohmichi
Hi Clark,

2013/11/21 Clark Boylan clark.boy...@gmail.com:

 Joe seemed to be on the same track with
 https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:57578,n,z
 but went far enough to revert the change that introduced that test. A
 couple people were going to keep hitting those changes to run them
 through more tests and see if 1251920 goes away.

Thanks for updating my patch and pushing to approve it.
Now 1251920 went away from gerrit :-)


 I don't quite understand why this test is problematic (Joe indicated
 it went in at about the time 1251920 became a problem). I would be
 very interested in finding out why this caused a problem.

test_create_backup deletes two server snapshot images at the end,
and I guess the deleting process runs in parallel with the next
test (test_get_console_output). As a result, a heavy workload occurs at
test_get_console_output, and it is a little difficult to get the console log.
The problem is worked around for now; I think we could solve it by waiting
for the end of the image delete in each test. I will dig into this problem more
next week.
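
One possible shape for that wait is sketched below; the helper and its
callback are placeholders rather than the exact Tempest client API:

# Sketch: wait until a snapshot image is really gone before the test
# returns, so the delete does not overlap the next test's workload.
# Names are illustrative, not the exact Tempest API.
import time


class ImageNotDeleted(Exception):
    pass


def wait_for_image_deletion(image_exists, image_id, timeout=300, interval=5):
    """Poll until image_exists(image_id) returns False, or time out.

    image_exists would wrap the images client's GET call and return False
    once the image responds with a 404.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if not image_exists(image_id):
            return
        time.sleep(interval)
    raise ImageNotDeleted('image %s still present after %ss'
                          % (image_id, timeout))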


 You can see frequencies for bugs with known signatures at
 http://status.openstack.org/elastic-recheck/

Thank you for the info, that is interesting.


Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Fox, Kevin M
There is a high priority approved blueprint for a Neutron PoolMember:
https://blueprints.launchpad.net/heat/+spec/loadballancer-pool-members

Thanks,
Kevin

From: Christopher Armstrong [chris.armstr...@rackspace.com]
Sent: Thursday, November 21, 2013 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

On Thu, Nov 21, 2013 at 5:18 AM, Zane Bitter zbit...@redhat.com wrote:
On 20/11/13 23:49, Christopher Armstrong wrote:
On Wed, Nov 20, 2013 at 2:07 PM, Zane Bitter zbit...@redhat.com wrote:

On 20/11/13 16:07, Christopher Armstrong wrote:

On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter zbit...@redhat.com wrote:

 On 19/11/13 19:14, Christopher Armstrong wrote:

thought we had a workable solution with the LoadBalancerMember
idea,
which you would use in a way somewhat similar to
CinderVolumeAttachment
in the above example, to hook servers up to load balancers.


I haven't seen this proposal at all. Do you have a link? How does it
handle the problem of wanting to notify an arbitrary service (i.e.
not necessarily a load balancer)?


It's been described in the autoscaling wiki page for a while, and I
thought the LBMember idea was discussed at the summit, but I wasn't
there to verify that :)

https://wiki.openstack.org/wiki/Heat/AutoScaling#LBMember.3F

Basically, the LoadBalancerMember resource (which is very similar to the
CinderVolumeAttachment) would be responsible for removing and adding IPs
from/to the load balancer (which is actually a direct mapping to the way
the various LB APIs work). Since this resource lives with the server
resource inside the scaling unit, we don't really need to get anything
_out_ of that stack, only pass _in_ the load balancer ID.

I see a couple of problems with this approach:

1) It makes the default case hard. There's no way to just specify a server and 
hook it up to a load balancer like you can at the moment. Instead, you _have_ 
to create a template (or template snippet - not really any better) to add this 
extra resource in, even for what should be the most basic, default case (scale 
servers behind a load balancer).

We can provide a standard resource/template for this, LoadBalancedServer, to 
make the common case trivial and only require the user to pass parameters, not 
a whole template.


2) It relies on a plugin being present for any type of thing you might want to 
notify.

I don't understand this point. What do you mean by a plugin? I was assuming 
OS::Neutron::PoolMember (not LoadBalancerMember -- I went and looked up the 
actual name) would become a standard Heat resource, not a third-party thing 
(though third parties could provide their own through the usual heat extension 
mechanisms).

(fwiw the rackspace load balancer API works identically, so it seems a pretty 
standard design).


At summit and - to the best of my recollection - before, we talked about 
scaling a generic group of resources and passing notifications to a generic 
controller, with the types of both defined by the user. I was expecting you to 
propose something based on webhooks, which is why I was surprised not to see 
anything about it in the API. (I'm not prejudging that that is the way to go... 
I'm actually wondering if Marconi has a role to play here.)


I think the main benefit of PoolMember is:

1) it matches with the Neutron LBaaS API perfectly, just like all the rest of 
our resources, which represent individual REST objects.

2) it's already understandable. I don't understand the idea behind 
notifications or how they would work to solve our problems. You can keep saying 
that the notifications idea will solve our problems, but I can't figure out how 
it would solve our problem unless someone actually explains it :)


--
IRC: radix
Christopher Armstrong
Rackspace

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Proposals for Tempest core

2013-11-21 Thread Sean Dague
With all tempest-core votes in, it's unanimous. Masayuki and Ken'ichi,
welcome to the team!

-Sean

On 11/21/2013 09:55 AM, Attila Fazekas wrote:
 +1 for both!
 
 
 
 - Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, November 15, 2013 2:38:27 PM
 Subject: [openstack-dev] [qa] Proposals for Tempest core

 It's post summit time, so time to evaluate our current core group for
 Tempest. There are a few community members that I'd like to nominate for
 Tempest core, as I've found their review feedback over the last few
 months to be invaluable. Tempest core folks, please +1 or -1 as you feel
 appropriate:

 Masayuki Igawa

 His review history is here -
 https://review.openstack.org/#/q/reviewer:masayuki.igawa%2540gmail.com+project:openstack/tempest,n,z

 Ken'ichi Ohmichi

 His review history is here -
 https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com+project:openstack/tempest,n,z

 They have both been actively engaged in the Tempest community, and have
 been actively contributing to both Tempest and OpenStack integrated
 projects, working hard to both enhance test coverage, and fix the issues
 found in the projects themselves. This has been hugely beneficial to
 OpenStack as a whole.

 At the same time, it's also time, I think, to remove Jay Pipes from
 tempest-core. Jay's not had much time for reviews of late, and it's
 important that the core review team is a working title about actively
 reviewing code.

 With this change Tempest core would end up no longer being majority
 north american, or even majority english as first language (that kind of
 excites me). Adjusting to both there will be another mailing list thread
 about changing our weekly meeting time to make it more friendly to our
 APAC contributors.

  -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Christopher Armstrong
On Thu, Nov 21, 2013 at 5:18 AM, Zane Bitter zbit...@redhat.com wrote:

 On 20/11/13 23:49, Christopher Armstrong wrote:

 On Wed, Nov 20, 2013 at 2:07 PM, Zane Bitter zbit...@redhat.com wrote:

 On 20/11/13 16:07, Christopher Armstrong wrote:

 On Tue, Nov 19, 2013 at 4:27 PM, Zane Bitter zbit...@redhat.com wrote:

  On 19/11/13 19:14, Christopher Armstrong wrote:

 thought we had a workable solution with the LoadBalancerMember
 idea,
 which you would use in a way somewhat similar to
 CinderVolumeAttachment
 in the above example, to hook servers up to load balancers.


 I haven't seen this proposal at all. Do you have a link? How does it
 handle the problem of wanting to notify an arbitrary service (i.e.
 not necessarily a load balancer)?


 It's been described in the autoscaling wiki page for a while, and I
 thought the LBMember idea was discussed at the summit, but I wasn't
 there to verify that :)

 https://wiki.openstack.org/wiki/Heat/AutoScaling#LBMember.3F

 Basically, the LoadBalancerMember resource (which is very similar to the
 CinderVolumeAttachment) would be responsible for removing and adding IPs
 from/to the load balancer (which is actually a direct mapping to the way
 the various LB APIs work). Since this resource lives with the server
 resource inside the scaling unit, we don't really need to get anything
 _out_ of that stack, only pass _in_ the load balancer ID.


 I see a couple of problems with this approach:

 1) It makes the default case hard. There's no way to just specify a server
 and hook it up to a load balancer like you can at the moment. Instead, you
 _have_ to create a template (or template snippet - not really any better)
 to add this extra resource in, even for what should be the most basic,
 default case (scale servers behind a load balancer).


We can provide a standard resource/template for this, LoadBalancedServer,
to make the common case trivial and only require the user to pass
parameters, not a whole template.


 2) It relies on a plugin being present for any type of thing you might
 want to notify.


I don't understand this point. What do you mean by a plugin? I was assuming
OS::Neutron::PoolMember (not LoadBalancerMember -- I went and looked up the
actual name) would become a standard Heat resource, not a third-party thing
(though third parties could provide their own through the usual heat
extension mechanisms).

(fwiw the rackspace load balancer API works identically, so it seems a
pretty standard design).
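
For context, the Neutron LBaaS calls such a resource would wrap are roughly
the following; this is a sketch using python-neutronclient with placeholder
IDs and credentials:

# Sketch of what an OS::Neutron::PoolMember resource boils down to at the
# API level: add the server's IP to the pool on create, remove it on delete.
from neutronclient.v2_0 import client as neutron_client


neutron = neutron_client.Client(username='demo', password='secret',
                                tenant_name='demo',
                                auth_url='http://keystone:5000/v2.0')

# handle_create: register the scaled server with the load balancer pool
member = neutron.create_member({
    'member': {
        'pool_id': 'POOL_ID',
        'address': '10.0.0.5',       # the server's fixed IP
        'protocol_port': 80,
    },
})['member']

# handle_delete: take the server back out of the pool
neutron.delete_member(member['id'])

A PoolMember resource inside the scaling unit is then just a thin lifecycle
wrapper around those two calls.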



 At summit and - to the best of my recollection - before, we talked about
 scaling a generic group of resources and passing notifications to a generic
 controller, with the types of both defined by the user. I was expecting you
 to propose something based on webhooks, which is why I was surprised not to
 see anything about it in the API. (I'm not prejudging that that is the way
 to go... I'm actually wondering if Marconi has a role to play here.)


I think the main benefit of PoolMember is:

1) it matches with the Neutron LBaaS API perfectly, just like all the rest
of our resources, which represent individual REST objects.

2) it's already understandable. I don't understand the idea behind
notifications or how they would work to solve our problems. You can keep
saying that the notifications idea will solve our problems, but I can't
figure out how it would solve our problem unless someone actually explains
it :)


-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Propose project story wiki idea

2013-11-21 Thread Mark McLoughlin
On Thu, 2013-11-21 at 10:43 +0100, Thierry Carrez wrote:
 Stefano Maffulli wrote:
  On 11/19/2013 09:33 PM, Boris Pavlovic wrote:
  The idea of this proposal is that every OpenStack project should have
  story wiki page. It means to publish every week one short message that
  contains most interesting updates for the last week, and high level road
  map for future week. So reading this for 10-15 minutes you can see what
  changed in project, and get better understanding of high level road map
  of the project.
  
  I like the idea.
  
  I have received requests to include high level summaries from all
  projects in the weekly newsletter but it's quite impossible for me to do
  that as I don't have enough understanding of each project to extrapolate
  the significant news from the noise. [...]
 
 This is an interesting point. From various discussions I had with people
 over the last year, the thing the development community is really really
 after is weekly technical news that would cover updates from major
 projects as well as deep dives into new features, tech conference CFPs,
 etc. The reference in the area (and only example I have) is LWN
 (lwn.net) and their awesome weekly coverage of what happens in Linux
 kernel development and beyond.
 
 The trick is, such coverage requires editors with a deep technical
 knowledge, both to be able to determine significant news from marketing
 noise *and* to be able to deep dive into a new feature and make an
 article out of it that makes a good read for developers or OpenStack
 deployers. It's also a full-time job, even if some of those deep-dive
 articles could just be contributed by their developers.
 
 LWN is an exception rather than the rule in the tech press. It would be
 absolutely awesome if we managed to build something like it to cover
 OpenStack, but finding the right people (the right skill set + the will
 and the time to do it) will be, I fear, extremely difficult.
 
 Thoughts ? Volunteers ?

Yeah, I think there's a huge opportunity for something like this. Look
at the volume of interesting stuff that's going on on this list.
Highlighting and summarising some of the more important and interesting
of these discussions in high quality articles would be incredibly
useful.

It will be hard to pull off though. You need good quality writing but,
more importantly, really strong editorial control who understands what
people want to read.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Does Nova really need an SQL database?

2013-11-21 Thread Chris Friesen

On 11/21/2013 10:52 AM, Stephen Gran wrote:

On 21/11/13 15:49, Chris Friesen wrote:

On 11/21/2013 02:58 AM, Soren Hansen wrote:

2013/11/20 Chris Friesen chris.frie...@windriver.com:

What about a hybrid solution?
There is data that is only used by the scheduler--for performance
reasons
maybe it would make sense to store that information in RAM as
described at

https://blueprints.launchpad.net/nova/+spec/no-db-scheduler




I suspect that a large performance gain could be had by 2 fairly simple
changes:

a) Break the scheduler in two, so that the chunk of code receiving
updates from the compute nodes can't block the chunk of code scheduling
instances.

b) Use a memcache backend instead of SQL for compute resource information.

My fear with keeping data local to a scheduler instance is that local
state destroys scalability.


a and b are basically what is described in the blueprint above.

Your fear is addressed by having the compute nodes broadcast their 
resource information to all scheduler instances.


As I see it, the scheduler could then make a tentative scheduling 
decision, attempt to reserve the resources from the compute node (which 
would trigger the compute node to send updated resource information in 
all the scheduler instances), and assuming it got the requested 
resources it could then proceed with bringing up the resource.
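To make that flow concrete, here is a minimal sketch of a tentative claim against a memcache-backed resource view. It assumes python-memcached and an invented key layout; it is only an illustration of the idea, not the blueprint's actual design.

# Illustrative only: a scheduler-side tentative claim against a
# memcache-backed view of compute node resources.  Key names, the JSON
# layout and the claim protocol are assumptions, not the blueprint code.
import json

import memcache


class ResourceStore(object):
    def __init__(self, servers=('127.0.0.1:11211',)):
        # cache_cas=True makes the client remember CAS ids from gets()
        self.mc = memcache.Client(list(servers), cache_cas=True)

    def host_state(self, host):
        raw = self.mc.gets('compute/%s' % host)
        return json.loads(raw) if raw else None

    def try_claim(self, host, vcpus, ram_mb):
        """Tentatively reserve resources; returns True if the claim stuck."""
        state = self.host_state(host)
        if (not state or state['free_ram_mb'] < ram_mb
                or state['free_vcpus'] < vcpus):
            return False
        state['free_ram_mb'] -= ram_mb
        state['free_vcpus'] -= vcpus
        # cas() only writes if nobody updated the key since our gets();
        # a False return means another scheduler raced us, so retry or
        # pick a different host.
        return bool(self.mc.cas('compute/%s' % host, json.dumps(state)))

The compute node would remain the authority: after a successful claim it would push a fresh view of its resources to the schedulers, as described above.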


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] L7 Switching

2013-11-21 Thread Vijay Venkatachalam
Hi,

The CLI example captures the requirement concisely. Thanks.
One suggestion: you could move --policy policy1 to the beginning of the
create-lb-l7rule command.

Also, associate-lb-pool-vip could be renamed to associate-lb-vip-pool.

It would be best to define the db model to reflect the CLI.

For ex.:
   class L7Rule {
 .
  String SelectedPool # This should have been String L7Policy
   }

 neutron associate-lb-pool-vip --pool pool1 --vip vip1 --l7policy policy1
There should be a new collection/table to reflect the association of vip, pool,
and policy (see the sketch below).
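
As a rough illustration of that suggestion (the table and column names below are invented for the example, not part of the wiki proposal), the association could be modelled along these lines:

# Hypothetical sketch of the suggested association table: it ties a VIP
# to a pool through an L7 policy, so L7Rule only needs to reference the
# policy rather than a pool directly.
import sqlalchemy as sa
from sqlalchemy.ext import declarative

Base = declarative.declarative_base()


class VipL7PolicyPoolBinding(Base):
    __tablename__ = 'vip_l7policy_pool_bindings'

    vip_id = sa.Column(sa.String(36), primary_key=True)
    l7policy_id = sa.Column(sa.String(36), primary_key=True)
    pool_id = sa.Column(sa.String(36), nullable=False)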

Thanks,
Vijay V.


 -Original Message-
 From: Avishay Balderman [mailto:avish...@radware.com]
 Sent: Wednesday, November 20, 2013 9:06 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] L7 Switching
 
 Hi
 I have created this wiki page: (WIP)
 https://wiki.openstack.org/wiki/Neutron/LBaaS/l7
 
 Comments / Questions are welcomed.
 
 Thanks
 
 Avishay
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-21 Thread Rick Harris
++ on Fri. 9am PST.


On Thu, Nov 21, 2013 at 10:57 AM, Sam Alba sam.a...@gmail.com wrote:

 I hope we can make a decision during this meeting. Is it confirmed for
 Friday 9am Pacific?

 On Thu, Nov 21, 2013 at 8:24 AM, Chuck Short chuck.sh...@canonical.com
 wrote:
  Hi
 
  Has a decision been made on when this meeting is going to take place,
 assuming
  it is still taking place tomorrow?
 
  Regards
  chuck
 
 
  On Mon, Nov 18, 2013 at 7:58 PM, Krishna Raman kra...@gmail.com wrote:
 
 
  On Nov 18, 2013, at 4:30 PM, Russell Bryant rbry...@redhat.com wrote:
 
  On 11/18/2013 06:30 PM, Dan Smith wrote:
 
  Not having been at the summit (maybe the next one), could somebody
  give a really short explanation as to why it needs to be a separate
  service? It sounds like it should fit within the Nova area. It is,
  after all, just another hypervisor type, or so it seems.
 
 
  But it's not just another hypervisor. If all you want from your
  containers is lightweight VMs, then nova is a reasonable place to put
  that (and it's there right now). If, however, you want to expose the
  complex and flexible attributes of a container, such as being able to
  overlap filesystems, have fine-grained control over what is shared with
  the host OS, look at the processes within a container, etc, then nova
  ends up needing quite a bit of change to support that.
 
  I think the overwhelming majority of folks in the room, after discussing
  it, agreed that Nova is infrastructure and containers is more of a
  platform thing. Making it a separate service lets us define a mechanism
  to manage these that makes much more sense than treating them like VMs.
  Using Nova to deploy VMs that run this service is the right approach,
  IMHO. Clayton put it very well, I think:
 
   If the thing you want to deploy has a kernel, then you need Nova. If
   your thing runs on a kernel, you want $new_service_name.
 
  I agree.
 
  Note that this is just another service under the compute project (or
  program, or whatever the correct terminology is this week).
 
 
  The Compute program is correct.  That is established terminology as
  defined by the TC in the last cycle.
 
  So while
  distinct from Nova in terms of code, development should be tightly
  integrated until (and if at some point) it doesn't make sense.
 
 
  And it may share a whole bunch of the code.
 
  Another way to put this:  The API requirements people have for
  containers include a number of features considered outside of the
  current scope of Nova (short version: Nova's scope stops before going
  *inside* the servers it creates, except file injection, which we plan to
  remove anyway).  That presents a problem.  A new service is one possible
  solution.
 
  My view of the outcome of the session was not it *will* be a new
  service.  Instead, it was, we *think* it should be a new service, but
  let's do some more investigation to decide for sure.
 
  The action item from the session was to go off and come up with a
  proposal for what a new service would look like.  In particular, we
  needed a proposal for what the API would look like.  With that in hand,
  we need to come back and ask the question again of whether a new service
  is the right answer.
 
  I see 3 possible solutions here:
 
  1) Expand the scope of Nova to include all of the things people want to
  be able to do with containers.
 
  This is my least favorite option.  Nova is already really big.  We've
  worked to split things out (Networking, Block Storage, Images) to keep
  it under control.  I don't think a significant increase in scope is a
  smart move for Nova's future.
 
  2) Declare containers as explicitly out of scope and start a new project
  with its own API.
 
  That is what is being proposed here.
 
  3) Some middle ground that is a variation of #2.  Consider Ironic.  The
  idea is that Nova's API will still be used for basic provisioning, which
  Nova will implement by talking to Ironic.  However, there are a lot of
  baremetal management things that don't fit in Nova at all, and those
  only exist in Ironic's API.
 
  I wanted to mention this option for completeness, but I don't actually
  think it's the right choice here.  With Ironic you have a physical
  resource (managed by Ironic), and then instances of an image running on
  these physical resources (managed by Nova).
 
  With containers, there's a similar line.  You have instances of
  containers (managed either by Nova or the new service) running on
  servers (managed by Nova).  I think there is a good line for separating
  concerns, with a container service on top of Nova.
 
 
  Let's ask ourselves:  How much overlap is there between the current
  compute API and a proposed containers API?  Effectively, what's the
  diff?  How much do we expect this diff to change in the coming years?
 
  The current diff demonstrates a significant clash with the current scope
  of Nova.  I also expect a lot of innovation around containers in the
  next 

Re: [openstack-dev] How to stage client major releases in Gerrit?

2013-11-21 Thread Monty Taylor


On 11/21/2013 01:58 AM, Thierry Carrez wrote:
 Mark Washenberger wrote:
 [...]
 In order to mitigate that risk, I think it would make a lot of sense to
 have a place to stage and carefully consider all the breaking changes we
 want to make. I also would like to have that place be somewhere in
 Gerrit so that it fits in with our current submission and review
 process. But if that place is the 'master' branch and we take a long
 time, then we can't really release any bug fixes to the v0 series in the
 meantime.

 I can think of a few workarounds, but they all seem kinda bad. For
 example, we could put all the breaking changes together in one commit,
 or we could do all this prep in github.

 My question is, is there a correct way to stage breaking changes in
 Gerrit? Has some other team already dealt with this problem?
 [...]
 
 It sounds like a case where we could use a feature branch. There have
 been a number of them in the past when people wanted to incrementally
 work on new features without affecting master, and at first glance
 (haha) it sounds appropriate here. Infra team, thoughts ?

Hi!

This is a really complex one because of the gate. It's not just about
the semver major version bump. I agree with earlier sentiment - the way
to handle breaking changes is to bump the major version, and on the
surface I don't have a problem with us doing that, since there is
already a mechanism to deal with that.

HOWEVER - it's more complex than that with us, because the client libs
are part of our integration.

We've already agreed on and have been operating on the assumption that
client libs do not break rest api backwards compat. We're 3 seconds away
from landing gating tests to ensure this is the case. The reasoning here
is that an end user of OpenStack should not need to know what version of
OpenStack a vendor is running - the latest python-glanceclient should
work with diablo and it should work with icehouse. Nothing in this
thread breaks that - I just bring it up because it's one of the overall
design points that we'll be rubbing against.

Now, in the gate, without bringing backwards compat into the picture -
we test master against master, and stable/havana against stable/havana
across all the projects. If a project (like a client lib) doesn't have a
stable/havana, we use its master branch - this is how master client lib
is tested against stable/grizzly and stable/havana right now. And I
don't just mean as an end-user test - I mean that we deploy devstack for
stable/grizzly with master python-glanceclient and that's what any of
the other projects that need to talk to glance (heat, horizon) uses.

We do not pin uses of the client libs in stable branches - because we
actually explicitly want to gate on the combination, and we want to be
sure that releasing a new version of glanceclient does not break
someone's grizzly deployment.

With all of that background ...

In order to consider a set of changes that would be a major version bump
for the client libs, we're going to have to figure out what the testing
matrix looks like (as in, what do we _want_ to test with each other) and
we're going to have to figure out how to orchestrate that in the logic
that prepares sets of branches to be tested with each other in the gate.

For dev, there are two approaches - we can make a
feature/glanceclient-2.0 branch, and leave master as it is, or we can
make a stable/1.0 branch and do the breaking work on master.

If we do the stable/1.0 approach, we'd probably have to go pin
stable/grizzly and stable/havana at <2.0. Problem is, I don't know how
to tell devstack gate that stable/grizzly and stable/havana want
glanceclient stable/1.0

Alternately, if we do the feature branch, we can avoid thinking about
the stable problem for a minute, but we still gate feature branch
patches - so you'd have to figure out how to deal with the fact that the
feature branch would be gating against master of the other projects.

Why don't we just stop cross-gating on the client libs and have our
servers consume releases of our clients? Well, that's because they'd be
requesting different versions of them at different times. We need to
make sure that the client libs can't land changes that break the server
projects BEFORE they release, because otherwise the
tag/release/tag/re-release cycle would kill us.

In any case, sorry for the novel, this request is PARTICULARLY complex
to work through, as backwards-incompat client library changes are a
thing we explicitly designed the integrated gate to assume would never
happen. I understand the request, and like I said, it's not unreasonable
on its face - but it's going to take some brain time from the infra team
I believe ... and fixing the current race conditions has been priority
number one this week...

That said - bear with us here - if you can hang on for a bit until we've
got some space to properly brainstorm about what the physical
possibilities are, we can come back with some suggestions and

Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-21 Thread Krishna Raman
On Thu, Nov 21, 2013 at 8:57 AM, Sam Alba sam.a...@gmail.com wrote:

 I hope we can make a decision during this meeting. Is it confirmed for
 Friday 9am Pacific?


Friday 9am Pacific seems to be the best time for this meeting. Can we use
the #openstack-meeting channel for this?
If not, then I can find another channel.

For the agenda, I propose
 - going through
https://etherpad.openstack.org/p/containers-service-api and understanding the
capabilities of all container technologies
 + would like the experts on each of those technologies to fill us in
 - go over the API proposal and see what we need to change.

--Krishna



 On Thu, Nov 21, 2013 at 8:24 AM, Chuck Short chuck.sh...@canonical.com
 wrote:
  Hi
 
  Has a decision been made on when this meeting is going to take place,
 assuming
  it is still taking place tomorrow?
 
  Regards
  chuck
 
 
  On Mon, Nov 18, 2013 at 7:58 PM, Krishna Raman kra...@gmail.com wrote:
 
 
  On Nov 18, 2013, at 4:30 PM, Russell Bryant rbry...@redhat.com wrote:
 
  On 11/18/2013 06:30 PM, Dan Smith wrote:
 
  Not having been at the summit (maybe the next one), could somebody
  give a really short explanation as to why it needs to be a separate
  service? It sounds like it should fit within the Nova area. It is,
  after all, just another hypervisor type, or so it seems.
 
 
  But it's not just another hypervisor. If all you want from your
  containers is lightweight VMs, then nova is a reasonable place to put
  that (and it's there right now). If, however, you want to expose the
  complex and flexible attributes of a container, such as being able to
  overlap filesystems, have fine-grained control over what is shared with
  the host OS, look at the processes within a container, etc, then nova
  ends up needing quite a bit of change to support that.
 
  I think the overwhelming majority of folks in the room, after discussing
  it, agreed that Nova is infrastructure and containers is more of a
  platform thing. Making it a separate service lets us define a mechanism
  to manage these that makes much more sense than treating them like VMs.
  Using Nova to deploy VMs that run this service is the right approach,
  IMHO. Clayton put it very well, I think:
 
   If the thing you want to deploy has a kernel, then you need Nova. If
   your thing runs on a kernel, you want $new_service_name.
 
  I agree.
 
  Note that this is just another service under the compute project (or
  program, or whatever the correct terminology is this week).
 
 
  The Compute program is correct.  That is established terminology as
  defined by the TC in the last cycle.
 
  So while
  distinct from Nova in terms of code, development should be tightly
  integrated until (and if at some point) it doesn't make sense.
 
 
  And it may share a whole bunch of the code.
 
  Another way to put this:  The API requirements people have for
  containers include a number of features considered outside of the
  current scope of Nova (short version: Nova's scope stops before going
  *inside* the servers it creates, except file injection, which we plan to
  remove anyway).  That presents a problem.  A new service is one possible
  solution.
 
  My view of the outcome of the session was not it *will* be a new
  service.  Instead, it was, we *think* it should be a new service, but
  let's do some more investigation to decide for sure.
 
  The action item from the session was to go off and come up with a
  proposal for what a new service would look like.  In particular, we
  needed a proposal for what the API would look like.  With that in hand,
  we need to come back and ask the question again of whether a new service
  is the right answer.
 
  I see 3 possible solutions here:
 
  1) Expand the scope of Nova to include all of the things people want to
  be able to do with containers.
 
  This is my least favorite option.  Nova is already really big.  We've
  worked to split things out (Networking, Block Storage, Images) to keep
  it under control.  I don't think a significant increase in scope is a
  smart move for Nova's future.
 
  2) Declare containers as explicitly out of scope and start a new project
  with its own API.
 
  That is what is being proposed here.
 
  3) Some middle ground that is a variation of #2.  Consider Ironic.  The
  idea is that Nova's API will still be used for basic provisioning, which
  Nova will implement by talking to Ironic.  However, there are a lot of
  baremetal management things that don't fit in Nova at all, and those
  only exist in Ironic's API.
 
  I wanted to mention this option for completeness, but I don't actually
  think it's the right choice here.  With Ironic you have a physical
  resource (managed by Ironic), and then instances of an image running on
  these physical resources (managed by Nova).
 
  With containers, there's a similar line.  You have instances of
  containers (managed either by Nova or the new service) running on
  servers (managed by Nova).  I think there is a good line 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Thomas Hervé
On Thu, Nov 21, 2013 at 12:18 PM, Zane Bitter zbit...@redhat.com wrote:
 On 20/11/13 23:49, Christopher Armstrong wrote:

 https://wiki.openstack.org/wiki/Heat/AutoScaling#LBMember.3F

 Basically, the LoadBalancerMember resource (which is very similar to the
 CinderVolumeAttachment) would be responsible for removing and adding IPs
 from/to the load balancer (which is actually a direct mapping to the way
 the various LB APIs work). Since this resource lives with the server
 resource inside the scaling unit, we don't really need to get anything
 _out_ of that stack, only pass _in_ the load balancer ID.


 I see a couple of problems with this approach:

 1) It makes the default case hard. There's no way to just specify a server
 and hook it up to a load balancer like you can at the moment. Instead, you
 _have_ to create a template (or template snippet - not really any better) to
 add this extra resource in, even for what should be the most basic, default
 case (scale servers behind a load balancer).

First, the design we had implied that we had a template all the time.
Now that changed, it does make things a bit harder than the
LoadBalancerNames list, but it's still fairly simple to me, and brings
a lot of flexibility.

Personally, my idea was to build a generic API, and then build helpers
on top of it to make common cases easier. It seems it's not a shared
view, but I don't see how we can do both at once.

 2) It relies on a plugin being present for any type of thing you might want
 to notify.

 At summit and - to the best of my recollection - before, we talked about
 scaling a generic group of resources and passing notifications to a generic
 controller, with the types of both defined by the user. I was expecting you
 to propose something based on webhooks, which is why I was surprised not to
 see anything about it in the API. (I'm not prejudging that that is the way
 to go... I'm actually wondering if Marconi has a role to play here.)

We definitely talked about notifications between resources. But,
putting it in the way of the autoscaling API would postpone things
quite a bit, whereas we don't really need it for the first phase. If
we use the member concept, we can provide a first integration step,
where the only missing thing would be rolling updates.

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][nova][social-apects] Social aspects shouldn't impact on dev process

2013-11-21 Thread Joe Gordon
On Thu, Nov 21, 2013 at 6:29 AM, David Ripton drip...@redhat.com wrote:

 On 11/20/2013 02:06 AM, Boris Pavlovic wrote:

  I faced some social problems in the community.

 We started working on a purge engine for the DB (before the HK summit)

 This is very important, because at this moment we don't have any working
 way to purge the DB... so admins have to do it by hand.


 And we made this BP (in October)
 https://blueprints.launchpad.net/nova/+spec/db-purge-engine

 And we made a patch that does this work.
 But only because our BP wasn't approved, we got a -2 from Joe Gordon
 (https://review.openstack.org/#/c/51523/ ), and there was a long discussion
 about removing this -2.

 And now, after the summit, David Ripton made a similar BP (he probably
 didn't know):
 https://blueprints.launchpad.net/nova/+spec/db-purge2
 That one is already approved by Joe Gordon (who already knew that we are
 working on the same problem).

 Why?

 (btw the question about the purge engine was raised by me at the summit and
 the community accepted that)


 I discussed this with Boris on IRC yesterday.  When I volunteered to write
 a DB purger at Summit, I wasn't aware that there was already one actively
 in progress.  (So many patches around the end of Havana.)  When I went to
 file a blueprint and noticed the existing db-purge blueprint, I saw that
 its patch had been -2'd and figured it was dead.  But as long as Boris is
 working to actively improve that patch (he's on vacation now but said he'd
 probably have something on Monday), I won't submit a patch for the
 competing blueprint.  Instead, I'll work to make sure Boris's code meets
 everyone's requirements (some that I got from Joe Gordon and Phil Day are
 mentioned in db-purge2), and when it does I'll withdraw the db-purge2
 blueprint and retarget remove-db-archiving to depend on Boris's blueprint
 instead.


Thanks! unless anyone else has further complaints, I think that wraps up
this thread.

best,
Joe




 --
 David Ripton   Red Hat   drip...@redhat.com


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Unable to see console using VNC on ESX hypervisor

2013-11-21 Thread Rajshree Thorat

Hi All,

I have configured OpenStack Grizzly to control an ESX hypervisor. I can
successfully launch instances but am unable to see their consoles using VNC.


Following is my configuration.

***Compute node :

nova.conf for vnc:

vnc_enabled = true
novncproxy_base_url=http://public_ip_of_controller:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=management_ip_of_compute
vncserver_listen=0.0.0.0

***Controller node:

nova.conf for vnc:

novncproxy_base_url=http://public_ip_of_controller:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=management_ip_of_controller
vncserver_listen=0.0.0.0

root@openstk2:~# tail /var/log/nova/nova-consoleauth.log
2013-11-21 18:40:35.228 7570 AUDIT nova.service [-] Starting consoleauth 
node (version 2013.1.3)
2013-11-21 18:40:35.395 INFO nova.openstack.common.rpc.common 
[req-179d456d-f306-426f-b65e-242362758f73 None None] Connected to AMQP 
server on controller_ip:5672
2013-11-21 18:42:34.012 AUDIT nova.consoleauth.manager 
[req-ebc33f34-f57b-492b-8429-39eb3240e5d7 
a8f0e9af6e6b4d08b1729acae0510d54 db63e4a448fc426086562638726f9081] 
Received Token: 1bcb7408-5c59-466d-a84d-528481af3c37, {'instance_uuid': 
u'969e49b0-af3f-45bd-8618-1320ba337962', 'internal_access_path': None, 
'last_activity_at': 1385039554.012067, 'console_type': u'novnc', 'host': 
u'ESX_host_IP', 'token': u'1bcb7408-5c59-466d-a84d-528481af3c37', 
'port': 6031})
2013-11-21 18:42:34.015 INFO nova.openstack.common.rpc.common 
[req-ebc33f34-f57b-492b-8429-39eb3240e5d7 
a8f0e9af6e6b4d08b1729acae0510d54 db63e4a448fc426086562638726f9081] 
Connected to AMQP server on controller_ip:5672
2013-11-21 18:42:34.283 AUDIT nova.consoleauth.manager 
[req-518ed47e-5d68-491d-8c57-16952744a2d8 None None] Checking Token: 
1bcb7408-5c59-466d-a84d-528481af3c37, True)
2013-11-21 18:42:35.710 AUDIT nova.consoleauth.manager 
[req-2d65d8ac-c003-4f4d-9014-9e8995794ad6 None None] Checking Token: 
1bcb7408-5c59-466d-a84d-528481af3c37, True)


With the same configuration I can connect to a VM's console on a KVM setup. Is
there any other setting needed to access the console for an ESX hypervisor?


Any help would be highly appreciated.

Thanks in advance,

Regards,
Rajshree




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Does Nova really need an SQL database?

2013-11-21 Thread Chris Friesen

On 11/21/2013 02:58 AM, Soren Hansen wrote:

2013/11/20 Chris Friesen chris.frie...@windriver.com:

What about a hybrid solution?
There is data that is only used by the scheduler--for performance reasons
maybe it would make sense to store that information in RAM as described at

https://blueprints.launchpad.net/nova/+spec/no-db-scheduler

For the rest of the data, perhaps it could be persisted using some alternate
backend.


What would that solve?


The scheduler has performance issues.  Currently the design is 
suboptimal--the compute nodes write resource information to the 
database, then the scheduler pulls a bunch of data out of the database, 
copies it over into python, and analyzes it in python to do the filtering.


For large clusters this can lead to significant time spent scheduling.

Based on the above, for performance reasons it would be beneficial for 
the scheduler to have the necessary data already available in python 
rather than needing to pull it out of the database.


For other uses of the database people are proposing alternatives to SQL 
in order to get reliability.  I don't have any experience with that so I 
have no opinion on it.  But as long as the data is sitting on-disk (or 
even in a database process instead of in the scheduler process) it's 
going to slow down the scheduler.


If the primary consumer of a given piece of data (free ram, free cpu, 
free disk, etc) is the scheduler, then I think it makes sense for the 
compute nodes to report it directly to the scheduler.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Find the compute host on which a VM runs

2013-11-21 Thread Aaron Rosen
On Thu, Nov 21, 2013 at 8:12 AM, Robert Kukura rkuk...@redhat.com wrote:

 On 11/21/2013 04:20 AM, Stefan Apostoaie wrote:
  Hello again,
 
  I studied the portbindings extension (the quantum.db.portbindings_db and
  quantum.extensions.portbindings modules). However it's unclear for me
  who sets the portbindings.HOST_ID attribute. I ran some tests with OVS:
  called quantum port-create command and
  the OVSQuantumPluginV2.create_port method got called and it had
  'binding:host_id': object object at memory_address. If I print out
  the port object I have 'binding:host_id': None.
 
  What other plugins are doing:
  1. extend the quantum.db.portbindings_db.PortBindingMixin class
  2. call the _process_portbindings_create_and_update method in
  create/update port

 Take look at how the ML2 plugin handles port binding and uses
 binding:host_id with its set of registered MechanismDrivers. It does not
 use the mixin class because the values of binding:vif_type and other
 portbinding attributes vary depending on what MechanismDriver binds the
 port.

 Hi Bob,

I don't want to reopen a can of worms here but I'm still wondering why
neutron needs to know the location where ports are (I understand it makes
sense to have some metadata for network ports (i.e dhcp) as to which agent
they are mapped to) but not really for ports that are mapped to instances.

For example, in the ML2 case there are agents running on all the
hypervisors, so the agent knows which compute node it is running on and can
determine the ports on that compute node itself by looking in the ovsdb
interface table (i.e. external_ids:
{attached-mac=fa:16:3e:92:2b:53,
iface-id=d7bf8418-e4ad-4dd7-8dda-a3c430ef4d9f, iface-status=active,
vm-id=a9be8cff-87f6-49a0-b355-a53ec1579b56}), where the iface-id is the
neutron port-id and the vm-id is the nova instance id. nova-compute puts
this information in there.
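
For illustration, here is a rough sketch of how an agent could pull that mapping straight out of the local ovsdb (this is not the actual agent code; it only relies on the external_ids keys shown above):

# Sketch only: map neutron port ids to local vifs/instances by reading the
# external_ids column of the ovsdb Interface table on this hypervisor.
import json
import subprocess


def local_neutron_ports():
    out = subprocess.check_output(
        ['ovs-vsctl', '--format=json', '--columns=name,external_ids',
         'list', 'Interface'])
    rows = json.loads(out)['data']
    ports = {}
    for name, ext_ids in rows:
        # external_ids is returned as ["map", [[key, value], ...]]
        ids = dict(ext_ids[1]) if ext_ids and ext_ids[0] == 'map' else {}
        if 'iface-id' in ids:
            ports[ids['iface-id']] = {'vif': name,
                                      'mac': ids.get('attached-mac'),
                                      'instance': ids.get('vm-id')}
    return ports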

I also wonder about the merit of having neutron return which vif_type nova
should use. Right now most plugins just return:
{pbin.VIF_TYPE: pbin.VIF_TYPE_OVS}. It seems to me that the
agent/nova-compute host should know what type of vif plugging to use based
on the type of node it is (and how it has been configured). I don't think
neutron should know this information, imo.

I think I have been missing the reason why we have the port-binding
extension for a while now. Sorry if I've already brought this up in the
past. Would you mind shedding some light on this again?

Thanks,

Aaron







 In fact, you may want to consider implementing an ML2 MechanismDriver
 rather than a entire new monolithic plugin - it will save you a lot of
 work, initially and in the longer term!

  What I cannot find is where the portbindings.HOST_ID attribute is being
 set.

 It's set by nova, either on port creation or as an update to an existing
 port. See allocate_for_instance() and
 _populate_neutron_extension_values() in nova/network/neutronv2/api.py.

 -Bob

 
  Regards,
  Stefan
 
 
  On Fri, Nov 15, 2013 at 10:57 PM, Mark McClain
  mark.mccl...@dreamhost.com mailto:mark.mccl...@dreamhost.com wrote:
 
  Stefan-
 
  Your workflow is very similar to many other plugins.  You’ll want to
  look at implementing the port binding extension in your plugin.  The
  port binding extension allows Nova to inform Neutron of the host
  where the VM is running.
 
  mark
 
  On Nov 15, 2013, at 9:55 AM, Stefan Apostoaie ioss...@gmail.com
  mailto:ioss...@gmail.com wrote:
 
   Hello,
  
   I'm creating a Neutron/Quantum plugin to work with a networking
  controller that takes care of the configuration of the virtual
  networks. Basically what we are doing is receive the API calls and
  forward them to our controller to run the required configuration on
  the compute hosts.
   What I need to know when a create_port call is made to my plugin
  is on which compute host the VM is created (so that our controller
  will run the configuration on that host). Is there a way to find out
  this information from the plugin?
  
   Regards,
   Stefan Apostoaie
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  mailto:OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  mailto:OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Zane Bitter

On 21/11/13 18:44, Christopher Armstrong wrote:


2) It relies on a plugin being present for any type of thing you
might want to notify.


I don't understand this point. What do you mean by a plugin? I was
assuming OS::Neutron::PoolMember (not LoadBalancerMember -- I went and
looked up the actual name) would become a standard Heat resource, not a
third-party thing (though third parties could provide their own through
the usual heat extension mechanisms).


I mean it requires a resource type plugin written in Python. So cloud 
operators could provide their own implementations, but ordinary users 
could not.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Bugs

2013-11-21 Thread Matt Riedemann



On Wednesday, November 20, 2013 11:53:45 PM, Clark Boylan wrote:

On Wed, Nov 20, 2013 at 9:43 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com wrote:

Hi Joe,

2013/11/20 Joe Gordon joe.gord...@gmail.com:

Hi All,

As many of you have noticed the gate has been in very bad shape over the
past few days.  Here is a list of some of the top open bugs (without pending
patches, and many recent hits) that we are hitting.  Gate won't be stable,
and it will be hard to get your code merged, until we fix these bugs.

1) https://bugs.launchpad.net/bugs/1251920
nova
468 Hits


Can we know the frequency of each failure?
I'm trying 1251920 and putting the investigation tempest patch.
  https://review.openstack.org/#/c/57193/

The patch can avoid this problem 4 times, but I am not sure whether this is
worthwhile or not.


Thanks
Ken'ichi Ohmichi

---

2) https://bugs.launchpad.net/bugs/1251784
neutron, Nova
328 Hits
3) https://bugs.launchpad.net/bugs/1249065
neutron
   122 hits
4) https://bugs.launchpad.net/bugs/1251448
neutron
65 Hits

Raw Data:


Note: If a bug has any hits for anything besides failure, it means the
fingerprint isn't perfect.

Elastic recheck known issues

Bug: https://bugs.launchpad.net/bugs/1251920 = message:assertionerror: console output was empty AND filename:console.html
Title: Tempest failures due to failure to return console logs from an instance
Project: Status nova: Confirmed
Hits FAILURE: 468

Bug: https://bugs.launchpad.net/bugs/1251784 = message:Connection to neutron failed: Maximum attempts reached AND filename:logs/screen-n-cpu.txt
Title: nova+neutron scheduling error: Connection to neutron failed: Maximum attempts reached
Project: Status neutron: New nova: New
Hits FAILURE: 328 UNSTABLE: 13 SUCCESS: 275

Bug: https://bugs.launchpad.net/bugs/1240256 = message: 503 AND filename:logs/syslog.txt AND syslog_program:proxy-server
Title: swift proxy-server returning 503 during tempest run
Project: Status openstack-ci: Incomplete swift: New tempest: New
Hits FAILURE: 136 SUCCESS: 83
Pending Patch

Bug: https://bugs.launchpad.net/bugs/1249065 = message:No nw_info cache associated with instance AND filename:logs/screen-n-api.txt
Title: Tempest failure: tempest/scenario/test_snapshot_pattern.py
Project: Status neutron: New nova: Confirmed
Hits FAILURE: 122

Bug: https://bugs.launchpad.net/bugs/1252514 = message:Got error from Swift: put_object AND filename:logs/screen-g-api.txt
Title: glance doesn't recover if Swift returns an error
Project: Status devstack: New glance: New swift: New
Hits FAILURE: 95
Pending Patch

Bug: https://bugs.launchpad.net/bugs/1244255 = message:NovaException: Unexpected vif_type=binding_failed AND filename:logs/screen-n-cpu.txt
Title: binding_failed because of l2 agent assumed down
Project: Status neutron: Fix Committed
Hits FAILURE: 92 SUCCESS: 29

Bug: https://bugs.launchpad.net/bugs/1251448 = message: possible networks found, use a Network ID to be more specific. (HTTP 400) AND filename:console.html
Title: BadRequest: Multiple possible networks found, use a Network ID to be more specific.
Project: Status neutron: New
Hits FAILURE: 65

Bug: https://bugs.launchpad.net/bugs/1239856 = message:tempest/services AND message:/images_client.py AND message:wait_for_image_status AND filename:console.html
Title: TimeoutException: Request timed out on tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestXML
Project: Status glance: New
Hits FAILURE: 62

Bug: https://bugs.launchpad.net/bugs/1235435 = message:One or more ports have an IP allocation from this subnet AND message: SubnetInUse: Unable to complete operation on subnet AND filename:logs/screen-q-svc.txt
Title: 'SubnetInUse: Unable to complete operation on subnet UUID. One or more ports have an IP allocation from this subnet.'
Project: Status neutron: Incomplete nova: Fix Committed tempest: New
Hits FAILURE: 48

Bug: https://bugs.launchpad.net/bugs/1224001 = message:tempest.scenario.test_network_basic_ops AssertionError: Timed out waiting for AND filename:console.html
Title: test_network_basic_ops fails waiting for network to become available
Project: Status neutron: In Progress swift: Invalid tempest: Invalid
Hits FAILURE: 42

Bug: https://bugs.launchpad.net/bugs/1218391 = message:Cannot 'createImage' AND filename:console.html
Title: tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestXML.test_delete_image_that_is_not_yet_active spurious failure
Project: Status nova: Confirmed swift: Confirmed tempest: Confirmed
Hits FAILURE: 25



best,
Joe Gordon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Joe seemed to be on the same track with

Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for external connectivity

2013-11-21 Thread Salvatore Orlando
Forgive my ignorance, but I would like to make sure that packets generated
from Openstack instances on neutron private networks will actually be able
to reach public addresses.

In its default configuration the traffic from the OS instance is SNATed and
the SRC IP will be rewritten to an address in the neutron's public network
range (172.24.4.224/28 by default). If the OS instance is trying to reach a
public server like www.google.com, then, assuming ip_forward is enabled on
the devstack-gate VM,  the traffic should be forwarded via the default
route with a src IP of 172.24.4.224/28.

If the above is correct, will it be possible for the IP traffic to be
correctly routed back to the Openstack instance?

Regards,
Salvatore


On 20 November 2013 23:17, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2013-11-20 14:07:49 -0800 (-0800), Sean Dague wrote:
  On 11/18/2013 02:41 AM, Yair Fried wrote:
  [...]
   2. add fields in tempest.conf for
* external connectivity = False/True
* external ip to test against (ie 8.8.8.8)
 
  +1 for #2. In the gate we'll need to think about what that address
  can / should be. It may be different between different AZs. At this
  point I'd leave the rest of the options off the table until #2 is
  working reliably.
 [...]

 Having gone down this path in the past, I suggest the test check for
 no fewer than three addresses, sending several probes to each, and
 be considered successful if at least one gets a response.
 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The recent gate performance and how it affects you

2013-11-21 Thread James E. Blair
Matt Riedemann mrie...@linux.vnet.ibm.com writes:

 People get heads-down in their own projects and what they are working
 on and it's hard to keep up with what's going on in the infra channel
 (or nova channel for that matter), so sending out a recap that
 everyone can see in the mailing list is helpful to reset where things
 are at and focus possibly various isolated investigations (as we saw
 happen this week).

Further on that point, Joe and I and others have been brainstorming
about how to prevent this situation and improve things when it does
happen.  To that end, I'd like to propose we adopt some process around
gate-blocking bugs:

1) The QA team should have the ability to triage bugs in _all_ OpenStack
projects, specifically so that they may set gate-blocking bugs to
critical priority.

2) If there isn't an immediately obvious assignee for the bug, send an
email to the -dev list announcing it and asking for someone to take or
be assigned to the bug.

I think the expectation should be that the bug triage teams or PTLs
should help get someone assigned to the bug in a reasonable time (say,
24 hours, or ideally much less).

3) If things get really bad, as they have recently, we send a mail to
the list asking core devs to stop approving patches that don't address
gate-blocking bugs.

I don't think any of this is revolutionary -- we have more or less done
these things already in this situation, but we usually take a while to
get there.  I think setting expectations around this and standardizing
how we proceed will make us better able to handle it.

Separately we will be following up with information on some changes that
we hope will reduce the likelihood of nondeterministic bugs creeping in
in the first place.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance] Support of v1 and v2 glance APIs in Nova

2013-11-21 Thread Matt Riedemann



On 11/3/2013 5:22 AM, Joe Gordon wrote:


On Nov 1, 2013 6:46 PM, John Garbutt j...@johngarbutt.com
mailto:j...@johngarbutt.com wrote:
 
  On 29 October 2013 16:11, Eddie Sheffield
eddie.sheffi...@rackspace.com mailto:eddie.sheffi...@rackspace.com
wrote:
  
   John Garbutt j...@johngarbutt.com mailto:j...@johngarbutt.com
said:
  
   Going back to Joe's comment:
   Can both of these cases be covered by configuring the keystone
catalog?
   +1
  
   If both v1 and v2 are present, pick v2, otherwise just pick what is in
   the catalogue. That seems cool. Not quite sure how the multiple glance
   endpoints works in the keystone catalog, but should work I assume.
  
   We hard code nova right now, and so we probably want to keep that
route too?
  
   Nova doesn't use the catalog from Keystone when talking to Glance.
There is a config value glance_api_servers which defines a list of
Glance servers that gets randomized and cycled through. I assume that's
what you're referring to with we hard code nova. But currently there's
nowhere in this path (internal nova to glance) where the keystone
catalog is available.
 
  Yes. I was not very clear. I am proposing we change that. We could try
  shoehorn the multiple glance nodes in the keystone catalog, then cache
  that in the context, but maybe that doesn't make sense. This is a
  separate change really.

FYI:  We cache the cinder endpoints from keystone catalog in the context
already. So doing something like that with glance won't be without
precedent.

 
  But clearly, we can't drop the direct configuration of glance servers
  for some time either.
 
   I think some of the confusion may be that Glanceclient at the
programmatic client level doesn't talk to keystone. That happens happens
higher in the CLI level which doesn't come into play here.
  
   From: Russell Bryant rbry...@redhat.com
mailto:rbry...@redhat.com
   On 10/17/2013 03:12 PM, Eddie Sheffield wrote:
   Might I propose a compromise?
  
   1) For the VERY short term, keep the config value and get the
change otherwise
   reviewed and hopefully accepted.
  
   2) Immediately file two blueprints:
  - python-glanceclient - expose a way to discover available
versions
  - nova - depends on the glanceclient bp and allowing
autodiscovery of glance
   version
   and making the config value optional (tho not
deprecated / removed)
  
   Supporting both seems reasonable.  At least then *most* people don't
   need to worry about it and it just works, but the override is
there if
   necessary, since multiple people seem to be expressing a desire
to have
   it available.
  
   +1
  
   Can we just do this all at once?  Adding this to glanceclient doesn't
   seem like a huge task.
  
   I worry about us never getting the full solution, but it seems to have
   got complicated.
  
   The glanceclient side is done, as far as allowing access to the
list of available API versions on a given server. It's getting Nova to
use this info that's a bit sticky.
 
  Hmm, OK. Could we not just cache the detected version, to reduce the
  impact of that decision.
 
   On 28 October 2013 15:13, Eddie Sheffield
eddie.sheffi...@rackspace.com mailto:eddie.sheffi...@rackspace.com
wrote:
   So...I've been working on this some more and hit a bit of a snag. The
   Glanceclient change was easy, but I see now that doing this in
nova will require
   a pretty huge change in the way things work. Currently, the API
version is
   grabbed from the config value, the appropriate driver is
instantiated, and calls
   go through that. The problem comes in that the actually glance
server isn't
   communicated with until very late in the process. Nothing sees
the servers at
   the level where the driver is determined. Also there isn't a
single glance server
   but a list of them, and in the even of certain communication
failures the list is
   cycled through until success or a number of retries has passed.
  
   So to change this to auto configuring will require turning this
upside down,
   cycling through the servers at a higher level, choosing the
appropriate driver
   for that server, and handling retries at that same level.
  
   Doable, but a much larger task than I first was thinking.
  
   Also, I don't really want the added overhead of getting the api
versions before
   every call, so I'm thinking that going through the list of
servers at startup and
   discovering the versions then and caching that somehow would be
helpful as well.
  
   Thoughts?
  
   I do worry about that overhead. But with Joe's comment, does it not
   just boil down to caching the keystone catalog in the context?
  
   I am not a fan of all the specific talk to glance code we have in
   nova, moving more of that into glanceclient can only be a good thing.
   For the XenServer itegration, for efficiency reasons, we need glance
   to talk from dom0, so it has dom0 making the final HTTP call. So we
   would need a way of extracting that info from the glance 

Re: [openstack-dev] [nova][heat][[keystone] RFC: introducing request identification

2013-11-21 Thread Andrew Laski

On 11/19/13 at 08:04pm, haruka tanizawa wrote:

Hi stackers!!

I'd like to ask for your opinions about my idea of identifying request.

Challenges
==

We have no way to know the final result of an API request.
Indeed we can continuously get the status of allocated resources,
but this is just resource status, not request status.

It doesn't matter so much for manual operations.
But it does for automated clients like heat.
We need request-oriented-status and it should be disclosed to clients.

Literally, we need to address two items for it.
1. how to identify request from clients
2. how to disclose status of request to clients

Note that this email includes only 1 for initiating the discussion.
Also, bp:instance-tasks-api[0] should include both two items above.

Proposal: introducing request identification
=

I'd like to introduce request identification, which is disclosed to
clients.
There are two characteristics:

- request identification is a unique ID for each request
  so that we can tell a specific request apart from others.
- request identification is available to clients
  so that we can enable after-request operations like check, retry[1]
or cancel[2].
  (of course we need to add more logic for check/retry/cancel,
   but I'm pretty sure that identifying the request is the starting point for
these features)

In my understanding, the main objection will be 'who should generate and
maintain such IDs?'.

How to implement request identification
=

There are several options at least. (See also recent discussion at
openstack-dev[3])

1. Enable the user to provide his/her own request identification within the API
request.
  This should be the simplest way, but providing an ID from outside may be
hard to get accepted.
  For example, Idempotency for OpenStack API[4].
  Or instance-tasks-api could enable the user to supply his/her own request
identification.


I'm working on the implementation of instance-tasks-api[0] in Nova and 
this is what I've been moving towards so far.  The API will accept a 
string to be a part of the task but it will have meaning only to the 
client, not to Nova.  Then if tasks can be searched or filtered by that 
field I think that would meet the requirements you laid out above, or 
is something missing?





2. Use correlation_id in oslo as request identification.
  correlation_id looks like a similar concept to request identification,
  but correlation_id in nova was already rejected[5].

3. Enable keystone to generate request identification (we can call it
'request-token', for example).
  Before sending the actual API request to nova-api, the client sends a request to
keystone to get a 'request-token'.
  Then the client sends the actual API request with the 'request-token'.
  Nova-api will check with keystone whether it was really generated.
  It sounds like an auth-token generated by keystone; the differences are:
[lifecycle] an auth-token is used for multiple API requests before it
expires,
   a 'request-token' is used for only a single API request.
[reusing] if the same 'request-token' is specified two or more
times,
   nova-api simply returns 20x (works like the client token in AWS[6]).
   Keystone needs to maintain 'request-tokens' until they expire.
  For backward compatibility, an actual API request without a 'request-token'
should work as before.
  We can consider several options for uniqueness of the 'request-token':
a uuid, any string with uniqueness per tenant, etc.

IMO, since adding a new implementation to Keystone is a fair amount of work,
option 1 seems reasonable to me; just an idea.
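
To make option 1 a bit more concrete, here is a purely illustrative sketch of idempotent handling keyed on a client-supplied identifier; the registry and the way the token reaches the API are assumptions for the example, not existing Nova or Keystone code.

# Hypothetical sketch: replay the recorded outcome when the same
# client-supplied request identifier is seen again, instead of running
# the action twice.
class RequestRegistry(object):
    def __init__(self):
        self._results = {}  # client request id -> result

    def run(self, request_id, handler, *args, **kwargs):
        if request_id in self._results:
            # Same request id as before: return the original outcome.
            return self._results[request_id]
        result = handler(*args, **kwargs)
        self._results[request_id] = result
        return result

A real implementation would of course need to persist and expire these entries per tenant and expose the recorded status, e.g. through the proposed instance-tasks API.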

Any comments will be appreciated.

Sincerely, Haruka Tanizawa

[0] https://blueprints.launchpad.net/nova/+spec/instance-tasks-api
[1] https://wiki.openstack.org/wiki/Support-retry-with-idempotency
[2] https://blueprints.launchpad.net/nova/+spec/cancel-swap-volume
[3]
http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg09023.html
[4] https://blueprints.launchpad.net/nova/+spec/idempotentcy-client-token
[5] https://review.openstack.org/#/c/29480/
[6]
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-21 Thread Jesse Noller

On Nov 20, 2013, at 9:09 AM, Thierry Carrez thie...@openstack.org wrote:

 Hi everyone,
 
 How should we proceed to make sure UX (user experience) is properly
 taken into account into OpenStack development ? Historically it was hard
 for UX sessions (especially the ones that affect multiple projects, like
 CLI / API experience) to get session time at our design summits. This
 visibility issue prompted the recent request by UX-minded folks to make
 UX an official OpenStack program.
 
 However, as was apparent in the Technical Committee meeting discussion
 about it yesterday, most of us are not convinced that establishing and
 blessing a separate team is the most efficient way to give UX the
 attention it deserves. Ideally, UX-minded folks would get active
 *within* existing project teams rather than form some sort of
 counter-power as a separate team. In the same way we want scalability
 and security mindset to be present in every project, we want UX to be
 present in every project. It's more of an advocacy group than a
 program imho.
 
 So my recommendation would be to encourage UX folks to get involved
 within projects and during project-specific weekly meetings to
 efficiently drive better UX there, as a direct project contributor. If
 all the UX-minded folks need a forum to coordinate, I think [UX] ML
 threads and, maybe, a UX weekly meeting would be an interesting first step.
 
 There would still be an issue with UX session space at the Design
 Summit... but that's a well known issue that affects more than just UX:
 the way our design summits were historically organized (around programs
 only) made it difficult to discuss cross-project and cross-program
 issues. To address that, the plan is to carve cross-project space into
 the next design summit, even if that means a little less topical
 sessions for everyone else.
 
 Thoughts ?

Hello again everyone - let me turn this around a little bit, I’m working on 
proposing something based on the Oslo work and openstack-client, and overall 
looking at the *Developer Experience* focused around application developers and 
end-users more so than the individual UX issues (configuration, UI, IxD, etc).

I’ve spoken to Everett and others about discussions had at the summit around 
ideas like developer.openstack.org - and I think the idea is a good start 
towards improving the lives of downstream application developers. However, one 
of the problems (as I and others see it) is that there’s a series of 
disconnects between the needs of the individual projects to have a command line 
client for administrative / basic usage and the needs of application developers 
and end-users (not Openstack admins, just end users).

What I’d like to propose is a team that’s not focused on the overarching UX 
(from horizon to **) but rather a team / group focused on some key areas:

1: Creating an *application developer* focused SDK for openstack services 
2: Unifying the back-end code and common tools for the command line clients 
into 1 
3: Providing extension points for downstream vendors to add custom extensions 
as needed
4: Based on 1; make deriving project-specific CLIs a matter of 
importing/subclassing and extending 

This is a bit of a hybrid between what the awesome openstackclient team has 
done to make a unified CLI, but takes a step back to focus on a unified back 
end with clean APIs that can not only power CLIs, but also act as an SDK. This 
would allow many vendors (Rackspace, for example) to willingly drop their SDKs 
and leverage this unified back end.

In my “perfect world” you’d be able to, as an application developer targeting 
Openstack providers, do something close to (code sketch):

from openstack.api.auth import AuthClass
from openstack.api.nova import NovaClient
from openstack.api.nova import NovaAdmin

auth = AuthClass(…)

nova = NovaClient(auth)
nova.server.create(… block=True)

nova_admin = NovaAdmin(auth)
nova_admin.delete_flavor(…)

Downstream vendors could further extend each of these and either create very 
thin shims or meta packages that add provider specific services, e.g:

from openstack.vendor.rackspace.api.auth import AuthClass 

…

The end goals being:

1: provide a common rest client back end for all the things
2: Collapse all common functions (such as error retries) into a common lib
3: DO NOT DICTATE a concurrency system: no eventlet, no greenlet. Just Python; 
allow application developers to use what they need to.
4: Provide a cliff-based extension system for vendors (see the sketch after this list)
5: Document everything.
6: Python 3 & 2 compatible code base
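
As a sketch of goal 4 (module and attribute names are invented for illustration, not an existing API), a vendor command built on cliff could look roughly like this:

# Hypothetical cliff-based command reusing a shared SDK client; the
# client_manager attribute is assumed to be provided by the host CLI app.
from cliff import command


class ListServers(command.Command):
    """List servers through the shared SDK client."""

    def take_action(self, parsed_args):
        nova = self.app.client_manager.nova  # assumed shared client holder
        for server in nova.server.list():
            self.app.stdout.write('%s\n' % server.name)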

As I said earlier; this would build on work already in flight within openstack, 
and additionally within vendors such as rackspace to contribute to this effort 
directly and reduce the proliferation of SDKs/clis/etc. Existing SDKs could be 
end-of-lifed. The team working on this would be comprised of people focused on 
working across the openstack projects not just as dictators of supreme design, 
but actually 

[openstack-dev] [nova][scheduler][metrics] Additional metrics

2013-11-21 Thread Abbass MAROUNI
Hello,

I'm in the process of writing a new scheduling algorithm for openstack
nova.
I have a set of compute nodes that I'm going to filter and weigh according
to some metrics collected from these compute nodes.
I saw nova.compute.resource_tracker and metrics (ram, disk and cpu) that it
collects from compute nodes and updates the rows corresponding to compute
nodes in the database.

I'm planning to write some modules that will collect the new metrics, but
I'm wondering if I need to modify the database schema by adding more
columns to the 'compute_nodes' table for my new metrics. Will this require
some modification to the *compute model*? And how can I use these metrics
during the scheduling process: do I fetch each compute node row from the
database? Is there any easier way around this problem?
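
For what it's worth, here is a minimal sketch of a custom filter and weigher, assuming the Havana-era filter scheduler interfaces and that the new metric ends up somewhere HostState exposes; the host_state.stats['my_metric'] lookup and the threshold are assumptions for the example.

# Minimal sketch, not production code: a filter and a weigher built on the
# filter scheduler hooks, reading a custom metric assumed to be present in
# host_state.stats.
from nova.scheduler import filters
from nova.scheduler import weights


class MyMetricFilter(filters.BaseHostFilter):
    """Reject hosts whose custom metric is above a threshold."""

    def host_passes(self, host_state, filter_properties):
        value = float(host_state.stats.get('my_metric', 0))
        return value < 0.8


class MyMetricWeigher(weights.BaseHostWeigher):
    """Prefer hosts with a lower value of the custom metric."""

    def _weigh_object(self, host_state, weight_properties):
        return -float(host_state.stats.get('my_metric', 0))

Classes like these would then be enabled through the scheduler's filter and weigher configuration options, which may avoid touching the compute_nodes schema at all if the metric can be carried in the existing stats.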

Best Regards,
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Oslo] Future of Key Distribution Server, Trusted Messaging

2013-11-21 Thread Jarret Raim
The Barbican team has been taking a look at the KDS feature and the
proposed patch and I think this may be better placed in Barbican rather
than Keystone. The patch, from what I can tell, seems to require that a
service account create & use a key under its own tenant. In this use case,
Barbican can handle the entire exchange and Keystone can just provide
auth/auth for the process. This would allow for some great additional
features including guaranteed entropy and additional security through the
use of HSMs, auditing / logging, etc.

Barbican is pretty far along at this point and it doesn't appear to be a
huge amount of work to move the patch over as it doesn't seem to use any
Keystone internals.

What would people think about this approach? We're happy to help move the
patch over and I'm certainly happy to merge it as it feels like a good
feature for barbican.




Jarret






On 11/21/13, 12:55 AM, Russell Bryant rbry...@redhat.com wrote:

Greetings,

I'd like to check in on the status of this API addition:

https://review.openstack.org/#/c/40692/

The last comment is:

   propose against stackforge as discussed at summit?

I don't see a session about this and from a quick look, don't see notes
related to it in other session etherpads.

When was this discussed?  Can you summarize it?

Last I heard, this was just being deferred to be merged early in
Icehouse [1].

This is blocking one of the most important security features for
OpenStack, IMO (trusted messaging) [2].  We've been talking about it for
years.  Someone has finally made some real progress on it and I feel
like it has been given little to no attention.

I'm not thrilled about the prospect of this going into a new project for
multiple reasons.

 - Given the priority and how long this has been dragging out, having to
wait for a new project to make its way into OpenStack is not very
appealing.

 - A new project needs to be able to stand on its own legs.  It needs to
have a reasonably sized development team to make it sustainable.  Is
this big enough for that?

What's the thinking on this?

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-August/013992.html
[2] https://review.openstack.org/#/c/37913/

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Great response to yesterday's Gate Priority

2013-11-21 Thread Anita Kuno
Hello Neutron:

Just wanted to extend a hearty pat on the back to all of Neutron for
helping to comply with yesterday's gate prioritization:
http://lists.openstack.org/pipermail/openstack-dev/2013-November/019949.html

We had a lot of really good activity in the -neutron channel:
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2013-11-20.log
timestamp: 2013-11-20T19:23:56 which is really great to see. Way to go,
team!

Following up on this, a couple of important points to keep in mind:

1. patchsets need to pass the check tests before they are approved.

2. please ask another core to approve your patch; hopefully, in times
of urgency, cores will gather in -neutron and be available for patch
approval at short notice.

So let's keep this great bug fixing energy going and turn to
https://bugs.launchpad.net/neutron/+bug/1249065 and
https://bugs.launchpad.net/neutron/+bug/1251448 and see if we can
support Aaron and Maru as they work on these bugs. Being able to
recreate them and/or adding any observations or thoughts to the bug
reports would be really helpful. Offering a patchset for discussion is
also highly recommended, even if it is just a general idea.

Great work, Neutron,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting Nov 21

2013-11-21 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in #openstack-meeting-alt 
channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_November.2C_21

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20131121T18

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][heat][[keystone] RFC: introducing request identification

2013-11-21 Thread haruka tanizawa
Thanks for your reply.

> I'm working on the implementation of instance-tasks-api[0] in Nova and
> this is what I've been moving towards so far.
Yes, I know. I think that is a good idea.

> The API will accept a string to be a part of the task but it will have
> meaning only to the client, not to Nova.  Then if tasks can be searched or
> filtered by that field I think that would meet the requirements you layed
> out above, or is something missing?
Hmmm, as far as I understand, keystone (the keystone work plan blueprint)
generates a request_id for each request.
(I think that is a good idea.)
And a task_id is generated by instance-tasks-api.
Is my understanding of this correct?
If I am missing something, please tell me.

Haruka Tanizawa
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Does cinder support HDS AMS 2300 storage now?

2013-11-21 Thread Lei Zhang
Did you test it?
Does that mean the CLI on both AMS and HUS is the same? Right?


On Fri, Nov 22, 2013 at 4:14 AM, Steven Sonnenberg 
steven.sonnenb...@hds.com wrote:

   On *Thu Nov 21 03:32:10 UTC 2013, Lei asked:*

 I just found the HUS is supported. But I have a old AMS storage machine
 and want to use it.

 So I want to make sure is it possible?

 The answer is that both AMS and HUS arrays are supported.



 Steve Sonnenberg

 Master Solutions Consultant

 Hitachi Data Systems

 Cell: 443-929-6543



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Lei Zhang
Blog: http://xcodest.me
twitter/weibo: @jeffrey4l
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-21 Thread Sam Alba
I hope we can make a decision during this meeting. Is it confirmed for
Friday 9am Pacific?

On Thu, Nov 21, 2013 at 8:24 AM, Chuck Short chuck.sh...@canonical.com wrote:
 Hi

 Has a decision happened when this meeting is going to take place, assuming
 it is still taking place tomorrow.

 Regards
 chuck


 On Mon, Nov 18, 2013 at 7:58 PM, Krishna Raman kra...@gmail.com wrote:


 On Nov 18, 2013, at 4:30 PM, Russell Bryant rbry...@redhat.com wrote:

 On 11/18/2013 06:30 PM, Dan Smith wrote:

 Not having been at the summit (maybe the next one), could somebody
 give a really short explanation as to why it needs to be a separate
 service? It sounds like it should fit within the Nova area. It is,
 after all, just another hypervisor type, or so it seems.


 But it's not just another hypervisor. If all you want from your
 containers is lightweight VMs, then nova is a reasonable place to put
 that (and it's there right now). If, however, you want to expose the
 complex and flexible attributes of a container, such as being able to
 overlap filesystems, have fine-grained control over what is shared with
 the host OS, look at the processes within a container, etc, then nova
 ends up needing quite a bit of change to support that.

 I think the overwhelming majority of folks in the room, after discussing
 it, agreed that Nova is infrastructure and containers is more of a
 platform thing. Making it a separate service lets us define a mechanism
 to manage these that makes much more sense than treating them like VMs.
 Using Nova to deploy VMs that run this service is the right approach,
 IMHO. Clayton put it very well, I think:

  If the thing you want to deploy has a kernel, then you need Nova. If
  your thing runs on a kernel, you want $new_service_name.

 I agree.

 Note that this is just another service under the compute project (or
 program, or whatever the correct terminology is this week).


 The Compute program is correct.  That is established terminology as
 defined by the TC in the last cycle.

 So while
 distinct from Nova in terms of code, development should be tightly
 integrated until (and if at some point) it doesn't make sense.


 And it may share a whole bunch of the code.

 Another way to put this:  The API requirements people have for
 containers include a number of features considered outside of the
 current scope of Nova (short version: Nova's scope stops before going
 *inside* the servers it creates, except file injection, which we plan to
 remove anyway).  That presents a problem.  A new service is one possible
 solution.

 My view of the outcome of the session was not it *will* be a new
 service.  Instead, it was, we *think* it should be a new service, but
 let's do some more investigation to decide for sure.

 The action item from the session was to go off and come up with a
 proposal for what a new service would look like.  In particular, we
 needed a proposal for what the API would look like.  With that in hand,
 we need to come back and ask the question again of whether a new service
 is the right answer.

 I see 3 possible solutions here:

 1) Expand the scope of Nova to include all of the things people want to
 be able to do with containers.

 This is my least favorite option.  Nova is already really big.  We've
 worked to split things out (Networking, Block Storage, Images) to keep
 it under control.  I don't think a significant increase in scope is a
 smart move for Nova's future.

 2) Declare containers as explicitly out of scope and start a new project
 with its own API.

 That is what is being proposed here.

 3) Some middle ground that is a variation of #2.  Consider Ironic.  The
 idea is that Nova's API will still be used for basic provisioning, which
 Nova will implement by talking to Ironic.  However, there are a lot of
 baremetal management things that don't fit in Nova at all, and those
 only exist in Ironic's API.

 I wanted to mention this option for completeness, but I don't actually
 think it's the right choice here.  With Ironic you have a physical
 resource (managed by Ironic), and then instances of an image running on
 these physical resources (managed by Nova).

 With containers, there's a similar line.  You have instances of
 containers (managed either by Nova or the new service) running on
 servers (managed by Nova).  I think there is a good line for separating
 concerns, with a container service on top of Nova.


 Let's ask ourselves:  How much overlap is there between the current
 compute API and a proposed containers API?  Effectively, what's the
 diff?  How much do we expect this diff to change in the coming years?

 The current diff demonstrates a significant clash with the current scope
 of Nova.  I also expect a lot of innovation around containers in the
 next few years, which will result in wanting to do new cool things in
 the API.  I feel that all of this justifies a new API service to best
 position ourselves for the long term.


 +1

 We need to come up with 

Re: [openstack-dev] [ALL] Wheel-enabling patches

2013-11-21 Thread Flavio Percoco

On 21/11/13 18:44 +0100, Flavio Percoco wrote:

Greetings,

There are some patches that add support for building wheels. The patch
adds a `[wheel]` section with `universal = True`

`universal=True` means the application supports py2/py3, which is not
the case for most (all?) openstack projects. So, please, do not
approve those patches.

Glance case: https://review.openstack.org/#/c/57132/
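
For reference, the change in those patches boils down to a snippet like the
following in setup.cfg (reconstructed from the description above, not copied
from the review):

    [wheel]
    universal = True

That flag makes bdist_wheel tag the resulting package as py2.py3 compatible,
which is exactly the claim most of our projects cannot make yet.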


Also, Thanks Monty for pointing this out!


--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unable to see console using VNC on ESX hypervisor

2013-11-21 Thread Ben Nemec
Please do not cross-post messages to multiple mailing lists.  The 
openstack-dev list is for development discussion only, so this belongs 
on the general openstack list.


Thanks.

-Ben

On 2013-11-21 07:33, Rajshree Thorat wrote:

Hi All,

I have configured OpenStack Grizzly to control ESX hypervisor. I can
successfully launch instances but unable to see its console using VNC.

Following is my configuration.

***Compute node :

nova.conf for vnc:

vnc_enabled = true
novncproxy_base_url=http://public_ip_of_controller:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=management_ip_of_compute
vncserver_listen=0.0.0.0

***Controller node:

nova.conf for vnc:

novncproxy_base_url=http://public_ip_of_controller:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=management_ip_of_controller
vncserver_listen=0.0.0.0

root@openstk2:~# tail /var/log/nova/nova-consoleauth.log
2013-11-21 18:40:35.228 7570 AUDIT nova.service [-] Starting
consoleauth node (version 2013.1.3)
2013-11-21 18:40:35.395 INFO nova.openstack.common.rpc.common
[req-179d456d-f306-426f-b65e-242362758f73 None None] Connected to AMQP
server on controller_ip:5672
2013-11-21 18:42:34.012 AUDIT nova.consoleauth.manager
[req-ebc33f34-f57b-492b-8429-39eb3240e5d7
a8f0e9af6e6b4d08b1729acae0510d54 db63e4a448fc426086562638726f9081]
Received Token: 1bcb7408-5c59-466d-a84d-528481af3c37,
{'instance_uuid': u'969e49b0-af3f-45bd-8618-1320ba337962',
'internal_access_path': None, 'last_activity_at': 1385039554.012067,
'console_type': u'novnc', 'host': u'ESX_host_IP', 'token':
u'1bcb7408-5c59-466d-a84d-528481af3c37', 'port': 6031})
2013-11-21 18:42:34.015 INFO nova.openstack.common.rpc.common
[req-ebc33f34-f57b-492b-8429-39eb3240e5d7
a8f0e9af6e6b4d08b1729acae0510d54 db63e4a448fc426086562638726f9081]
Connected to AMQP server on controller_ip:5672
2013-11-21 18:42:34.283 AUDIT nova.consoleauth.manager
[req-518ed47e-5d68-491d-8c57-16952744a2d8 None None] Checking Token:
1bcb7408-5c59-466d-a84d-528481af3c37, True)
2013-11-21 18:42:35.710 AUDIT nova.consoleauth.manager
[req-2d65d8ac-c003-4f4d-9014-9e8995794ad6 None None] Checking Token:
1bcb7408-5c59-466d-a84d-528481af3c37, True)

With same configuration I can connect to vm's console on KVM setup. Is
there any other setting to access console for ESX hypervisor?

Any help would be highly appreciated.

Thanks in advance,

Regards,
Rajshree




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Oslo] Future of Key Distribution Server, Trusted Messaging

2013-11-21 Thread Adam Young

On 11/21/2013 03:08 PM, Jarret Raim wrote:

The Barbican team has been taking a look at the KDS feature and the
proposed patch and I think this may be better placed in Barbican rather
than Keystone. The patch, from what I can tell, seems to require that a
service account create & use a key under its own tenant. In this use case,
Barbican can handle the entire exchange and Keystone can just provide
auth/auth for the process. This would allow for some great additional
features including guaranteed entropy and additional security through the
use of HSMs, auditing / logging, etc.

Barbican is pretty far along at this point and it doesn't appear to be a
huge amount of work to move the patch over as it doesn't seem to use any
Keystone internals.

What would people think about this approach? We're happy to help move the
patch over and I'm certainly happy to merge it as it feels like a good
feature for barbican.


I'm ok with it.

I would, however, like to suggest that we work to make the KDS a 
separately runnable service, so that you don't need to run the rest of 
Barbican to get it.   Barbican was originally envisioned as a 
customer/outward facing project, and KDS is internal (primarily); they 
should be runnable at the same time, without getting confused about 
which service they belong in.  Thus, while I would be OK with KDS under 
the Barbican/CloudKeep program, it might not make sense to bundle it 
with the Barbican server.  Using Barbican as a way to bootstrap the 
deployment for the short term is probably OK, though.










Jarret






On 11/21/13, 12:55 AM, Russell Bryant rbry...@redhat.com wrote:


Greetings,

I'd like to check in on the status of this API addition:

https://review.openstack.org/#/c/40692/

The last comment is:

   propose against stackforge as discussed at summit?

I don't see a session about this and from a quick look, don't see notes
related to it in other session etherpads.

When was this discussed?  Can you summarize it?

Last I heard, this was just being deferred to be merged early in
Icehouse [1].

This is blocking one of the most important security features for
OpenStack, IMO (trusted messaging) [2].  We've been talking about it for
years.  Someone has finally made some real progress on it and I feel
like it has been given little to no attention.

I'm not thrilled about the prospect of this going into a new project for
multiple reasons.

- Given the priority and how long this has been dragging out, having to
wait for a new project to make its way into OpenStack is not very
appealing.

- A new project needs to be able to stand on its own legs.  It needs to
have a reasonably sized development team to make it sustainable.  Is
this big enough for that?

What's the thinking on this?

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-August/013992.html
[2] https://review.openstack.org/#/c/37913/

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Easier way of trying TripleO

2013-11-21 Thread James Slagle
On Tue, Nov 19, 2013 at 6:57 PM, Robert Collins
robe...@robertcollins.net wrote:
 On 20 November 2013 10:40, James Slagle james.sla...@gmail.com wrote:
 I'd like to propose an idea around a simplified and complimentary version of
 devtest that makes it easier for someone to get started and try TripleO.

 I think its a grand idea (in fact it's been floated many times). For a
 while we ran our own jenkins with such downloadable images.

 Right now I think we need to continue the two primary arcs we have:
 CI/CD integration and a CD HA overcloud so that we are being tested,
 and then work on making the artifacts from those tests available.

Yes, understood.

There are people focused on CI/CD work currently, and I don't think this effort
would take away from that focus, other than the time it takes to do patch
reviews, but I don't think that should be a reason not to do it.

It'd be nice to have the images delivered as output from a well tested CD run,
but I'd like to not delay until we have that.  I think we could make this
available quicker than we could get to that point.

Plus, I think if we make this easier to try, we might get more community
participation.  Right now, I don't think we're attracting people who want to
try a test/development tripleo based deployment with devtest.  We're really
only attracting people who want to contribute and develop on tripleo.  Which,
may certainly be as designed at this point.  But I feel we have a lot of
positive momentum coming out of summit, so it makes sense to me to try and give
people something easier to try.

Given that, I think the next steps would be:
 - add a bit more detail in a blueprint and get some feedback on that
 - open up cards to track the work in the tripleo trello
 - start the work on it :).  If it's just me working on it, I'm fine with
   that.  I expect there may be 1 or 2 other folks that might work on it
   as well, but these aren't folks that are looking at the CI/CD stories right
   now.

Any opposition to that approach or other thoughts?


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Javascript development improvement

2013-11-21 Thread Jiri Tomasek

Hi,

I also don't see an issue with using nodejs in the Horizon development 
environment. Is the problem that Django does not differentiate the 
development and production environments by default?
Could the problem be resolved by having two different environments with 
two requirements files etc., similar to what Rails does?


Regarding less, I don't really care what compiler we use as long as it 
works. And if we need to provide uncompiled less for production, then 
let's use Lesscpy.


Jirka

On 11/21/2013 09:21 AM, Ladislav Smola wrote:

Hello,

as long as node won't be Production dependency, it shouldn't be a 
problem, right? I give +1 to that


Regards
Ladislav

On 11/20/2013 05:01 PM, Maxime Vidori wrote:
Hi all, I know it is pretty annoying but I have to resurrect this 
subject.


With the integration of Angularjs into Horizon we will encounter a 
lot of issues with javascript. I ask you to reconsider to bring back 
Nodejs as a development platform. I am not talking about production, 
we are all agree that Node is not ready for production, and we do not 
want it as a backend. But the facts are that we need a lot of its 
features, which will increase the tests and the development. 
Currently, we do not have any javascript code quality: jslint is a 
great tool and can be used easily into node. Angularjs also provides 
end-to-end testing based on nodejs again, testing is important 
especially if we start to put more logic into JS. Selenium is used 
just to run qUnit tests, we can bring back these tests into node and 
have a clean unified testing platform. Tests will be easier to perform.


Finally, (do not punch me in the face) lessc which is used for 
bootstrap is completely integrated into it. I am afraid that modern 
javascript development can not be perform without this tool.


Regards

Maxime Vidori


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-11-21 Thread Jesse Noller


 On Nov 21, 2013, at 10:43 AM, Ben Nemec openst...@nemebean.com wrote:
 
 On 2013-11-21 10:20, Jesse Noller wrote:
 On Nov 20, 2013, at 9:09 AM, Thierry Carrez thie...@openstack.org wrote:
 Hi everyone,
 How should we proceed to make sure UX (user experience) is properly
 taken into account into OpenStack development ? Historically it was hard
 for UX sessions (especially the ones that affect multiple projects, like
 CLI / API experience) to get session time at our design summits. This
 visibility issue prompted the recent request by UX-minded folks to make
 UX an official OpenStack program.
 However, as was apparent in the Technical Committee meeting discussion
 about it yesterday, most of us are not convinced that establishing and
 blessing a separate team is the most efficient way to give UX the
 attention it deserves. Ideally, UX-minded folks would get active
 *within* existing project teams rather than form some sort of
 counter-power as a separate team. In the same way we want scalability
 and security mindset to be present in every project, we want UX to be
 present in every project. It's more of an advocacy group than a
 program imho.
 So my recommendation would be to encourage UX folks to get involved
 within projects and during project-specific weekly meetings to
 efficiently drive better UX there, as a direct project contributor. If
 all the UX-minded folks need a forum to coordinate, I think [UX] ML
 threads and, maybe, a UX weekly meeting would be an interesting first step.
 There would still be an issue with UX session space at the Design
 Summit... but that's a well known issue that affects more than just UX:
 the way our design summits were historically organized (around programs
 only) made it difficult to discuss cross-project and cross-program
 issues. To address that, the plan is to carve cross-project space into
 the next design summit, even if that means a little less topical
 sessions for everyone else.
 Thoughts ?
 Hello again everyone - let me turn this around a little bit, I’m
 working on proposing something based on the Oslo work and
 openstack-client, and overall looking at the *Developer Experience*
 focused around application developers and end-users more so than the
 individual UX issues (configuration, UI, IxD, etc).
 I’ve spoken to Everett and others about discussions had at the summit
 around ideas like developer.openstack.org - and I think the idea is a
 good start towards improving the lives of downstream application
 developers. However, one of the problems (as I and others see it) is
 that there’s a series of disconnects between the needs of the
 individual projects to have a command line client for administrative /
 basic usage and the needs of application developers and end-users (not
 Openstack admins, just end users).
 What I’d like to propose is a team that’s not focused on the
 overarching UX (from horizon to **) but rather a team / group focused
 on some key areas:
 1: Creating an *application developer* focused SDK for openstack services
 2: Unifying the back-end code and common tools for the command line
 clients into 1
 3: Providing extension points for downstream vendors to add custom
 extensions as needed
 4: Based on 1; make deriving project-specific CLIs a matter of
 importing/subclassing and extending
 This is a bit of a hybrid between what the awesome openstackclient
 team has done to make a unified CLI, but takes a step back to focus on
 a unified back end with clean APIs that can not only power CLIs, but
 also act as an SDK. This would allow many vendors (Rackspace, for
 example) to willingly drop their SDKs and leverage this unified back
 end.
 In my “perfect world” you’d be able to, as an application developer
 targeting Openstack providers, do something close to (code sketch):
 from openstack.api.auth import AuthClass
 from openstack.api.nova import NovaClient
 from openstack.api.nova import NovaAdmin
 auth = AuthClass(…)
 nova = NovaClient(auth)
 nova.server.create(… block=True)
 nova_admin = NovaAdmin(auth)
 nova_admin.delete_flavor(…)
 Downstream vendors could further extend each of these and either
 create very thin shims or meta packages that add provider specific
 services, e.g:
 from openstack.vendor.rackspace.api.auth import AuthClass
 …
 The end goals being:
 1: provide a common rest client back end for all the things
 2: Collapse all common functions (such as error retries) into a common lib
 3: DO NOT DICTATE a concurrency system: no eventlet, no greenlet. Just
 Python; allow application developers to use what they need to.
 4: Provide a cliff based extension system for vendors
 5: Document everything.
 6: Python 3 & 2 compatible code base
 As I said earlier; this would build on work already in flight within
 openstack, and additionally within vendors such as rackspace to
 contribute to this effort directly and reduce the proliferation of
 SDKs/clis/etc. Existing SDKs could be end-of-lifed. The team working
 on this would be comprised of 

Re: [openstack-dev] [Cinder][Glance] OSLO update

2013-11-21 Thread Doug Hellmann
On Wed, Nov 20, 2013 at 3:37 AM, Elena Ezhova eezh...@mirantis.com wrote:


 20.11.2013, 06:18, John Griffith john.griff...@solidfire.com:


 On Mon, Nov 18, 2013 at 3:53 PM, Mark McLoughlin mar...@redhat.com
 wrote:

   On Mon, 2013-11-18 at 17:24 +, Duncan Thomas wrote:
   Random OSLO updates with no list of what changed, what got fixed etc
   are unlikely to get review attention - doing such a review is
   extremely difficult. I was -2ing them and asking for more info, but
   they keep popping up. I'm really not sure what the best way of
   updating from OSLO is, but this isn't it.
   Best practice is to include a list of changes being synced, for example:
 
 https://review.openstack.org/54660
 
   Every so often, we throw around ideas for automating the generation of
   this changes list - e.g. cinder would have the oslo-incubator commit ID
   for each module stored in a file in git to tell us when it was last
   synced.
 
   Mark.
 
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Been away on vacation so I'm afraid I'm a bit late on this... but;

 I think the point Duncan is bringing up here is that there are some
 VERY large and significant patches coming from OSLO pulls.  The DB
 patch in particular being over 1K lines of code to a critical portion
 of the code is a bit unnerving to try and do a review on.  I realize
 that there's a level of trust that goes with the work that's done in
 OSLO and synchronizing those changes across the projects, but I think
 a few key concerns here are:

 1. Doing huge pulls from OSLO like the DB patch here are nearly
 impossible to thoroughly review and test.  Over time we learn a lot
 about real usage scenarios and the database and tweak things as we go,
 so seeing a patch set like this show up is always a bit unnerving and
 frankly nobody is overly excited to review it.

 2. Given a certain level of *trust* for the work that folks do on the
 OSLO side in submitting these patches and new additions, I think some
 of the responsibility on the review of the code falls on the OSLO
 team.  That being said there is still the issue of how these changes
 will impact projects *other* than Nova which I think is sometimes
 neglected.  There have been a number of OSLO synchs pushed to Cinder
 that fail gating jobs, some get fixed, some get abandoned, but in
 either case it shows that there wasn't any testing done with projects
 other than Nova (PLEASE note, I'm not referring to this particular
 round of patches or calling any patch set out, just stating a
 historical fact).

 3. We need better documentation in commit messages explaining why the
 changes are necessary and what they do for us.  I'm sorry but in my
 opinion the answer it's the latest in OSLO and Nova already has it
 is not enough of an answer in my opinion.  The patches mentioned in
 this thread in my opinion met the minimum requirements because they at
 least reference the OSLO commit which is great.  In addition I'd like
 to see something to address any discovered issues or testing done with
 the specific projects these changes are being synced to.

 I'm in no way saying I don't want Cinder to play nice with the common
 code or to get in line with the way other projects do things but I am
 saying that I think we have a ways to go in terms of better
 communication here and in terms of OSLO code actually keeping in mind
 the entire OpenStack eco-system as opposed to just changes that were
 needed/updated in Nova.  Cinder in particular went through some pretty
 massive DB re-factoring and changes during Havana and there was a lot
 of really good work there but it didn't come without a cost and the
 benefits were examined and weighed pretty heavily.  I also think that
 some times the indirection introduced by adding some of the
 openstack.common code is unnecessary and in some cases makes things
 more difficult than they should be.

 I'm just not sure that we always do a very good ROI investigation or
 risk assessment on changes, and that opinion applies to ALL changes in
 OpenStack projects, not OSLO specific or anything else.

 All of that being said, a couple of those syncs on the list are
 outdated.  We should start by doing a fresh pull for these and if
 possible add some better documentation in the commit messages as to
 the justification for the patches that would be great.  We can take a
 closer look at the changes and the history behind them and try to get
 some review progress made here.  Mark mentioned some good ideas
 regarding capturing commit ID's from synchronization pulls and I'd
 like to look into that a bit as well.

 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 I see now that updating OSLO is a 

Re: [openstack-dev] [nova][scheduler][metrics] Additional metrics

2013-11-21 Thread Lu, Lianhao

Abbass MAROUNI wrote on 2013-11-21:
 Hello,
 
 I'm in the process of writing a new scheduling algorithm for openstack nova.
 I have a set of compute nodes that I'm going to filter and weigh according to 
 some metrics collected from these compute nodes.
 I saw nova.compute.resource_tracker and metrics (ram, disk and cpu) that it 
 collects from compute nodes and updates the rows
 corresponding to compute nodes in the database.
 
 I'm planning to write some modules that will collect the new metrics but I'm 
 wondering if I need to modify the database schema by adding
 more columns in the 'compute_nodes' table for my new metrics. Will this 
 require some modification to the compute model ? Then how can I
 use these metrics during the scheduling process, do I fetch each compute node 
 row from the database ? Is there any easier way around
 this problem ?
 
 Best Regards,

There are currently some efforts on this:
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling 
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking 
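
As a rough illustration (not part of either blueprint above), a custom
weigher for the Havana-era scheduler looks roughly like the sketch below;
the 'my_metric' attribute, and how it gets populated onto host_state, are
assumptions:

    from nova.scheduler import weights

    class MyMetricWeigher(weights.BaseHostWeigher):
        """Hypothetical weigher preferring hosts with a lower custom metric."""

        def _weigh_object(self, host_state, weight_properties):
            # 'my_metric' is assumed to be put on the host state by the
            # resource tracker / host manager; it is not a standard attribute.
            value = getattr(host_state, 'my_metric', 0.0)
            # A higher weight means "more preferred", so invert the metric.
            return -value

Enabling it is then a matter of listing the class in the
scheduler_weight_classes option; a filter can read the same host_state
attribute from host_passes() in the same way.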

- Lianhao


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-11-21 Thread Tim Bell

Can we make sure that the costs for the end users are also considered as part 
of this ?


-  Configuration management will need further modules

-  Dashboard confusion as we get multiple tabs

-  Accounting, Block Storage, Networking, Orchestration confusion as 
the concepts diverge

Is it really a good idea to create another project, considering the needs of 
the whole OpenStack community?

Tim



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [policy] Logs and notes from first Neutron Policy IRC meeting

2013-11-21 Thread Kyle Mestery (kmestery)
HI all!

The Neutron Policy sub-team had its first IRC meeting today [1].
Relevant logs from the meeting are here [2]. We're hoping to
continue the discussion going forward. I've noted action items
in both the meeting logs and on the wiki page. We'll cover those
for the next meeting we have.

Note: We'll not meet next week due to the Thanksgiving holiday
in the US.

Hope to see everyone on #openstack-meeting-alt at 1600 UTC
on Thursday December 5th! In the meantime, please continue
the discussion in IRC on #openstack-neutron and on the
openstack-dev mailing list.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy
[2] http://eavesdrop.openstack.org/meetings/networking_policy/2013/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] meaning of resource_id in a meter

2013-11-21 Thread Gordon Chung
 In all cases, these are free string fields. `user_id' and `project_id'
 map to Keystone _most of the time_,

i'm sort of torn between the two -- which is why i brought it up i guess. 
i like the flexibility of having resource as a free string field but the 
difference between resource and project/user fields is that we can query 
directly on Resources. when we get a Resource, we get a list of associated 
Meters and if we don't set resource_id in a consistent manner, i worry we 
may be losing some relational information between Meters that groupings 
based off consistent resource_id can provide.

cheers,
gordon chung

openstack, ibm software standards
email: chungg [at] ca.ibm.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Find the compute host on which a VM runs

2013-11-21 Thread Akihiro Motoki
Hi Stefan,

HOST_ID is set by the client. In the most usual case, nova-compute sets
binding:host_id.
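
For illustration, this is roughly what that client-side call looks like with
python-neutronclient (the credentials, port id and host name below are made
up); nova-compute does the equivalent internally once it knows which host
the instance landed on:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')

    port_id = 'PORT-UUID'  # the port whose binding should be recorded
    # binding:host_id is just another attribute in the port body once the
    # portbindings extension is loaded.
    neutron.update_port(port_id,
                        {'port': {'binding:host_id': 'compute-node-1'}})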


On Thu, Nov 21, 2013 at 6:20 PM, Stefan Apostoaie ioss...@gmail.com wrote:
 Hello again,

 I studied the portbindings extension (the quantum.db.portbindings_db and
 quantum.extensions.portbindings modules). However it's unclear for me who
 sets the portbindings.HOST_ID attribute. I ran some tests with OVS: called
 quantum port-create command and the OVSQuantumPluginV2.create_port method
 got called and it had 'binding:host_id': object object at
 memory_address. If I print out the port object I have 'binding:host_id':
 None.

 What other plugins are doing:
 1. extend the quantum.db.portbindings_db.PortBindingMixin class
 2. call the _process_portbindings_create_and_update method in create/update
 port
 What I cannot find is where the portbindings.HOST_ID attribute is being set.

 Regards,
 Stefan


 On Fri, Nov 15, 2013 at 10:57 PM, Mark McClain mark.mccl...@dreamhost.com
 wrote:

 Stefan-

 Your workflow is very similar to many other plugins.  You’ll want to look
 at implementing the port binding extension in your plugin.  The port binding
 extension allows Nova to inform Neutron of the host where the VM is running.

 mark

 On Nov 15, 2013, at 9:55 AM, Stefan Apostoaie ioss...@gmail.com wrote:

  Hello,
 
  I'm creating a Neutron/Quantum plugin to work with a networking
  controller that takes care of the configuration of the virtual networks.
  Basically what we are doing is receive the API calls and forward them to 
  our
  controller to run the required configuration on the compute hosts.
  What I need to know when a create_port call is made to my plugin is on
  which compute host the VM is created (so that our controller will run the
  configuration on that host). Is there a way to find out this information
  from the plugin?
 
  Regards,
  Stefan Apostoaie
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Support for Type-1 Hypervisors

2013-11-21 Thread Mate Lakat
Hi Stackers,

First of all: I am not a TripleO/baremetal expert.

I am looking at ways to fit Type 1 hypervisors into TripleO. I am not
sure how other such hypervisors integrate with OS, but in my case -
XenServer - I have a VM that runs nova - let's call it domU - and that
VM talks to the hypervisor.

When it comes to deployment, I see two tasks:

- Being able to install the hypervisor from an image
For this, the hypervisor has to be able to behave like a good cloud
guest.
- How to put a VM on top of that hypervisor
We can't provision the VM with the hypervisor's nova driver, as the
domU is not yet installed, so let's fake that the domU is a physical
entity.

The idea is that once the hypervisor is installed, we would treat it as
a baremetal pool, so that we can install the domU - as if it were a
baremetal node.

The sequence would look like this:

1.) install seed VM
2.) seed VM is talking to Bare Metal, provisioning a machine
3.) seed VM installs an image that contains the hypervisor
4.) seed VM can use that hypervisor as if it were a baremetal Rack -
We would have a hypervisor-baremetal interface that would provide an
IPMI interface.
5.) seed VM can install a nova image on domU
6.) Life goes on...

This is a really rough workflow, and I am really interested in what you
guys think about it.

Thanks,

Mate
-- 
Mate Lakat

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-21 Thread Christopher Armstrong
On Thu, Nov 21, 2013 at 12:31 PM, Zane Bitter zbit...@redhat.com wrote:

 On 21/11/13 18:44, Christopher Armstrong wrote:


 2) It relies on a plugin being present for any type of thing you
 might want to notify.


 I don't understand this point. What do you mean by a plugin? I was
 assuming OS::Neutron::PoolMember (not LoadBalancerMember -- I went and
 looked up the actual name) would become a standard Heat resource, not a
 third-party thing (though third parties could provide their own through
 the usual heat extension mechanisms).


 I mean it requires a resource type plugin written in Python. So cloud
 operators could provide their own implementations, but ordinary users could
 not.


Okay, but that sounds like a general problem to solve (custom third-party
plugins supplied by the user instead of cloud operators, which is an idea I
really love btw), and I don't see why it should be a point against the idea
of simply using a Neutron::PoolMember in a scaling unit.

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-21 Thread Mike Spreitzer
Thomas Spatzier thomas.spatz...@de.ibm.com wrote on 11/21/2013 02:48:14 
AM:
 ...
 Now thinking more about update scenarios (which we can leave for an
 iteration after the initial deployment is working),

I recommend thinking about UPDATE from the start.  We should have an 
implementation in which CREATE and UPDATE share as much mechanism as is 
reasonable, which requires thinking about UPDATE while designing CREATE.

 in my mental model it
 would be more consistent to have information for handle_create,
 handle_delete, handle_update kinds of events all defined in the
 SoftwareConfig resource.

+1 for putting these on the definition instead of the use; I also noted 
this earlier.

-1 for having an update method.  The orientation to idempotent 
forward-progress operations means that we need only one, which handles 
both CREATE and UPDATE.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-21 Thread Boris Pavlovic
Robert,

It is nice that the community likes the idea of making one scheduler a service.


But I saw in https://etherpad.openstack.org/p/icehouse-external-scheduler
something misleading about
https://blueprints.launchpad.net/nova/+spec/no-db-scheduler

The no-db-scheduler approach is actually the base step that allows us to make
a scalable scheduler-as-a-service without the huge pain of changing too much
in the current architecture. As I mentioned at the HK summit, there are just
a few steps that should be implemented:

1) Scheduler should store all data by itself:
1.1) Keep all data locally + a mechanism that effectively syncs all
schedulers
1.2) A new scheduler RPC method: update_host(host_name, namespace,
values) (see the sketch after this list)
 e.g. in this patchset https://review.openstack.org/#/c/45867/
 It is still WIP:
   During this week we are going to make the final version:
 a) Garbage collector for the sync mechanism
 b) Support of namespaces
 c) Support of sqlalchemy backends (not only memcached)

2) Cleanup Nova scheduler
2.1) Remove compute_node tables
2.2) Remove from db.api the methods that return the state of hosts, and use
the Scheduler instead

3.a) Make the Nova Scheduler a separate service

3.b) Call the Scheduler Service update_host() method from cinder, in the
cinder namespace
4) Move the cinder scheduler rpc methods to the scheduler service
5) Remove the Cinder Scheduler

3.c) Call the Scheduler Service update_host() method from manila, in the
manila namespace
4) Move the manila scheduler rpc methods to the scheduler service
5) Remove the Manila Scheduler
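
A purely hypothetical sketch of the update_host() call described in 1.2
(SchedulerAPI below is only a stand-in for whatever RPC client the patch
above ends up exposing, and the metric values are made up):

    class SchedulerAPI(object):
        """Stand-in for the proposed scheduler service RPC client."""

        def update_host(self, host_name, namespace, values):
            pass  # would send the update over RPC to the scheduler service

    scheduler_rpcapi = SchedulerAPI()
    scheduler_rpcapi.update_host(host_name='compute-1',
                                 namespace='nova',
                                 values={'free_ram_mb': 2048,
                                         'free_disk_gb': 120})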


I don't think that it is a step backward, as it was mentioned in the etherpad.



Best regards,
Boris Pavlovic



On Fri, Nov 22, 2013 at 12:58 AM, Robert Collins
robe...@robertcollins.net wrote:

 https://etherpad.openstack.org/p/icehouse-external-scheduler

 I'm looking for 4-5 folk who have:
  - modest Nova skills
  - time to follow a fairly mechanical (but careful and detailed work
 needed) plan to break the status quo around scheduler extraction

 And of course, discussion galore about the idea :)

 Cheers,
 Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-11-21 Thread Robert Collins
On 22 November 2013 15:57, Boris Pavlovic bpavlo...@mirantis.com wrote:
 Robert,

 It is nice that community like idea of making one scheduler as a service.


 But I saw in https://etherpad.openstack.org/p/icehouse-external-scheduler
 some misleading about
 https://blueprints.launchpad.net/nova/+spec/no-db-scheduler

Ok, lets fix that up.

 Approach no-db-scheduler is actually base step that allows us to make
 scalable scheduler as a service without huge pain of changing too much in
 current architecture: As I mention on HK summit, there are just few steps,
 that should be implemented:
..
 3.a) Make Nova Scheduler as a separated service

This is the bit that all previous efforts on the scheduler have failed
to deliver - and it's just the bit that my proposal tackles.
...

 I don't think that it is step backward as it was mentioned in etherpad..

I don't see any mention of a step backwards.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Bugs

2013-11-21 Thread Christopher Yeoh
On Fri, Nov 22, 2013 at 2:28 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:



 On Wednesday, November 20, 2013 11:53:45 PM, Clark Boylan wrote:

 On Wed, Nov 20, 2013 at 9:43 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
 wrote:

 Hi Joe,

 2013/11/20 Joe Gordon joe.gord...@gmail.com:

 Hi All,

 As many of you have noticed the gate has been in very bad shape over the
 past few days.  Here is a list of some of the top open bugs (without
 pending
 patches, and many recent hits) that we are hitting.  Gate won't be
 stable,
 and it will be hard to get your code merged, until we fix these bugs.

 1) https://bugs.launchpad.net/bugs/1251920
 nova
 468 Hits


 Can we know the frequency of each failure?
 I'm trying 1251920 and putting the investigation tempest patch.
   https://review.openstack.org/#/c/57193/

 The patch can avoid this problem 4 times, but I am not sure whether this
 is worthwhile or not.


 Thanks
 Ken'ichi Ohmichi

 ---

 2) https://bugs.launchpad.net/bugs/1251784
 neutron, Nova
 328 Hits
 3) https://bugs.launchpad.net/bugs/1249065
 neutron
122 hits
 4) https://bugs.launchpad.net/bugs/1251448
 neutron
 65 Hits

 Raw Data:


 Note: If a bug has any hits for anything besides failure, it means the
 fingerprint isn't perfect.

 Elastic recheck known issues
 Bug: https://bugs.launchpad.net/bugs/1251920 =
 message:assertionerror:
 console output was empty AND filename:console.html Title: Tempest
 failures due to failure to return console logs from an instance Project:
 Status nova: Confirmed Hits FAILURE: 468 Bug:
 https://bugs.launchpad.net/bugs/1251784 = message:Connection to
 neutron
 failed: Maximum attempts reached AND filename:logs/screen-n-cpu.txt
 Title: nova+neutron scheduling error: Connection to neutron failed:
 Maximum
 attempts reached Project: Status neutron: New nova: New Hits FAILURE:
 328
 UNSTABLE: 13 SUCCESS: 275 Bug: https://bugs.launchpad.net/bugs/1240256=
 message: 503 AND filename:logs/syslog.txt AND
 syslog_program:proxy-server Title: swift proxy-server returning 503
 during
 tempest run Project: Status openstack-ci: Incomplete swift: New
 tempest: New
 Hits FAILURE: 136 SUCCESS: 83
 Pending Patch Bug: https://bugs.launchpad.net/bugs/1249065 =
 message:No
 nw_info cache associated with instance AND filename:logs/screen-n-api.
 txt
 Title: Tempest failure: tempest/scenario/test_snapshot_pattern.py
 Project:
 Status neutron: New nova: Confirmed Hits FAILURE: 122 Bug:
 https://bugs.launchpad.net/bugs/1252514 = message:Got error from
 Swift:
 put_object AND filename:logs/screen-g-api.txt Title: glance doesn't
 recover if Swift returns an error Project: Status devstack: New glance:
 New
 swift: New Hits FAILURE: 95
 Pending Patch Bug: https://bugs.launchpad.net/bugs/1244255 =
 message:NovaException: Unexpected vif_type=binding_failed AND
 filename:logs/screen-n-cpu.txt Title: binding_failed because of l2
 agent
 assumed down Project: Status neutron: Fix Committed Hits FAILURE: 92
 SUCCESS: 29 Bug: https://bugs.launchpad.net/bugs/1251448 = message:
 possible networks found, use a Network ID to be more specific. (HTTP
 400)
 AND filename:console.html Title: BadRequest: Multiple possible
 networks
 found, use a Network ID to be more specific. Project: Status neutron:
 New
 Hits FAILURE: 65 Bug: https://bugs.launchpad.net/bugs/1239856 =
 message:tempest/services AND message:/images_client.py AND
 message:wait_for_image_status AND filename:console.html Title:
 TimeoutException: Request timed out on
 tempest.api.compute.images.test_list_image_filters.
 ListImageFiltersTestXML
 Project: Status glance: New Hits FAILURE: 62 Bug:
 https://bugs.launchpad.net/bugs/1235435 = message:One or more ports
 have
 an IP allocation from this subnet AND message: SubnetInUse: Unable to
 complete operation on subnet AND filename:logs/screen-q-svc.txt
 Title:
 'SubnetInUse: Unable to complete operation on subnet UUID. One or more
 ports
 have an IP allocation from this subnet.' Project: Status neutron:
 Incomplete
 nova: Fix Committed tempest: New Hits FAILURE: 48 Bug:
 https://bugs.launchpad.net/bugs/1224001 =
 message:tempest.scenario.test_network_basic_ops AssertionError: Timed
 out
 waiting for AND filename:console.html Title: test_network_basic_ops
 fails
 waiting for network to become available Project: Status neutron: In
 Progress
 swift: Invalid tempest: Invalid Hits FAILURE: 42 Bug:
 https://bugs.launchpad.net/bugs/1218391 = message:Cannot
 'createImage'
 AND filename:console.html Title:
 tempest.api.compute.images.test_images_oneserver.
 ImagesOneServerTestXML.test_delete_image_that_is_not_yet_active
 spurious failure Project: Status nova: Confirmed swift: Confirmed
 tempest:
 Confirmed Hits FAILURE: 25



 best,
 Joe Gordon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [openstack][nova][social-apects] Social aspects shouldn't impact on dev process

2013-11-21 Thread David Ripton

On 11/20/2013 02:06 AM, Boris Pavlovic wrote:


I faced some social problems in community.

We started working on purge engine for DB (before HK summit)

This is very important, because at this moment we don't have any working
way to purge DB... so admins should make it by hand.


And we made this BP (in october)
https://blueprints.launchpad.net/nova/+spec/db-purge-engine

And made patch that makes this work.
But only because our BP wasn't approved we got -2 from Joe Gordon.
(https://review.openstack.org/#/c/51523/ ) And there was long discussion
to remove this -2.

And now after summit David Ripton made the similar BP (probably he
didn't know):
https://blueprints.launchpad.net/nova/+spec/db-purge2
That is already approved by Joe Gordon. (that already know that we are
working on same problem)

Why?

(btw question about Purge Engine was raised by me on the summit and
community accepted that)


I discussed this with Boris on IRC yesterday.  When I volunteered to 
write a DB purger at Summit, I wasn't aware that there was already one 
actively in progress.  (So many patches around the end of Havana.)  When 
I went to file a blueprint and noticed the existing db-purge blueprint, 
I saw that its patch had been -2'd and figured it was dead.  But as long 
as Boris is working to actively improve that patch (he's on vacation now 
but said he'd probably have something on Monday), I won't submit a patch 
for the competing blueprint.  Instead, I'll work to make sure Boris's 
code meets everyone's requirements (some that I got from Joe Gordon and 
Phil Day are mentioned in db-purge2), and when it does I'll withdraw the 
db-purge2 blueprint and retarget remove-db-archiving to depend on 
Boris's blueprint instead.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Search Project - summit follow up

2013-11-21 Thread Dmitri Zimin(e) | StackStorm
On Wed, Nov 20, 2013 at 2:11 PM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Wed, Nov 20, 2013 at 1:06 PM, Dmitri Zimin(e) | StackStorm 
 d...@stackstorm.com wrote:

 Thanks Terry for highlighting this:

 Yes, tenant isolation is the must. It's not reflected in the prototype -
 it queries Solr directly; but the proper implementation will go through the
 query API service, where ACL will be applied.

 UX folks are welcome to comment on expected queries.

 I think the key benefit of cross-resource index over querying DBs is that
 it saves the clients from implementing complex queries case by case,
 leaving flexibility to the user.


 I question the need for this service, as this service **should** very much
 be dependent on the clients for this functionality. Expecting to query
 backends directly must be a misunderstanding somewhere... Start with a
 specification for filtering across all services and advocate for it on both
 existing and new APIs.



Dolph, thanks for the suggestion: we will begin drafting the API on the
wiki.

Just to be clear: this is not filtering. Existing filtering APIs are
[getting] sufficient. This is a full text search, which doesn't exist yet.

Swift is now considering a Search API, ideologically similar, but limited to
Object Storage metadata [1]. Search middleware can make it generic and
extensible. And yes, middleware may be a better term, as this is not a
service like nova or cinder, but a layer on top.

Do we need to clarify where search middleware shall live?
Or do we question wether there is the need for search functionality?

What else shall we do to make the discussion forward?

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-November/019014.html



 -- Dmitri.




 On Wed, Nov 20, 2013 at 2:27 AM, Thierry Carrez thie...@openstack.org wrote:

 Dmitri Zimin(e) | StackStorm wrote:
  Hi Stackers,
 
  The project Search is a service providing fast full-text search for
  resources across OpenStack services.
  [...]

 At first glance this looks slightly scary from a security / tenant
 isolation perspective. Most search results would be extremely
 user-specific (and leaking data from one user to another would be
 catastrophic), so the benefits of indexing (vs. querying DB) would be
 very limited ?

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 -Dolph

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Easier way of trying TripleO

2013-11-21 Thread Robert Collins
On 22 November 2013 05:55, James Slagle james.sla...@gmail.com wrote:
 On Tue, Nov 19, 2013 at 6:57 PM, Robert Collins

 Right now I think we need to continue the two primary arcs we have:
 CI/CD integration and a CD HA overcloud so that we are being tested,
 and then work on making the artifacts from those tests available.

 Yes, understood.

 There are people focused on CI/CD work currently, and I don't think this
 effort would take away from that focus, other than the time it takes to do
 patch reviews, but I don't think that should be a reason not to do it.

It won't *help* with that work:
https://etherpad.openstack.org/p/tripleo-test-cluster - there is more
there that could use help, and iterations 3 and 4 are not even
started. And we can't deploy this until we have the cloud to run the
nodepool instances in, which means:
 - the rebuild work (Joe and Roman are on that)
 - migrating our elements to split out state and config properly with
use-ephemeral
 - The HA spec
 - Heat rolling upgrades.

There is a /tonne/ of stuff that will drive TripleO directly towards
higher quality and a wider set of use cases. I'd really like to see
you putting time into one of those things, because they will all
support the virtuous circle of folk using it - contributing to it.

 It'd be nice to have the images delivered as output from a well tested CD run,
 but I'd like to not delay until we have that.  I think we could make this
 available quicker than we could get to that point.

Sure. But if it's not tested, it's broken, right? Our previous
experience with such images is that we had to say 'build your own'
because we didn't know whether a given image would work or not.

 Plus, I think if we make this easier to try, we might get more community
 participation.  Right now, I don't think we're attracting people who want to
 try a test/development tripleo based deployment with devtest.  We're really
 only attracting people who want to contribute and develop on tripleo.  Which,
 may certainly be as designed at this point.  But I feel we have a lot of
 positive momentum coming out of summit, so it makes sense to me to try and
 give people something easier to try.

I totally get that, and I would love to see that too. However, I'm
pushing back for three reasons:
 - dib is quite reliable: the issues I see with folk that ask for
support are bad /contents/. And bad Fedora mirrors :(.
 - being able to use something that doesn't meet your needs isn't very
*ahem* useful.
 - we can't push in a meaningful way on a lot of things until the
current core stories are complete, and while folk will work on what
they want to work on, I want to encourage folk to collaborate on the
things the project /really needs/ right now.

 Given that, I think the next steps would be:
  - add a bit more detail in a blueprint and get some feedback on that
  - open up cards to track the work in the tripleo trello
  - start the work on it :).  If it's just me working on it, I'm fine with
that.  I expect there may be 1 or 2 other folks that might work on it
as well, but these aren't folks that are looking at the CI/CD stories right
now.

If it gets /more/ people working on this instead of on the CI or CD
stories, then I think it's actively harmful as opposed to not-helpful.
I really don't want to see that happen. Let's focus in as a team and
get the hard yards done for a stack we can recommend to anyone and
*then* spread out sideways into useful additional stories like this.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Find the compute host on which a VM runs

2013-11-21 Thread Robert Kukura
On 11/21/2013 04:20 AM, Stefan Apostoaie wrote:
 Hello again,
 
 I studied the portbindings extension (the quantum.db.portbindings_db and
 quantum.extensions.portbindings modules). However it's unclear to me
 who sets the portbindings.HOST_ID attribute. I ran some tests with OVS:
 I called the quantum port-create command and
 the OVSQuantumPluginV2.create_port method got called and it had
 'binding:host_id': <object object at memory_address>. If I print out
 the port object I have 'binding:host_id': None.
 
 What other plugins are doing:
 1. extend the quantum.db.portbindings_db.PortBindingMixin class
 2. call the _process_portbindings_create_and_update method in
 create/update port

Take a look at how the ML2 plugin handles port binding and uses
binding:host_id with its set of registered MechanismDrivers. It does not
use the mixin class because the values of binding:vif_type and other
portbinding attributes vary depending on what MechanismDriver binds the
port.

In fact, you may want to consider implementing an ML2 MechanismDriver
rather than an entire new monolithic plugin - it will save you a lot of
work, initially and in the longer term!

 What I cannot find is where the portbindings.HOST_ID attribute is being set.

It's set by nova, either on port creation or as an update to an existing
port. See allocate_for_instance() and
_populate_neutron_extension_values() in nova/network/neutronv2/api.py.
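
For illustration only, the shape of what a plugin ends up doing is roughly
the following - the controller object and its notify_port_host() call are
placeholders for your backend, not actual ML2 or Neutron API:

HOST_ID = 'binding:host_id'  # i.e. the portbindings.HOST_ID attribute


class ExamplePlugin(object):
    """Sketch: forward the bound host to an external controller."""

    def __init__(self, controller):
        self.controller = controller  # placeholder client for your backend

    def _notify_if_host_known(self, port):
        host = port.get(HOST_ID)
        if not host:
            # Nova may create the port first and only supply binding:host_id
            # later via update_port, so a missing host here is normal.
            return
        self.controller.notify_port_host(port_id=port.get('id'), host=host)

    def create_port(self, context, port):
        port_data = port['port']  # REST body is {'port': {...}}
        # ... persist the port as usual (DB mixins etc.), then:
        self._notify_if_host_known(port_data)
        return port_data

    def update_port(self, context, port_id, port):
        port_data = dict(port['port'], id=port_id)
        self._notify_if_host_known(port_data)
        return port_data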

-Bob

 
 Regards,
 Stefan
 
 
 On Fri, Nov 15, 2013 at 10:57 PM, Mark McClain
 mark.mccl...@dreamhost.com wrote:
 
 Stefan-
 
 Your workflow is very similar to many other plugins.  You’ll want to
 look at implementing the port binding extension in your plugin.  The
 port binding extension allows Nova to inform Neutron of the host
 where the VM is running.
 
 mark
 
 On Nov 15, 2013, at 9:55 AM, Stefan Apostoaie ioss...@gmail.com
 wrote:
 
  Hello,
 
  I'm creating a Neutron/Quantum plugin to work with a networking
 controller that takes care of the configuration of the virtual
 networks. Basically what we are doing is receive the API calls and
 forward them to our controller to run the required configuration on
 the compute hosts.
  What I need to know when a create_port call is made to my plugin
 is on which compute host the VM is created (so that our controller
 will run the configuration on that host). Is there a way to find out
 this information from the plugin?
 
  Regards,
  Stefan Apostoaie
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Friday Dec 20th - Doc Bug Day

2013-11-21 Thread Tom Fifield

All,

This month, docs reaches 500 bugs, making it the 2nd-largest project by 
bug count in all of OpenStack. Yes, it beats Cinder, Horizon, Swift, 
Keystone and Glance, and will soon surpass Neutron.


In order to start the new year in a slightly better state, we have 
arranged a bug squash day:



Friday, December 20th


https://wiki.openstack.org/wiki/Documentation/BugDay


Join us in #openstack-doc whenever you get to your computer, and let's 
beat the bugs :)



For those who are unfamiliar:
Bug days are a day-long event where all the OpenStack community focuses 
exclusively on a task around bugs corresponding to the bug day topic. 
With so many community members available around the same task, these 
days are a great way to start joining the OpenStack community.



Regards,


Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-11-21 Thread Zhongyue Luo
Thanks, I'll give it a try.


On Fri, Nov 22, 2013 at 2:35 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Hello,

 Please tell me if your experience is similar to what I experienced:

 1.  I would see *at most one* 'MySQL server has gone away' error for
 each process that was spawned as an API worker.  I saw them within a
 minute of spawning the workers and then I did not see these errors
 anymore until I restarted the server and spawned new processes.

 2.  I noted in patch set 7 the line of code that completely fixed this
 for me.  Please confirm that you have applied a patch that includes
 this fix.

 https://review.openstack.org/#/c/37131/7/neutron/wsgi.py

 3.  I did not change anything with pool_recycle or idle_interval in my
 config files.  All I did was set api_workers to the number of workers
 that I wanted to spawn.  The line of code with my comment in it above
 was sufficient for me.

 It could be that there is another cause for the errors that you're
 seeing.  For example, is there a max_connections setting in MySQL that
 might be exceeded when you spawn multiple workers?  More detail would
 be helpful.
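
 (For reference, a rough sketch of the fork-then-dispose pattern described
 further down this thread -- illustrative only, not the actual neutron/wsgi.py
 change; serve_requests() is a stand-in and a MySQL driver is assumed:)

import os

from sqlalchemy import create_engine

# Illustrative connection URL; assumes a MySQL driver is installed.
engine = create_engine("mysql://user:password@dbhost/neutron")


def serve_requests():
    # Stand-in for the real WSGI worker loop.
    pass


def spawn_api_worker():
    pid = os.fork()
    if pid == 0:
        # Child process: throw away connections inherited from the parent so
        # the pool opens fresh ones instead of reusing sockets the server
        # will drop ("MySQL server has gone away").
        engine.dispose()
        serve_requests()
        os._exit(0)
    return pid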

 Cheers,
 Carl

 On Wed, Nov 20, 2013 at 7:40 PM, Zhongyue Luo zhongyue@intel.com
 wrote:
  Carl,
 
   By 2006 I mean the 'MySQL server has gone away' error code.
  
   The error message was still appearing when idle_timeout was set to 1, and
   the quantum API server did not work in my case.
 
  Could you perhaps share your conf file when applying this patch?
 
  Thanks.
 
 
 
  On Thu, Nov 21, 2013 at 3:34 AM, Carl Baldwin c...@ecbaldwin.net
 wrote:
 
  Hi, sorry for the delay in response.  I'm glad to look at it.
 
  Can you be more specific about the error?  Maybe paste the error your
  seeing in paste.openstack.org?  I don't find any reference to 2006.
  Maybe I'm missing something.
 
  Also, is the patch that you applied the most recent?  With the final
  version of the patch it was no longer necessary for me to set
  pool_recycle or idle_interval.
 
  Thanks,
  Carl
 
  On Tue, Nov 19, 2013 at 7:14 PM, Zhongyue Luo zhongyue@intel.com
  wrote:
   Carl, Yingjun,
  
    I'm still getting the 2006 error even after configuring idle_interval to 1.
   
    I applied the patch to the RDO havana dist on centos 6.4.
   
    Are there any other options I should be considering, such as min/max pool
    size or use_tpool?
  
   Thanks.
  
  
  
   On Sat, Sep 7, 2013 at 3:33 AM, Baldwin, Carl (HPCS Neutron)
   carl.bald...@hp.com wrote:
  
    This pool_recycle parameter is already configurable using the idle_timeout
    configuration variable in neutron.conf.  I tested this with a value of 1
    as suggested and it did get rid of the mysql server gone away messages.
   
    This is a great clue but I think I would like a long-term solution that
    allows the end-user to still configure this like they were before.
   
    I'm currently thinking along the lines of calling something like
    pool.dispose() in each child immediately after it is spawned.  I think
    this should invalidate all of the existing connections so that when a
    connection is checked out of the pool a new one will be created fresh.
   
    Thoughts?  I'll be testing.  Hopefully, I'll have a fixed patch up soon.
  
   Cheers,
   Carl
  
   From:  Yingjun Li liyingjun1...@gmail.com
   Reply-To:  OpenStack Development Mailing List
   openstack-dev@lists.openstack.org
   Date:  Thursday, September 5, 2013 8:28 PM
   To:  OpenStack Development Mailing List
   openstack-dev@lists.openstack.org
   Subject:  Re: [openstack-dev] [Neutron] The three API server
   multi-worker
   process patches.
  
  
    +1 for Carl's patch, and I have abandoned my patch.
   
    About the `MySQL server gone away` problem, I fixed it by setting
    'pool_recycle' to 1 in db/api.py.
  
    On Friday, September 6, 2013, Nachi Ueno wrote:
  
   Hi Folks
  
    We chose https://review.openstack.org/#/c/37131/ -- this is the patch to
    go on with.
    We are also discussing in this patch.
  
   Best
   Nachi
  
  
  
   2013/9/5 Baldwin, Carl (HPCS Neutron) carl.bald...@hp.com:
Brian,
   
As far as I know, no consensus was reached.
   
A problem was discovered that happens when spawning multiple processes.
The mysql connection seems to go away after between 10-60 seconds in my
testing, causing a seemingly random API call to fail.  After that, it is
okay.  This must be due to some interaction between forking the process
and the mysql connection pool.  This needs to be solved but I haven't had
the time to look into it this week.
   
I'm not sure if the other proposal suffers from this problem.
   
Carl
   
On 9/4/13 3:34 PM, Brian Cline bcl...@softlayer.com wrote:
   
    Was any consensus on this ever reached? It appears both reviews are still
    open. I'm partial to review 37131 as it attacks the problem more
    concisely, and, as mentioned, combined the efforts of the two more
    effective patches. I would echo Carl's 

Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-11-21 Thread Balaji P
Hi,

Are we having an IRC meeting every week? Can anyone please update me on the 
current plan based on the discussions we had at the Havana Design Summit?

Thanks in advance.

Regards,
Balaji.P

From: Regnier, Greg J [mailto:greg.j.regn...@intel.com]
Sent: Friday, October 11, 2013 3:30 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases


The use cases defined (so far) cover these cases:
- Single service instance in a single service VM (agree this avoids 
  complexity pointed out by Harshad)
- Multiple service instances on a single service VM (provides 
  flexibility, extensibility)

Not explicitly covered is the case of a logical service spanning more than one VM.
This seems like a potentially common case, and can be added.
But implementation-wise, when a service wants to span multiple service VMs, it 
seems that is a policy and scheduling decision to be made by the service 
plugin. Question: Does the multiple VM use case put any new requirements on 
this framework (within its scope as a helper library for service plugins)?
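
For illustration, a toy model of the two covered cases plus the per-tenant
pinning discussed below - class and field names are invented for the sketch,
not the proposed helper-library API:

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ServiceInstance:
    instance_id: str
    tenant_id: str


@dataclass
class ServiceVM:
    vm_id: str
    owner_tenant_id: str                     # e.g. the operator's tenant
    pinned_tenant_id: Optional[str] = None   # use case 2: only this tenant's instances
    instances: List[ServiceInstance] = field(default_factory=list)

    def can_host(self, instance: ServiceInstance) -> bool:
        # A VM with no pinning may host instances from any tenant;
        # a pinned VM only hosts instances from that single tenant.
        if self.pinned_tenant_id is None:
            return True
        return instance.tenant_id == self.pinned_tenant_id

    def add(self, instance: ServiceInstance) -> None:
        if not self.can_host(instance):
            raise ValueError("service VM is pinned to another tenant")
        self.instances.append(instance)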

Thx,
Greg


From: Bob Melander (bmelande) 
[mailto:bmela...@cisco.com]
Sent: Thursday, October 10, 2013 12:48 PM
To: OpenStack Development Mailing List
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Possibly, but not necessarily. Some VMs have a large footprint and 
multi-service capability, and physical devices with capabilities sufficient for 
tenant isolation are not that rare (especially if tenants can only indirectly 
control them through a cloud service API).

My point is that if we take into account, in the design, the case where 
multiple service instances are hosted by a single service VM, we'll be well 
positioned to support other use cases. But that is not to say the 
implementation effort should target that aspect initially.

Thanks,
 Bob

On 10 Oct 2013, at 15:12, Harshad Nakil 
hna...@contrailsystems.com wrote:
Won't it be simpler to keep a service instance as one or more VMs, rather than 
one VM being many service instances?
Usually an appliance is collectively (all its functions) providing a service, 
like a firewall or load balancer. An appliance is packaged as a VM.
It will be easier to manage,
it will be easier for the provider to charge,
and it will be easier to control resource allocation.
Once an appliance is a physical device you have all of the above issues, and 
multi-tenancy implementation is usually weak in most physical appliances.

Regards
-Harshad


On Oct 10, 2013, at 12:44 AM, Bob Melander (bmelande) 
bmela...@cisco.com wrote:
Harshad,

By service instance I referred to the logical entities that Neutron creates 
(e.g. Neutron's router). I see a service VM as a (virtual) host where one or 
several service instances can be placed.
The service VM (at least if managed through Nova) will belong to a tenant and 
the service instances are owned by tenants.

If the service VM tenant is different from service instance tenants (which is a 
simple way to hide the service VM from the tenants owning the service 
instances) then it is not clear to me how the existing access control in 
openstack will support pinning the service VM to a particular tenant owning a 
service instance.

Thanks,
Bob

From: Harshad Nakil 
hna...@contrailsystems.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date: Wednesday, 9 October 2013 18:56
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

An admin creating a service instance for a tenant could be a common use case. But 
ownership of the service can be controlled via the already existing access control 
mechanisms in OpenStack. If the service instance belonged to a particular 
project, then other tenants by definition should not be able to use this 
instance.
On Tue, Oct 8, 2013 at 11:34 PM, Bob Melander (bmelande) 
bmela...@cisco.com wrote:
For use case 2, the ability to pin an admin/operator-owned VM to a particular 
tenant can be useful.
I.e., the service VMs are owned by the operator but a particular service VM 
will only allow service instances from a single tenant.

Thanks,
Bob

From: Regnier, Greg J 
greg.j.regn...@intel.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date: Tuesday, 8 October 2013 23:48
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Hi,

Re: blueprint:  

[openstack-dev] [Tempest] Review request for tempest patches

2013-11-21 Thread Malawade, Abhijeet
Hi all,

I have submitted patches for tempest. I have also addressed the review comments 
given on the patches.
It will be great if someone can review the following patches.

https://review.openstack.org/#/c/47078/
https://review.openstack.org/#/c/47079/


Thanks,
Abhijeet Malawade



__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest] Review request for tempest patches

2013-11-21 Thread Robert Collins
Hi, I'm glad that you are contributing patches and want to see them
landed. We discourage sending reminders to this list because:

 - reviewers often already have emails sent to them by gerrit
 - or they review by polling on a regular basis
 - if everyone did this we'd have hundreds of emails a day that only
serve to point reviewers at gerrit - no actual value in the threads on
the list.

So have a little patience; if it's urgent, ping someone in
#openstack-qa on IRC.

Thanks!
-Rob

On 22 November 2013 19:52, Malawade, Abhijeet
abhijeet.malaw...@nttdata.com wrote:
 HI all,



 I have submitted patches for tempest. I have also addressed review comments
 given on patches.

 It will be great if someone can review the following patches.



 https://review.openstack.org/#/c/47078/

 https://review.openstack.org/#/c/47079/





 Thanks,

 Abhijeet Malawade






 __
 Disclaimer:This email and any attachments are sent in strictest confidence
 for the sole use of the addressee and may contain legally privileged,
 confidential, and proprietary data. If you are not the intended recipient,
 please advise the sender by replying promptly to this email and then delete
 and destroy this email and any attachments without any further use, copying
 or forwarding

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-21 Thread Thomas Spatzier
Steve Baker sba...@redhat.com wrote on 21.11.2013 21:19:07:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 21.11.2013 21:25
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/21/2013 08:48 PM, Thomas Spatzier wrote:
  Excerpts from Steve Baker's message on 21.11.2013 00:00:47:
  From: Steve Baker sba...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 21.11.2013 00:04
  Subject: Re: [openstack-dev] [Heat] HOT software configuration
  refined after design summit discussions
snip
  I thought about the name SoftwareApplier some more and while it is clear
  what it does (it applies a software config to a server), the naming is not
  really consistent with all the other resources in Heat. Every other
  resource type is called after the thing that you get when the template
  gets instantiated (a Server, a FloatingIP, a VolumeAttachment etc). In
  case of SoftwareApplier what you actually get from a user perspective is a
  deployed instance of the piece of software described by a SoftwareConfig.
  Therefore, I was calling it SoftwareDeployment originally, because you get
  a software deployment (according to a config). Any comments on that name?
 SoftwareDeployment is a better name, apart from those 3 extra letters.
 I'll rename my POC.  Sorry nannj, you'll need to rename them back ;)

Ok, I'll change the name back in the wiki :-)


  If we think this thru with respect to remove-config (even though this
  needs more thought), a SoftwareApplier (that thing itself) would not
  really go to state DELETE_IN_PROGRESS during an update. It is always
  there on the VM but the software it deploys gets deleted and then
  reapplied or whatever ...
 
  Now thinking more about update scenarios (which we can leave for an
  iteration after the initial deployment is working), in my mental model it
  would be more consistent to have information for handle_create,
  handle_delete, handle_update kinds of events all defined in the
  SoftwareConfig resource. A SoftwareConfig represents configuration
  information for one specific piece of software, e.g. a web server. So it
  could provide all the information you need to install it, to uninstall
  it, or to update its config. By updating the SoftwareApplier's (or
  SoftwareDeployment's - my preferred name) state at runtime, the
  in-instance tools would grab the respective script or whatever and run it.
 
  So SoftwareConfig could look like:
 
  resources:
    my_webserver_config:
      type: OS::Heat::SoftwareConfig
      properties:
        http_port:
          type: number
        # some more config props
 
        config_create: http://www.example.com/my_scripts/webserver/install.sh
        config_delete: http://www.example.com/my_scripts/webserver/uninstall.sh
        config_update: http://www.example.com/my_scripts/webserver/applyupdate.sh
 
 
  At runtime, when a SoftwareApplier gets created, it looks for the
  'config_create' hook and triggers that automation. When it gets deleted,
  it looks for the 'config_delete' hook and so on. Only config_create is
  mandatory.
  I think that would also give us nice extensibility for future use cases.
  For example, Heat today does not support something like stop-stack or
  start-stack, which would be pretty useful though. If we have it one day,
  we would just add a 'config_start' hook to the SoftwareConfig.
 
 
  [1] https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec
  [2] https://blueprints.launchpad.net/heat/+spec/hot-software-config
 
 With the caveat that what we're discussing here is a future
enhancement...

 The problem I see with config_create/config_update/config_delete in a
 single SoftwareConfig is that we probably can't assume these 3 scripts
 consume the same inputs and produce the same outputs.

We could make it a convention that creators of software configs have to use
the same signature for the automation of create, delete etc. Or at least
input param names must be the same, while some pieces might take a subset
only. E.g. delete will probably take fewer inputs. This way we could have a
self-contained config.
As you said above, implementation-wise this is probably a future
enhancement, so once we have the config_create handling in place we could
just do a PoC patch on top and try it out.
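
To make the hook idea concrete, here is a minimal sketch (purely illustrative,
not an existing Heat interface) of how an in-instance tool could dispatch the
hooks from the SoftwareConfig sketch quoted above; fetching the script over
HTTP and running it with /bin/sh are assumptions of the sketch:

import subprocess
import urllib.request

HOOK_FOR_ACTION = {
    "CREATE": "config_create",
    "DELETE": "config_delete",
    "UPDATE": "config_update",
}

def apply_config(software_config, action):
    hook = HOOK_FOR_ACTION[action]
    script_url = software_config.get(hook)
    if script_url is None:
        if hook == "config_create":
            raise ValueError("config_create is mandatory")
        return  # delete/update hooks are optional
    local_path, _ = urllib.request.urlretrieve(script_url)
    subprocess.check_call(["/bin/sh", local_path])

# e.g. apply_config({"config_create": "http://.../install.sh"}, "CREATE")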


 Another option might be to have a separate confg/deployment pair for
 delete workloads, and a property on the deployment resource which states
 which phase the workload is executed in (create or delete).

Yes, this would be an option, but IMO a bit confusing for users. Especially
when I inspect a deployed stack, I would be wondering why there are many
SoftwareDeployment resources hanging around for the same piece of software
installed on a server.


 I'd like to think that special treatment for config_update won't be
 needed at all, since CM tools are supposed to be good at converging to
 whatever you