Re: [openstack-dev] [Nova] Blueprint review process

2013-10-29 Thread Russell Bryant
On 10/30/2013 02:26 AM, Tom Fifield wrote:
> On 30/10/13 17:14, Russell Bryant wrote:
>> On 10/29/2013 07:14 PM, Tom Fifield wrote:
>>> So, how would you feel about giving some priority manipulation abilities
>>> to the user committee? :)
>>
>> Abilities, no, but input, of course.
> 
> Any practical ideas on the best way to make that work for you?

How about having someone drop by the weekly nova meeting any time they
would like to discuss something?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Revisiting current column number limit in code

2013-10-29 Thread Jason Kölker
On Wed, Oct 30, 2013 at 6:19 AM, Ravi Kumar Pasumarthy
 wrote:
> Hello,
>
> Looks like the current column limit for the code base is 80. Because of this,
> a large space on the right-hand side is not used. Please find the attached
> pictures of a couple of developer screens working on the openstack codebase.
>
> Are there any thoughts on increasing the current column limit to at least 120?
>
> How does this help? I can see more lines of code, and it is easier on the eyes.
> Some external monitors support rotation, but laptops cannot be rotated.
>
> So is it possible to revisit the column number limit?


It looks from the screenshots that the common thread is the use of GUI
IDEs. Using vim/emacs with splits, three views can be comfortably seen
side by side on most high-DPI monitors at 80 columns, which is quite
useful when coding against multiple projects at the same time.
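
For what it's worth, the 80-column limit is enforced by the pep8/flake8
check (E501) that the gate jobs run through tox; a project that really
wanted 120 columns would, roughly, override it in tox.ini (a sketch only,
not a recommendation):

    # tox.ini -- hypothetical override; the hacking defaults enforce
    # 79/80 columns via pep8 rule E501
    [flake8]
    max-line-length = 120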

Happy Hacking!

7-11

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][Nova][Discussion]Blueprint : Auto VM Discovery in OpenStack for existing workload

2013-10-29 Thread Russell Bryant
On 10/30/2013 02:36 AM, Swapnil Kulkarni wrote:
> I had a discussion with russellb regarding this yesterday, and I would
> like to discuss the blueprint mentioned in the subject with the team.
> 
> https://blueprints.launchpad.net/nova/+spec/auto-vm-discovery-on-hypervisor
> 
> Description: Organizations opting to use OpenStack can have varied
> amounts of workload that they would like to make available directly
> through discovery workflows. One common case is existing virtual
> machines present on the hypervisors. If these instances can be
> discovered by the compute agent, OpenStack could manage the existing
> workload directly. Auto VM Discovery will enable this functionality
> initially for KVM guests, the most widely used hypervisor in OpenStack
> deployments, and will be extended to other hypervisors later.

I feel that Nova managing VMs that it didn't create is not an
appropriate use case to support.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev][Nova][Discussion]Blueprint : Auto VM Discovery in OpenStack for existing workload

2013-10-29 Thread Swapnil Kulkarni
I had a discussion with russellb regarding this yesterday, and I would like
to discuss the blueprint mentioned in the subject with the team.

https://blueprints.launchpad.net/nova/+spec/auto-vm-discovery-on-hypervisor

Description: Organizations opting to use OpenStack can have varied amounts
of workload that they would like to make available directly through
discovery workflows. One common case is existing virtual machines present
on the hypervisors. If these instances can be discovered by the compute
agent, OpenStack could manage the existing workload directly. Auto VM
Discovery will enable this functionality initially for KVM guests, the most
widely used hypervisor in OpenStack deployments, and will be extended to
other hypervisors later.
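
As a rough illustration of what the discovery step could look like for KVM,
here is a minimal libvirt sketch (illustrative only; real integration with
the compute manager and database records would be much more involved):

    # Sketch: enumerate the domains libvirt already knows about, so they
    # can be matched against Nova's instance records; unmatched domains
    # would be candidates for import.
    import libvirt

    conn = libvirt.open('qemu:///system')
    for dom in conn.listAllDomains():
        print(dom.name(), dom.UUIDString(), dom.isActive())
    conn.close()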

Best Regards,
Swapnil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint review process

2013-10-29 Thread Tom Fifield

On 30/10/13 17:14, Russell Bryant wrote:
> On 10/29/2013 07:14 PM, Tom Fifield wrote:
>> On 30/10/13 07:58, Russell Bryant wrote:
>>> On 10/29/2013 04:24 PM, Stefano Maffulli wrote:
>>>> On 10/28/2013 10:28 AM, Russell Bryant wrote:
>>>>> 2) Setting clearer expectations.  Since we have so many blueprints for
>>>>> Nova, I feel it's very important to accurately set expectations for how
>>>>> the priority of different projects compare.  In the last cycle,
>>>>> priorities were mainly subjectively set by me.  Setting priorities based
>>>>> on what reviewers are willing to spend time on is a more accurate
>>>>> reflection of the likelihood of a set of changes making it into the
>>>>> release.
>>>>
>>>> I'm all for managing expectations :) I had a conversation with Tom about
>>>> this and we agreed that there may be a risk that new contributors with
>>>> not much karma in the project would have a harder time getting their
>>>> blueprints assigned higher priorities. If a new group proposes a
>>>> blueprint, they may need to "court" bp reviewers to convince them to
>>>> dedicate attention to their first bp. The risk is that the reviewers of
>>>> blueprints become a sort of gatekeeper, or what other projects call
>>>> 'committers'.
>>>>
>>>> I think this is a concrete risk; I don't know if it's possible to
>>>> eliminate it. I don't think we have to eliminate it, but we need to
>>>> manage and minimize it in order to keep our promise of being 'open' as
>>>> in open to new contributors, even the ones with low karma.
>>>>
>>>> What do you think?
>>>
>>> I think you're right, but it's actually no different than things have
>>> been in the past.  It's just blueprints better reflecting how things
>>> actually work.
>>>
>>> However, people that have a proven track record of producing high
>>> quality work are going to have an easier time getting attention, because
>>> it takes less work overall to get their patches in.  With that said, if
>>> the blueprint is good, it should get priority based on its merit, even
>>> if the submitter has lower karma in the community.
>>>
>>> Where we seem to hit the most friction is actually when merit alone
>>> doesn't grant higher priority (only relevant to a small number of users,
>>> for example), and the submitter hasn't built up karma, either.  Those
>>> are the ones that have a hard time, but it's not that surprising.
>>
>> The user committee might be able to help here. Through the user survey,
>> and engagement with communities around the world, they have an idea of
>> what affects what number of users and how.
>>
>> So, how would you feel about giving some priority manipulation abilities
>> to the user committee? :)
>
> Abilities, no, but input, of course.

Any practical ideas on the best way to make that work for you?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Alex Glikson
Mike Spreitzer  wrote on 30/10/2013 06:11:04 AM:
> 
> Alex also wrote: 
> ``I wonder whether it is possible to find an approach that takes 
> into account cross-resource placement considerations (VM-to-VM 
> communicating over the application network, or VM-to-volume 
> communicating over storage network), but does not require delivering
> all the intimate details of the entire environment to a single place
> -- which probably can not be either of Nova/Cinder/Neutron/etc.. but
> can we still use the individual schedulers in each of them with 
> partial view of the environment to drive a placement decision which 
> is consistently better than random?'' 
> 
> I think you could create a cross-scheduler protocol that would 
> accomplish joint placement decision making --- but would not want 
> to.  It would involve a lot of communication, and the subject matter
> of that communication would be most of what you need in a 
> centralized placement solver anyway.  You do not need "all the 
> intimate details", just the bits that are essential to making the 
> placement decision. 

The amount of communication depends on the protocol and on what exactly
needs to be shared. Maybe there is a range of options here that we can
potentially explore, between what exists today (Heat talking to each of the
components, retrieving local information about availability zones, flavors
and volume types, existing resources, etc., and communicating back with
scheduler hints) and having a centralized DB that keeps the entire data
model.
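For reference, a sketch of how such hints are passed today (the
different_host hint assumes the DifferentHostFilter is enabled in the
deployment's scheduler configuration):

    nova boot --image <image> --flavor <flavor> \
        --hint different_host=<instance-uuid> my-vm
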
Also, maybe different points on the continuum between 'share few' and
'share a lot' would be a good match for different kinds of environments
and different workload mixes (for example, as you pointed out, in an
environment with a flat network and centralized storage, the sharing can
be rather minimal).

> Alex Glikson asked why not go directly to holistic if there is no 
> value in doing Nova-only.  Yathi replied to that concern, and let me
> add some notes.  I think there *are* scenarios in which doing Nova-
> only joint policy-based scheduling is advantageous.

Great, I am not trying to claim that such scenarios do not exist - I am
just saying that it is important to spell them out, to better understand
the trade-off between the benefit and the complexity, and to make sure our
design is flexible enough to accommodate the high-priority ones and
extensible enough to accommodate the rest going forward.

Regards,
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS IceHouse summit prep and IRC meeting

2013-10-29 Thread Yi Sun
I think support for subnets should be part of the address object or address
book object. We should not eliminate the possibility of running a firewall as
an add-on service on top of a virtual router. As a matter of fact, there are
many VM-based firewalls that provide a certain level of routing service
anyway. Such a firewall should be able to use the router interfaces to
construct the zone concept. With both address objects and zones, we should be
able to support most of the requirements.
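
To make this concrete, a hypothetical address-object/zone pair might look
like the following (pure illustration; none of these fields exist in the
current FWaaS API):

    {
      "address_object": {
        "name": "lan-hosts",
        "subnets": ["<subnet-uuid-1>", "<subnet-uuid-2>"]
      },
      "zone": {
        "name": "trust",
        "router_interfaces": ["<router-port-uuid>"]
      }
    }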

Yi 


On Oct 29, 2013, at 10:59 PM, Sumit Naiksatam  wrote:

> I believe people would like to define the zone based on the router port 
> (corresponding to that router's interface). The zone definition at port-level 
> granularity allows one to do that.
> 
> I think your other question is answered as well (firewall will be supported 
> on particular routers).
> 
> Thanks,
> ~Sumit.
> 
> 
> On Mon, Oct 28, 2013 at 7:12 PM,  wrote:
> My main concern is that using neutron ports for zones may cause 
> confusion/misconfiguration, since you can have two ports connected to the 
> same network/subnet in different zones. Using a network or subnet (in the 
> form of a network/subnet uuid), on the other hand, is more general and can 
> still be mapped to any interface that has a port in those networks/subnets.
> 
> Also, which "ports" are we talking about here? The router's ports (but a 
> firewall doesn't necessarily associate with a router in the current model)? 
> The firewall's ports (does a firewall even have ports now? In addition, this 
> means we're not able to create a rule with zones before a firewall is 
> created)? Definitely not the VM's ports.
> 
> Thanks,
> 
> -Kaiwei
> 
> 
> 
> From: "Rajesh Mohan" 
> To: "OpenStack Development Mailing List" 
> Sent: Thursday, October 24, 2013 2:48:39 PM
> Subject: Re: [openstack-dev] [Neutron] FWaaS IceHouse summit prep and IRC 
> meeting
> 
> This is good discussion.
> 
> +1 for using Neutron ports for defining zones. I see Kaiwei's point, but for 
> DELL, neutron ports make more sense.
> 
> I am not sure if I completely understood the bump-in-the-wire/zone 
> discussion. The DELL security appliance allows using different zones with 
> bump-in-the-wire. If the firewall is inserted in bump-in-the-wire mode 
> between the router and LAN hosts, then it does make sense to apply different 
> zones on the ports connected to the LAN and the router. Then there are cases 
> where end users apply the same zones on both sides, but this is a decision 
> we should leave to end customers. We should allow configuring zones in 
> bump-in-the-wire mode as well.
> 
> 
> 
> 
> 
> On Wed, Oct 23, 2013 at 12:08 PM, Sumit Naiksatam  
> wrote:
> Log from today's meeting:
> http://eavesdrop.openstack.org/meetings/networking_fwaas/2013/networking_fwaas.2013-10-23-18.02.log.html
> 
> 
> Action items for some of the folks included.
> 
> Please join us for the meeting next week.
> 
> Thanks,
> ~Sumit.
> 
> On Tue, Oct 22, 2013 at 2:00 PM, Sumit Naiksatam  
> wrote:
> Reminder - we will have the Neutron FWaaS IRC meeting tomorrow Wednesday 
> 18:00 UTC (11 AM PDT).
> 
> Agenda:
> * Tempest tests
> * Definition and use of zones
> * Address Objects
> * Counts API
> * Service Objects
> * Integration with service type framework
> * Open discussion - any other topics you would like to bring up for 
> discussion during the summit.
> 
> https://wiki.openstack.org/wiki/Meetings/FWaaS
> 
> Thanks,
> ~Sumit.
> 
> 
> On Sun, Oct 13, 2013 at 1:56 PM, Sumit Naiksatam  
> wrote:
> Hi All,
> 
> For the next phase of FWaaS development we will be considering a number of 
> features. I am proposing an IRC meeting on Oct 16th Wednesday 18:00 UTC (11 
> AM PDT) to discuss this.
> 
> The etherpad for the summit session proposal is here:
> https://etherpad.openstack.org/p/icehouse-neutron-fwaas
> 
> and has a high level list of features under consideration.
> 
> Thanks,
> ~Sumit.
> 
>  
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint review process

2013-10-29 Thread Russell Bryant
On 10/29/2013 07:14 PM, Tom Fifield wrote:
> On 30/10/13 07:58, Russell Bryant wrote:
>> On 10/29/2013 04:24 PM, Stefano Maffulli wrote:
>>> On 10/28/2013 10:28 AM, Russell Bryant wrote:
 2) Setting clearer expectations.  Since we have so many blueprints for
 Nova, I feel it's very important to accurately set expectations for how
 the priority of different projects compare.  In the last cycle,
 priorities were mainly subjectively set by me.  Setting priorities based
 on what reviewers are willing to spend time on is a more accurate
 reflection of the likelihood of a set of changes making it into the
 release.
>>>
>>> I'm all for managing expectations :) I had a conversation with Tom about
>>> this and we agreed that there may be a risk that new contributors with
>>> not much karma in the project would have a harder time getting their
>>> blueprints assigned higher priorities. If a new group proposes a
>>> blueprint, they may need to "court" bp reviewers to convince them to
>>> dedicate attention to their first bp. The risk is that the reviewers of
>>> blueprints become a sort of gatekeeper, or what other projects call
>>> 'committers'.
>>>
>>> I think this is a concrete risk; I don't know if it's possible to
>>> eliminate it. I don't think we have to eliminate it, but we need to
>>> manage and minimize it in order to keep our promise of being 'open' as
>>> in open to new contributors, even the ones with low karma.
>>>
>>> What do you think?
>>
>> I think you're right, but it's actually no different than things have
>> been in the past.  It's just blueprints better reflecting how things
>> actually work.
>>
>> However, people that have a proven track record of producing high
>> quality work are going to have an easier time getting attention, because
>> it takes less work overall to get their patches in.  With that said, if
>> the blueprint is good, it should get priority based on its merit, even
>> if the submitter has lower karma in the community.
>>
>> Where we seem to hit the most friction is actually when merit alone
>> doesn't grant higher priority (only relevant to a small number of users,
>> for example), and the submitter hasn't built up karma, either.  Those are
>> the ones that have a hard time, but it's not that surprising.
> 
> The user committee might be able to help here. Through the user survey,
> and engagement with communities around the world, they have an idea of
> what affects what number of users and how.
> 
> So, how would you feel about giving some priority manipulation abilities
> to the user committee? :)

Abilities, no, but input, of course.

If users are screaming for something, then we should absolutely be
paying attention to that.  But at the end of the day, the priorities
still have to be based on where code reviewers are willing to spend
their time.

FWIW, I love the user survey and use the results to help me think about
priorities.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS IceHouse summit prep and IRC meeting

2013-10-29 Thread Sumit Naiksatam
I believe people would like to define the zone based on the router port
(corresponding to that router's interface). The zone definition at
port-level granularity allows one to do that.

I think your other question is answered as well (firewall will be supported
on particular routers).

Thanks,
~Sumit.


On Mon, Oct 28, 2013 at 7:12 PM,  wrote:

> My main concern is that using neutron ports for zones may cause
> confusion/misconfiguration, since you can have two ports connected to the
> same network/subnet in different zones. Using a network or subnet (in the
> form of a network/subnet uuid), on the other hand, is more general and can
> still be mapped to any interface that has a port in those networks/subnets.
>
> Also, which "ports" are we talking about here? The router's ports (but a
> firewall doesn't necessarily associate with a router in the current model)?
> The firewall's ports (does a firewall even have ports now? In addition, this
> means we're not able to create a rule with zones before a firewall is
> created)? Definitely not the VM's ports.
>
> Thanks,
>
> -Kaiwei
>
>
> --
> From: "Rajesh Mohan" 
> To: "OpenStack Development Mailing List" <
> openstack-dev@lists.openstack.org>
> Sent: Thursday, October 24, 2013 2:48:39 PM
> Subject: Re: [openstack-dev] [Neutron] FWaaS IceHouse summit prep and
> IRC meeting
>
> This is good discussion.
>
> +1 for using Neutron ports for defining zones. I see Kaiwei's point, but
> for DELL, neutron ports make more sense.
>
> I am not sure if I completely understood the bump-in-the-wire/zone
> discussion. The DELL security appliance allows using different zones with
> bump-in-the-wire. If the firewall is inserted in bump-in-the-wire mode
> between the router and LAN hosts, then it does make sense to apply different
> zones on the ports connected to the LAN and the router. Then there are cases
> where end users apply the same zones on both sides, but this is a decision
> we should leave to end customers. We should allow configuring zones in
> bump-in-the-wire mode as well.
>
>
>
>
>
> On Wed, Oct 23, 2013 at 12:08 PM, Sumit Naiksatam <
> sumitnaiksa...@gmail.com> wrote:
>
>> Log from today's meeting:
>>
>>
>> http://eavesdrop.openstack.org/meetings/networking_fwaas/2013/networking_fwaas.2013-10-23-18.02.log.html
>>
>> Action items for some of the folks included.
>>
>> Please join us for the meeting next week.
>>
>> Thanks,
>> ~Sumit.
>>
>> On Tue, Oct 22, 2013 at 2:00 PM, Sumit Naiksatam <
>> sumitnaiksa...@gmail.com> wrote:
>>
>>> Reminder - we will have the Neutron FWaaS IRC meeting tomorrow Wednesday
>>> 18:00 UTC (11 AM PDT).
>>>
>>> Agenda:
>>> * Tempest tests
>>> * Definition and use of zones
>>> * Address Objects
>>> * Counts API
>>> * Service Objects
>>> * Integration with service type framework
>>> * Open discussion - any other topics you would like to bring up for
>>> discussion during the summit.
>>>
>>> https://wiki.openstack.org/wiki/Meetings/FWaaS
>>>
>>> Thanks,
>>> ~Sumit.
>>>
>>>
>>> On Sun, Oct 13, 2013 at 1:56 PM, Sumit Naiksatam <
>>> sumitnaiksa...@gmail.com> wrote:
>>>
 Hi All,

 For the next phase of FWaaS development we will be considering a
 number of features. I am proposing an IRC meeting on Oct 16th Wednesday
 18:00 UTC (11 AM PDT) to discuss this.

 The etherpad for the summit session proposal is here:
 https://etherpad.openstack.org/p/icehouse-neutron-fwaas

 and has a high level list of features under consideration.

 Thanks,
 ~Sumit.



>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-dev][Horizon] Errors while creating networks

2013-10-29 Thread Aaron Rosen
Just curious: what does keystone endpoint-list show?
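The usual cause of "Invalid service catalog service: network" is a missing
network service/endpoint registration in Keystone; roughly the following,
per the Havana install guide, with URLs adjusted to your deployment:

    keystone service-create --name neutron --type network \
        --description "OpenStack Networking"
    keystone endpoint-create \
        --service-id $(keystone service-list | awk '/ network / {print $2}') \
        --publicurl http://controller:9696 \
        --adminurl http://controller:9696 \
        --internalurl http://controller:9696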
On Oct 29, 2013 9:36 PM, "Somanchi Trinath-B39208" 
wrote:

>  Hi-
>
> I have got the following error in apache error logs while I try to bring
> up a new instance. 
>
> I have followed Openstack Havana for Ubuntu 12.04 LTS installation manual
> from docs.openstack.org.
>
> I’m going with single node (both controller and compute node on a single
> machine) installation. 
>
> [Tue Oct 29 11:17:20 2013] [error] Problem instantiating action class.
>
> [Tue Oct 29 11:17:20 2013] [error] Traceback (most recent call last):
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/lib/python2.7/dist-packages/horizon/workflows/base.py", line 376, in
> action
>
> [Tue Oct 29 11:17:20 2013] [error] context)
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/lib/python2.7/dist-packages/horizon/workflows/base.py", line 141, in
> __init__
>
> [Tue Oct 29 11:17:20 2013] [error] self._populate_choices(request,
> context)
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/lib/python2.7/dist-packages/horizon/workflows/base.py", line 154, in
> _populate_choices
>
> [Tue Oct 29 11:17:20 2013] [error] bound_field.choices = meth(request,
> context)
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
> line 510, in populate_network_choices
>
> [Tue Oct 29 11:17:20 2013] [error] _('Unable to retrieve networks.'))
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
> line 503, in populate_network_choices
>
> [Tue Oct 29 11:17:20 2013] [error] networks =
> api.neutron.network_list_for_tenant(request, tenant_id)
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/neutron.py",
> line 456, in network_list_for_tenant
>
> [Tue Oct 29 11:17:20 2013] [error] shared=False, **params)
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/neutron.py",
> line 434, in network_list
>
> [Tue Oct 29 11:17:20 2013] [error] networks =
> neutronclient(request).list_networks(**params).get('networks')
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/neutron.py",
> line 423, in neutronclient
>
> [Tue Oct 29 11:17:20 2013] [error] % (request.user.token.id,
> base.url_for(request, 'network')))
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/base.py",
> line 268, in url_for
>
> [Tue Oct 29 11:17:20 2013] [error] raise
> exceptions.ServiceCatalogException(service_type)
>
> [Tue Oct 29 11:17:20 2013] [error] ServiceCatalogException: Invalid
> service catalog service: network
>
> [Tue Oct 29 11:17:20 2013] [error] Internal Server Error:
> /horizon/project/instances/launch
>
> [Tue Oct 29 11:17:20 2013] [error] Traceback (most recent call last):
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 140,
> in get_response
>
> [Tue Oct 29 11:17:20 2013] [error] response = response.render()
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/lib/python2.7/dist-packages/django/template/response.py", line 105,
> in render
>
> [Tue Oct 29 11:17:20 2013] [error] self.content = self.rendered_content
> 
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/lib/python2.7/dist-packages/django/template/response.py", line 82, in
> rendered_content
>
> [Tue Oct 29 11:17:20 2013] [error] content = template.render(context)
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/lib/python2.7/dist-packages/django/template/base.py", line 140, in
> render
>
> [Tue Oct 29 11:17:20 2013] [error] return self._render(context)
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/lib/python2.7/dist-packages/django/template/base.py", line 134, in
> _render
>
> [Tue Oct 29 11:17:20 2013] [error] return self.nodelist.render(context)
> 
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/lib/python2.7/dist-packages/django/template/base.py", line 830, in
> render
>
> [Tue Oct 29 11:17:20 2013] [error] bit = self.render_node(node,
> context)
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/usr/lib/python2.7/dist-packages/django/template/base.py", line 844, in
> render_node
>
> [Tue Oct 29 11:17:20 2013] [error] return node.render(context)
>
> [Tue Oct 29 11:17:20 2013] [error]   File
> "/

[openstack-dev] [openstack-dev][Horizon] Errors while creating networks

2013-10-29 Thread Somanchi Trinath-B39208
Hi-

I got the following error in the Apache error logs when I try to bring up a 
new instance.

I have followed the OpenStack Havana installation manual for Ubuntu 12.04 LTS 
from docs.openstack.org.

I'm going with a single-node (both controller and compute node on the same 
machine) installation.


[Tue Oct 29 11:17:20 2013] [error] Problem instantiating action class.
[Tue Oct 29 11:17:20 2013] [error] Traceback (most recent call last):
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/workflows/base.py", line 376, in 
action
[Tue Oct 29 11:17:20 2013] [error] context)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/workflows/base.py", line 141, in 
__init__
[Tue Oct 29 11:17:20 2013] [error] self._populate_choices(request, context)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/workflows/base.py", line 154, in 
_populate_choices
[Tue Oct 29 11:17:20 2013] [error] bound_field.choices = meth(request, 
context)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 510, in populate_network_choices
[Tue Oct 29 11:17:20 2013] [error] _('Unable to retrieve networks.'))
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 503, in populate_network_choices
[Tue Oct 29 11:17:20 2013] [error] networks = 
api.neutron.network_list_for_tenant(request, tenant_id)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/neutron.py",
 line 456, in network_list_for_tenant
[Tue Oct 29 11:17:20 2013] [error] shared=False, **params)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/neutron.py",
 line 434, in network_list
[Tue Oct 29 11:17:20 2013] [error] networks = 
neutronclient(request).list_networks(**params).get('networks')
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/neutron.py",
 line 423, in neutronclient
[Tue Oct 29 11:17:20 2013] [error] % (request.user.token.id, 
base.url_for(request, 'network')))
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/base.py",
 line 268, in url_for
[Tue Oct 29 11:17:20 2013] [error] raise 
exceptions.ServiceCatalogException(service_type)
[Tue Oct 29 11:17:20 2013] [error] ServiceCatalogException: Invalid service 
catalog service: network
[Tue Oct 29 11:17:20 2013] [error] Internal Server Error: 
/horizon/project/instances/launch
[Tue Oct 29 11:17:20 2013] [error] Traceback (most recent call last):
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 140, in 
get_response
[Tue Oct 29 11:17:20 2013] [error] response = response.render()
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/django/template/response.py", line 105, in 
render
[Tue Oct 29 11:17:20 2013] [error] self.content = self.rendered_content
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/django/template/response.py", line 82, in 
rendered_content
[Tue Oct 29 11:17:20 2013] [error] content = template.render(context)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/django/template/base.py", line 140, in render
[Tue Oct 29 11:17:20 2013] [error] return self._render(context)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/django/template/base.py", line 134, in _render
[Tue Oct 29 11:17:20 2013] [error] return self.nodelist.render(context)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/django/template/base.py", line 830, in render
[Tue Oct 29 11:17:20 2013] [error] bit = self.render_node(node, context)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/django/template/base.py", line 844, in 
render_node
[Tue Oct 29 11:17:20 2013] [error] return node.render(context)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/django/template/defaulttags.py", line 485, in 
render
[Tue Oct 29 11:17:20 2013] [error] output = self.nodelist.render(context)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/django/template/base.py", line 830, in render
[Tue Oct 29 11:17:20 2013] [error] bit = self.render_node(node, context)
[Tue Oct 29 11:17:20 2013] [error]   File 
"/usr/lib/python2.7/dist-packages/django/template/base.py", line 844, in 
render_node
[Tue Oct 29 11:17:20 2013] [error] return node.render(c

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Jiang, Yunhong


> -Original Message-
> From: Isaku Yamahata [mailto:isaku.yamah...@gmail.com]
> Sent: Tuesday, October 29, 2013 8:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: isaku.yamah...@gmail.com; Itzik Brown
> Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
> support
> 
> Hi Yunhong.
> 
> On Tue, Oct 29, 2013 at 08:22:40PM +,
> "Jiang, Yunhong"  wrote:
> 
> > > * describe resource external to nova that is attached to VM in the API
> > > (block device mapping and/or vif references)
> > > * ideally the nova scheduler needs to be aware of the local capacity,
> > > and how that relates to the above information (relates to the cross
> > > service scheduling issues)
> >
> > I think this is possibly a bit different. A volume is certainly managed by
> > Cinder, but PCI devices are currently managed by nova. So we possibly need
> > nova to translate the information (possibly before the nova scheduler).
> >
> > > * state of the device should be stored by Neutron/Cinder
> > > (attached/detached, capacity, IP, etc), but still exposed to the
> > > "scheduler"
> >
> > I'm not sure if we can keep the state of the device in Neutron. Currently
> > nova manages all PCI devices.
> 
> Yes, with the current implementation, nova manages PCI devices and it
> works.
> That's great. It will remain so in Icehouse cycle (maybe also J?).
> 
> But how about long term direction?
> Neutron should know/manage such network related resources on
> compute nodes?

So you mean PCI device management will be split between Nova and Neutron? 
For example, non-NIC devices owned by nova and NIC devices owned by neutron?

There have been so many discussions of the scheduler enhancement, like 
https://etherpad.openstack.org/p/grizzly-split-out-scheduling , so possibly 
that's the right direction? Let's wait for the summit discussion.
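
For context, the Nova-side configuration today pairs a whitelist on the
compute node with an alias referenced from flavor extra specs; roughly,
assuming Havana-era option names and example Intel 82599 VF IDs:

    # nova.conf on the compute node: which devices may be assigned
    pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10fb"}
    # nova.conf on the API/scheduler side: alias used by flavors
    pci_alias = {"vendor_id": "8086", "product_id": "10fb", "name": "niantic"}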

> The implementation in Nova will be moved into Neutron like what Cinder
> did?
> any opinions/thoughts?
> It seems that not so many Neutron developers are interested in PCI
> passthrough at the moment, though.
> 
> There are use cases for this, I think.
> For example, some compute nodes use the OVS plugin, other nodes the LB
> plugin. (Right now it may not be easily possible, but it will be with the
> ML2 plugin and mechanism drivers.) Users want their VMs to run on nodes
> with the OVS plugin for some reason (e.g. performance differences).
> Such usage would be handled similarly.
> 
> Thanks,
> ---
> Isaku Yamahata
> 
> 
> >
> > Thanks
> > --jyh
> >
> >
> > > * connection params get given to Nova from Neutron/Cinder
> > > * nova still has the vif driver or volume driver to make the final
> connection
> > > * the disk should be formatted/expanded, and network info injected in
> > > the same way as before (cloud-init, config drive, DHCP, etc)
> > >
> > > John
> > >
> > > On 29 October 2013 10:17, Irena Berezovsky
> 
> > > wrote:
> > > > Hi Jiang, Robert,
> > > >
> > > > IRC meeting option works for me.
> > > >
> > > > If I understand your question below, you are looking for a way to tie
> up
> > > > between requested virtual network(s) and requested PCI device(s).
> The
> > > way we
> > > > did it in our solution  is to map a provider:physical_network to an
> > > > interface that represents the Physical Function. Every virtual
> network is
> > > > bound to the provider:physical_network, so the PCI device should
> be
> > > > allocated based on this mapping.  We can  map a PCI alias to the
> > > > provider:physical_network.
> > > >
> > > >
> > > >
> > > > Another topic to discuss is where the mapping between neutron
> port
> > > and PCI
> > > > device should be managed. One way to solve it, is to propagate the
> > > allocated
> > > > PCI device details to neutron on port creation.
> > > >
> > > > In case  there is no qbg/qbh support, VF networking configuration
> > > should be
> > > > applied locally on the Host.
> > > >
> > > > The question is when and how to apply networking configuration on
> the
> > > PCI
> > > > device?
> > > >
> > > > We see the following options:
> > > >
> > > > * it can be done on port creation.
> > > >
> > > > * It can be done when nova VIF driver is called for vNIC
> > > plugging.
> > > > This will require to  have all networking configuration available to
> the
> > > VIF
> > > > driver or send request to the neutron server to obtain it.
> > > >
> > > > * It can be done by  having a dedicated L2 neutron agent
> on
> > > each
> > > > Host that scans for allocated PCI devices  and then retrieves
> networking
> > > > configuration from the server and configures the device. The agent
> will
> > > be
> > > > also responsible for managing update requests coming from the
> neutron
> > > > server.
> > > >
> > > >
> > > >
> > > > For macvtap vNIC type assignment, the networking configuration can
> be
> > > > applied by a dedicated L2 neutron agent.
> > > >
> > > >
> > > >
> > > > BR,
> > > >
> > > > Irena
> > > >
> > > >
> > > >
> > > > F

Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Mike Spreitzer
Following is my reaction to the last few hours of discussion.

Russell Bryant wrote "Nova calling heat to orchestrate Nova seems 
fundamentally wrong".  I am not totally happy about this either, but would 
you be OK with Nova orchestrating Nova?  To me, that seems worse --- 
duplicating functionality we already have in Heat.  The way I see it, we 
have to decide how cope with the inescapable fact that orchestration is 
downstream from joint decision making.  I see no better choices than: (1) 
a 1-stage API in which the client presents the whole top-level group and 
is done, or (2) a 2-stage API in which the client first presents the whole 
top-level group and second proceeds to orchestrate the creations of the 
resources in that group.  BTW, when we go holistic, (1) will look less 
offensive: there will be a holistic infrastructure scheduler doing the 
joint decision making first, not one of the individual services, and that 
is followed by orchestration of the individual resources.  If we took Alex 
Glikson's suggestion and started holistic, we would not be so upset on 
this issue.

Alex also wrote:
``I wonder whether it is possible to find an approach that takes into 
account cross-resource placement considerations (VM-to-VM communicating 
over the application network, or VM-to-volume communicating over storage 
network), but does not require delivering all the intimate details of the 
entire environment to a single place -- which probably can not be either 
of Nova/Cinder/Neutron/etc.. but can we still use the individual 
schedulers in each of them with partial view of the environment to drive a 
placement decision which is consistently better than random?''

I think you could create a cross-scheduler protocol that would accomplish 
joint placement decision making --- but would not want to.  It would 
involve a lot of communication, and the subject matter of that 
communication would be most of what you need in a centralized placement 
solver anyway.  You do not need "all the intimate details", just the bits 
that are essential to making the placement decision.

Reacting to Andrew Laski's note, Chris Friesen noted:
``As soon as we start trying to do placement logic outside of Nova it 
becomes trickier to deal with race conditions when competing against 
other API users trying to acquire resources at the same time.''

I have two reactions.  The simpler one is: we can avoid this problem if we 
simply route all placement problems (either all placement problems for 
Compute, or all placement problems for a larger set of services) though 
one thing that decides and commits allocations.  My other reaction is: we 
will probably want multi-engine.  That is, the option to run several 
placement solvers concurrently --- with optimistic concurrency control. 
That presents essentially the same problem as Chris noted.  As Yathi noted 
in one of his responses, this can be handled by appropriate implementation 
structure.  In the spring I worked out a multi-engine design for my 
group's old code.  The conclusion I reached is that after a placement 
engine finds a solution, you want an essentially ACID transaction that (1) 
checks that the solution is still valid and, if so, (2) makes the 
allocations in that solution.
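
A minimal sketch of that transaction shape (optimistic concurrency; every
name here is illustrative, not an existing Nova API):

    # Validate a computed placement against current state and commit its
    # allocations atomically; if the solution went stale, re-solve.
    def commit_placement(db, solution):
        with db.transaction():  # the ACID boundary
            for alloc in solution.allocations:
                host = db.get_host(alloc.host_id, for_update=True)
                if host.free_capacity < alloc.size:
                    return False  # another engine won the race; retry
            for alloc in solution.allocations:
                db.record_allocation(alloc)  # all-or-nothing commit
        return True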

Yathi wrote that the 2-stage API creates race conditions, but I do not see 
that.  As we are starting with Nova only, in the first of the two stages 
Nova can both decide and commit the allocations in one transaction; the 
second stage just picks up and uses the allocations made in the first 
stage.

Alex Glikson asked why not go directly to holistic if there is no value in 
doing Nova-only.  Yathi replied to that concern, and let me add some 
notes.  I think there *are* scenarios in which doing Nova-only joint 
policy-based scheduling is advantageous.  For example, if the storage is 
in SAN or NAS then you do not have a strong interaction between scheduling 
compute and storage so you do not need holistic scheduling to get good 
availability.  I know some organizations build their datacenters that way, 
with full cross-sectional bandwidth between the compute and storage, 
because (among other things) it makes that simplification.  Another thing 
that can be done with joint policy-based scheduling is minimize license 
costs for certain IBM software.  That software is licensed based on how 
many cores the software has access to, so in a situation with 
hyperthreading or overcommitment the license cost can depend on how the VM 
instances are arranged among hosts.

Yathi replied to Khanh-Toan's remark about edge policies, but I suspect 
there was a misunderstanding.  I think the critique concerns this part of 
the input:

  "policies" : [ {
"edge" : "http-app-edge-1",
"policy_uuid" : "some-policy-uuid-2",
"type" : "edge",
"policy_id" : 3
  } ],
  "edges" : [ {
"r_member" : "app-server-group-1",
"l_member" : "http-server-group-1",
"name" : "http-app-edge-1"
  } ],

That is, t

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Isaku Yamahata
Hi Yunhong.

On Tue, Oct 29, 2013 at 08:22:40PM +,
"Jiang, Yunhong"  wrote:

> > * describe resource external to nova that is attached to VM in the API
> > (block device mapping and/or vif references)
> > * ideally the nova scheduler needs to be aware of the local capacity,
> > and how that relates to the above information (relates to the cross
> > service scheduling issues)
> 
> I think this is possibly a bit different. A volume is certainly managed by 
> Cinder, but PCI devices are currently managed by nova. So we possibly need 
> nova to translate the information (possibly before the nova scheduler).
> 
> > * state of the device should be stored by Neutron/Cinder
> > (attached/detached, capacity, IP, etc), but still exposed to the
> > "scheduler"
> 
> I'm not sure if we can keep the state of the device in Neutron. Currently 
> nova manages all PCI devices.

Yes, with the current implementation, nova manages PCI devices and it works.
That's great. It will remain so in Icehouse cycle (maybe also J?).

But how about long term direction?
Neutron should know/manage such network related resources on compute nodes?
The implementation in Nova will be moved into Neutron like what Cinder did?
any opinions/thoughts?
It seems that not so many Neutron developers are interested in PCI
passthrough at the moment, though.

There are use cases for this, I think.
For example, some compute nodes use the OVS plugin, other nodes the LB plugin.
(Right now it may not be easily possible, but it will be with the ML2 plugin
and mechanism drivers.) Users want their VMs to run on nodes with the OVS
plugin for some reason (e.g. performance differences).
Such usage would be handled similarly.

Thanks,
---
Isaku Yamahata


> 
> Thanks
> --jyh
> 
> 
> > * connection params get given to Nova from Neutron/Cinder
> > * nova still has the vif driver or volume driver to make the final 
> > connection
> > * the disk should be formatted/expanded, and network info injected in
> > the same way as before (cloud-init, config drive, DHCP, etc)
> > 
> > John
> > 
> > On 29 October 2013 10:17, Irena Berezovsky 
> > wrote:
> > > Hi Jiang, Robert,
> > >
> > > IRC meeting option works for me.
> > >
> > > If I understand your question below, you are looking for a way to tie up
> > > between requested virtual network(s) and requested PCI device(s). The
> > way we
> > > did it in our solution  is to map a provider:physical_network to an
> > > interface that represents the Physical Function. Every virtual network is
> > > bound to the provider:physical_network, so the PCI device should be
> > > allocated based on this mapping.  We can  map a PCI alias to the
> > > provider:physical_network.
> > >
> > >
> > >
> > > Another topic to discuss is where the mapping between neutron port
> > and PCI
> > > device should be managed. One way to solve it, is to propagate the
> > allocated
> > > PCI device details to neutron on port creation.
> > >
> > > In case  there is no qbg/qbh support, VF networking configuration
> > should be
> > > applied locally on the Host.
> > >
> > > The question is when and how to apply networking configuration on the
> > PCI
> > > device?
> > >
> > > We see the following options:
> > >
> > > * it can be done on port creation.
> > >
> > > * It can be done when nova VIF driver is called for vNIC
> > plugging.
> > > This will require to  have all networking configuration available to the
> > VIF
> > > driver or send request to the neutron server to obtain it.
> > >
> > > * It can be done by  having a dedicated L2 neutron agent on
> > each
> > > Host that scans for allocated PCI devices  and then retrieves networking
> > > configuration from the server and configures the device. The agent will
> > be
> > > also responsible for managing update requests coming from the neutron
> > > server.
> > >
> > >
> > >
> > > For macvtap vNIC type assignment, the networking configuration can be
> > > applied by a dedicated L2 neutron agent.
> > >
> > >
> > >
> > > BR,
> > >
> > > Irena
> > >
> > >
> > >
> > > From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
> > > Sent: Tuesday, October 29, 2013 9:04 AM
> > >
> > >
> > > To: Robert Li (baoli); Irena Berezovsky;
> > prashant.upadhy...@aricent.com;
> > > chris.frie...@windriver.com; He, Yongli; Itzik Brown
> > >
> > >
> > > Cc: OpenStack Development Mailing List; Brian Bowen (brbowen); Kyle
> > Mestery
> > > (kmestery); Sandhya Dasu (sadasu)
> > > Subject: RE: [openstack-dev] [nova] [neutron] PCI pass-through network
> > > support
> > >
> > >
> > >
> > > Robert, is it possible to have a IRC meeting? I'd prefer to IRC meeting
> > > because it's more openstack style and also can keep the minutes
> > clearly.
> > >
> > >
> > >
> > > To your flow, can you give more detailed example. For example, I can
> > > consider user specify the instance with -nic option specify a network id,
> > > and then how nova device the requirement to the PCI device? I assume
> > the
> > > network id should define the swi

Re: [openstack-dev] Keystone TLS Question

2013-10-29 Thread Adam Young
On 10/25/2013 02:31 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
wrote:


Hello again,

It looks to me that TLS is automatically supported by Keystone Havana. I 
performed the following curl call, and it seems to indicate that Keystone 
is using TLS. Can anyone confirm whether Keystone Havana supports TLS?



Yep, but don't take my word for it. Read the docs:

https://github.com/openstack/keystone/blob/master/doc/source/configuration.rst#ssl
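
The relevant section of keystone.conf looks roughly like this (option names
as in the Havana-era sample config; the paths are just examples, so
double-check against the document above):

    [ssl]
    enable = True
    certfile = /etc/keystone/ssl/certs/keystone.pem
    keyfile = /etc/keystone/ssl/private/keystonekey.pem
    ca_certs = /etc/keystone/ssl/certs/ca.pem
    cert_required = False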





Thanks,

Mark

root@build-HP-Compaq-6005-Pro-SFF-PC:/etc/keystone# curl -v --insecure 
https://15.253.58.165:35357/v2.0/certificates/signing


* About to connect() to 15.253.58.165 port 35357 (#0)

* Trying 15.253.58.165... connected

* successfully set certificate verify locations:

* CAfile: none

CApath: /etc/ssl/certs

* SSLv3, TLS handshake, Client hello (1):

* SSLv3, TLS handshake, Server hello (2):

* SSLv3, TLS handshake, CERT (11):

* SSLv3, TLS handshake, Server finished (14):

* SSLv3, TLS handshake, Client key exchange (16):

* SSLv3, TLS change cipher, Client hello (1):

* SSLv3, TLS handshake, Finished (20):

* SSLv3, TLS change cipher, Client hello (1):

* SSLv3, TLS handshake, Finished (20):

* SSL connection using AES256-SHA

* Server certificate:

* subject: C=US; ST=CA; L=Sunnyvale; O=OpenStack; OU=Keystone; 
emailAddress=keyst...@openstack.org; CN=Keystone


* start date: 2013-03-15 01:44:55 GMT

* expire date: 2013-03-15 01:44:55 GMT

* common name: Keystone (does not match '15.253.58.165')

* issuer: serialNumber=5; C=US; ST=CA; L=Sunnyvale; O=OpenStack; 
OU=Keystone; emailAddress=keyst...@openstack.org; CN=Self Signed


* SSL certificate verify result: unable to get local issuer 
certificate (20), continuing anyway.


> GET /v2.0/certificates/signing HTTP/1.1

> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 
OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3


> Host: 15.253.58.165:35357

> Accept: */*

>

< HTTP/1.1 200 OK

< Content-Type: text/html; charset=UTF-8

< Content-Length: 973

< Date: Fri, 25 Oct 2013 18:27:52 GMT

<

-BEGIN CERTIFICATE-

MIICoDCCAgkCAREwDQYJKoZIhvcNAQEFBQAwgZ4xCjAIBgNVBAUTATUxCzAJBgNV

BAYTAlVTMQswCQYDVQQIEwJDQTESMBAGA1UEBxMJU3Vubnl2YWxlMRIwEAYDVQQK

EwlPcGVuU3RhY2sxETAPBgNVBAsTCEtleXN0b25lMSUwIwYJKoZIhvcNAQkBFhZr

ZXlzdG9uZUBvcGVuc3RhY2sub3JnMRQwEgYDVQQDEwtTZWxmIFNpZ25lZDAgFw0x

...

3S9E696tVhWqc+HAW91KgZcIwAgQrxWeC0x5O76Q3MGrxvWwyMHPlsxyL4H67AnI

wq8zJxOFtzvP8rVWrQ3PnzBozXKuU3VLPqAsDI4nDxjqFpVf3LYCFDRueS2EI5xc

5/rt9g==

-END CERTIFICATE-

* Connection #0 to host 15.253.58.165 left intact

* Closing connection #0

* SSLv3, TLS alert, Client hello (1):

root@build-HP-Compaq-6005-Pro-SFF-PC:/etc/keystone#

From: Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Sent: Friday, October 25, 2013 8:58 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Keystone TLS Question

Hello,

Is there any direct TLS support by Keystone other than using the 
Apache2 front end?


Mark



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Ideas of idempotentcy-client-token

2013-10-29 Thread haruka tanizawa
Hi John!

Thank you for your reply :)
Sorry for the inline comments.


We also need something that doesn't clash with the cross-service
> request id, as that is doing something slightly different. Would
> "idempotent-request-id" work better?
>
Oh, yes.
Are you referring to this BP (
https://blueprints.launchpad.net/nova/+spec/cross-service-request-id )?
(I am going to attend that HK session.)
So I will take your suggestion and try to go forward with it.
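
As an illustration of the intended client behavior (the header name and all
placeholder values below are assumptions, not an agreed API):

    # Sketch: retrying a create with the same idempotency token should
    # return the original instance instead of creating a duplicate.
    import json
    import uuid
    import requests

    NOVA_URL = 'http://nova.example.com:8774/v2/mytenant'   # placeholder
    HEADERS = {'X-Auth-Token': 'my-auth-token',             # placeholder
               'Content-Type': 'application/json',
               'Idempotent-Request-Id': str(uuid.uuid4())}  # hypothetical
    BODY = json.dumps({'server': {'name': 'vm1',
                                  'imageRef': 'image-uuid',
                                  'flavorRef': '1'}})

    r = requests.post(NOVA_URL + '/servers', data=BODY, headers=HEADERS)
    if r.status_code >= 500:
        # Safe to retry: the server is expected to dedupe on the token.
        r = requests.post(NOVA_URL + '/servers', data=BODY, headers=HEADERS)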


Also, I assume we are only adding this into the v3 API? We should
> close the v2 API for additions I guess?
>
For now I am only adapting the v2 API, so it is also necessary to cope with
the v3 API.
Did I answer your question?

Sincerely,
Haruka Tanizawa
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Propose Jay Bryant for core

2013-10-29 Thread Josh Durgin

On 10/29/2013 01:54 PM, John Griffith wrote:

Hey,

I wanted to propose Jay Bryant (AKA jsbryant, AKA jungleboy, AKA
:) ) for core membership on the Cinder team.  Jay has been working on
Cinder for a while now and has really shown some dedication and
provided much needed help with quality reviews.  In addition to his
review activity he's also been very active in IRC and in Cinder
development as well.

I think he'd be a good add to the core team.


+1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] new libraries

2013-10-29 Thread Doug Hellmann
On Tue, Oct 29, 2013 at 5:21 PM, Endre Karlson wrote:

> oslo.logging
> oslo.db
>
> Is there any ideas on introducing these libraries post-summit time?
>

We're putting together notes about the state of each part of the Oslo
incubator in https://etherpad.openstack.org/p/icehouse-oslo-status to
prepare for the summit session
http://icehousedesignsummit.sched.org/event/33ecaed0ab0d04d81f17216af59ac4f9

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Proposal for new heat-core member

2013-10-29 Thread Randall Burt
Thanks all. I'm honored to join this great team!


 Original message 
From: Steve Baker 
Date: 10/29/2013 3:24 PM (GMT-06:00)
To: openstack-dev@lists.openstack.org
Cc: Randall Burt 
Subject: Re: [openstack-dev] [heat] Proposal for new heat-core member


I count enough core +1s. Congrats Randall, well deserved!

On 10/26/2013 08:12 AM, Steven Dake wrote:
Hi,

I would like to propose Randall Burt for Heat Core.  He has shown interest in 
Heat by participating in IRC and providing high quality reviews.  The most 
important aspect in my mind of joining Heat Core is output and quality of 
reviews.  Randall has been involved in Heat reviews for at least 6 months.  He 
has had 172 reviews over the last 6 months staying "in the pack" [1] of core 
heat reviewers.  His 90 day stats are also encouraging, with 97 reviews 
(compared to the top reviewer Steve Hardy with 444 reviews).  Finally, his 30 
day stats also look good, beating out 3 core reviewers [2] on output with good 
quality reviews.

Please have a vote +1/-1 and take into consideration: 
https://wiki.openstack.org/wiki/Heat/CoreTeam

Regards,
-steve

[1] http://russellbryant.net/openstack-stats/heat-reviewers-180.txt
[2] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Murano mission change towards Application Catalog

2013-10-29 Thread Georgy Okrokvertskhov
Hi OpenStackers,


I would like to announce a change in the Murano project mission. Murano was
created as a Windows datacenter deployment project focused on Windows
services deployment automation with Horizon integration.


As the project evolved, we started to get more and more requests to add
non-Windows services using the same approach we designed and implemented in
Murano. In addition, the way Murano was originally designed provides a
solution that closes a few gaps in OpenStack: software orchestration,
workflow automation and an application catalog.


Now we see a new wave of emerging projects and initiatives focusing on the
platform aspects of the OpenStack ecosystem. With that, the Murano team has
decided to move towards an application catalog mission and contribute the
software orchestration and workflow automation portions to the corresponding
OpenStack initiatives.


The new mission of the Murano project is to create an application catalog: an
integration layer that allows publishing third-party applications and
services for self-provisioning in OpenStack. An application could be a simple
single-instance VM or a complex multi-tier application with autoscaling and
self-healing.


Murano will cover different aspects of application delivery to the end user,
including publishing, distribution control and final provisioning.

You can find a detailed description of the redefined mission and more use
cases on the wiki page
https://wiki.openstack.org/wiki/Murano/ApplicationCatalog.


Murano plans to use Heat for both resource provisioning and software
orchestration. We already use Heat as part of the OpenStack infrastructure
provisioning engine. At the same time, Murano will integrate with the
TaskFlow and Mistral projects to cover the specifics of workflow execution.
Ceilometer will be used to track the usage statistics required for billing
purposes.



Thanks,
Georgy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS IceHouse summit prep and IRC meeting

2013-10-29 Thread Sumit Naiksatam
Hi All,

Reminder - we will have the Neutron FWaaS IRC meeting tomorrow Wednesday
18:00 UTC (11 AM PDT).

Agenda - https://wiki.openstack.org/wiki/Meetings/FWaaS

Thanks,
~Sumit.


On Wed, Oct 23, 2013 at 12:08 PM, Sumit Naiksatam
wrote:

> Log from today's meeting:
>
>
> http://eavesdrop.openstack.org/meetings/networking_fwaas/2013/networking_fwaas.2013-10-23-18.02.log.html
>
> Action items for some of the folks included.
>
> Please join us for the meeting next week.
>
> Thanks,
> ~Sumit.
>
> On Tue, Oct 22, 2013 at 2:00 PM, Sumit Naiksatam  > wrote:
>
>> Reminder - we will have the Neutron FWaaS IRC meeting tomorrow Wednesday
>> 18:00 UTC (11 AM PDT).
>>
>> Agenda:
>> * Tempest tests
>> * Definition and use of zones
>> * Address Objects
>> * Counts API
>> * Service Objects
>> * Integration with service type framework
>> * Open discussion - any other topics you would like to bring up for
>> discussion during the summit.
>>
>> https://wiki.openstack.org/wiki/Meetings/FWaaS
>>
>> Thanks,
>> ~Sumit.
>>
>>
>> On Sun, Oct 13, 2013 at 1:56 PM, Sumit Naiksatam <
>> sumitnaiksa...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> For the next of phase of FWaaS development we will be considering a
>>> number of features. I am proposing an IRC meeting on Oct 16th Wednesday
>>> 18:00 UTC (11 AM PDT) to discuss this.
>>>
>>> The etherpad for the summit session proposal is here:
>>> https://etherpad.openstack.org/p/icehouse-neutron-fwaas
>>>
>>> and has a high level list of features under consideration.
>>>
>>> Thanks,
>>> ~Sumit.
>>>
>>>
>>>
>>
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object status and admin_state_up

2013-10-29 Thread Ravi Chunduru
Generally a load balancer will have the following options:

enable - configurationally enabled
disable - configurationally disabled

up - operational status alive
down - operational status down

If we have both of the above, it becomes meaningful to get the actual status
of the object.
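
For illustration, a minimal sketch (a hypothetical helper, not actual
Neutron code) of how a configured admin state and an operational status
could be combined when reporting an object's state:

    # Hypothetical helper: combine admin state with operational status.
    def effective_state(admin_state_up, operational_status):
        # admin_state_up: bool
        # operational_status: 'ACTIVE', 'DOWN', 'PENDING_CREATE', 'ERROR', ...
        if not admin_state_up:
            # Configurationally disabled: report INACTIVE regardless of
            # what the backend reports operationally.
            return 'INACTIVE'
        return operational_status

    assert effective_state(False, 'ACTIVE') == 'INACTIVE'
    assert effective_state(True, 'DOWN') == 'DOWN'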

Thanks,
-Ravi.




On Tue, Oct 29, 2013 at 4:33 PM, Itsuro ODA  wrote:

> Hi,
>
> I think "INACTIVE" is right for resources with admin_statu_up False.
>
> BTW, there are following requirements:
> * Change to ACTIVE from PENDING_CREATE/UPDATE when the serives
>   is available actually. (ie. after lbaas_agent done the job.)
> * Reflect a member is alive or not to the 'status' attribute of
>   member resource. (ie. if a member is not alive, the status is
>   "DOWN".)
>
> Note that we are planning to implement above requiremants to LVS
> driver.
>
> Thanks,
> Itsuro Oda
>
> On Tue, 29 Oct 2013 13:19:16 +0400
> Eugene Nikanorov  wrote:
>
> > Hi folks,
> >
> > Currently there are two attributes of vips/pools/members that represent a
> > status: 'status' and 'admin_state_up'.
> >
> > The first one is used to represent deployment status and can be
> > PENDING_CREATE, ACTIVE, PENDING_DELETE, ERROR.
> > We also have admin_state_up which could be True or False.
> >
> > I'd like to know your opinion on how to change 'status' attribute based
> on
> > admin_state_up changes.
> > For instance. If admin_state_up is updated to be False, how do you think
> > 'status' should change?
> >
> > Also, speaking of reference implementation (HAProxy), changing vip or
> pool
> > admin_state_up to False effectively destroys the balancer (undeploys it),
> > while the objects remain in ACTIVE state.
> > There are two options to fix this discrepancy:
> > 1) Change status of vip/pool to PENDING_CREATE if admin_state_up changes
> to
> > False
> > 2) Don't destroy the loadbalancer and use HAProxy's capability to disable
> > the frontend and backend while leaving the vip/pool in ACTIVE state
> >
> > Please share your opinion.
> >
> > Thanks,
> > Eugene.
>
> --
> Itsuro ODA 
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object status and admin_state_up

2013-10-29 Thread Itsuro ODA
Hi,

I think "INACTIVE" is right for resources with admin_statu_up False.

BTW, there are following requirements:
* Change to ACTIVE from PENDING_CREATE/UPDATE when the serives
  is available actually. (ie. after lbaas_agent done the job.)
* Reflect a member is alive or not to the 'status' attribute of
  member resource. (ie. if a member is not alive, the status is
  "DOWN".)

Note that we are planning to implement above requiremants to LVS
driver.

Thanks,
Itsuro Oda

On Tue, 29 Oct 2013 13:19:16 +0400
Eugene Nikanorov  wrote:

> Hi folks,
> 
> Currently there are two attributes of vips/pools/members that represent a
> status: 'status' and 'admin_state_up'.
> 
> The first one is used to represent deployment status and can be
> PENDING_CREATE, ACTIVE, PENDING_DELETE, ERROR.
> We also have admin_state_up which could be True or False.
> 
> I'd like to know your opinion on how to change 'status' attribute based on
> admin_state_up changes.
> For instance. If admin_state_up is updated to be False, how do you think
> 'status' should change?
> 
> Also, speaking of reference implementation (HAProxy), changing vip or pool
> admin_state_up to False effectively destroys the balancer (undeploys it),
> while the objects remain in ACTIVE state.
> There are two options to fix this discrepancy:
> 1) Change status of vip/pool to PENDING_CREATE if admin_state_up changes to
> False
> 2) Don't destroy the loadbalancer and use HAProxy's capability to disable
> the frontend and backend while leaving the vip/pool in ACTIVE state
> 
> Please share your opinion.
> 
> Thanks,
> Eugene.

-- 
Itsuro ODA 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Propose Jay Bryant for core

2013-10-29 Thread Mike Perez
On Tue, Oct 29, 2013 at 1:54 PM, John Griffith
wrote:

> Hey,
>
> I wanted to propose Jay Bryant (AKA jsbryant, AKA jungleboy, AKA
> :) ) for core membership on the Cinder team.  Jay has been working on
> Cinder for a while now and has really shown some dedication and
> provided much needed help with quality reviews.  In addition to his
> review activity he's also been very active in IRC and in Cinder
> development as well.
>
> I think he'd be a good add to the core team.
>
> Thanks,
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


+1


-Mike Perez
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Henry Gessau
On Tue, Oct 29, at 5:52 pm, Jiang, Yunhong  wrote:

>> -Original Message-
>> From: Henry Gessau [mailto:ges...@cisco.com]
>> Sent: Tuesday, October 29, 2013 2:23 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
>> support
>> 
>> On Tue, Oct 29, at 4:31 pm, Jiang, Yunhong 
>> wrote:
>> 
>> > Henry, why do you think the "service VM" needs the entire PF instead of a
>> > VF? I think the SR-IOV NIC should provide QoS and performance
>> isolation.
>> 
>> I was speculating. I just thought it might be a good idea to leave open the
>> possibility of assigning a PF to a VM if the need arises.
>> 
>> Neutron service VMs are a new thing. I will be following the discussions
>> and
>> there is a summit session for them. It remains to be seen if there is any
>> desire/need for full PF ownership of NICs. But if a service VM owns the PF
>> and has the right NIC driver it could do some advanced features with it.
>> 
> At least in the current PCI implementation, if a device does not have
> SR-IOV enabled, that device will be exposed and can be assigned (is this
> your so-called PF?).

Apologies, this was not clear to me until now. Thanks. I am not aware of a
use-case for a service VM needing to control VFs. So you are right, I should
not have talked about PF but rather just the entire NIC device in
passthrough mode, no SR-IOV needed.

So the admin will need to know: Put a NIC in SR-IOV mode if it is to be used
by multiple VMs. Put a NIC in single device passthrough mode if it is to be
used by one service VM.
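
For reference, a minimal sketch of how an admin might wire this up with the
Havana-era PCI options (the vendor/product IDs and the alias name below are
examples only, not a recommendation):

    # Compute node nova.conf: expose matching devices to the resource tracker.
    pci_passthrough_whitelist = [{"vendor_id": "8086", "product_id": "10ca"}]

    # API/controller nova.conf: define an alias that flavors can request.
    pci_alias = {"vendor_id": "8086", "product_id": "10ca", "name": "niantic"}

    # Then request one such device through a flavor extra spec:
    #   nova flavor-key m1.pci set "pci_passthrough:alias"="niantic:1"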

> If a device has SR-IOV enabled, then only the VFs are
> exposed and the PF is hidden from the resource tracker. The reason is
> that, when SR-IOV is enabled, the PF is mostly used to configure and
> manage the VFs, and it would be a security issue to expose the PF to a
> guest.

Thanks for bringing up the security issue. If a physical network interface
is connected in a special way to some switch/router with the intention being
for it to be used only by a service VM, then close attention must be paid to
security. The device owner might get some low-level network access that can
be misused.

> I'm not sure which PF you are talking about: the PF w/ or w/o SR-IOV
> enabled?
> 
> I totally agree that assigning a PCI NIC to a service VM has a lot of
> benefits from both the performance and isolation points of view.
> 
> Thanks
> --jyh
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint review process

2013-10-29 Thread Tom Fifield

On 30/10/13 07:58, Russell Bryant wrote:

On 10/29/2013 04:24 PM, Stefano Maffulli wrote:

On 10/28/2013 10:28 AM, Russell Bryant wrote:

2) Setting clearer expectations.  Since we have so many blueprints for
Nova, I feel it's very important to accurately set expectations for how
the priority of different projects compare.  In the last cycle,
priorities were mainly subjectively set by me.  Setting priorities based
on what reviewers are willing to spend time on is a more accurate
reflection of the likelihood of a set of changes making it in to the
release.


I'm all for managing expectations :) I had a conversation with Tom about
this and we agreed that there may be a risk that new contributors with
not much karma in the project would have a harder time getting their
blueprint assigned higher priorities. If a new group proposes a
blueprint, they may need to "court" bp reviewers to convince them to
dedicate attention to their first bp. The risk is that blueprint
reviewers become a sort of gatekeeper, or what other projects
call 'committers'.

I think this is a concrete risk; it exists, but I don't know if it's
possible to eliminate it. I don't think we have to eliminate it, but we
need to manage and minimize it in order to keep our promise of being
'open', as in open to new contributors, even the ones with low karma.

What do you think?


I think you're right, but it's actually no different than things have
been in the past.  It's just blueprints better reflecting how things
actually work.


However, people that have a proven track record of producing high
quality work are going to have an easier time getting attention, because
it takes less work overall to get their patches in.  With that said, if
the blueprint is good, it should get priority based on its merit, even
if the submitter has lower karma in the community.

Where we seem to hit the most friction is actually when merit alone
doesn't grant higher priority (only relevant to a small number of users,
for example), and the submitter hasn't built up karma either.  Those are
the ones that have a hard time, but it's not that surprising.


The user committee might be able to help here. Through the user survey, 
and engagement with communities around the world, they have an idea of 
what affects what number of users and how.


So, how would you feel about giving some priority manipulation abilities 
to the user committee? :)



Regards,


Tom







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Yathiraj Udupi (yudupi)
Hi Alex,



I thought the value in Nova was established right from the time the initial
blueprint was created: groups of VMs with policies, as described in the
document.

Once we have described and registered the group using these APIs, it adds value
to schedule them as a whole.



With the bigger roadmap in mind we have to start somewhere.



Thanks,

Yathi.







Sent from my LG G2, an AT&T 4G LTE smartphone





-- Original message--

From: Alex Glikson

Date: Tue, 10/29/2013 3:11 PM

To: OpenStack Development Mailing List (not for usage questions);

Subject:Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - 
Updated document with an example request payload



If we envision the main benefits only after (parts of) this logic moves outside 
of Nova (and starts addressing other resources) -- would it be still worth 
maintaining an order of 5K LOC in Nova to support this feature? Why not going 
for the 'ultimate' solution in the first place then, keeping in Nova only the 
mandatory enablement (TBD)?
Alternatively, if we think that there is value in having this just in Nova -- 
would be good to understand the exact scenarios which do not require awareness 
of other resources (and see if they are important enough to maintain those 5K 
LOC), and how exactly this can gradually evolve into the 'ultimate' solution.
Or am I missing something?

Alex




From:"Yathiraj Udupi (yudupi)" 
To:"OpenStack Development Mailing List (not for usage questions)" 
,
Date:29/10/2013 11:46 PM
Subject:Re: [openstack-dev] [nova][scheduler] Instance Group Model and 
APIs - Updated document with an example request payload




Thanks Alex, Mike, Andrew, Russell for your comments.  This ongoing API 
discussion started in our scheduler meetings, as a first step to tackle in the 
Smarter resource placement ideas - See the doc for reference - 
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit
  This roadmap calls for a unified resource placement decisions to be taken 
covering resources across services, starting from a complete topology request 
with all the necessary nodes/instances/resources, their connections, and the 
policies.

However we agreed that we will first address the defining of the required APIs, 
and start the effort to make this happen within Nova,  using VM instances 
groups, with policies.
Hence this proposal for the instance groups.

The entire group needs to be placed as a whole, at least the first step is to 
find an ideal placement choices for the entire group.  Once the placement has 
been identified (using a smart resource placement engine that addresses solving 
the entire group), we then focus on ways to schedule them as a whole.  This is 
not part of the API discussion, however important for the smart resource 
placement ideas.  This definitely involves concepts such as reservation, etc.  
Heat or Heat APIs could be a choice to enable the final orchestration, but I am 
not commenting on that here.

The APIs effort here is an attempt to provide clean interfaces now to be able 
to represent this instance group, and save them, and also define apis to create 
them.  The actual implementation will have to rely on one or more services to - 
1. to make the resource placement decisions, 2. then actually provision them, 
orchestrate them in the right order, etc.

The placement decisions itself can happen in a module that can be a separate 
service, and can be reused by different services, and it also needs to have a 
global vision of all the resources.  (Again all of this part of the scope of 
smart resource placement topic).

Thanks,
Yathi.


On 10/29/13, 2:14 PM, "Andrew Laski" 
mailto:andrew.la...@rackspace.com>> wrote:

On 10/29/13 at 04:05pm, Mike Spreitzer wrote:
Alex Glikson mailto:glik...@il.ibm.com>> wrote on 
10/29/2013 03:37:41 AM:

1. I assume that the motivation for rack-level anti-affinity is to
survive a rack failure. Is this indeed the case?
This is a very interesting and important scenario, but I am curious
about your assumptions regarding all the other OpenStack resources
and services in this respect.

Remember we are just starting on the roadmap.  Nova in Icehouse, holistic
later

2. What exactly do you mean by "network reachability" between the
two groups? Remember that we are in Nova (at least for now), so we
don't have much visibility to the topology of the physical or
virtual networks. Do you have some concrete thoughts on how such
policy can be enforced, in presence of potentially complex
environment managed by Neutron?

I am aiming for the holistic future, and Yathi copied that from an example
I drew with the holistic future in mind.  While we are only addressing
Nova, I think a network reachability policy is inappropriate.

3. The JSON somewhat reminds me of the interface of Heat, and I would
assume that certain capabilities that would be required to implement
it wo

Re: [openstack-dev] [Cinder] Propose Jay Bryant for core

2013-10-29 Thread Patil, Tushar
+1

Tpatil.

>-Original Message-
>From: John Griffith [mailto:john.griff...@solidfire.com]
>Sent: Tuesday, October 29, 2013 1:54 PM
>To: OpenStack Development Mailing List
>Subject: [openstack-dev] [Cinder] Propose Jay Bryant for core
>
>Hey,
>
>I wanted to propose Jay Bryant (AKA jsbryant, AKA jungleboy, AKA
>:) ) for core membership on the Cinder team.  Jay has been working on
>Cinder for a while now and has really shown some dedication and provided
>much needed help with quality reviews.  In addition to his review
>activity he's also been very active in IRC and in Cinder development as
>well.
>
>I think he'd be a good add to the core team.
>
>Thanks,
>John
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Propose Jay Bryant for core

2013-10-29 Thread Eric Harney
On 10/29/2013 04:54 PM, John Griffith wrote:
> Hey,
> 
> I wanted to propose Jay Bryant (AKA jsbryant, AKA jungleboy, AKA
> :) ) for core membership on the Cinder team.  Jay has been working on
> Cinder for a while now and has really shown some dedication and
> provided much needed help with quality reviews.  In addition to his
> review activity he's also been very active in IRC and in Cinder
> development as well.
> 
> I think he'd be a good add to the core team.
> 
> Thanks,
> John
> 

+1 from me, he's been actively contributing for a while now.

Eric


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Shared network between specific tenants, but not all tenants?

2013-10-29 Thread Mike Wilson
+1

I also have tenants asking for this :-). I'm interested to see a blueprint.

-Mike


On Tue, Oct 29, 2013 at 1:24 PM, Jay Pipes  wrote:

> On 10/29/2013 02:25 PM, Justin Hammond wrote:
>
>> We have been considering this and have some notes on our concept, but we
>> haven't made a blueprint for it. I will speak amongst my group and find
>> out what they think of making it more public.
>>
>
> OK, cool, glad to know I'm not the only one with tenants asking for this :)
>
> Looking forward to a possible blueprint on this.
>
> Best,
> -jay
>
>
>  On 10/29/13 12:26 PM, "Jay Pipes"  wrote:
>>
>>  Hi Neutron devs,
>>>
>>> Are there any plans to support networks that are shared/routed only
>>> between certain tenants (not all tenants)?
>>>
>>> Thanks,
>>> -jay
>>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Alex Glikson
If we envision the main benefits only after (parts of) this logic moves 
outside of Nova (and starts addressing other resources) -- would it be 
still worth maintaining an order of 5K LOC in Nova to support this 
feature? Why not going for the 'ultimate' solution in the first place 
then, keeping in Nova only the mandatory enablement (TBD)?
Alternatively, if we think that there is value in having this just in Nova 
-- would be good to understand the exact scenarios which do not require 
awareness of other resources (and see if they are important enough to 
maintain those 5K LOC), and how exactly this can gradually evolve into the 
'ultimate' solution.
Or am I missing something?

Alex




From:   "Yathiraj Udupi (yudupi)" 
To: "OpenStack Development Mailing List (not for usage questions)" 
, 
Date:   29/10/2013 11:46 PM
Subject:Re: [openstack-dev] [nova][scheduler] Instance Group Model 
and APIs - Updated document with an example request payload



Thanks Alex, Mike, Andrew, Russell for your comments.  This ongoing API 
discussion started in our scheduler meetings, as a first step to tackle in 
the Smarter resource placement ideas - See the doc for reference - 
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit
 
 This roadmap calls for a unified resource placement decisions to be taken 
covering resources across services, starting from a complete topology 
request with all the necessary nodes/instances/resources, their 
connections, and the policies. 

However we agreed that we will first address the defining of the required 
APIs, and start the effort to make this happen within Nova,  using VM 
instances groups, with policies. 
Hence this proposal for the instance groups. 

The entire group needs to be placed as a whole, at least the first step is 
to find an ideal placement choices for the entire group.  Once the 
placement has been identified (using a smart resource placement engine 
that addresses solving the entire group), we then focus on ways to 
schedule them as a whole.  This is not part of the API discussion, however 
important for the smart resource placement ideas.  This definitely 
involves concepts such as reservation, etc.  Heat or Heat APIs could be a 
choice to enable the final orchestration, but I am not commenting on that 
here.

The APIs effort here is an attempt to provide clean interfaces now to be 
able to represent this instance group, and save them, and also define apis 
to create them.  The actual implementation will have to rely on one or 
more services to - 1. to make the resource placement decisions, 2. then 
actually provision them, orchestrate them in the right order, etc. 

The placement decisions itself can happen in a module that can be a 
separate service, and can be reused by different services, and it also 
needs to have a global vision of all the resources.  (Again all of this 
part of the scope of smart resource placement topic). 

Thanks,
Yathi. 


On 10/29/13, 2:14 PM, "Andrew Laski"  wrote:

On 10/29/13 at 04:05pm, Mike Spreitzer wrote:
Alex Glikson  wrote on 10/29/2013 03:37:41 AM:

1. I assume that the motivation for rack-level anti-affinity is to
survive a rack failure. Is this indeed the case?
This is a very interesting and important scenario, but I am curious
about your assumptions regarding all the other OpenStack resources
and services in this respect.

Remember we are just starting on the roadmap.  Nova in Icehouse, holistic
later

2. What exactly do you mean by "network reachability" between the
two groups? Remember that we are in Nova (at least for now), so we
don't have much visibility to the topology of the physical or
virtual networks. Do you have some concrete thoughts on how such
policy can be enforced, in presence of potentially complex
environment managed by Neutron?

I am aiming for the holistic future, and Yathi copied that from an example
I drew with the holistic future in mind.  While we are only addressing
Nova, I think a network reachability policy is inappropriate.

3. The JSON somewhat reminds me of the interface of Heat, and I would
assume that certain capabilities that would be required to implement
it would be similar too. What is the proposed approach to
'harmonize' between the two, in environments that include Heat? What
would be end-to-end flow? For example, who would do the
orchestration of individual provisioning steps? Would "create"
operation delegate back to Heat for that? Also, how other
relationships managed by Heat (e.g., links to storage and network)
would be incorporated in such an end-to-end scenario?

You raised a few interesting issues.

1. Heat already has a way to specify resources, I do not see why we should
invent another.

2. Should Nova call Heat to do the orchestration?  I would like to see an
example where ordering is an issue.  IMHO, since OpenStack already has a
solution for creating resources in the right order, I do not see why we
should invent another.

Having Nova cal

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Jiang, Yunhong


> -Original Message-
> From: Henry Gessau [mailto:ges...@cisco.com]
> Sent: Tuesday, October 29, 2013 2:23 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
> support
> 
> On Tue, Oct 29, at 4:31 pm, Jiang, Yunhong 
> wrote:
> 
> > Henry,why do you think the "service VM" need the entire PF instead of a
> > VF? I think the SR-IOV NIC should provide QoS and performance
> isolation.
> 
> I was speculating. I just thought it might be a good idea to leave open the
> possibility of assigning a PF to a VM if the need arises.
> 
> Neutron service VMs are a new thing. I will be following the discussions
> and
> there is a summit session for them. It remains to be seen if there is any
> desire/need for full PF ownership of NICs. But if a service VM owns the PF
> and has the right NIC driver it could do some advanced features with it.
> 
At least in the current PCI implementation, if a device does not have SR-IOV 
enabled, then that device will be exposed and can be assigned (is this your 
so-called PF?). If a device has SR-IOV enabled, then only the VFs are exposed 
and the PF is hidden from the resource tracker. The reason is that, when 
SR-IOV is enabled, the PF is mostly used to configure and manage the VFs, and 
it would be a security issue to expose the PF to a guest.

I'm not sure which PF you are talking about: the PF w/ or w/o SR-IOV enabled?

I totally agree that assigning a PCI NIC to a service VM has a lot of benefits 
from both the performance and isolation points of view.

Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Yathiraj Udupi (yudupi)

On 10/29/13, 2:26 PM, "Chris Friesen"  wrote:

>On 10/29/2013 03:14 PM, Andrew Laski wrote:
>
>>
> > Nova has placement concerns that extend
>> to finding a capable hypervisor for the VM that someone would like to
>> boot, and then just slightly beyond.  If there are higher level
>> decisions to be made about placement decisions I think that belongs
>> outside of Nova, and then just tell Nova where to put it.
>
>Not sure about this part.  I think that with instance groups Nova should
>be aware of the placement of the group as a whole, not just single
>instances.
>
>As soon as we start trying to do placement logic outside of Nova it
>becomes trickier to deal with race conditions when competing against
>other API users trying to acquire resources at the same time.


The APIs are designed to be two-stage, as per this proposal.  First register
the entire instance group, and then trigger another API to create it.  These
create calls can be scheduled one by one if necessary.
Agreed that these creates will run into race conditions, hence it definitely
needs some concept of a reservation to be really accurate.  It is not
enough to just find the ideal optimal resource placement at any point in
time; there should also be a guarantee that it can eventually be scheduled.
Hence the actual implementation of these APIs will need such additional
support.
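
To make the two-stage flow concrete, here is a purely hypothetical sketch;
the endpoint paths, payload shape and action body are placeholders, not an
existing Nova API:

    import json
    import requests  # any HTTP client would do

    base = 'http://nova-api:8774/v2/demo'            # placeholder endpoint
    headers = {'X-Auth-Token': 'TOKEN',              # placeholder token
               'Content-Type': 'application/json'}

    # Stage 1: register the group definition; nothing is provisioned yet,
    # so the whole topology can be validated and placed as one unit.
    group = {'name': 'web-tier',
             'policies': [{'type': 'group', 'name': 'anti-affinity'}],
             'members': [{'name': 'web-1', 'type': 'instance'},
                         {'name': 'web-2', 'type': 'instance'}]}
    resp = requests.post(base + '/os-instance-groups',
                         headers=headers, data=json.dumps(group))
    group_id = resp.json()['instance_group']['id']

    # Stage 2: create the registered group as a whole; a reservation would
    # have to be held between placement and boot to avoid the race above.
    requests.post(base + '/os-instance-groups/%s/action' % group_id,
                  headers=headers, data=json.dumps({'create': None}))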

Thanks,
Yathi. 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Yathiraj Udupi (yudupi)
Thanks Alex, Mike, Andrew, Russell for your comments.  This ongoing API 
discussion started in our scheduler meetings, as a first step to tackle in the 
Smarter resource placement ideas - See the doc for reference - 
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit
  This roadmap calls for a unified resource placement decisions to be taken 
covering resources across services, starting from a complete topology request 
with all the necessary nodes/instances/resources, their connections, and the 
policies.

However we agreed that we will first address the defining of the required APIs, 
and start the effort to make this happen within Nova,  using VM instances 
groups, with policies.
Hence this proposal for the instance groups.

The entire group needs to be placed as a whole, at least the first step is to 
find an ideal placement choices for the entire group.  Once the placement has 
been identified (using a smart resource placement engine that addresses solving 
the entire group), we then focus on ways to schedule them as a whole.  This is 
not part of the API discussion, however important for the smart resource 
placement ideas.  This definitely involves concepts such as reservation, etc.  
Heat or Heat APIs could be a choice to enable the final orchestration, but I am 
not commenting on that here.

The APIs effort here is an attempt to provide clean interfaces now to be able 
to represent this instance group, and save them, and also define apis to create 
them.  The actual implementation will have to rely on one or more services to - 
1. to make the resource placement decisions, 2. then actually provision them, 
orchestrate them in the right order, etc.

The placement decisions itself can happen in a module that can be a separate 
service, and can be reused by different services, and it also needs to have a 
global vision of all the resources.  (Again all of this part of the scope of 
smart resource placement topic).

Thanks,
Yathi.


On 10/29/13, 2:14 PM, "Andrew Laski" 
mailto:andrew.la...@rackspace.com>> wrote:

On 10/29/13 at 04:05pm, Mike Spreitzer wrote:
Alex Glikson mailto:glik...@il.ibm.com>> wrote on 
10/29/2013 03:37:41 AM:

1. I assume that the motivation for rack-level anti-affinity is to
survive a rack failure. Is this indeed the case?
This is a very interesting and important scenario, but I am curious
about your assumptions regarding all the other OpenStack resources
and services in this respect.

Remember we are just starting on the roadmap.  Nova in Icehouse, holistic
later

2. What exactly do you mean by "network reachability" between the
two groups? Remember that we are in Nova (at least for now), so we
don't have much visibility to the topology of the physical or
virtual networks. Do you have some concrete thoughts on how such
policy can be enforced, in presence of potentially complex
environment managed by Neutron?

I am aiming for the holistic future, and Yathi copied that from an example
I drew with the holistic future in mind.  While we are only addressing
Nova, I think a network reachability policy is inappropriate.

3. The JSON somewhat reminds me of the interface of Heat, and I would
assume that certain capabilities that would be required to implement
it would be similar too. What is the proposed approach to
'harmonize' between the two, in environments that include Heat? What
would be end-to-end flow? For example, who would do the
orchestration of individual provisioning steps? Would "create"
operation delegate back to Heat for that? Also, how other
relationships managed by Heat (e.g., links to storage and network)
would be incorporated in such an end-to-end scenario?

You raised a few interesting issues.

1. Heat already has a way to specify resources, I do not see why we should
invent another.

2. Should Nova call Heat to do the orchestration?  I would like to see an
example where ordering is an issue.  IMHO, since OpenStack already has a
solution for creating resources in the right order, I do not see why we
should invent another.

Having Nova call into Heat is backwards IMO.  If there are specific
pieces of information that Nova can expose, or API capabilities to help
with orchestration/placement that Heat or some other service would like
to use then let's look at that.  Nova has placement concerns that extend
to finding a capable hypervisor for the VM that someone would like to
boot, and then just slightly beyond.  If there are higher level
decisions to be made about placement decisions I think that belongs
outside of Nova, and then just tell Nova where to put it.



Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
h

Re: [openstack-dev] [Cinder] Propose Jay Bryant for core

2013-10-29 Thread Walter A. Boring IV

On 10/29/2013 01:54 PM, John Griffith wrote:

Hey,

I wanted to propose Jay Bryant (AKA jsbryant, AKA jungleboy, AKA
:) ) for core membership on the Cinder team.  Jay has been working on
Cinder for a while now and has really shown some dedication and
provided much needed help with quality reviews.  In addition to his
review activity he's also been very active in IRC and in Cinder
development as well.

I think he'd be a good add to the core team.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


+1

Walt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Network topologies

2013-10-29 Thread Fox, Kevin M
I'm not sure how you could avoid dependencies in any network configuration 
worth dumping and restoring.

One case that I'd like to use the functionality you list is the following:

I have an external network, and I create a private network per tenant and 
attach it via a router to the public network. This is the "tenant public 
network"

I need to create this network per tenant. Having a tool to save and repeat this 
activity would be very nice.

But even in a case this simple, the network needs to be created before the
router can be created/attached to it.

How about extended Neutron services? There will probably some day (if not
already) be some that need things started in a particular order. LBaaS?
Network first, then the LB?

Heat already supports all of this.
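
As a minimal illustration, the "tenant public network" above sketched as a
HOT template (this assumes the Havana-era Neutron resource types; the
pre-existing external network is passed in as a parameter):

    heat_template_version: 2013-05-23
    parameters:
      public_net_id:
        type: string        # UUID of the pre-existing external network
    resources:
      tenant_net:
        type: OS::Neutron::Net
      tenant_subnet:
        type: OS::Neutron::Subnet
        properties:
          network_id: { get_resource: tenant_net }
          cidr: 192.168.0.0/24
          enable_dhcp: true
      router:
        type: OS::Neutron::Router
      gateway:
        type: OS::Neutron::RouterGateway
        properties:
          router_id: { get_resource: router }
          network_id: { get_param: public_net_id }
      iface:
        type: OS::Neutron::RouterInterface
        properties:
          router_id: { get_resource: router }
          subnet_id: { get_resource: tenant_subnet }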

Thanks,
Kevin

From: Edgar Magana [emag...@plumgrid.com]
Sent: Tuesday, October 29, 2013 11:33 AM
To: OpenStack List
Subject: Re: [openstack-dev] [Heat] Network topologies

Tim,

Your statement "building an api that manages a network topology more than
one that needs to build out the dependencies between resources to help
create the network topology"
is exactly what we are proposing, and this is why we believe this is not
in the Heat domain.

This is why we are NOT proposing to manage any dependency between network
elements; that part is what I call the "intelligence" of the orchestration,
and we are not proposing any orchestration system. You already have that
in place :-)

So, we simply want an API that tenants may use to "save", "retrieve" and
"share" topologies. For instance, tenant A creates a topology with two
networks (192.168.0.0/24 and 192.168.1.0/24), both with DHCP enabled, and a
router connecting them. We first create it using CLI commands or
Horizon and then we call the API to save the topology for that tenant;
that topology can also be shared between tenants if the owner wants to do
that, the same concept that we have in Neutron for "shared networks". So
Tenant B, or any other tenant, doesn't need to re-create the whole topology,
just "open" the shared topology from tenant A. Obviously, overlapping IPs
will be a "must" requirement.

I am including in this thread Mark McClain, who is the Neutron PTL and
the main person expressing concerns about not having overlapping
functionality between Neutron and Heat or any other project.

I am absolutely happy to discuss further with you, but if you are OK with
this approach we could start the development under the Neutron umbrella.
Final thoughts?

Thanks,

Edgar

On 10/29/13 8:23 AM, "Tim Schnell"  wrote:

>Hi Edgar,
>
>It seems like this blueprint is related more to building an api that
>manages a network topology more than one that needs to build out the
>dependencies between resources to help create the network topology. If we
>are talking about just an api to "save", "duplicate", and "share" these
>network topologies then I would agree that this is not something that Heat
>currently does or should do necessarily.
>
>I have been focusing primarily on front-end work for Heat so I apologize
>if these questions have already been answered. How is this API related to
>the existing network topology in Horizon? The existing network topology
>can already define the relationships and dependencies using Neutron I'm
>assuming so there is no apparent need to use Heat to gather this
>information. I'm a little confused as to the scope of the discussion, is
>that something that you are potentially interested in changing?
>
>Steve, Clint and Zane can better answer whether or not Heat wants to be in
>the business of managing existing network topologies but from my
>perspective I tend to agree with your statement that if you needed Heat to
>help describe the relationships between network resources then that might
>be duplicated effort but if don't need Heat to do that then this blueprint
>belongs in Neutron.
>
>Thanks,
>Tim
>
>
>
>
>
>On 10/29/13 1:32 AM, "Steven Hardy"  wrote:
>
>>On Mon, Oct 28, 2013 at 01:19:13PM -0700, Edgar Magana wrote:
>>> Hello Folks,
>>>
>>> Thank you Zane, Steven and Clint for you input.
>>>
>>> Our main goal in this BP is to provide networking users such as Heat
>>>(we
>>> consider it as a neutron user) a better and consolidated network
>>>building
>>> block in terms of an API that you could use for orchestration of
>>> application-driven requirements. This building block does not add any
>>> "intelligence" to the network topology because it does not have it and
>>> this is why I think this BP is different from the work that you are
>>>doing
>>> in Heat.
>>
>>So how do you propose to handle dependencies between elements in the
>>topology, e.g where things need to be created/deleted in a particular
>>order, or where one resource must be in a particular state before another
>>can be created?
>>
>>> The network topologies BP is not related to the Neutron Network Service
>>> Insertion BP:
>>>
>>>https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertio
>>>n

Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Alex Glikson
Andrew Laski  wrote on 29/10/2013 11:14:03 PM:
> [...]
> Having Nova call into Heat is backwards IMO.  If there are specific 
> pieces of information that Nova can expose, or API capabilities to help 
> with orchestration/placement that Heat or some other service would like 
> to use then let's look at that.  Nova has placement concerns that extend 

> to finding a capable hypervisor for the VM that someone would like to 
> boot, and then just slightly beyond.

+1

>  If there are higher level 
> decisions to be made about placement decisions I think that belongs 
> outside of Nova, and then just tell Nova where to put it.

I wonder whether it is possible to find an approach that takes into 
account cross-resource placement considerations (VM-to-VM communication 
over the application network, or VM-to-volume communication over the storage 
network) but does not require delivering all the intimate details of the 
entire environment to a single place -- which probably can not be any one 
of Nova/Cinder/Neutron/etc. Can we still use the individual schedulers in 
each of them, with only a partial view of the environment, to drive 
placement decisions that are consistently better than random?

Regards,
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Network topologies

2013-10-29 Thread Aaron Rosen
On Tue, Oct 29, 2013 at 1:06 PM, Edgar Magana  wrote:

> Aaron,
>
> Moving the management of topology?
>

As in, you are going to have to call the Neutron API from within your API to
create the topology from the template (which is what Heat already does).


> I am not proposing nothing like that, actually could you explain me the
> current workflow to save a network topology created by Neutron APIs, in
> order to use it by a different tenant or the owner itself in a different
> time?
>

Correct, there is nothing that does this today (unless you write a script to
do it). That said, I think Zane said it best: the part that should be
implemented is something that dumps out the existing network
configuration so that you could use it as a Heat template.  Is there any
reason why this approach won't work, in your opinion?
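
As a rough sketch of what such a dump could look like (this assumes
python-neutronclient and PyYAML and is not an existing tool; the credentials
below are placeholders):

    import yaml
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://keystone:5000/v2.0')

    # Walk the tenant's networks/subnets and emit a Heat-style template
    # skeleton that would re-create them.
    resources = {}
    for net in neutron.list_networks()['networks']:
        resources['net_' + net['name']] = {
            'type': 'OS::Neutron::Net',
            'properties': {'name': net['name']}}
    for subnet in neutron.list_subnets()['subnets']:
        resources['subnet_' + subnet['name']] = {
            'type': 'OS::Neutron::Subnet',
            'properties': {'cidr': subnet['cidr'],
                           'enable_dhcp': subnet['enable_dhcp']}}

    print(yaml.safe_dump({'heat_template_version': '2013-05-23',
                          'resources': resources}))

Routers and ports are omitted here; turning the dumped cross-references back
into get_resource links between resources is the harder part.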


Possibly, that is the part that I am missing and it will help me to improve
> current proposal.
>
> Thanks,
>
> Edgar
>
> From: Aaron Rosen 
> Reply-To: OpenStack List 
> Date: Tuesday, October 29, 2013 12:48 PM
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Heat] Network topologies
>
> Hi Edgar,
>
> I definitely see the usecase for the idea that you propose. In my opinion,
> I don't see the reason for moving the management of topology into neutron,
>  Heat already provides this functionality (besides for the part of taking
> an existing deployment and generating a template file). Also, I wanted to
> point out that in a way you will have to do orchestration as you're
> topology manager will have to call the neutron api in order to create the
> topology and tear it down.
>
> Best,
>
> Aaron
>
>
> On Tue, Oct 29, 2013 at 11:33 AM, Edgar Magana wrote:
>
>> Tim,
>>
>> Your statement "building an api that manages a network topology more than
>> one that needs to build out the dependencies between resources to help
>> create the network topology"
>> is exactly what we are proposing, and this is why we believe this is not
>> in the Heat domain.
>>
>> This is why we are NOT proposing to manage any dependency between network
>> elements; that part is what I call the "intelligence" of the orchestration,
>> and we are not proposing any orchestration system. You already have that
>> in place :-)
>>
>> So, we simply want an API that tenants may use to "save", "retrieve" and
>> "share" topologies. For instance, tenant A creates a topology with two
>> networks (192.168.0.0/24 and 192.168.1.0/24), both with DHCP enabled, and a
>> router connecting them. We first create it using CLI commands or
>> Horizon and then we call the API to save the topology for that tenant;
>> that topology can also be shared between tenants if the owner wants to do
>> that, the same concept that we have in Neutron for "shared networks". So
>> Tenant B, or any other tenant, doesn't need to re-create the whole topology,
>> just "open" the shared topology from tenant A. Obviously, overlapping IPs
>> will be a "must" requirement.
>>
>> I am including in this thread Mark McClain, who is the Neutron PTL and
>> the main person expressing concerns about not having overlapping
>> functionality between Neutron and Heat or any other project.
>>
>> I am absolutely happy to discuss further with you, but if you are OK with
>> this approach we could start the development under the Neutron umbrella.
>> Final thoughts?
>>
>> Thanks,
>>
>> Edgar
>>
>> On 10/29/13 8:23 AM, "Tim Schnell"  wrote:
>>
>> >Hi Edgar,
>> >
>> >It seems like this blueprint is related more to building an api that
>> >manages a network topology more than one that needs to build out the
>> >dependencies between resources to help create the network topology. If we
>> >are talking about just an api to "save", "duplicate", and "share" these
>> >network topologies then I would agree that this is not something that
>> Heat
>> >currently does or should do necessarily.
>> >
>> >I have been focusing primarily on front-end work for Heat so I apologize
>> >if these questions have already been answered. How is this API related to
>> >the existing network topology in Horizon? The existing network topology
>> >can already define the relationships and dependencies using Neutron I'm
>> >assuming so there is no apparent need to use Heat to gather this
>> >information. I'm a little confused as to the scope of the discussion, is
>> >that something that you are potentially interested in changing?
>> >
>> >Steve, Clint and Zane can better answer whether or not Heat wants to be
>> in
>> >the business of managing existing network topologies but from my
>> >perspective I tend to agree with your statement that if you needed Heat
>> to
>> >help describe the relationships between network resources then that might
>> >be duplicated effort but if don't need Heat to do that then this
>> blueprint
>> >belongs in Neutron.
>> >
>> >Thanks,
>> >Tim
>> >
>> >
>> >
>> >
>> >
>> >On 10/29/13 1:32 AM, "Steven Hardy"  wrote:
>> >
>> >>On Mon, Oct 28, 2013 at 01:19:13PM -0700, Edgar Magana wrote:
>> >>> Hello Folks,
>> >>>
>> >>> 

Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Chris Friesen

On 10/29/2013 03:14 PM, Andrew Laski wrote:


Having Nova call into Heat is backwards IMO.


Agreed.


 If there are specific
pieces of information that Nova can expose, or API capabilities to help
with orchestration/placement that Heat or some other service would like
to use then let's look at that.


Agreed.

> Nova has placement concerns that extend

to finding a capable hypervisor for the VM that someone would like to
boot, and then just slightly beyond.  If there are higher level
decisions to be made about placement decisions I think that belongs
outside of Nova, and then just tell Nova where to put it.


Not sure about this part.  I think that with instance groups Nova should 
be aware of the placement of the group as a whole, not just single 
instances.


As soon as we start trying to do placement logic outside of Nova it 
becomes trickier to deal with race conditions when competing against 
other API users trying to acquire resources at the same time.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Chris Friesen

On 10/29/2013 03:23 PM, Henry Gessau wrote:

On Tue, Oct 29, at 4:31 pm, Jiang, Yunhong  wrote:



> As to assigning an entire PCI device to a guest, that should be OK since
> usually the PF and VF have different device IDs. The tricky thing is, at
> least for some PCI devices, you can't configure some NICs to have SR-IOV
> enabled while others do not.


Thanks for the warning. :) Perhaps the cloud admin might plug in an extra
NIC in just a few nodes (one or two per rack, maybe) for the purpose of
running service VMs there. Again, just speculating. I don't know how hard it
is to manage non-homogenous nodes.


Perhaps those nodes could be identified using a host-aggregate with 
suitable metadata?
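
Something along those lines, perhaps (hypothetical names; this assumes the
scheduler has AggregateInstanceExtraSpecsFilter enabled so flavor extra
specs are matched against aggregate metadata):

    # Create an aggregate for the special nodes and tag it.
    nova aggregate-create pci-passthrough-nodes
    nova aggregate-set-metadata 1 pci_passthrough=true   # 1 = id from create
    nova aggregate-add-host 1 compute-17
    # A flavor whose extra spec matches the aggregate metadata.
    nova flavor-key service.vm set pci_passthrough=true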


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] HK Neutron Icehouse Design Meetings

2013-10-29 Thread Sergey Lukjanov
Here is the link to it - 
http://icehousedesignsummit.sched.org/overview/type/neutron

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Oct 29, 2013, at 1:47 PM, Hemanth Ravi  wrote:

> Hi,
> 
> Is there a schedule published for the Neutron Icehouse design meetings listed 
> at http://summit.openstack.org/
> 
> Thanks,
> -hemanth
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Henry Gessau
On Tue, Oct 29, at 4:31 pm, Jiang, Yunhong  wrote:

> Henry, why do you think the "service VM" needs the entire PF instead of a
> VF? I think the SR-IOV NIC should provide QoS and performance isolation.

I was speculating. I just thought it might be a good idea to leave open the
possibility of assigning a PF to a VM if the need arises.

Neutron service VMs are a new thing. I will be following the discussions and
there is a summit session for them. It remains to be seen if there is any
desire/need for full PF ownership of NICs. But if a service VM owns the PF
and has the right NIC driver it could do some advanced features with it.

> As to assigning an entire PCI device to a guest, that should be OK since
> usually the PF and VF have different device IDs. The tricky thing is, at
> least for some PCI devices, you can't configure some NICs to have SR-IOV
> enabled while others do not.

Thanks for the warning. :) Perhaps the cloud admin might plug in an extra
NIC in just a few nodes (one or two per rack, maybe) for the purpose of
running service VMs there. Again, just speculating. I don't know how hard it
is to manage non-homogenous nodes.

> 
> Thanks
> --jyh
> 
>> -Original Message-
>> From: Henry Gessau [mailto:ges...@cisco.com]
>> Sent: Tuesday, October 29, 2013 8:10 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
>> support
>> 
>> Lots of great info and discussion going on here.
>> 
>> One additional thing I would like to mention is regarding PF and VF usage.
>> 
>> Normally VFs will be assigned to instances, and the PF will either not be
>> used at all, or maybe some agent in the host of the compute node might
>> have
>> access to the PF for something (management?).
>> 
>> There is a neutron design track around the development of "service VMs".
>> These are dedicated instances that run neutron services like routers,
>> firewalls, etc. It is plausible that a service VM would like to use PCI
>> passthrough and get the entire PF. This would allow it to have complete
>> control over a physical link, which I think will be wanted in some cases.
>> 
>> --
>> Henry
>> 
>> On Tue, Oct 29, at 10:23 am, Irena Berezovsky 
>> wrote:
>> 
>> > Hi,
>> >
>> > I would like to share some details regarding the support provided by
>> > Mellanox plugin. It enables networking via SRIOV pass-through devices
>> or
>> > macvtap interfaces.  The plugin is available here:
>> >
>> https://github.com/openstack/neutron/tree/master/neutron/plugins/mlnx.
>> >
>> > To support either PCI pass-through device and macvtap interface type of
>> > vNICs, we set neutron port profile:vnic_type according to the required
>> VIF
>> > type and then use the created port to 'nova boot' the VM.
>> >
>> > To  overcome the missing scheduler awareness for PCI devices which
>> was not
>> > part of the Havana release yet, we
>> >
>> > have an additional service (embedded switch Daemon) that runs on each
>> > compute node.
>> >
>> > This service manages the SRIOV resources allocation,  answers vNICs
>> > discovery queries and applies VLAN/MAC configuration using standard
>> Linux
>> > APIs (code is here:
>> https://github.com/mellanox-openstack/mellanox-eswitchd
>> > ).  The embedded switch Daemon serves as a glue layer between VIF
>> Driver and
>> > Neutron Agent.
>> >
>> > In the Icehouse Release when SRIOV resources allocation is already part
>> of
>> > the Nova, we plan to eliminate the need in embedded switch daemon
>> service.
>> > So what is left to figure out is how to tie up between neutron port and
>> PCI
>> > device and invoke networking configuration.
>> >
>> >
>> >
>> > In our case what we have is actually the Hardware VEB that is not
>> programmed
>> > via either 802.1Qbg or 802.1Qbh, but configured locally by Neutron
>> Agent. We
>> > also support both Ethernet and InfiniBand physical network L2
>> technology.
>> > This means that we apply different configuration commands  to set
>> > configuration on VF.
>> >
>> >
>> >
>> > I guess what we have to figure out is how to support the generic case for
>> > the PCI device networking support, for HW VEB, 802.1Qbg and
>> 802.1Qbh cases.
>> >
>> >
>> >
>> > BR,
>> >
>> > Irena
>> >
>> >
>> >
>> > *From:*Robert Li (baoli) [mailto:ba...@cisco.com]
>> > *Sent:* Tuesday, October 29, 2013 3:31 PM
>> > *To:* Jiang, Yunhong; Irena Berezovsky;
>> prashant.upadhy...@aricent.com;
>> > chris.frie...@windriver.com; He, Yongli; Itzik Brown
>> > *Cc:* OpenStack Development Mailing List; Brian Bowen (brbowen);
>> Kyle
>> > Mestery (kmestery); Sandhya Dasu (sadasu)
>> > *Subject:* Re: [openstack-dev] [nova] [neutron] PCI pass-through
>> network support
>> >
>> >
>> >
>> > Hi Yunhong,
>> >
>> >
>> >
>> > I haven't looked at Mellanox in much detail. I think that we'll get more
>> > details from Irena down the road. Regarding your question, I can only
>> answer
>> > based on my experience with Cisco's VM-FEX. In a nutshell:
>

Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Yathiraj Udupi (yudupi)
Thanks Khanh for your questions and Mike for adding your inputs.  Some more 
inline comments.

On 10/29/13, 1:23 PM, "Mike Spreitzer" 
mailto:mspre...@us.ibm.com>> wrote:

Khanh-Toan Tran 
mailto:khanh-toan.t...@cloudwatt.com>> wrote on 
10/29/2013 09:10:00 AM:
> ...
> 1) Member of a group is recursive. A member can be group or an
> instance. In this case there are two different declaration formats
> for members, as with http-server-group-1 ("name, "policy", "edge")
> and Http-Server-1 ("name", "request_spec", "type"). Would it be
> better if group-typed member also have "type" field to better
> interpret the member? Like policy which has "type" field to declare
> that's a egde-typed policy or group-typed policy.

I have no strong opinion on this.

Yeah, some of this might have been missed in my example; the purpose here was 
to provide an example to express it.  A "type" field for group-typed members 
will be included in the request payload.


> 2) The "edge" is not clear to me. It seems to me that "edge" is just
> a place holder for the edge policy. Does it have some particular
> configuration like group members (e.g. group-typed member is
> described by its "member","edge" and "policy", while instance-typed
> member is described by its "request_spec") ?

Yes, an edge is just a way to apply a policy to an ordered pair of groups.

If you read earlier in the doc, an edge represents the 
InstanceGroupMemberConnection, which describes the edge connecting two instance 
group members.  This is used to apply a policy on this edge.  It is also 
used to represent complex topologies where the members could be different 
groups.
Again, the request payload is based on all the data that can be provided to 
instantiate the objects in the DB, per the model class diagram provided in the 
document.  This is just an example spec, so some fields may be missing in this 
example.



> 3) Members & groups have policy declarations nested in them. Why is
> the edge-policy declared outside of the edge's declaration?

I agree, it would be more natural to write an edge's policy references inside 
the edge object itself.

Like we discussed earlier, and as mentioned in the document, the policy has a 
lifecycle of its own and is defined outside the group with all the required 
parameters. InstanceGroupPolicy is a reference to that policy object.
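
For concreteness, a minimal sketch of an edge carrying its policy reference
inline (a hedged illustration only; the field names below are made up and are
not the final Instance Group API schema):

    # Hedged sketch: field names are illustrative, not the final schema.
    # The policy object still lives outside the group; the edge only
    # carries a reference to it.
    edge = {
        "name": "edge-http-to-db",
        "type": "edge",
        "source": "http-server-group-1",
        "target": "db-server-group-1",
        "policies": [{"type": "edge",
                      "policy_ref": "network-reachability-policy-1"}],
    }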



Thanks,
Mike


Thanks,
Yathi.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] new libraries

2013-10-29 Thread Endre Karlson
oslo.logging
oslo.db

Are there any ideas on introducing these libraries post-summit?

Endre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Andrew Laski

On 10/29/13 at 04:05pm, Mike Spreitzer wrote:

Alex Glikson  wrote on 10/29/2013 03:37:41 AM:


1. I assume that the motivation for rack-level anti-affinity is to
survive a rack failure. Is this indeed the case?
This is a very interesting and important scenario, but I am curious
about your assumptions regarding all the other OpenStack resources
and services in this respect.


Remember we are just starting on the roadmap.  Nova in Icehouse, holistic
later.


2. What exactly do you mean by "network reachibility" between the
two groups? Remember that we are in Nova (at least for now), so we
don't have much visibility to the topology of the physical or
virtual networks. Do you have some concrete thoughts on how such
policy can be enforced, in presence of potentially complex
environment managed by Neutron?


I am aiming for the holistic future, and Yathi copied that from an example
I drew with the holistic future in mind.  While we are only addressing
Nova, I think a network reachability policy is inappropriate.


3. The JSON somewhat reminds me the interface of Heat, and I would
assume that certain capabilities that would be required to implement
it would be similar too. What is the proposed approach to
'harmonize' between the two, in environments that include Heat? What
would be end-to-end flow? For example, who would do the
orchestration of individual provisioning steps? Would "create"
operation delegate back to Heat for that? Also, how other
relationships managed by Heat (e.g., links to storage and network)
would be incorporated in such an end-to-end scenario?


You raised a few interesting issues.

1. Heat already has a way to specify resources, I do not see why we should
invent another.

2. Should Nova call Heat to do the orchestration?  I would like to see an
example where ordering is an issue.  IMHO, since OpenStack already has a
solution for creating resources in the right order, I do not see why we
should invent another.


Having Nova call into Heat is backwards IMO.  If there are specific 
pieces of information that Nova can expose, or API capabilities to help 
with orchestration/placement that Heat or some other service would like 
to use then let's look at that.  Nova has placement concerns that extend 
to finding a capable hypervisor for the VM that someone would like to 
boot, and then just slightly beyond.  If there are higher level 
decisions to be made about placement decisions I think that belongs 
outside of Nova, and then just tell Nova where to put it.





Thanks,
Mike



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Support for external authentication (i.e. REMOTE_USER) in Havana

2013-10-29 Thread Adam Young

On 10/29/2013 12:18 PM, Tim Bell wrote:

We also need some standardisation on the command line options for the client 
portion (such as --os-auth-method, --os-x509-cert etc.) . Unfortunately, this 
is not yet in Oslo so there would be multiple packages to be enhanced.
There is an OS client talk on Wednesday that you should attend. Getting 
the Auth options straight in a common client will be a huge benefit.





Tim


-Original Message-
From: Alan Sill [mailto:kilohoku...@gmail.com]
Sent: 29 October 2013 16:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] Support for external authentication 
(i.e. REMOTE_USER) in Havana

+1

(except possibly for the environmental variables portion, which could and 
perhaps should be handled through provisioning).
This is the Apache ENV dictionary, not system environment variables. 
This means that Apache modules can potentially pass on more than just the 
username or comparable authentication value.





On Oct 29, 2013, at 8:35 AM, David Chadwick  wrote:


Whilst on this topic, perhaps we should also expand it to discuss support for 
external authz as well. I know that Adam at Red Hat is

working on adding additional authz attributes as env variables so that these 
can be used for authorising the user in keystone. It should be
the same module in Keystone that handles the incoming request, regardless of 
whether it has only the remote user env variable, or has
this plus a number of authz attribute env variables as well. I should like this 
module to end by returning the identity of the remote user in
a standard internal keystone format (i.e. as a set of identity attributes), 
which can then be passed to the next phase of processing (which
will include attribute mapping). In this way, we can have a common processing 
pipeline for incoming requests, regardless of how the end
user was authenticated, ie. whether the request contains SAML assertions, env 
variables, OpenID assertions etc. Different endpoints could
be used for the different incoming protocols, or a common endpoint could be 
used, with JSON parameters containing the different
protocol information.

Love this idea.  We can discuss in the Federation session.



regards

David

On 29/10/2013 12:59, Álvaro López García wrote:

Hi there,

I've been working on this bug [1,2] related with the pluggable
external authentication support in Havana. For those not familiar
with it, Keystone can rely on the usage of the REMOTE_USER env
variable, assuming that the user has been authenticated upstream (by
an httpd server). This REMOTE_USER variable is supposed to store the
username information that Keystone is going to use.

In the Havana external authentication plugins, the REMOTE_USER
variable is *always* split by the "@" character, assuming that the @
is being used as the domain separator (i.e. REMOTE_USER=username@domain).

Now there are two plugins available:

- ExternalDefault: Only the leftmost part of the REMOTE_USER after the
   split is considered. The domain information is obtainted from the
   default domain configured in keystone.conf.

- ExternalDomain: The rightmost part is considered the domain, and the
   leftover is considered the username.

The change in [2] aims to solve this problem: ExternalDefault will
not split the username by an "@" since we are going to use the
default domain so we assume that no domain will be appended.
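
To make the two behaviors concrete, here is a minimal Python sketch
(simplified; not the actual keystone plugin code, and error handling is
omitted):

    def external_default(remote_user, default_domain):
        # After the proposed change: no split at all; the whole
        # REMOTE_USER is the username, paired with the configured
        # default domain from keystone.conf.
        return remote_user, default_domain

    def external_domain(remote_user):
        # Assumes REMOTE_USER=username@domain; the rightmost "@"
        # separates the domain, so "user@host@domainA" yields
        # ("user@host", "domainA").
        username, _, domain = remote_user.rpartition('@')
        return username, domain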

However, this will work only if we are using a WSGI filter that is
aware of the semantics: the filter should know if ExternalDefault is
used so that the domain information is not appended, but append it if
ExternalDomain is used. Moreover, if somebody is using directly the
REMOTE_USER variable from Apache without any WSGI filter (for example
using X509 auth with mod_ssl and the SSLUsername directive [3]) the
REMOTE_USER will contain only the username and no domain at all.

Does anybody have any concerns about this? Should we pass down the
domain information by any other mean?

[1] https://bugs.launchpad.net/keystone/+bug/1211233
[2] https://review.openstack.org/#/c/50362/
[3] http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslusername


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint review process

2013-10-29 Thread Russell Bryant
On 10/29/2013 04:24 PM, Stefano Maffulli wrote:
> On 10/28/2013 10:28 AM, Russell Bryant wrote:
>> 2) Setting clearer expectations.  Since we have so many blueprints for
>> Nova, I feel it's very important to accurately set expectations for how
>> the priority of different projects compare.  In the last cycle,
>> priorities were mainly subjectively set by me.  Setting priorities based
>> on what reviewers are willing to spend time on is a more accurate
>> reflection of the likelihood of a set of changes making it in to the
>> release.
> 
> I'm all for managing expectations :) I had a conversation with Tom about
> this and we agreed that there may be a risk that new contributors with
> not much karma in the project would have a harder time getting their
> blueprint assigned higher priorities. If a new group proposes a
> blueprint, they may need to "court" bp reviewers to convince them to
> dedicate attention to their first bp. The risk is that blueprint
> reviewers become a sort of gatekeepers, or what other projects
> call 'committers'.
> 
> I think this is a concrete risk, it exists but I don't know if it's
> possible to eliminate it. I don't think we have to eliminate it but we
> need to manage it to minimize it in order to keep our promise of being
> 'open' as in open to new contributors, even the ones with low karma.
> 
> What do you think?

I think you're right, but it's actually no different than things have
been in the past.  It's just blueprints better reflecting how things
actually work.

However, people that have a proven track record of producing high
quality work are going to have an easier time getting attention, because
it takes less work overall to get their patches in.  With that said, if
the blueprint is good, it should get priority based on its merit, even
if the submitter has lower karma in the community.

Where we seem to hit the most friction is actually when merit alone
doesn't grant higher priority (only relevant to a small number of users,
for example), and submitter hasn't built up karma, either.  Those are
the ones that have a hard time, but it's not that surprising.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Propose Jay Bryant for core

2013-10-29 Thread John Griffith
On Tue, Oct 29, 2013 at 2:54 PM, John Griffith
 wrote:
> Hey,
>
> I wanted to propose Jay Bryant (AKA jsbryant, AKA jungleboy, AKA
> :) ) for core membership on the Cinder team.  Jay has been working on
> Cinder for a while now and has really shown some dedication and
> provided much needed help with quality reviews.  In addition to his
> review activity he's also been very active in IRC and in Cinder
> development as well.
>
> I think he'd be a good add to the core team.
>
> Thanks,
> John
For those that would like to just click rather than type:

http://russellbryant.net/openstack-stats/cinder-reviewers-180.txt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Propose Jay Bryant for core

2013-10-29 Thread John Griffith
Hey,

I wanted to propose Jay Bryant (AKA jsbryant, AKA jungleboy, AKA
:) ) for core membership on the Cinder team.  Jay has been working on
Cinder for a while now and has really shown some dedication and
provided much needed help with quality reviews.  In addition to his
review activity he's also been very active in IRC and in Cinder
development as well.

I think he'd be a good add to the core team.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Russell Bryant
On 10/29/2013 04:32 PM, Mike Spreitzer wrote:
> I should clarify my comment about invoking Heat to do the orchestration.
>  I think we have a choice between designing a 1-stage API vs a 2-stage
> API.  The 2-stage API goes like this: first the client defines the
> top-level group and everything inside it, then the client makes more
> calls to create the resources (with reference to the groups and
> policies).  In the 2-stage API, there is no need to use Heat for
> orchestration --- the client can orchestrate in any way it wants (BTW,
> we should eventually get around to talking about what happens if the
> client is the heat engine).  In the 1-stage API, the client just makes
> one call to define the top-level group and everything inside it; the
> implementation takes care of all the rest; in this style of API, I think
> it would be natural for the implementation to call Heat to do the
> orchestration.

Nova calling heat to orchestrate Nova seems fundamentally wrong.

Based on your description, the 2-stage bits belong in Nova, and the
1-stage part should just be talking to the Heat API directly, not Nova.
 Once Nova has instance groups and policies for those groups, Heat
should be updated to allow you to include those in your stacks.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] HK Neutron Icehouse Design Meetings

2013-10-29 Thread Hemanth Ravi
Hi,

Is there a schedule published for the Neutron Icehouse design meetings
listed at http://summit.openstack.org/

Thanks,
-hemanth
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Mike Spreitzer
I should clarify my comment about invoking Heat to do the orchestration. I 
think we have a choice between designing a 1-stage API vs a 2-stage API. 
The 2-stage API goes like this: first the client defines the top-level 
group and everything inside it, then the client makes more calls to create 
the resources (with reference to the groups and policies).  In the 2-stage 
API, there is no need to use Heat for orchestration --- the client can 
orchestrate in any way it wants (BTW, we should eventually get around to 
talking about what happens if the client is the heat engine).  In the 
1-stage API, the client just makes one call to define the top-level group 
and everything inside it; the implementation takes care of all the rest; 
in this style of API, I think it would be natural for the implementation 
to call Heat to do the orchestration.
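
For illustration, a hedged sketch of the 2-stage flow from a client's point of
view. The group-creation call is hypothetical, since that API is exactly what
is being designed here; only the scheduler_hints plumbing below exists today:

    from novaclient.v1_1 import client

    nova = client.Client(USER, PASSWORD, TENANT, AUTH_URL)  # placeholders

    # Stage 1 (hypothetical): register the group and its policies.
    # create_instance_group() does not exist; it stands in for whatever
    # the final Instance Group API provides.
    group_id = create_instance_group(
        name="http-server-group-1",
        policies=[{"type": "anti-affinity"}])

    # Stage 2: create the members with reference to the group, here
    # reusing the existing scheduler-hint plumbing to carry it.
    nova.servers.create(name="Http-Server-1", image=IMAGE_ID,
                        flavor=FLAVOR_ID,
                        scheduler_hints={"group": group_id})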

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Jiang, Yunhong
Henry, why do you think the "service VM" needs the entire PF instead of a VF? I 
think the SR-IOV NIC should provide QoS and performance isolation.

As to assigning an entire PCI device to a guest, that should be OK since usually 
the PF and the VF have different device IDs. The tricky thing is that, at least 
for some PCI devices, you can't configure SR-IOV to be enabled on some NICs 
while disabled on others.
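
To illustrate the device-ID point with a hedged sketch (the addresses and IDs
below are made up): because a PF and its VFs report different product IDs, a
whitelist that matches on product_id naturally selects one but not the other:

    # Hypothetical inventory; addresses and IDs are illustrative only.
    devices = [
        {"address": "0000:06:00.0", "product_id": "pf-id"},  # the PF
        {"address": "0000:06:10.0", "product_id": "vf-id"},  # a VF
    ]

    # Whitelisting only the VF product ID keeps the PF out of the
    # assignable pool, e.g. reserving it for a service VM.
    whitelist = {"product_id": "vf-id"}

    assignable = [d for d in devices
                  if all(d.get(k) == v for k, v in whitelist.items())]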

Thanks
--jyh

> -Original Message-
> From: Henry Gessau [mailto:ges...@cisco.com]
> Sent: Tuesday, October 29, 2013 8:10 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
> support
> 
> Lots of great info and discussion going on here.
> 
> One additional thing I would like to mention is regarding PF and VF usage.
> 
> Normally VFs will be assigned to instances, and the PF will either not be
> used at all, or maybe some agent in the host of the compute node might
> have
> access to the PF for something (management?).
> 
> There is a neutron design track around the development of "service VMs".
> These are dedicated instances that run neutron services like routers,
> firewalls, etc. It is plausible that a service VM would like to use PCI
> passthrough and get the entire PF. This would allow it to have complete
> control over a physical link, which I think will be wanted in some cases.
> 
> --
> Henry
> 
> On Tue, Oct 29, at 10:23 am, Irena Berezovsky 
> wrote:
> 
> > Hi,
> >
> > I would like to share some details regarding the support provided by
> > Mellanox plugin. It enables networking via SRIOV pass-through devices
> or
> > macvtap interfaces.  The plugin is available here:
> >
> https://github.com/openstack/neutron/tree/master/neutron/plugins/mln
> x.
> >
> > To support either PCI pass-through device and macvtap interface type of
> > vNICs, we set neutron port profile:vnic_type according to the required
> VIF
> > type and then use the created port to 'nova boot' the VM.
> >
> > To overcome the missing scheduler awareness for PCI devices, which
> was not
> > yet part of the Havana release, we
> >
> > have an additional service (embedded switch Daemon) that runs on each
> > compute node.
> >
> > This service manages the SRIOV resources allocation,  answers vNICs
> > discovery queries and applies VLAN/MAC configuration using standard
> Linux
> > APIs (code is here:
> https://github.com/mellanox-openstack/mellanox-eswitchd
> > ).  The embedded switch Daemon serves as a glue layer between VIF
> Driver and
> > Neutron Agent.
> >
> > In the Icehouse Release when SRIOV resources allocation is already part
> of
> > the Nova, we plan to eliminate the need in embedded switch daemon
> service.
> > So what is left to figure out is how to tie up between neutron port and
> PCI
> > device and invoke networking configuration.
> >
> >
> >
> > In our case what we have is actually the Hardware VEB that is not
> programmed
> > via either 802.1Qbg or 802.1Qbh, but configured locally by Neutron
> Agent. We
> > also support both Ethernet and InfiniBand physical network L2
> technology.
> > This means that we apply different configuration commands  to set
> > configuration on VF.
> >
> >
> >
> > I guess what we have to figure out is how to support the generic case for
> > the PCI device networking support, for HW VEB, 802.1Qbg and
> 802.1Qbh cases.
> >
> >
> >
> > BR,
> >
> > Irena
> >
> >
> >
> > *From:*Robert Li (baoli) [mailto:ba...@cisco.com]
> > *Sent:* Tuesday, October 29, 2013 3:31 PM
> > *To:* Jiang, Yunhong; Irena Berezovsky;
> prashant.upadhy...@aricent.com;
> > chris.frie...@windriver.com; He, Yongli; Itzik Brown
> > *Cc:* OpenStack Development Mailing List; Brian Bowen (brbowen);
> Kyle
> > Mestery (kmestery); Sandhya Dasu (sadasu)
> > *Subject:* Re: [openstack-dev] [nova] [neutron] PCI pass-through
> network support
> >
> >
> >
> > Hi Yunhong,
> >
> >
> >
> > I haven't looked at Mellanox in much detail. I think that we'll get more
> > details from Irena down the road. Regarding your question, I can only
> answer
> > based on my experience with Cisco's VM-FEX. In a nutshell:
> >
> >  -- a vNIC is connected to an external switch. Once the host is
> booted
> > up, all the PFs and VFs provisioned on the vNIC will be created, as well as
> > all the corresponding ethernet interfaces .
> >
> >  -- As far as Neutron is concerned, a neutron port can be
> associated
> > with a VF. One way to do so is to specify this requirement in the -nic
> > option, providing information such as:
> >
> >. PCI alias (this is the same alias as defined in your nova
> > blueprints)
> >
> >. direct pci-passthrough/macvtap
> >
> >. port profileid that is compliant with 802.1Qbh
> >
> >  -- similar to how you translate the nova flavor with PCI
> requirements
> > to PCI requests for scheduling purpose, Nova API (the nova api
> component)
> > can translate the above to PCI requests for scheduling purpose. I c

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Jiang, Yunhong
Your explanation of the virtual network and physical network is quite clear and 
should work well. We need to change nova code to achieve it, including getting 
the physical network for the virtual network, passing the physical network 
requirement to the filter properties, etc.

For your port method, do you mean we always pass a network id to 'nova boot' 
and nova will create the port during VM boot, am I right? Also, how can nova 
know that it needs to allocate a PCI device for the port? I'd suppose that in 
an SR-IOV NIC environment, the user doesn't need to specify the PCI 
requirement. Instead, the PCI requirement should come from the network 
configuration and image property. Or do you think the user still needs to pass 
a flavor with a PCI request?

--jyh


From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Tuesday, October 29, 2013 3:17 AM
To: Jiang, Yunhong; Robert Li (baoli); prashant.upadhy...@aricent.com; 
chris.frie...@windriver.com; He, Yongli; Itzik Brown
Cc: OpenStack Development Mailing List; Brian Bowen (brbowen); Kyle Mestery 
(kmestery); Sandhya Dasu (sadasu)
Subject: RE: [openstack-dev] [nova] [neutron] PCI pass-through network support

Hi Jiang, Robert,
IRC meeting option works for me.
If I understand your question below, you are looking for a way to tie up 
between requested virtual network(s) and requested PCI device(s). The way we 
did it in our solution  is to map a provider:physical_network to an interface 
that represents the Physical Function. Every virtual network is bound to the 
provider:physical_network, so the PCI device should be allocated based on this 
mapping.  We can  map a PCI alias to the provider:physical_network.
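
As a rough sketch of that mapping (the names are illustrative, not the Mellanox 
plugin's actual configuration):

    # Hypothetical per-host mapping of provider:physical_network to the
    # interface that represents the Physical Function.
    physnet_to_pf = {
        "physnet1": "eth2",  # Ethernet PF backing physnet1
        "physnet2": "ib0",   # InfiniBand PF backing physnet2
    }

    def pf_for_network(physical_network):
        # A VF for a port on a virtual network should be allocated from
        # the PF that backs that network's provider:physical_network.
        return physnet_to_pf[physical_network]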

Another topic to discuss is where the mapping between neutron port and PCI 
device should be managed. One way to solve it, is to propagate the allocated 
PCI device details to neutron on port creation.
In case  there is no qbg/qbh support, VF networking configuration should be 
applied locally on the Host.
The question is when and how to apply networking configuration on the PCI 
device?
We see the following options:

* it can be done on port creation.

* It can be done when nova VIF driver is called for vNIC plugging. This 
will require to  have all networking configuration available to the VIF driver 
or send request to the neutron server to obtain it.

* It can be done by  having a dedicated L2 neutron agent on each Host 
that scans for allocated PCI devices  and then retrieves networking 
configuration from the server and configures the device. The agent will be also 
responsible for managing update requests coming from the neutron server.


For macvtap vNIC type assignment, the networking configuration can be applied 
by a dedicated L2 neutron agent.

BR,
Irena

From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: Tuesday, October 29, 2013 9:04 AM

To: Robert Li (baoli); Irena Berezovsky; 
prashant.upadhy...@aricent.com; 
chris.frie...@windriver.com; He, Yongli; 
Itzik Brown
Cc: OpenStack Development Mailing List; Brian Bowen (brbowen); Kyle Mestery 
(kmestery); Sandhya Dasu (sadasu)
Subject: RE: [openstack-dev] [nova] [neutron] PCI pass-through network support

Robert, is it possible to have an IRC meeting? I'd prefer an IRC meeting because 
it's more OpenStack style and also keeps clear minutes.

As to your flow, can you give a more detailed example? For example, suppose the 
user specifies the instance with a --nic option giving a network id; how does 
nova derive the requirement for the PCI device? I assume the network id should 
define the switches that the device can connect to, but how is that 
information translated to the PCI property requirement? Will this translation 
happen before the nova scheduler makes the host decision?

Thanks
--jyh

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, October 28, 2013 12:22 PM
To: Irena Berezovsky; 
prashant.upadhy...@aricent.com; Jiang, 
Yunhong; chris.frie...@windriver.com; He, 
Yongli; Itzik Brown
Cc: OpenStack Development Mailing List; Brian Bowen (brbowen); Kyle Mestery 
(kmestery); Sandhya Dasu (sadasu)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Hi Irena,

Thank you very much for your comments. See inline.

--Robert

On 10/27/13 3:48 AM, "Irena Berezovsky" <ire...@mellanox.com> wrote:

Hi Robert,
Thank you very much for sharing the information regarding your efforts. Can you 
please share your idea of the end to end flow? How do you suggest  to bind Nova 
and Neutron?

The end to end flow is actually encompassed in the blueprints in a nutshell. I 
will reiterate it below. The binding between Nova and Neutron occurs with 
the neutron v2 API that nova invokes in order to provision the neutron 
services. The vif driver is responsible for plugging in an instance onto the 
networki

Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Mike Spreitzer
John Garbutt  wrote on 10/29/2013 07:29:19 AM:
> ...
> Its looking good, but I was thinking about a slightly different 
approach:
> 
> * I would like to see instance groups be used to describe all
> scheduler hints (including, please run on cell X, or please run on
> hypervisor Y)

I think Yathi's proposal is open in the sense that any type of policy can 
appear (we only have to define the policy types :-).  Removing old 
features from the existing API is something that would have to be done 
over time, if at all.

> * passing old scheduler hints to the API will just create a new
> instance group to persist the request

Yes, implementation re-org is easier than retiring the old API.

> * ensure live-migrate/migrate never lets you violate the rules in the
> user hints, at least don't allow it to happen by accident

Right, that's why we are persisting the policy information.

> * I was expecting to see hard and soft constraints/hints, like: try
> keep in same switch, but make sure on separate servers

Good point, I forgot to mention that in my earlier reviews of the model!

> * Would be nice to have admin defined global options, like: "ensure
> tenant does note have two servers on the same hypervisor" or soft

That's the second time I have seen that idea in a week, there might be 
something to it.

> * I expected to see the existing boot server command simply have the
> addition of a reference to a group, keeping the existing methods of
> specifying multiple instances

That was my expectation too, for how a 2-stage API would work.  (A 1-stage 
API would not have the client making distinct calls to create the 
instances.)

> * I aggree you can't change a group's spec once you have started some
> VMs in that group, but you could then simply launch more VMs keeping
> to the same policy

Not if a joint decision was already made based on the totality of the 
group.

> ...
> 
> * augment the server details (and group?) with more location
> information saying where the scheduler actually put things, obfuscated
> on per tenant basis. So imagine nova, cinder, neutron exposing ordered
> (arbitrary tagged) location metadata like nova: (("host_id", "foo"),
> ("switch_group_id": "bar"), ("power_group": "bas"))

+1

> * the above should help us define the "scope" of a constraint relative
> to either a nova, cinder or neutron resource.

I am lost.  What "above", what scope definition problem?

> * Consider a constraint that includes constraints about groups, like
> must be separate to group X, in the scope of the switch, or something
> like that

I think Yathi's proposal, with the policy types I suggested, already does 
a lot of stuff like that.  But I do not know what you mean by "in the 
scope of the switch".  I think you mean a location constraint, but am not 
sure which switch you have in mind.  I would approach this perhaps a 
little more abstractly, as a collocation constraint between two resources 
that are known to and meaningful to the client (yes, we are starting with 
Nova only in Icehouse, hope to go holistic later).

> * Need more thought on constraints between volumes, servers and
> networks, I don't think edges are the right way to state that, I think
> it would be better as a cross group constraint, where the scope of the
> constraint is related to neutron.

I need more explanation or concrete examples to understand what problem(s) 
you are thinking of.  We are explicitly limiting ourselves to Nova at 
first, later will add in other services.

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Blueprint review process

2013-10-29 Thread Stefano Maffulli
On 10/28/2013 10:28 AM, Russell Bryant wrote:
> 2) Setting clearer expectations.  Since we have so many blueprints for
> Nova, I feel it's very important to accurately set expectations for how
> the priority of different projects compare.  In the last cycle,
> priorities were mainly subjectively set by me.  Setting priorities based
> on what reviewers are willing to spend time on is a more accurate
> reflection of the likelihood of a set of changes making it in to the
> release.

I'm all for managing expectations :) I had a conversation with Tom about
this and we agreed that there may be a risk that new contributors with
not much karma in the project would have a harder time getting their
blueprint assigned higher priorities. If a new group proposes a
blueprint, they may need to "court" bp reviewers to convince them to
dedicate attention to their first bp. The risk is that blueprint
reviewers become a sort of gatekeepers, or what other projects
call 'committers'.

I think this is a concrete risk, it exists but I don't know if it's
possible to eliminate it. I don't think we have to eliminate it but we
need to manage it to minimize it in order to keep our promise of being
'open' as in open to new contributors, even the ones with low karma.

What do you think?

> We don't really have a specific system for filing feature requests.

yep, thanks, this is a different conversation to have. Let's focus on
blueprint review first.

Thanks,
/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Proposal for new heat-core member

2013-10-29 Thread Steve Baker
I count enough core +1s. Congrats Randall, well deserved!

On 10/26/2013 08:12 AM, Steven Dake wrote:
> Hi,
>
> I would like to propose Randall Burt for Heat Core.  He has shown
> interest in Heat by participating in IRC and providing high quality
> reviews.  The most important aspect in my mind of joining Heat Core is
> output and quality of reviews.  Randall has been involved in Heat
> reviews for atleast 6 months.  He has had 172 reviews over the last 6
> months staying "in the pack" [1] of core heat reviewers.  His 90 day
> stats are also encouraging, with 97 reviews (compared to the top
> reviewer Steve Hardy with 444 reviews).  Finally his 30 day stats also
> look good, beating out 3 core reviewers [2] on output with good
> quality reviews.
>
> Please have a vote +1/-1 and take into consideration:
> https://wiki.openstack.org/wiki/Heat/CoreTeam
>
> Regards,
> -steve
>
> [1]http://russellbryant.net/openstack-stats/heat-reviewers-180.txt
> 
> [2]http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
> 
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Templates in Trove

2013-10-29 Thread Robert Myers
I hear you Clint. I'm not an expert on heat templates, so it is possible to
do this all with one. However, I don't want to use Jinja to replace the heat
template logic; I only want it for sane template loading. We are already using
Jinja templates to load custom config files, so it makes sense to re-use the
same loading mechanism to allow administrators to add their own custom heat
templates in a well-known location.

I don't want to re-invent the wheel either.

Robert
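
For what it's worth, with the parameter approach Clint sketches below, an
administrator would then pick a variant purely at stack-create time. A hedged
python-heatclient sketch (endpoint/token handling elided; names are
illustrative):

    from heatclient.client import Client

    heat = Client('1', HEAT_ENDPOINT, token=AUTH_TOKEN)  # placeholders

    # Selecting the galera variant is just a parameter; no client-side
    # templating needed.
    heat.stacks.create(stack_name='trove-db-1',
                       template=open('trove.yaml').read(),
                       parameters={'service_type': 'galera'})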

On Tue, Oct 29, 2013 at 12:24 PM, Clint Byrum  wrote:

> Excerpts from Robert Myers's message of 2013-10-29 07:54:59 -0700:
> > I'm pulling this conversation out of the gerrit review as I think it
> needs
> > more discussion.
> >
> > https://review.openstack.org/#/c/53499/
> >
>
> After reading the comments in that review, it seems to me that you
> don't need a client side template for your Heat template.
>
> The only argument for templating is "If I want some things to be
> custom I can't have them custom."
>
> You may not realize this, but Heat templates already have basic string
> replacement facilities and mappings, which is _all_ you need here.
>
> Use parameters. Pass _EVERYTHING_ into the stacks you're creating as
> parameters. Then let admins customize using Heat, not _another_
> language.
>
> For instance, somebody brought up wanting to have UserData be
> customizable. It is like this now:
>
> UserData:
>   Fn::Base64:
> Fn::Join:
> - ''
> - ["#!/bin/bash -v\n",
> "/opt/aws/bin/cfn-init\n",
> "sudo service trove-guest start\n"]
>
> Since you're using yaml, you don't have to use Fn::Join like in json,
> so simplify to this first:
>
> UserData:
>   Fn::Base64: |
> #!/bin/bash -v
> /opt/aws/bin/cfn-init
> sudo service trove-guest start
>
> Now, the suggestion was that users might want to do a different prep
> per service_type. First, we need to make service_type a parameter
>
>
> Parameters:
>   service_type:
> Type: String
> Default: mysql
>
> Now we need to shove it in where needed:
>
> Metadata:
>   AWS::CloudFormation::Init:
> config:
>   files:
> /etc/guest_info:
>   content:
> Fn::Join:
> - ''
> - ["[DEFAULT]\nguest_id=", {Ref: InstanceId},
>   "\nservice_type=", {Ref: service_type}, "]"
>   mode: '000644'
>   owner: root
>   group: root
>
> Now, if a user wants to have a different script:
>
> Mappings:
>   ServiceToScript:
> mysql:
>   script: |
> #!/bin/bash -v
> /opt/aws/bin/cfn-init
> sudo service trove-guest start
> galera:
>   script: |
> #!/bin/bash
> /opt/aws/bin/cfn-init
> galera-super-thingy
> sudo service trove-guest start
>
>
> And then they replace the userdata as such:
>
> UserData:
>   Fn::FindInMap:
> - ServiceToScript
> - {Ref: service_type}
> - script
>
> Please can we at least _try_ not to reinvent things!
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Mike Spreitzer
Khanh-Toan Tran  wrote on 10/29/2013 
09:10:00 AM:
> ...
> 1) Membership of a group is recursive. A member can be a group or an 
> instance. In this case there are two different declaration formats 
> for members, as with http-server-group-1 ("name", "policy", "edge") 
> and Http-Server-1 ("name", "request_spec", "type"). Would it be 
> better if group-typed members also had a "type" field to better 
> interpret the member? Like policy, which has a "type" field to declare 
> whether it is an edge-typed policy or a group-typed policy.

I have no strong opinion on this.

> 2) The "edge" is not clear to me. It seems to me that "edge" is just
> a place holder for the edge policy. Does it have some particular 
> configuration like group members (e.g. group-typed member is 
> described by its "member","edge" and "policy", while instance-typed 
> member is described by its "request_spec") ?

Yes, an edge is just a way to apply a policy to an ordered pair of groups.

> 3) Members & groups have policy declaration nested in them. Why is 
> edge-policy declared outside of edge's declaration?

I agree, it would be more natural to write an edge's policy references 
inside the edge object itself.

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Network topologies

2013-10-29 Thread Zane Bitter

On 29/10/13 19:33, Edgar Magana wrote:

Tim,

Your statement "building an api that manages a network topology more than
one that needs to build out the dependencies between resources to help
create the network topology"
Is exactly what we are proposing, and this is why we believe this is not
under Heat domain.

This is why we are NOT proposing to manage any dependency between network
elements, that part is what I call "intelligence" of the orchestration and
we are not proposing any orchestration system; you already have that
in place :-)


Well, if you don't manage the dependencies then it won't work. 
Dependencies are not dependencies by definition unless it won't work 
without them. What I think you mean is that you infer the dependencies 
internally to Neutron and don't include them in the artefact that you 
give to the user. Although actually you probably do, it's just not quite 
as explicit.


So it's like Heat, but it only handles networks and the templates are 
harder to read and write.



So, we simply want an API that tenants may use to "save", "retrieve" and
"share" topologies. For instance, tenant A creates a topology with two
networks (192.168.0.0/24 and 192.168.1.0/24) both with dhcp enabled and a
router connecting them. So, we first create it using CLI commands or
Horizon and then we call the API to save the topology for that tenant.
That topology can also be shared between tenants if the owner wants to do
that, the same concept that we have in Neutron for "shared networks". So
tenant B or any other tenants don't need to re-create the whole topology,
just "open" the shared topology from tenant A. Obviously, overlapping IPs
will be a "must" requirement.


So, to be clear, my interpretation is that in this case you will spit 
out a JSON file to the user in tenantA that says "two networks 
(192.168.0.0/24 and 192.168.1.0/24) both with dhcp enabled and a router 
connecting them" (BTW does "networks" here mean "subnets"?) and a user 
in tenant B loads the JSON file and it creates two *different* networks 
(192.168.0.0/24 and 192.168.1.0/24) both with dhcp enabled and a router 
connecting them in tenantB.


I just want to confirm that, because parts of your preceding paragraph 
could be read as implying that you just open up access to tenant A's 
networks from tenant B, rather than creating new ones.



I am including in this thread Mark McClain, who is the Neutron PTL and
the main guy expressing concern that we not have overlapping
functionalities between Neutron and Heat or any other project.


I think he is absolutely right.


I am absolutely, happy to discuss further with you but if you are ok with
this approach we could start the development in Neutron umbrella, final
thoughts?


I stand by my original analysis that the input part of the API is 
basically just a subset of Heat reimplemented inside Neutron.


As a consumer of the Neutron API, this is something that we really 
wouldn't want to interact with, because it duplicates what we do in a 
different way and that just makes everything difficult for our model.


I strongly recommend that you only implement the output side of the 
proposed API, and that the output be in the format of a Heat template. 
As Clint already pointed out, when combined with the proposed stack 
adopt/abandon features (http://summit.openstack.org/cfp/details/200) 
this will give you a very tidy interface to exactly the functionality 
you want without reinventing half of Heat inside Neutron.
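
A hedged sketch of what that output side could look like, using
python-neutronclient and emitting a HOT-style dict (coverage trimmed to
networks and subnets; a real implementation would also cover routers, ports,
and so on):

    def topology_to_hot(neutron):
        # Export a tenant's networks/subnets as a Heat template that
        # any other tenant could feed straight back to Heat.
        template = {"heat_template_version": "2013-05-23", "resources": {}}
        net_resource = {}
        for net in neutron.list_networks()["networks"]:
            key = "net_%s" % net["name"]
            net_resource[net["id"]] = key
            template["resources"][key] = {
                "type": "OS::Neutron::Net",
                "properties": {"name": net["name"]},
            }
        for sub in neutron.list_subnets()["subnets"]:
            template["resources"]["subnet_%s" % sub["name"]] = {
                "type": "OS::Neutron::Subnet",
                "properties": {
                    "network_id": {"get_resource":
                                   net_resource[sub["network_id"]]},
                    "cidr": sub["cidr"],
                    "enable_dhcp": sub["enable_dhcp"],
                },
            }
        return template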


We would definitely love to discuss this stuff with you at the Design 
Summit. So far, however you don't seem to have convinced anybody that 
this does not overlap with Heat. That would appear to forecast a high 
probability of wasted effort were you to embark on implementing the 
blueprint as written before then.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Jiang, Yunhong

> * describe resource external to nova that is attached to VM in the API
> (block device mapping and/or vif references)
> * ideally the nova scheduler needs to be aware of the local capacity,
> and how that relates to the above information (relates to the cross
> service scheduling issues)

I think this is possibly a bit different. A volume is certainly managed by 
Cinder, but PCI devices are currently managed by nova. So we possibly need 
nova to translate the information (possibly before the nova scheduler).

> * state of the device should be stored by Neutron/Cinder
> (attached/detached, capacity, IP, etc), but still exposed to the
> "scheduler"

I'm not sure if we can keep the state of the device in Neutron. Currently nova 
manages all PCI devices.

Thanks
--jyh


> * connection params get given to Nova from Neutron/Cinder
> * nova still has the vif driver or volume driver to make the final connection
> * the disk should be formatted/expanded, and network info injected in
> the same way as before (cloud-init, config drive, DHCP, etc)
> 
> John
> 
> On 29 October 2013 10:17, Irena Berezovsky 
> wrote:
> > Hi Jiang, Robert,
> >
> > IRC meeting option works for me.
> >
> > If I understand your question below, you are looking for a way to tie up
> > between requested virtual network(s) and requested PCI device(s). The
> way we
> > did it in our solution  is to map a provider:physical_network to an
> > interface that represents the Physical Function. Every virtual network is
> > bound to the provider:physical_network, so the PCI device should be
> > allocated based on this mapping.  We can  map a PCI alias to the
> > provider:physical_network.
> >
> >
> >
> > Another topic to discuss is where the mapping between neutron port
> and PCI
> > device should be managed. One way to solve it, is to propagate the
> allocated
> > PCI device details to neutron on port creation.
> >
> > In case  there is no qbg/qbh support, VF networking configuration
> should be
> > applied locally on the Host.
> >
> > The question is when and how to apply networking configuration on the
> PCI
> > device?
> >
> > We see the following options:
> >
> > * it can be done on port creation.
> >
> > * It can be done when nova VIF driver is called for vNIC
> plugging.
> > This will require to  have all networking configuration available to the
> VIF
> > driver or send request to the neutron server to obtain it.
> >
> > * It can be done by  having a dedicated L2 neutron agent on
> each
> > Host that scans for allocated PCI devices  and then retrieves networking
> > configuration from the server and configures the device. The agent will
> be
> > also responsible for managing update requests coming from the neutron
> > server.
> >
> >
> >
> > For macvtap vNIC type assignment, the networking configuration can be
> > applied by a dedicated L2 neutron agent.
> >
> >
> >
> > BR,
> >
> > Irena
> >
> >
> >
> > From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
> > Sent: Tuesday, October 29, 2013 9:04 AM
> >
> >
> > To: Robert Li (baoli); Irena Berezovsky;
> prashant.upadhy...@aricent.com;
> > chris.frie...@windriver.com; He, Yongli; Itzik Brown
> >
> >
> > Cc: OpenStack Development Mailing List; Brian Bowen (brbowen); Kyle
> Mestery
> > (kmestery); Sandhya Dasu (sadasu)
> > Subject: RE: [openstack-dev] [nova] [neutron] PCI pass-through network
> > support
> >
> >
> >
> > Robert, is it possible to have a IRC meeting? I'd prefer to IRC meeting
> > because it's more openstack style and also can keep the minutes
> clearly.
> >
> >
> >
> > As to your flow, can you give a more detailed example? For example, suppose
> > the user specifies the instance with a --nic option giving a network id;
> > how does nova derive the requirement for the PCI device? I assume the
> > network id should define the switches that the device can connect to,
> > but how is that information translated to the PCI property requirement?
> > Will this translation happen before the nova scheduler makes the host
> > decision?
> >
> >
> >
> > Thanks
> >
> > --jyh
> >
> >
> >
> > From: Robert Li (baoli) [mailto:ba...@cisco.com]
> > Sent: Monday, October 28, 2013 12:22 PM
> > To: Irena Berezovsky; prashant.upadhy...@aricent.com; Jiang, Yunhong;
> > chris.frie...@windriver.com; He, Yongli; Itzik Brown
> > Cc: OpenStack Development Mailing List; Brian Bowen (brbowen); Kyle
> Mestery
> > (kmestery); Sandhya Dasu (sadasu)
> > Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
> > support
> >
> >
> >
> > Hi Irena,
> >
> >
> >
> > Thank you very much for your comments. See inline.
> >
> >
> >
> > --Robert
> >
> >
> >
> > On 10/27/13 3:48 AM, "Irena Berezovsky" 
> wrote:
> >
> >
> >
> > Hi Robert,
> >
> > Thank you very much for sharing the information regarding your efforts.
> Can
> > you please share your idea of the end to end flow? How do you suggest
> to
> > bind Nova and Neutron?
> >
> >
> >
> > The end to end flow is actually encompassed in the bluep

Re: [openstack-dev] [Heat] Network topologies

2013-10-29 Thread Edgar Magana
Aaron,

Moving the management of topology?
I am not proposing anything like that. Actually, could you explain to me the
current workflow to save a network topology created by the Neutron APIs, in
order to use it by a different tenant, or by the owner itself at a different
time?
Possibly that is the part I am missing, and it will help me improve the
current proposal.

Thanks,

Edgar

From:  Aaron Rosen 
Reply-To:  OpenStack List 
Date:  Tuesday, October 29, 2013 12:48 PM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [Heat] Network topologies

Hi Edgar, 

I definitely see the usecase for the idea that you propose. In my opinion, I
don't see the reason for moving the management of topology into neutron,
Heat already provides this functionality (besides for the part of taking an
existing deployment and generating a template file). Also, I wanted to point
out that in a way you will have to do orchestration, as your topology
manager will have to call the neutron api in order to create the topology
and tear it down. 

Best, 

Aaron


On Tue, Oct 29, 2013 at 11:33 AM, Edgar Magana  wrote:
> Tim,
> 
> Your statement "building an api that manages a network topology more than
> one that needs to build out the dependencies between resources to help
> create the network topology"
> Is exactly what we are proposing, and this is why we believe this is not
> under Heat domain.
> 
> This is why we are NOT proposing to manage any dependency between network
> elements, that part is what I call "intelligence" of the orchestration and
> we are not proposing any orchestration system, you are already have that
> in place :-)
> 
> So, we simply want an API that tenants may use to "save", "retrieve" and
> "share" topologies. For instance, tenant A creates a topology with two
> networks (192.168.0.0/24 and 192.168.1.0/24) both with dhcp enabled and a
> router connecting them. So, we first create it using CLI commands or
> Horizon and then we call the API to save the topology for that tenant.
> That topology can also be shared between tenants if the owner wants to do
> that, the same concept that we have in Neutron for "shared networks". So
> tenant B or any other tenants don't need to re-create the whole topology,
> just "open" the shared topology from tenant A. Obviously, overlapping IPs
> will be a "must" requirement.
> 
> I am including in this thread Mark McClain, who is the Neutron PTL and
> the main guy expressing concern that we not have overlapping
> functionalities between Neutron and Heat or any other project.
> 
> I am absolutely, happy to discuss further with you but if you are ok with
> this approach we could start the development in Neutron umbrella, final
> thoughts?
> 
> Thanks,
> 
> Edgar
> 
> On 10/29/13 8:23 AM, "Tim Schnell"  wrote:
> 
>> >Hi Edgar,
>> >
>> >It seems like this blueprint is related more to building an api that
>> >manages a network topology more than one that needs to build out the
>> >dependencies between resources to help create the network topology. If we
>> >are talking about just an api to "save", "duplicate", and "share" these
>> >network topologies then I would agree that this is not something that Heat
>> >currently does or should do necessarily.
>> >
>> >I have been focusing primarily on front-end work for Heat so I apologize
>> >if these questions have already been answered. How is this API related to
>> >the existing network topology in Horizon? The existing network topology
>> >can already define the relationships and dependencies using Neutron I'm
>> >assuming so there is no apparent need to use Heat to gather this
>> >information. I'm a little confused as to the scope of the discussion, is
>> >that something that you are potentially interested in changing?
>> >
>> >Steve, Clint and Zane can better answer whether or not Heat wants to be in
>> >the business of managing existing network topologies but from my
>> >perspective I tend to agree with your statement that if you needed Heat to
>> >help describe the relationships between network resources then that might
>> >be duplicated effort but if don't need Heat to do that then this blueprint
>> >belongs in Neutron.
>> >
>> >Thanks,
>> >Tim
>> >
>> >
>> >
>> >
>> >
>> >On 10/29/13 1:32 AM, "Steven Hardy"  wrote:
>> >
>>> >>On Mon, Oct 28, 2013 at 01:19:13PM -0700, Edgar Magana wrote:
 >>> Hello Folks,
 >>>
 >>> Thank you Zane, Steven and Clint for you input.
 >>>
 >>> Our main goal in this BP is to provide networking users such as Heat
 >>>(we
 >>> consider it as a neutron user) a better and consolidated network
 >>>building
 >>> block in terms of an API that you could use for orchestration of
 >>> application-driven requirements. This building block does not add any
 >>> "intelligence" to the network topology because it does not have it and
 >>> this is why I think this BP is different from the work that you are
 >>>doing
 >>> in Hea

Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Mike Spreitzer
Alex Glikson  wrote on 10/29/2013 03:37:41 AM:

> 1. I assume that the motivation for rack-level anti-affinity is to 
> survive a rack failure. Is this indeed the case? 
> This is a very interesting and important scenario, but I am curious 
> about your assumptions regarding all the other OpenStack resources 
> and services in this respect.

Remember we are just starting on the roadmap.  Nova in Icehouse, holistic 
later.

> 2. What exactly do you mean by "network reachibility" between the 
> two groups? Remember that we are in Nova (at least for now), so we 
> don't have much visibility to the topology of the physical or 
> virtual networks. Do you have some concrete thoughts on how such 
> policy can be enforced, in presence of potentially complex 
> environment managed by Neutron?

I am aiming for the holistic future, and Yathi copied that from an example 
I drew with the holistic future in mind.  While we are only addressing 
Nova, I think a network reachability policy is inappropriate.

> 3. The JSON somewhat reminds me the interface of Heat, and I would 
> assume that certain capabilities that would be required to implement
> it would be similar too. What is the proposed approach to 
> 'harmonize' between the two, in environments that include Heat? What
> would be end-to-end flow? For example, who would do the 
> orchestration of individual provisioning steps? Would "create" 
> operation delegate back to Heat for that? Also, how other 
> relationships managed by Heat (e.g., links to storage and network) 
> would be incorporated in such an end-to-end scenario? 

You raised a few interesting issues.

1. Heat already has a way to specify resources, I do not see why we should 
invent another.

2. Should Nova call Heat to do the orchestration?  I would like to see an 
example where ordering is an issue.  IMHO, since OpenStack already has a 
solution for creating resources in the right order, I do not see why we 
should invent another.

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Network topologies

2013-10-29 Thread Aaron Rosen
Hi Edgar,

I definitely see the usecase for the idea that you propose. In my opinion,
I don't see the reason for moving the management of topology into neutron,
 Heat already provides this functionality (besides for the part of taking
an existing deployment and generating a template file). Also, I wanted to
point out that in a way you will have to do orchestration, as your
topology manager will have to call the neutron api in order to create the
topology and tear it down.

Best,

Aaron


On Tue, Oct 29, 2013 at 11:33 AM, Edgar Magana  wrote:

> Tim,
>
> Your statement "building an api that manages a network topology more than
> one that needs to build out the dependencies between resources to help
> create the network topology"
> Is exactly what we are proposing, and this is why we believe this is not
> under Heat domain.
>
> This is why we are NOT proposing to manage any dependency between network
> elements, that part is what I call "intelligence" of the orchestration and
> we are not proposing any orchestration system; you already have that
> in place :-)
>
> So, we simply want an API that tenants may use to "save", "retrieve" and
> "share" topologies. For instance, tenant A creates a topology with two
> networks (192.168.0.0/24 and 192.168.1.0/24) both with dhcp enabled and a
> router connecting them. So, we first create it using CLI commands or
> Horizon and then we call the API to save the topology for that tenant.
> That topology can also be shared between tenants if the owner wants to do
> that, the same concept that we have in Neutron for "shared networks". So
> tenant B or any other tenants don't need to re-create the whole topology,
> just "open" the shared topology from tenant A. Obviously, overlapping IPs
> will be a "must" requirement.
>
> I am including in this thread Mark McClain, who is the Neutron PTL and
> the main guy expressing concern that we not have overlapping
> functionalities between Neutron and Heat or any other project.
>
> I am absolutely, happy to discuss further with you but if you are ok with
> this approach we could start the development in Neutron umbrella, final
> thoughts?
>
> Thanks,
>
> Edgar
>
> On 10/29/13 8:23 AM, "Tim Schnell"  wrote:
>
> >Hi Edgar,
> >
> >It seems like this blueprint is related more to building an api that
> >manages a network topology more than one that needs to build out the
> >dependencies between resources to help create the network topology. If we
> >are talking about just an api to "save", "duplicate", and "share" these
> >network topologies then I would agree that this is not something that Heat
> >currently does or should do necessarily.
> >
> >I have been focusing primarily on front-end work for Heat so I apologize
> >if these questions have already been answered. How is this API related to
> >the existing network topology in Horizon? The existing network topology
> >can already define the relationships and dependencies using Neutron I'm
> >assuming so there is no apparent need to use Heat to gather this
> >information. I'm a little confused as to the scope of the discussion, is
> >that something that you are potentially interested in changing?
> >
> >Steve, Clint and Zane can better answer whether or not Heat wants to be in
> >the business of managing existing network topologies but from my
> >perspective I tend to agree with your statement that if you needed Heat to
> >help describe the relationships between network resources then that might
> >be duplicated effort, but if you don't need Heat to do that then this
> >blueprint belongs in Neutron.
> >
> >Thanks,
> >Tim
> >
> >
> >
> >
> >
> >On 10/29/13 1:32 AM, "Steven Hardy"  wrote:
> >
> >>On Mon, Oct 28, 2013 at 01:19:13PM -0700, Edgar Magana wrote:
> >>> Hello Folks,
> >>>
> >>> Thank you Zane, Steven and Clint for your input.
> >>>
> >>> Our main goal in this BP is to provide networking users such as Heat
> >>>(we
> >>> consider it as a neutron user) a better and consolidated network
> >>>building
> >>> block in terms of an API that you could use for orchestration of
> >>> application-driven requirements. This building block does not add any
> >>> "intelligence" to the network topology because it does not have it and
> >>> this is why I think this BP is different from the work that you are
> >>>doing
> >>> in Heat.
> >>
> >>So how do you propose to handle dependencies between elements in the
> >>topology, e.g where things need to be created/deleted in a particular
> >>order, or where one resource must be in a particular state before another
> >>can be created?
> >>
> >>> The network topologies BP is not related to the Neutron Network Service
> >>> Insertion BP:
> >>>
> >>>
> https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertio
> >>>n
> >>>-c
> >>> haining-steering
> >>
> >>So I wasn't saying they were related, only that they both, arguably, may
> >>have some scope overlap with what Heat is doing.
> >>
> >>> I do agree with Steven that the insertion work adds "intelligence"

Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-10-29 Thread Mike Spreitzer
"Yathiraj Udupi (yudupi)"  wrote on 10/29/2013 02:46:30 
AM:

> The Instance Group API document is now updated with a simple example
> request payload of a nested group, and some description of how the 
> API implementation should handle the registration of the components 
> of a nested instance group. 
> https://docs.google.com/document/d/
> 17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit 

Thanks!  I tried viewing that JSON by copying it into a file and using the 
JSONView extension for Firefox, but it complained about the JSON being 
malformed.  So I used Jackson to parse, and it found some odd closing 
quotes.  After fixing them I used Jackson to write out a normalized 
version (I find the indenting in the original odd).  I have attached the 
result (hope it makes it through the mailing list).
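
(For anyone without a Java toolchain handy, the same parse/normalize 
round-trip is a few lines of Python; a sketch, assuming the payload is 
saved as 2tier.json:)

    import json
    import sys

    # json.load() raises ValueError, pointing at the offending position,
    # if the document is malformed (e.g. the odd closing quotes).
    with open("2tier.json") as f:
        doc = json.load(f)

    # Re-serialize with consistent 2-space indenting.
    json.dump(doc, sys.stdout, indent=2, sort_keys=True)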



Thanks,
Mike

2tier-normalized.json
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Shared network between specific tenants, but not all tenants?

2013-10-29 Thread Jay Pipes

On 10/29/2013 02:25 PM, Justin Hammond wrote:

We have been considering this and have some notes on our concept, but we
haven't made a blueprint for it. I will speak amongst my group and find
out what they think of making it more public.


OK, cool, glad to know I'm not the only one with tenants asking for this :)

Looking forward to a possible blueprint on this.

Best,
-jay


On 10/29/13 12:26 PM, "Jay Pipes"  wrote:


Hi Neutron devs,

Are there any plans to support networks that are shared/routed only
between certain tenants (not all tenants)?

Thanks,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance] Support of v1 and v2 glance APIs in Nova

2013-10-29 Thread Mark Washenberger
> > I am not a fan of all the specific talk to glance code we have in
> > nova, moving more of that into glanceclient can only be a good thing.
> > For the XenServer integration, for efficiency reasons, we need glance
> > to talk from dom0, so it has dom0 making the final HTTP call. So we
> > would need a way of extracting that info from the glance client. But
> > that seems better than having that code in nova.
>
> I know in Glance we've largely taken the view that the client should be as
> thin and lightweight as possible so users of the client can make use of it
> however they best see fit. There was an earlier patch that would have moved
> the whole image service layer into glanceclient that was rejected. So I
> think there is a division in philosophies here as well.



Indeed, I think I was being a bit of a stinker on this issue. Mea culpa.

I've had some time to think and I realized that there is a bit of
complexity here that needs to be untangled. Historically, the glance client
(and I think *most* openstack clients) have had versioned directories that
attempt to be as faithful a representation of the given version of an API
as possible. That was a history I was trying to maintain for continuity's
sake in the past.

However, with some more thought, this historical objective seems literally
insane to me. In fact, it makes it basically impossible to publish a useful
client library because such a library has no control to smooth over
backwards incompatibility from one major version to the next.

At this point I'm a lot more interested in Ghe's patch (
https://review.openstack.org/#/c/33327/)

I'm a bit concerned that we might need to make the image client interface
even more stripped down in order to focus support on the intersection of v1
and v2 of the image api. In particular, I'm not sure how well the old nova
image service api will deal with invalid property values (v2 has property
schemas). And there's no support AFAICT for image sharing, and image
sharing should not be used in v1 for security reasons.

On the other hand, maybe we don't really want to move forward based on how
nova viewed the image repository in the past. There might be a better image
client api waiting to be discovered by some intrepid openstacker. This
could make sense as well if there is some traction for eventually
deprecating the v1 api. But in any case, it does sound like we need an
image client with its own proper api that can be ported from version to
version.
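
(As a strawman only, not anything that exists in glanceclient today, such
an interface could start as small as:)

    class ImageClient(object):
        """Version-agnostic surface that nova could program against,
        implemented once per image API version inside glanceclient."""

        def show(self, image_id):
            raise NotImplementedError()

        def download(self, image_id, dest):
            raise NotImplementedError()

        def create(self, metadata, data=None):
            raise NotImplementedError()

        def delete(self, image_id):
            raise NotImplementedError()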


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Network topologies

2013-10-29 Thread Edgar Magana
Tim,

Your statement, "building an api that manages a network topology more than
one that needs to build out the dependencies between resources to help
create the network topology",
is exactly what we are proposing, and this is why we believe this is not
under Heat's domain.

This is also why we are NOT proposing to manage any dependencies between
network elements; that part is what I call the "intelligence" of the
orchestration, and we are not proposing any orchestration system, you
already have that in place :-)

So, we simply want an API that tenants may use to "save", "retrieve" and
"share" topologies. For instance, tenant A creates a topology with two
networks (192.168.0.0/24 and 192.168.1.0/24), both with dhcp enabled and a
router connecting them. So, we first create it using CLI commands or
Horizon and then we call the API to save the topology for that tenant.
That topology can also be shared between tenants if the owner wants to do
that, the same concept that we have in Neutron for "shared networks". So
Tenant B, or any other tenant, doesn't need to re-create the whole
topology, just "open" the shared topology from tenant A. Obviously,
support for overlapping IPs will be a "must" requirement.
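
To make that concrete, the call we have in mind would be roughly as
follows (purely illustrative; the resource name and fields are not
settled at all):

    POST /v2.0/network-topologies
    {
        "topology": {
            "name": "two-tier",
            "shared": true,
            "networks": ["<net-a-id>", "<net-b-id>"],
            "routers": ["<router-id>"]
        }
    }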

I am including in this thread Mark McClain, who is the Neutron PTL and
the main voice insisting that we avoid overlapping functionality between
Neutron and Heat or any other project.

I am absolutely happy to discuss this further with you, but if you are OK
with this approach we could start the development under the Neutron
umbrella. Final thoughts?

Thanks,

Edgar

On 10/29/13 8:23 AM, "Tim Schnell"  wrote:

>Hi Edgar,
>
>It seems like this blueprint is related more to building an api that
>manages a network topology than to one that needs to build out the
>dependencies between resources to help create the network topology. If we
>are talking about just an api to "save", "duplicate", and "share" these
>network topologies then I would agree that this is not something that Heat
>currently does or should do necessarily.
>
>I have been focusing primarily on front-end work for Heat so I apologize
>if these questions have already been answered. How is this API related to
>the existing network topology in Horizon? The existing network topology
>can already define the relationships and dependencies using Neutron I'm
>assuming so there is no apparent need to use Heat to gather this
>information. I'm a little confused as to the scope of the discussion, is
>that something that you are potentially interested in changing?
>
>Steve, Clint and Zane can better answer whether or not Heat wants to be in
>the business of managing existing network topologies but from my
>perspective I tend to agree with your statement that if you needed Heat to
>help describe the relationships between network resources then that might
>be duplicated effort, but if you don't need Heat to do that then this
>blueprint belongs in Neutron.
>
>Thanks,
>Tim
>
>
>
>
>
>On 10/29/13 1:32 AM, "Steven Hardy"  wrote:
>
>>On Mon, Oct 28, 2013 at 01:19:13PM -0700, Edgar Magana wrote:
>>> Hello Folks,
>>> 
>>> Thank you Zane, Steven and Clint for your input.
>>> 
>>> Our main goal in this BP is to provide networking users such as Heat
>>>(we
>>> consider it as a neutron user) a better and consolidated network
>>>building
>>> block in terms of an API that you could use for orchestration of
>>> application-driven requirements. This building block does not add any
>>> "intelligence" to the network topology because it does not have it and
>>> this is why I think this BP is different from the work that you are
>>>doing
>>> in Heat.
>>
>>So how do you propose to handle dependencies between elements in the
>>topology, e.g where things need to be created/deleted in a particular
>>order, or where one resource must be in a particular state before another
>>can be created?
>>
>>> The network topologies BP is not related to the Neutron Network Service
>>> Insertion BP:
>>> 
>>>https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertio
>>>n
>>>-c
>>> haining-steering
>>
>>So I wasn't saying they were related, only that they both, arguably, may
>>have some scope overlap with what Heat is doing.
>>
>>> I do agree with Steven that the insertion work adds "intelligence"
>>> (explicit management of dependencies, state and workflow) to the network
>>> orchestration, simply because the user will need to know the insertion
>>> mechanism and dependencies between Network Advanced Services; that work
>>> is more into Heat's space than the BP that I am proposing, but that is
>>> just my opinion.
>>
>>This seems a good reason to leverage the work we're doing rather than
>>reinventing it.  I'm not arguing that Heat should necessarily be the
>>primary interface to such functionality, only that Heat could (and
>>possibly
>>should) be used to do the orchestration aspects.
>>
>>> However, is there a session where I can discuss this BP with you guys?
>>> The session that I proposed in Neutron has been rejected

Re: [openstack-dev] [Neutron] Shared network between specific tenants, but not all tenants?

2013-10-29 Thread Justin Hammond
We have been considering this and have some notes on our concept, but we
haven't made a blueprint for it. I will speak amongst my group and find
out what they think of making it more public.

On 10/29/13 12:26 PM, "Jay Pipes"  wrote:

>Hi Neutron devs,
>
>Are there any plans to support networks that are shared/routed only
>between certain tenants (not all tenants)?
>
>Thanks,
>-jay
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Common requirements for services' discussion

2013-10-29 Thread Sumit Naiksatam
Hi,

Here is the log from today's meeting:
http://eavesdrop.openstack.org/meetings/networking_advanced_services/2013/networking_advanced_services.2013-10-29-15.33.log.html

(some action items for a few folks who were present, we will follow up in
the next meeting.)

Thanks,
~Sumit.



On Mon, Oct 28, 2013 at 3:14 PM, Sumit Naiksatam
wrote:

> Hi All,
>
> This is a reminder for the next IRC meeting on Tuesday (Oct 29th) 15.30
> UTC (8.30 AM PDT) on the #openstack-meeting-alt channel.
>
> Meeting agenda: https://wiki.openstack.org/wiki/Meetings/AdvancedServices
>
> Thanks,
> ~Sumit.
>
> On Tue, Oct 22, 2013 at 9:55 AM, Sumit Naiksatam  > wrote:
>
>> Hi All,
>>
>> Here is a log of today's discussion:
>>
>>
>> http://eavesdrop.openstack.org/meetings/networking_advanced_services/2013/networking_advanced_services.2013-10-22-15.32.log.html
>>
>> (some action items for a few folks who were present, we will follow up in
>> the next meeting.)
>>
>> Thanks,
>>
>> ~Sumit.
>>
>>
>>
>>
>> On Mon, Oct 21, 2013 at 11:08 PM, Sumit Naiksatam <
>> sumitnaiksa...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> This is a reminder for the next IRC meeting on Tuesday (Oct 22nd) 15.30
>>> UTC (8.30 AM PDT) on the #openstack-meeting-alt channel.
>>>
>>> The proposed agenda is:
>>> * Service insertion and chaining
>>> * Service agents
>>> * Service VMs - mechanism
>>> * Service VMs - policy
>>> * Extensible APIs for services
>>> and anything else you may want to discuss in this context.
>>>
>>> Meeting wiki page (has pointer to the first meeting logs):
>>> https://wiki.openstack.org/wiki/Meetings/AdvancedServices
>>>
>>> Thanks,
>>> ~Sumit.
>>>
>>> On Thu, Oct 17, 2013 at 12:02 AM, Sumit Naiksatam <
>>> sumitnaiksa...@gmail.com> wrote:
>>>
 Hi All,

 We will have the "advanced services" and the common requirements IRC
 meeting on Tuesdays 15.30 UTC (8.30 AM PDT) on the
 #openstack-meeting-alt channel. The meeting time was chosen to
 accommodate requests by folks in Asia and will hopefully suit most people
 involved. Please note that this is the alternate meeting channel.

 The agenda will be a continuation of discussion from the previous
 meeting with some additional agenda items based on the sessions already
 proposed for the summit. The current discussion is being captured in this
 etherpad:
 https://etherpad.openstack.org/p/NeutronAdvancedServices

 Hope you can make it and participate.

 Thanks,
 ~Sumit.


 On Mon, Oct 14, 2013 at 8:15 PM, Sumit Naiksatam <
 sumitnaiksa...@gmail.com> wrote:

> Thanks all for attending the IRC meeting today for the Neutron
> "advanced services" discussion. We have an etherpad for this:
> https://etherpad.openstack.org/p/NeutronAdvancedServices
>
> It was also felt that we need to have more ongoing discussions, so we
> will have follow up meetings. We will try to propose a more convenient 
> time
> for everyone involved for a meeting next week. Meanwhile, we can continue
> to use the mailing list, etherpad, and/or comment on the specific 
> proposals.
>
> Thanks,
> ~Sumit.
>
>
> On Tue, Oct 8, 2013 at 8:30 PM, Sumit Naiksatam <
> sumitnaiksa...@gmail.com> wrote:
>
>> Hi All,
>>
>> We had a VPNaaS meeting yesterday and it was felt that we should have
>> a separate meeting to discuss the topics common to all services. So, in
>> preparation for the Icehouse summit, I am proposing an IRC meeting on Oct
>> 14th 22:00 UTC (immediately after the Neutron meeting) to discuss common
>> aspects related to the FWaaS, LBaaS, and VPNaaS.
>>
>> We will begin with service insertion and chaining discussion, and I
>> hope we can collect requirements for other common aspects such as service
>> agents, services instances, etc. as well.
>>
>> Etherpad for service insertion & chaining can be found here:
>>
>> https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining
>>
>> Hope you all can join.
>>
>> Thanks,
>> ~Sumit.
>>
>>
>>
>

>>>
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements ...

2013-10-29 Thread Qing He
Sandy,

Does the framework account for customizable events?
In my use case, I have a network-attached device (or one attached in some 
other proprietary way, e.g., over a special bus protocol) that does not fit 
into the openstack node structure. It can generate events. Based on these 
events, the Openstack orchestration layer (Heat?) may need to do things like 
failing over all VMs on one system to another system (compute node).

If I would like to use the alarm/notification system, I would need to add a
customized collector, and my events would need to be routed to, say, Heat, 
and I would need to define the action (a plugin/callback?) corresponding to 
the event for Heat to take on my behalf.

Is my approach right/supported under the framework and the current Heat (or 
some other component's) release?

Thanks,

Qing

-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com] 
Sent: Tuesday, October 29, 2013 6:34 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Ceilometer] Suggestions for alarm improvements ...

Hey y'all,

Here are a few notes I put together around some ideas for alarm improvements. 
In order to set it up I spent a little time talking about the Ceilometer 
architecture in general, including some of the things we have planned for 
IceHouse. 

I think Parts 1-3 will be useful to anyone looking into Ceilometer. Part 4 is 
where the meat of it is. 

https://wiki.openstack.org/wiki/Ceilometer/AlarmImprovements

Look forward to feedback from everyone and chatting about it at the summit.

If I missed something obvious, please mark it up so we can address it.

-S

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Templates in Trove

2013-10-29 Thread Denis Makogon
Also, using one template for all situations is a mess; someday it would
become a chimera. And if I don't want to inject something but the template
relies on some kind of parameter, heat would return an exception on an
attempt to validate the template with only part of the expected parameters.
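
(For what it's worth, a parameter that declares a Default does not have to
be supplied at validation time; a minimal sketch in the style of the heat
examples later in this thread:)

    Parameters:
      service_type:
        Type: String
        # With a Default present, validation succeeds even when the
        # caller omits this parameter entirely.
        Default: mysql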
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Please create and list etherpads for summit sessions

2013-10-29 Thread Chris Jones
Hi

Done for the CI/CD automation session.

https://etherpad.openstack.org/p/icehouse-tripleo-deployment-ci-and-cd-automation

(and linked from the wiki)

Cheers,

Chris


On 28 October 2013 21:26, Robert Collins  wrote:

> If you proposed a summit session and it's been accepted you need to
> create etherpads for the sessions, and list them on the summit
> etherpad list -
> https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads
>
> I will copy links from that list into the summit descriptions on
> Wednesday; if it's not done by then then it won't get done.
>
> Also, I'd like to do a pre-summit review of the etherpads to ensure we
> have enough fodder to have successful discussions, so please do get
> them in place by Wednesday. Thanks!
>
> http://icehousedesignsummit.sched.org/ has the schedule - we're all on
> Tuesday - and please note we have multiple sessions in each slot, so
> be sure to read the full detailed description to see if your session
> is there.
>
> Cheers,
> Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Templates in Trove

2013-10-29 Thread Denis Makogon
Only the trove administrator has access to the heat templates; users cannot
manipulate templates (if they could, we would not need trove at all). And
the idea is to store only one template per datastore. And, I know that
templates can be parametrized. But keep in mind that we cannot inject
anything into the templates (like custom userdata) without modifying
project code. For now there is no need to do this, and I suppose we will
not need it until some super-custom modifications are needed; userdata is
not one of them.

In trove we have a working template which prepares the instance and
delivers the code onto the VM. This is the main goal, nothing else.


2013/10/29 Clint Byrum 

> Excerpts from Robert Myers's message of 2013-10-29 07:54:59 -0700:
> > I'm pulling this conversation out of the gerrit review as I think it
> needs
> > more discussion.
> >
> > https://review.openstack.org/#/c/53499/
> >
>
> After reading the comments in that review, it seems to me that you
> don't need a client side template for your Heat template.
>
> The only argument for templating is "If I want some things to be
> custom I can't have them custom."
>
> You may not realize this, but Heat templates already have basic string
> replacement facilities and mappings, which is _all_ you need here.
>
> Use parameters. Pass _EVERYTHING_ into the stacks you're creating as
> parameters. Then let admins customize using Heat, not _another_
> language.
>
> For instance, somebody brought up wanting to have UserData be
> customizable. It is like this now:
>
> UserData:
>   Fn::Base64:
>     Fn::Join:
>     - ''
>     - ["#!/bin/bash -v\n",
>       "/opt/aws/bin/cfn-init\n",
>       "sudo service trove-guest start\n"]
>
> Since you're using yaml, you don't have to use Fn::Join like in json,
> so simplify to this first:
>
> UserData:
>   Fn::Base64: |
>     #!/bin/bash -v
>     /opt/aws/bin/cfn-init
>     sudo service trove-guest start
>
> Now, the suggestion was that users might want to do a different prep
> per service_type. First, we need to make service_type a parameter
>
>
> Parameters:
>   service_type:
>     Type: String
>     Default: mysql
>
> Now we need to shove it in where needed:
>
> Metadata:
>   AWS::CloudFormation::Init:
>     config:
>       files:
>         /etc/guest_info:
>           content:
>             Fn::Join:
>             - ''
>             - ["[DEFAULT]\nguest_id=", {Ref: InstanceId},
>               "\nservice_type=", {Ref: service_type}, "\n"]
>           mode: '000644'
>           owner: root
>           group: root
>
> Now, if a user wants to have a different script:
>
> Mappings:
>   ServiceToScript:
>     mysql:
>       script: |
>         #!/bin/bash -v
>         /opt/aws/bin/cfn-init
>         sudo service trove-guest start
>     galera:
>       script: |
>         #!/bin/bash
>         /opt/aws/bin/cfn-init
>         galera-super-thingy
>         sudo service trove-guest start
>
>
> And then they replace the userdata as such:
>
> UserData:
>   Fn::FindInMap:
>   - ServiceToScript
>   - {Ref: service_type}
>   - script
>
> Please can we at least _try_ not to reinvent things!
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Shared network between specific tenants, but not all tenants?

2013-10-29 Thread Jay Pipes

Hi Neutron devs,

Are there any plans to support networks that are shared/routed only 
between certain tenants (not all tenants)?


Thanks,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Templates in Trove

2013-10-29 Thread Denis Makogon
 +1 for the idea suggested in https://review.openstack.org/#/c/54315/.

We discussed storing the heat templates as a separate idea - so, here it is.

I think it's ok to store the configs in separate dirs: one for configurations
and one for heat templates.


2013/10/29 Daniel Salinas 

> I like simple.  To me it is easier to break them out into dirs like Robert
> suggested rather than eventually having a folder full of
> num_datastore_types * 2 files.  As for the structure, I find it more
> intuitive to have the directories named for the datastore type.  Within
> that dir you can have the templates for each.
>
>
> On Tue, Oct 29, 2013 at 11:54 AM, Robert Myers  wrote:
>
>> So I guess the only contention point is to either store the templates by
>> type or by datastore. I don't see the use case where you'd have completely
>> different paths for templates, so there is really no need for two separate
>> template paths. My idea is to group the templates by data_store because as
>> we add more data_stores the flat file structure will get harder to manage.
>> So either:
>>
>> - templates/{data_store}/config
>> - templates/{data_store}/heat
>>
>> or
>>
>> - templates/config/{data_store}.config
>> - templates/heat/{data_store}.heat
>>
>> During lookup of the templates it is either:
>>
>> config_template = '%s/config.template' % service_type
>> heat_template = '%s/heat.template' % service_type
>>
>> or
>>
>> config_template = 'config/%s.config.template' % service_type
>> heat_template = 'heat/%s.heat.template' % service_type
>>
>> My preference is to group by data_store type, but I'm curious what
>> others think.
>>
>> Robert
>>
>>
>> On Tue, Oct 29, 2013 at 10:15 AM, Denis Makogon wrote:
>>
>>> Robert,  i also have thoughts about templates.
>>>
>>> Your suggestion is rather complex. Let me explain why:
>>>  with support for a new datastore you would have to update PackageLoader
>>> and FilesystemLoader with a new filesystem path and package path. I would
>>> prefer an easier configuration and would store the templates this way:
>>>  - templates/configuration/{datastore}.config.template;
>>>  - templates/heat/{datastore}.heat.template.
>>>
>>> Heat templates will stay static until instance configuration in trove
>>> becomes super-complex, like Savanna (Hadoop on OpenStack).
>>>
>>> What about jinja - ok, I agree to use it, but (!!!) we would not use it
>>> for heat template rendering, because the templates are static. Trove is
>>> not so complex in instance configuration, which is why it doesn't need to
>>> generate/modify heat templates on the go.
>>>
>>> Please take a look at this one https://review.openstack.org/#/c/54315/
>>>
>>>
>>> 2013/10/29 Robert Myers 
>>>
 I'm pulling this conversation out of the gerrit review as I think it
 needs more discussion.

 https://review.openstack.org/#/c/53499/

 I want to discuss the design decision to not use Jinja templates for
 the heat templates. My arguments for using Jinja for heat as well are:

 1. We have to rewrite all the template loading logic. The current
 implementation is pretty simple but in order to make it production worthy
 it will need to handle many more edge cases as we develop this feature.
 The main argument I have heard against using the existing ENV is that the
 path is hard coded. (This can and should be turned into a config flag)
 2. We are already using Jinja templates for config files so it will be
 less confusing for a new person starting up. Why do these custom templates
 go here but these over here? Having one place to override defaults makes
 sense.
 3. Looking at the current heat templates I could easily see some areas
 that could take advantage of being a real Jinja template, an admin could
 create a base template and extend that for each different service and just
 change a few values in each.
 4. The default templates could be packaged with trove (using the Jinja
 PackageLoader) so the initial setup out of the box will work.

 If we go this route it would also be a good time to discuss the
 organization of the templates. Currently the templates are just in

 - trove/templates/{data_store}.config.template
 - trove/templates/{data_store}.heat.template


 I suggest that we move this into a folder structure like so:

 - trove/template/{data_store}/config.template
 - trove/template/{data_store}/heat.template
 - trove/template/{data_store}/the_next.template

 Thanks!
 Robert

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 

Re: [openstack-dev] [Trove] Templates in Trove

2013-10-29 Thread Clint Byrum
Excerpts from Robert Myers's message of 2013-10-29 07:54:59 -0700:
> I'm pulling this conversation out of the gerrit review as I think it needs
> more discussion.
> 
> https://review.openstack.org/#/c/53499/
> 

After reading the comments in that review, it seems to me that you
don't need a client side template for your Heat template.

The only argument for templating is "If I want some things to be
custom I can't have them custom."

You may not realize this, but Heat templates already have basic string
replacement facilities and mappings, which is _all_ you need here.

Use parameters. Pass _EVERYTHING_ into the stacks you're creating as
parameters. Then let admins customize using Heat, not _another_
language.

For instance, somebody brought up wanting to have UserData be
customizable. It is like this now:

UserData:
  Fn::Base64:
    Fn::Join:
    - ''
    - ["#!/bin/bash -v\n",
      "/opt/aws/bin/cfn-init\n",
      "sudo service trove-guest start\n"]

Since you're using yaml, you don't have to use Fn::Join like in json,
so simplify to this first:

UserData:
  Fn::Base64: |
    #!/bin/bash -v
    /opt/aws/bin/cfn-init
    sudo service trove-guest start

Now, the suggestion was that users might want to do a different prep
per service_type. First, we need to make service_type a parameter:


Parameters:
  service_type:
    Type: String
    Default: mysql

Now we need to shove it in where needed:

Metadata:
  AWS::CloudFormation::Init:
    config:
      files:
        /etc/guest_info:
          content:
            Fn::Join:
            - ''
            - ["[DEFAULT]\nguest_id=", {Ref: InstanceId},
              "\nservice_type=", {Ref: service_type}, "\n"]
          mode: '000644'
          owner: root
          group: root

Now, if a user wants to have a different script:

Mappings:
  ServiceToScript:
    mysql:
      script: |
        #!/bin/bash -v
        /opt/aws/bin/cfn-init
        sudo service trove-guest start
    galera:
      script: |
        #!/bin/bash
        /opt/aws/bin/cfn-init
        galera-super-thingy
        sudo service trove-guest start


And then they replace the userdata as such:

UserData:
  Fn::FindInMap:
  - ServiceToScript
  - {Ref: service_type}
  - script

Please can we at least _try_ not to reinvent things!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Templates in Trove

2013-10-29 Thread Daniel Salinas
I like simple.  To me it is easier to break them out into dirs like Robert
suggested rather than eventually having a folder full of
num_datastore_types * 2 files.  As for the structure, I find it more
intuitive to have the directories named for the datastore type.  Within
that dir you can have the templates for each.


On Tue, Oct 29, 2013 at 11:54 AM, Robert Myers  wrote:

> So I guess the only contention point is to either store the templates by
> type or by datastore. I don't see the use case where you'd have completely
> different paths for templates, so there is really no need for two separate
> template paths. My idea is to group the templates by data_store because as
> we add more data_stores the flat file structure will get harder to manage.
> So either:
>
> - templates/{data_store}/config
> - templates/{data_store}/heat
>
> or
>
> - templates/config/{data_store}.config
> - templates/heat/{data_store}.heat
>
> During lookup of the templates it is either:
>
> config_template = '%s/config.template' % service_type
> heat_template = '%s/heat.template' % service_type
>
> or
>
> config_template = 'config/%s.config.template' % service_type
> heat_template = 'heat/%s.heat.template' % service_type
>
> My preference is to group by data_store type, but I'm curious what
> others think.
>
> Robert
>
>
> On Tue, Oct 29, 2013 at 10:15 AM, Denis Makogon wrote:
>
>> Robert,  i also have thoughts about templates.
>>
>> Your suggestion is rather complex. Let me explain why:
>>  with support for a new datastore you would have to update PackageLoader
>> and FilesystemLoader with a new filesystem path and package path. I would
>> prefer an easier configuration and would store the templates this way:
>>  - templates/configuration/{datastore}.config.template;
>>  - templates/heat/{datastore}.heat.template.
>>
>> Heat templates will stay static until instance configuration in trove
>> becomes super-complex, like Savanna (Hadoop on OpenStack).
>>
>> What about jinja - ok, I agree to use it, but (!!!) we would not use it
>> for heat template rendering, because the templates are static. Trove is
>> not so complex in instance configuration, which is why it doesn't need to
>> generate/modify heat templates on the go.
>>
>> Please take a look at this one https://review.openstack.org/#/c/54315/
>>
>>
>> 2013/10/29 Robert Myers 
>>
>>> I'm pulling this conversation out of the gerrit review as I think it
>>> needs more discussion.
>>>
>>> https://review.openstack.org/#/c/53499/
>>>
>>> I want to discuss the design decision to not use Jinja templates for the
>>> heat templates. My arguments for using Jinja for heat as well are:
>>>
>>> 1. We have to rewrite all the template loading logic. The current
>>> implementation is pretty simple but in order to make it production worthy
>>> it will need to handle many more edge cases as we develop this feature.
>>> The main argument I have heard against using the existing ENV is that the
>>> path is hard coded. (This can and should be turned into a config flag)
>>> 2. We are already using Jinja templates for config files so it will be
>>> less confusing for a new person starting up. Why do these custom templates
>>> go here but these over here? Having one place to override defaults makes
>>> sense.
>>> 3. Looking at the current heat templates I could easily see some areas
>>> that could take advantage of being a real Jinja template, an admin could
>>> create a base template and extend that for each different service and just
>>> change a few values in each.
>>> 4. The default templates could be packaged with trove (using the Jinja
>>> PackageLoader) so the initial setup out of the box will work.
>>>
>>> If we go this route it would also be a good time to discuss the
>>> organization of the templates. Currently the templates are just in
>>>
>>> - trove/templates/{data_store}.config.template
>>> - trove/templates/{data_store}.heat.template
>>>
>>>
>>> I suggest that we move this into a folder structure like so:
>>>
>>> - trove/template/{data_store}/config.template
>>> - trove/template/{data_store}/heat.template
>>> - trove/template/{data_store}/the_next.template
>>>
>>> Thanks!
>>> Robert
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Support for external authentication (i.e. REMOTE_USER) in Havana

2013-10-29 Thread David Chadwick
What is the semantic of "domain" in the current implementation? Until we 
know this we can't devise a solution.


Will the developed solution cater for me logging in via Google using my 
Kent email address (as opposed to my Gmail one)? In this case there 
could be 2 domains (depending upon the semantics of "domain").
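
To make the ambiguity concrete, the ExternalDomain behaviour amounts to
splitting on the rightmost "@" (a sketch of the semantics only, not the
actual keystone code):

    def external_domain_split(remote_user):
        # The rightmost '@' is treated as the domain separator, so
        #   "jdoe@kent.ac.uk@SomeDomain" -> ("jdoe@kent.ac.uk", "SomeDomain")
        # while a bare email-style username loses its host part:
        #   "jdoe@kent.ac.uk" -> ("jdoe", "kent.ac.uk")
        username, _, domain = remote_user.rpartition('@')
        return username, domain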


regards

David


On 29/10/2013 15:52, Fox, Kevin M wrote:

Has the case been considered where REMOTE_USER is used with
authentication mechanisms where the username is an email address? It
will have to keep the @domain part because that's the only thing that
makes it unique.

Thanks,
Kevin

From: Álvaro López García [alvaro.lopez.gar...@cern.ch]
Sent: Tuesday, October 29, 2013 5:59 AM
To: OpenStack dev
Subject: [openstack-dev] [keystone] Support for external authentication
(i.e. REMOTE_USER) in Havana

Hi there,

I've been working on this bug [1,2] related with the pluggable
external authentication support in Havana. For those not familiar
with it, Keystone can rely on the usage of the REMOTE_USER env
variable, assuming that the user has been authenticated upstream (by
an httpd server). This REMOTE_USER variable is supposed to store the
username information that Keystone is going to use.

In the Havana external authentication plugins, the REMOTE_USER
variable is *always* split by the "@" character, assuming that the @
is being used as the domain separator (i.e.
REMOTE_USER=username@domain).

Now there are two plugins available:

- ExternalDefault: Only the leftmost part of the REMOTE_USER after
the split is considered. The domain information is obtained from
the default domain configured in keystone.conf.

- ExternalDomain: The rightmost part is considered the domain, and
the leftover is considered the username.

The change in [2] aims to solve this problem: ExternalDefault will
not split the username by an "@" since we are going to use the
default domain so we assume that no domain will be appended.

However, this will work only if we are using a WSGI filter that is
aware of the semantics: the filter should know if ExternalDefault is
used so that the domain information is not appended, but append it
if ExternalDomain is used. Moreover, if somebody is using directly
the REMOTE_USER variable from Apache without any WSGI filter (for
example using X509 auth with mod_ssl and the SSLUsername directive
[3]) the REMOTE_USER will contain only the username and no domain at
all.

Does anybody have any concerns about this? Should we pass down the
domain information by any other mean?

[1] https://bugs.launchpad.net/keystone/+bug/1211233
[2] https://review.openstack.org/#/c/50362/
[3] http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslusername

--
Álvaro López García (al...@ifca.unican.es)
Instituto de Física de Cantabria, Ed. Juan Jordá, Campus UC
Avda. de los Castros s/n, 39005 Santander (SPAIN)
http://alvarolopez.github.io
tel: (+34) 942 200 969

"Everyone knows that debugging is twice as hard as writing a program in
the first place. So if you are as clever as you can be when you write
it, how will you ever debug it?" -- Brian Kernighan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Templates in Trove

2013-10-29 Thread Robert Myers
So I guess the only contention point is to either store the templates by
type or by datastore. I don't see the use case where you'd have completely
different paths for templates, so there is really no need for two separate
template paths. My idea is to group the templates by data_store because as
we add more data_stores the flat file structure will get harder to manage.
So either:

- templates/{data_store}/config
- templates/{data_store}/heat

or

- templates/config/{data_store}.config
- templates/heat/{data_store}.heat

During lookup of the templates it is either:

config_template = '%s/config.template' % service_type
heat_template = '%s/heat.template' % service_type

or

config_template = 'config/%s.config.template' % service_type
heat_template = 'heat/%s.heat.template' % service_type

My preference is to group by data_store type, but I'm curious what
others think.
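
(For concreteness, a minimal sketch of the per-datastore lookup with a
plain Jinja2 FileSystemLoader; the base path here is made up, and this is
not existing trove code:)

    from jinja2 import Environment, FileSystemLoader

    # The base path should come from a config flag rather than being
    # hard coded.
    env = Environment(loader=FileSystemLoader('/etc/trove/templates'))

    def load_templates(data_store):
        # e.g. mysql/config.template and mysql/heat.template
        config_template = env.get_template('%s/config.template' % data_store)
        heat_template = env.get_template('%s/heat.template' % data_store)
        return config_template, heat_template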

Robert


On Tue, Oct 29, 2013 at 10:15 AM, Denis Makogon wrote:

> Robert,  i also have thoughts about templates.
>
> Your suggestion is rather complex. Let me explain why:
>  with support for a new datastore you would have to update PackageLoader
> and FilesystemLoader with a new filesystem path and package path. I would
> prefer an easier configuration and would store the templates this way:
>  - templates/configuration/{datastore}.config.template;
>  - templates/heat/{datastore}.heat.template.
>
> Heat templates will stay static until instance configuration in trove
> becomes super-complex, like Savanna (Hadoop on OpenStack).
>
> What about jinja - ok, I agree to use it, but (!!!) we would not use it
> for heat template rendering, because the templates are static. Trove is
> not so complex in instance configuration, which is why it doesn't need to
> generate/modify heat templates on the go.
>
> Please take a look at this one https://review.openstack.org/#/c/54315/
>
>
> 2013/10/29 Robert Myers 
>
>> I'm pulling this conversation out of the gerrit review as I think it
>> needs more discussion.
>>
>> https://review.openstack.org/#/c/53499/
>>
>> I want to discuss the design decision to not use Jinja templates for the
>> heat templates. My arguments for using Jinja for heat as well are:
>>
>> 1. We have to rewrite all the template loading logic. The current
>> implementation is pretty simple but in order to make it production worthy
>> it will need to handle many more edge cases as we develop this feature.
>> The main argument I have heard against using the existing ENV is that the
>> path is hard coded. (This can and should be turned into a config flag)
>> 2. We are already using Jinja templates for config files so it will be
>> less confusing for a new person starting up. Why do these custom templates
>> go here but these over here? Having one place to override defaults makes
>> sense.
>> 3. Looking at the current heat templates I could easily see some areas
>> that could take advantage of being a real Jinja template, an admin could
>> create a base template and extend that for each different service and just
>> change a few values in each.
>> 4. The default templates could be packaged with trove (using the Jinja
>> PackageLoader) so the initial setup out of the box will work.
>>
>> If we go this route it would also be a good time to discuss the
>> organization of the templates. Currently the templates are just in
>>
>> - trove/templates/{data_store}.config.template
>> - trove/templates/{data_store}.heat.template
>>
>>
>> I suggest that we move this into a folder structure like so:
>>
>> - trove/template/{data_store}/config.template
>> - trove/template/{data_store}/heat.template
>> - trove/template/{data_store}/the_next.template
>>
>> Thanks!
>> Robert
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unable to logging to guest console on XCP/xenserver

2013-10-29 Thread Bob Ball
The trace below seems to be wanting to access the console log, rather than the 
VNC console.

I believe there is a blog post about how to enable this, but basically you need 
a cronjob to run in dom0 to rotate the logs.  This is because the Xen console 
logging is not ring-based, so without a cronjob it would fill up the disk 
with logs.  That is why it is not enabled by default.

To enable the console log, check out 
https://github.com/openstack/nova/blob/master/tools/xenserver/rotate_xen_guest_logs.sh
 and add crontab entries for that script as described at the top of the file.
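
(By way of illustration only; the authoritative entry is in the script's 
own header comments, but it amounts to a once-a-minute dom0 cron job along 
these lines, using whatever path you copied the script to:)

    * * * * * /root/rotate_xen_guest_logs.sh >/dev/null 2>&1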

The first invocation of the script will enable the console logs for your VMs 
and you then won't see that error any more.

Thanks,

Bob


From: Rajshree Thorat [rajshree.tho...@gslab.com]
Sent: 29 October 2013 09:25
To: openstack Users; OpenStack Development Mailing List
Subject: [openstack-dev] Unable to logging to guest console on XCP/xenserver

Hi,

I am trying to use Openstack Havana to control an XCP hypervisor with the 
neutron OVS plugin. I can launch instances normally, but I am unable to view 
the guest console log. Please see the below log from nova compute to get a 
clearer idea.

Log:

2013-10-29 14:09:10.203 954 AUDIT nova.compute.manager 
[req-157fb348-69fb-4b7f-b50c-44ef85e85f11 42ffb12172244726a1a15d044167de86 
2c4acdf64eed40f7a9efcf3b7dd13259] [instance: 
cf5678ec-5284-48cc-a8d6-005eac42118e] Get console output
2013-10-29 14:09:10.296 954 ERROR nova.virt.xenapi.vmops 
[req-157fb348-69fb-4b7f-b50c-44ef85e85f11 42ffb12172244726a1a15d044167de86 
2c4acdf64eed40f7a9efcf3b7dd13259] ['XENAPI_PLUGIN_FAILURE', 'get_console_log', 
'IOError', "[Errno 2] No such file or directory: 
'/var/log/xen/guest/console.7'"]
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops Traceback (most recent 
call last):
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File 
"/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 1446, in 
get_console_output
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops 'get_console_log', 
{'dom_id': dom_id})
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File 
"/usr/lib/python2.7/dist-packages/nova/virt/xenapi/driver.py", line 796, in 
call_plugin
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops host, plugin, fn, 
args)
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File 
"/usr/lib/python2.7/dist-packages/nova/virt/xenapi/driver.py", line 851, in 
_unwrap_plugin_exceptions
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops return func(*args, 
**kwargs)
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File 
"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 229, in __call__
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops return 
self.__send(self.__name, args)
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File 
"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in xenapi_request
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops result = 
_parse_result(getattr(self, methodname)(*full_params))
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops   File 
"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in _parse_result
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops raise 
Failure(result['ErrorDescription'])
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops Failure: 
['XENAPI_PLUGIN_FAILURE', 'get_console_log', 'IOError', "[Errno 2] No such file 
or directory: '/var/log/xen/guest/console.7'"]
2013-10-29 14:09:10.296 954 TRACE nova.virt.xenapi.vmops
2013-10-29 14:09:10.358 954 ERROR nova.openstack.common.rpc.amqp 
[req-157fb348-69fb-4b7f-b50c-44ef85e85f11 42ffb12172244726a1a15d044167de86 
2c4acdf64eed40f7a9efcf3b7dd13259] Exception during message handling
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp **args)
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 90, in wrapped
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp payload)
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 73, in wrapped
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
2013-10-29 14:09:10.358 954 TRACE nova.openstack.common.rpc.amqp   Fil

Re: [openstack-dev] [keystone] Support for external authentication (i.e. REMOTE_USER) in Havana

2013-10-29 Thread Tim Bell

We also need some standardisation on the command line options for the client 
portion (such as --os-auth-method, --os-x509-cert, etc.). Unfortunately, this 
is not yet in Oslo, so there would be multiple packages to be enhanced.

Tim

> -Original Message-
> From: Alan Sill [mailto:kilohoku...@gmail.com]
> Sent: 29 October 2013 16:36
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [keystone] Support for external authentication 
> (i.e. REMOTE_USER) in Havana
> 
> +1
> 
> (except possibly for the environmental variables portion, which could and 
> perhaps should be handled through provisioning).
> 
> On Oct 29, 2013, at 8:35 AM, David Chadwick  wrote:
> 
> > Whilst on this topic, perhaps we should also expand it to discuss support 
> > for external authz as well. I know that Adam at Red Hat is
> working on adding additional authz attributes as env variables so that these 
> can be used for authorising the user in keystone. It should be
> the same module in Keystone that handles the incoming request, regardless of 
> whether it has only the remote user env variable, or has
> this plus a number of authz attribute env variables as well. I should like 
> this module to end by returning the identity of the remote user in
> a standard internal keystone format (i.e. as a set of identity attributes), 
> which can then be passed to the next phase of processing (which
> will include attribute mapping). In this way, we can have a common processing 
> pipeline for incoming requests, regardless of how the end
> user was authenticated, ie. whether the request contains SAML assertions, env 
> variables, OpenID assertions etc. Different endpoints could
> be used for the different incoming protocols, or a common endpoint could be 
> used, with JSON parameters containing the different
> protocol information.
> >
> > regards
> >
> > David
> >
> > On 29/10/2013 12:59, Álvaro López García wrote:
> >> Hi there,
> >>
> >> I've been working on this bug [1,2] related with the pluggable
> >> external authentication support in Havana. For those not familiar
> >> with it, Keystone can rely on the usage of the REMOTE_USER env
> >> variable, assuming that the user has been authenticated upstream (by
> >> an httpd server). This REMOTE_USER variable is supposed to store the
> >> username information that Keystone is going to use.
> >>
> >> In the Havana external authentication plugins, the REMOTE_USER
> >> variable is *always* split by the "@" character, assuming that the @
> >> is being used as the domain separator (i.e. REMOTE_USER=username@domain).
> >>
> >> Now there are two plugins available:
> >>
> >> - ExternalDefault: Only the leftmost part of the REMOTE_USER after the
> >>   split is considered. The domain information is obtained from the
> >>   default domain configured in keystone.conf.
> >>
> >> - ExternalDomain: The rightmost part is considered the domain, and the
> >>   leftover is considered the username.
> >>
> >> The change in [2] aims to solve this problem: ExternalDefault will
> >> not split the username by an "@" since we are going to use the
> >> default domain so we assume that no domain will be appended.
> >>
> >> However, this will work only if we are using a WSGI filter that is
> >> aware of the semantics: the filter should know if ExternalDefault is
> >> used so that the domain information is not appended, but append it if
> >> ExternalDomain is used. Moreover, if somebody is using directly the
> >> REMOTE_USER variable from Apache without any WSGI filter (for example
> >> using X509 auth with mod_ssl and the SSLUsername directive [3]) the
> >> REMOTE_USER will contain only the username and no domain at all.
> >>
> >> Does anybody have any concerns about this? Should we pass down the
> >> domain information by any other mean?
> >>
> >> [1] https://bugs.launchpad.net/keystone/+bug/1211233
> >> [2] https://review.openstack.org/#/c/50362/
> >> [3] http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslusername
> >>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object status and admin_state_up

2013-10-29 Thread Eugene Nikanorov
That's right, it is driver-specific, but we can come up with a generic
guideline for this.
Also, I'm interested in the preferred solution for HAProxy.
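
For reference, option 2 from the original mail maps naturally onto
HAProxy's 'disabled' keyword, so the driver could re-render the config
with the sections disabled instead of tearing the process down. A rough
sketch (the section names are made up for the example; this is not the
current driver's output):

    frontend vip-f3a2
        disabled            # listener stays in the config but is stopped
        bind 10.0.0.10:80
        default_backend pool-77ac

    backend pool-77ac
        disabled
        server member-1 10.0.0.21:80 check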

Thanks,
Eugene.


On Tue, Oct 29, 2013 at 6:43 PM, Avishay Balderman wrote:

>  Hi
>
> It feels like a driver-specific topic.
>
> So I am not sure we can come to a generic solution in the lbaas core code.
>
> Thanks
>
> Avishay
>
> *From:* Eugene Nikanorov [mailto:enikano...@mirantis.com]
> *Sent:* Tuesday, October 29, 2013 11:19 AM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [Neutron][LBaaS] Object status and
> admin_state_up
>
> Hi folks,
>
> Currently there are two attributes of vips/pools/members that represent a
> status: 'status' and 'admin_state_up'.
>
> The first one is used to represent deployment status and can be
> PENDING_CREATE, ACTIVE, PENDING_DELETE, or ERROR.
>
> We also have admin_state_up, which can be True or False.
>
> I'd like to know your opinion on how to change the 'status' attribute
> based on admin_state_up changes.
>
> For instance: if admin_state_up is updated to be False, how do you think
> 'status' should change?
>
> Also, speaking of the reference implementation (HAProxy), changing a vip
> or pool admin_state_up to False effectively destroys the balancer
> (undeploys it), while the objects remain in ACTIVE state.
>
> There are two options to fix this discrepancy:
>
> 1) Change the status of the vip/pool to PENDING_CREATE if admin_state_up
> changes to False
>
> 2) Don't destroy the loadbalancer, and use HAProxy's capability to
> disable the frontend and backend while leaving the vip/pool in ACTIVE
> state
>
> Please share your opinion.
>
> Thanks,
>
> Eugene.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] reason for python-novaclient revert

2013-10-29 Thread Joshua Harlow
Zuul just caused my brain to overload ;) thx for the detailed explanation.

Sent from my really tiny device...

> On Oct 29, 2013, at 3:42 AM, "Sean Dague"  wrote:
> 
> Andrew Laski correctly called us out for not really providing enough 
> information on the python-novaclient revert yesterday - 
> https://review.openstack.org/#/c/54108/. Apologies there. At the time we 
> were dealing with a gate in which grenade was failing every change (for the 
> prior 6 hours), we were all on our first cup of coffee, and while we got to 
> resolution, we did so with an entirely unhelpful commit message to explain it.
> 
> Here's what happened. python-novaclient landed a change that changed the user 
> interface. This change meant that devstack exercises failed when validating 
> the details of getting aggregates.
> 
> However, upgrade testing is hard, and we had a loophole, that led us to a 
> wedge in the gate.
> 
> For the grenade jobs we prep 2 versions of the OpenStack codebase, grizzly 
> and master (yes, still grizzly and master, we're working on that). The 
> grizzly tree is grizzly devstack, which means it's grizzly on all the core 
> servers, but master on all the clients. However, the grizzly tree doesn't get 
> "zuulified", which was the crux of the issue.
> 
> By zuulified I mean think about the zuul queue. How do we actually test a 
> change 15 deep in the gate? We aren't testing just that change, but all the 
> gerrit proposed changes above it. That means that zuul needs to go through 
> and update relevant git trees beyond master, but to the proposed change sets 
> for all the jobs in front of it. This is across projects, and should be 
> across branches.
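> 
> (For a given job, this typically boils down to something like the
> following -- a sketch using the standard zuul-provided environment
> variables, not the literal devstack-gate code:
> 
>     git fetch $ZUUL_URL/$ZUUL_PROJECT $ZUUL_REF
>     git checkout FETCH_HEAD
> 
> i.e. each relevant tree gets checked out at zuul's proposed ref rather
> than the plain branch tip.)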
> 
> But we'd not gotten the system to do this correctly on the "old" side yet. 
> Which means that python-novaclient landed a breaking change, but the "old" 
> side built a grizzly cloud with only master, not master + gerrit. It passed 
> the verification of the "old" cloud, then moved to the new cloud, then ran a 
> different set of tests to verify the new cloud, which passed.
> 
> However, by threading the needle in this way, it meant no one else could ever 
> pass grenade again. The quick fix was the python-novaclient revert. The real 
> fix is probably this - https://review.openstack.org/#/c/53940/ which we were 
> actually working on last week, to both update the set of trees we are using, 
> and update the zuul refs on the "old" side of the equation. Once that lands 
> I'll attempt to revert the revert, and ensure that it actually gets caught in 
> the system. Then we can work on updating tests so it can get through. But 
> right now it's a perfect test case to prove that we did this right, so 
> leaving it in the reverted state is critical.
> 
> This also highlights one of the reasons I've been hard on folks recently 
> about some alternative upgrade or mixed version testing models, and doing it 
> outside of grenade. Everything is simple when you talk about a single change. 
> But when you are 15 or 20 deep in zuul gate, and have to handle 3 proposed 
> stable nova changes, 5 proposed master nova changes, a keystone stable, a 
> keystone master, and a few cinder master changes in front of you to build the 
> environments you need to test in the gate this gets complicated fast. 
> Basically you aren't allowed to use git inside your upgrade tool for this 
> reason, because your tool has no idea what it's supposed to actually test, 
> only ZUUL knows. And, as you can see, we've yet to get this whole thing 
> mapped out the first time. :)
> 
>-Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Nova] summit overlap

2013-10-29 Thread Devananda van der Veen
Based on the current descriptions, I think Instance Group Model looks more
important to me. The current schedule appears fine for that.

-D

On Tue, Oct 29, 2013 at 5:59 AM, Russell Bryant  wrote:

> On 10/28/2013 06:41 PM, Devananda van der Veen wrote:
> > I can't make the Wed 2:50 session -- I'm presenting at that time -- but
> > I don't think it's essential for me to be there anyway. Everything else
> > LGTM. Thanks!
>
> Would you rather be able to attend that one (Smarter resource placement
> for intense workloads) or Instance Group Model and API Extension?  I
> could move them.
>
> --
> Russell Bryant
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Templates in Trove

2013-10-29 Thread Craig Vyvial
+1 to Robert's suggestion

I think it makes sense to keep all data store templates that are used
in the same location, i.e. templates/{data-store}/*.template.
As trove expands its data stores, we will have all the templates next to
each other. I think it would make it easier to remove/add support for new
data stores this way.

Denis,
I think we could see in the not-so-distant future that the heat templates
*could* be dynamic in nature (clusters and such).



On Tue, Oct 29, 2013 at 10:15 AM, Denis Makogon wrote:

> Robert, I also have thoughts about templates.
>
> Your suggestion is rather complex. Let me explain why that is:
> with new datastore support you would have to update both PackageLoader and
> FilesystemLoader with a new filesystem path and package path. I would prefer
> an easier configuration and would store the templates this way:
>  - templates/configuration/{datastore}.config.template;
>  - templates/heat/{datastore}.heat.template.
>
> Heat templates will stay static until instance configuration in trove
> becomes super-complex, like Savanna (Hadoop on OpenStack).
>
> What about jinja - ok, I agree to use it, but (!!!) we would not use it
> for heat template rendering, because the templates are static. Trove is not so
> complex in instance configuration, which is why it doesn't need to
> generate/modify heat templates on the fly.
>
> Please take a look at this one https://review.openstack.org/#/c/54315/
>
>
> 2013/10/29 Robert Myers 
>
>> I'm pulling this conversation out of the gerrit review as I think it
>> needs more discussion.
>>
>> https://review.openstack.org/#/c/53499/
>>
>> I want to discuss the design decision to not use Jinja templates for the
>> heat templates. My arguments for using Jinja for heat as well are:
>>
>> 1. We have to rewrite all the template loading logic. The current
>> implementation is pretty simple but in order to make it production worthy
>> it will need to handle many more edge cases as we develop this feature.
>> The main argument I have heard against using the existing ENV is that the
>> path is hard coded. (This can and should be turned into a config flag)
>> 2. We are already using Jinja templates for config files so it will be
>> less confusing for a new person starting up. Why do these custom templates
>> go here but these over here? Having one place to override defaults makes
>> sense.
>> 3. Looking at the current heat templates I could easily see some areas
>> that could take advantage of being a real Jinja template, an admin could
>> create a base template and extend that for each different service and just
>> change a few values in each.
>> 4. The default templates could be packaged with trove (using the Jinja
>> PackageLoader) so the initial setup out of the box will work.
>>
>> If we go this route it would also be a good time to discuss the
>> organization of the templates. Currently the templates are just in
>>
>> - trove/templates/{data_store}.config.template
>> - trove/templates/{data_store}.heat.template
>>
>>
>> I suggest that we move this into a folder structure like so:
>>
>> - trove/template/{data_store}/config.template
>> - trove/template/{data_store}/heat.template
>> - trove/template/{data_store}/the_next.template
>>
>> Thanks!
>> Robert
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance] Support of v1 and v2 glance APIs in Nova

2013-10-29 Thread Eddie Sheffield

"John Garbutt"  said:

> Going back to Joe's comment:
>> Can both of these cases be covered by configuring the keystone catalog?
> +1
> 
> If both v1 and v2 are present, pick v2, otherwise just pick what is in
> the catalogue. That seems cool. Not quite sure how the multiple glance
> endpoints work in the keystone catalog, but it should work I assume.
> 
> We hard code nova right now, and so we probably want to keep that route too?

Nova doesn't use the catalog from Keystone when talking to Glance. There is a 
config value "glance_api_servers" which defines a list of Glance servers that 
gets randomized and cycled through. I assume that's what you're referring to 
with "we hard code nova." But currently there's nowhere in this path (internal 
nova to glance) where the keystone catalog is available.

I think some of the confusion may be that Glanceclient at the programmatic 
client level doesn't talk to keystone. That happens higher at the CLI 
level, which doesn't come into play here.

> From: "Russell Bryant" 
>> On 10/17/2013 03:12 PM, Eddie Sheffield wrote:
>>> Might I propose a compromise?
>>>
>>> 1) For the VERY short term, keep the config value and get the change 
>>> otherwise
>>> reviewed and hopefully accepted.
>>>
>>> 2) Immediately file two blueprints:
>>>- python-glanceclient - expose a way to discover available versions
>>>- nova - depends on the glanceclient bp and allowing autodiscovery of 
>>> glance
>>> version
>>> and making the config value optional (tho not deprecated / 
>>> removed)
>>
>> Supporting both seems reasonable.  At least then *most* people don't
>> need to worry about it and it "just works", but the override is there if
>> necessary, since multiple people seem to be expressing a desire to have
>> it available.
> 
> +1
> 
>> Can we just do this all at once?  Adding this to glanceclient doesn't
>> seem like a huge task.
> 
> I worry about us never getting the full solution, but it seems to have
> got complicated.

The glanceclient side is done, as far as allowing access to the list of 
available API versions on a given server. It's getting Nova to use this info 
that's a bit sticky.

> On 28 October 2013 15:13, Eddie Sheffield  
> wrote:
>> So...I've been working on this some more and hit a bit of a snag. The
>> Glanceclient change was easy, but I see now that doing this in nova will 
>> require
>> a pretty huge change in the way things work. Currently, the API version is
>> grabbed from the config value, the appropriate driver is instantiated, and 
>> calls
>> go through that. The problem comes in that the actual glance server isn't
>> communicated with until very late in the process. Nothing "sees" the servers 
>> at
>> the level where the driver is determined. Also there isn't a single glance 
>> server
>> but a list of them, and in the event of certain communication failures the 
>> list is
>> cycled through until success or a number of retries has passed.
>>
>> So to change this to auto configuring will require turning this upside down,
>> cycling through the servers at a higher level, choosing the appropriate 
>> driver
>> for that server, and handling retries at that same level.
>>
>> Doable, but a much larger task than I first was thinking.
>>
>> Also, I don't really want the added overhead of getting the api versions 
>> before
>> every call, so I'm thinking that going through the list of servers at 
>> startup and
>> discovering the versions then and caching that somehow would be helpful as 
>> well.
>>
>> Thoughts?
> 
> I do worry about that overhead. But with Joe's comment, does it not
> just boil down to caching the keystone catalog in the context?
> 
> I am not a fan of all the specific talk to glance code we have in
> nova, moving more of that into glanceclient can only be a good thing.
> For the XenServer integration, for efficiency reasons, we need glance
> to talk from dom0, so it has dom0 making the final HTTP call. So we
> would need a way of extracting that info from the glance client. But
> that seems better than having that code in nova.

I know in Glance we've largely taken the view that the client should be as thin 
and lightweight as possible so users of the client can make use of it however 
they best see fit. There was an earlier patch that would have moved the whole 
image service layer into glanceclient that was rejected. So I think there is a 
division in philosophies here as well.
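
On the overhead worry: the startup-time caching I had in mind is roughly
this (a sketch only -- discover_newest_version is a hypothetical helper,
not actual nova or glanceclient code):

    import itertools
    import random

    def build_server_cycle(api_servers):
        # Discover each server's newest supported API version once at
        # startup, cache it next to the server, then randomize and
        # cycle through the list just like the current code does.
        servers = [(host, discover_newest_version(host))
                   for host in api_servers]
        random.shuffle(servers)
        return itertools.cycle(servers)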

Eddie


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Support for external authentication (i.e. REMOTE_USER) in Havana

2013-10-29 Thread Fox, Kevin M
Has the case been considered where REMOTE_USER is used with authentication 
mechanisms where the username is an email address? It will have to keep the 
@domain part because that's the only thing that makes it unique.

Thanks,
Kevin

From: Álvaro López García [alvaro.lopez.gar...@cern.ch]
Sent: Tuesday, October 29, 2013 5:59 AM
To: OpenStack dev
Subject: [openstack-dev] [keystone] Support for external authentication (i.e. 
REMOTE_USER) in Havana

Hi there,

I've been working on this bug [1,2] related to the pluggable external
authentication support in Havana. For those not familiar with it,
Keystone can rely on the usage of the REMOTE_USER env variable, assuming
that the user has been authenticated upstream (by an httpd server). This
REMOTE_USER variable is supposed to store the username information that
Keystone is going to use.

In the Havana external authentication plugins, the REMOTE_USER variable
is *always* split by the "@" character, assuming that the @ is being
used as the domain separator (i.e. REMOTE_USER=username@domain).

Now there are two plugins available:

- ExternalDefault: Only the leftmost part of the REMOTE_USER after the
  split is considered. The domain information is obtained from the
  default domain configured in keystone.conf.

- ExternalDomain: The rightmost part is considered the domain, and the
  leftover is considered the username.

The change in [2] aims to solve this problem: ExternalDefault will not
split the username by an "@" since we are going to use the default
domain so we assume that no domain will be appended.
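
In code terms, the two behaviours are roughly the following (a sketch,
not the actual keystone plugin code):

    remote_user = environ['REMOTE_USER']

    # ExternalDomain: the rightmost '@' separates username and domain
    username, _, domain = remote_user.rpartition('@')

    # ExternalDefault with the fix in [2]: no split at all, so an
    # email-style name like 'alice@example.org' stays intact and the
    # default domain from keystone.conf is used
    username = remote_user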

However, this will work only if we are using a WSGI filter that is aware
of the semantics: the filter should know if ExternalDefault is used so
that the domain information is not appended, but append it if
ExternalDomain is used. Moreover, if somebody is using directly the
REMOTE_USER variable from Apache without any WSGI filter (for example
using X509 auth with mod_ssl and the SSLUsername directive [3]) the
REMOTE_USER will contain only the username and no domain at all.
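
For instance, with mod_ssl alone the relevant directives would be
something like this (SSLUsername and SSL_CLIENT_S_DN_CN are real mod_ssl
names; placement inside the keystone virtual host is illustrative):

    SSLEngine on
    SSLVerifyClient require
    SSLUsername SSL_CLIENT_S_DN_CN

so REMOTE_USER ends up holding the client certificate's CN, with no
domain part appended.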

Does anybody have any concerns about this? Should we pass down the
domain information by any other mean?

[1] https://bugs.launchpad.net/keystone/+bug/1211233
[2] https://review.openstack.org/#/c/50362/
[3] http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslusername
--
Álvaro López García  al...@ifca.unican.es
Instituto de Física de Cantabria http://alvarolopez.github.io
Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
Avda. de los Castros s/n
39005 Santander (SPAIN)
_
"Everyone knows that debugging is twice as hard as writing a program in
 the first place. So if you are as clever as you can be when you write it,
 how will you ever debug it?" -- Brian Kernighan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting canceled for today.

2013-10-29 Thread Peter Pouliot
Hi Everyone.

Many of the key individuals are preparing for the summit and will not be able to 
attend today, so it's best to postpone the meeting until after the summit.

p

Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive, Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Comments on Steve Baker's Proposal on HOT Software Config

2013-10-29 Thread Steven Hardy
On Tue, Oct 29, 2013 at 01:50:59PM +0100, Zane Bitter wrote:
> On 28/10/13 14:53, Steven Hardy wrote:
> >On Sun, Oct 27, 2013 at 11:23:20PM -0400, Lakshminaraya Renganarayana wrote:
> >>A few us at IBM studied Steve Baker's proposal on HOT Software
> >>Configuration. Overall the proposed constructs and syntax are great -- we
> >>really like the clean syntax and concise specification of components. We
> >>would like to propose a few minor extensions that help with better
> >>expression of dependencies among components and resources, and in-turn
> >>enable cross-vm coordination. We have captured our thoughts on this on the
> >>following Wiki page
> >>
> >>https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-ibm-response
> >
> >Thanks for putting this together.  I'll post inline below with cut/paste
> >from the wiki followed by my response/question:
> >
> >>E2: Allow usage of component outputs (similar to resources):
> >>There are fundamental differences between components and resources...
> >
> >So... lately I've been thinking this is not actually true, and that
> >components are really just another type of resource.  If we can implement
> >the software-config functionality without inventing a new template
> >abstraction, IMO a lot of the issues described in your wiki page no longer
> >exist.
> >
> >Can anyone provide me with a clear argument for what the "fundamental
> >differences" actually are?
> 
> Here's an argument: Component deployments exist within a server
> resource, so the dependencies don't work in the same way. The static
> part of the configuration has to happen before the server is
> created, but the actual runtime part is created after. So the
> dependencies are inherently circular.

So, good point, but I've been thinking about it from the other perspective:
- Server gets created
- SoftwareConfig is applied to server

So the dependencies need not be circular, you have a SoftwareConfig
resource, which references the Server and the SoftwareConfig definition.

I guess this links back to stevebaker's previous comment that the thing
applying the software config would need a hosted_on property rather than
OS::Nova::Server specifying a list of configs to apply.

> >My opinion is we could do the following:
> >- Implement software config "components" as ordinary resources, using the
> >   existing interfaces (perhaps with some enhancements to dependency
> >   declaration)
> >- Give OS::Nova::Server a components property, which simply takes a list of
> >   resources which describe the software configuration(s) to be applied
> 
> I think to overcome the problem described above, we would also need
> to create a third type of resource. So we'd have a Configuration, a
> Server and a Deployment. (In dependency terms, these are analogous
> to WaitConditionHandle, Server and WaitCondition, or possibly EIP,
> Server and EIPAssociation.) The deployment would reference the
> server and the configuration, you could pass parameters into it get
> attributes out of it, add explicit dependencies on it &c.

Yep, so that's what I've been thinking too. Angus and I were chatting about
it earlier, and it seems like aligning with the existing pattern, used by
EIPs and volumes, would potentially solve the problem.
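
In HOT terms that triple could look something like this (type and property
names purely illustrative at this point):

    resources:
      config_a:
        type: OS::Heat::SoftwareConfig
        properties:
          config: {get_file: configure_app.sh}

      server_a:
        type: OS::Nova::Server
        properties:
          image: fedora
          flavor: m1.small

      deployment_a:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: config_a}
          server: {get_resource: server_a}

The deployment is the only place where the config and the server meet, so
it carries the dependency, much like an EIPAssociation does.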

However, you can cut out the "ConfigAttachment/ConfigApplier" resource if
you move to the hosted_on pattern suggested by Steve.

> What I'm not clear on in this model is how much of the configuration
> needs to be built to go onto the server (in the UserData?) before
> the server is created, and how that would be represented in such a
> way as to inherently create the correct dependency relationship
> (i.e. get the _Server_ as well as the Deployment to depend on the
> configuration).

Yeah, so I think at some point, folks will have to make a choice about
their preferred CM tool, and map things in their environment, such that the
server gets built with the stuff required to e.g. install the puppet agent
into the instance, which then listens for the config from the config
applier (which would know how to push the config to a puppet master I
guess).

So your environment could be:

resource_registry:
OS::Heat::Server: OS::Heat::PuppetSlaveServer
OS::Heat::SoftwareConfig: OS::Heat::PuppetSoftwareConfig

OS::Heat::PuppetSlaveServer could just be a provider resource, which
creates a OS::Nova::Server and installs/configures the puppet client
agents (or chef/salt/whatever).

We could still maintain the status-quo by making the default to use
heat-cfntools, where the initial UserData configures the things needed to
make cfn-hup work, then cfn-hup polls for metadata updates which specify
the config to be applied.

The config you want to deploy could be defined via a (config tool
agnostic) OS::Heat::SoftwareConfig resource, and the server it's applied
to is either explicitly specified (via a parameter to the SoftwareConfig
resource), or derived via allowing constraints to be specified for
resources.

Re: [openstack-dev] [Heat] Comments on Steve Baker's Proposal on HOT Software Config

2013-10-29 Thread Georgy Okrokvertskhov
Hi Steve,

I am sorry for my confusing message.
Just for clarification, I am against adding new abstractions to the HOT
template. I just wanted to highlight that in Lakshminarayana's proposal there
are multiple steps which represent the same component in different stages.
This might be confusing, because in your initial proposal you use one
component section for the whole component description, if I am not mistaken.

Thanks
Georgy


On Tue, Oct 29, 2013 at 12:23 AM, Steven Hardy  wrote:

> On Mon, Oct 28, 2013 at 02:34:44PM -0700, Georgy Okrokvertskhov wrote:
> > I believe we had a discussion about difference between declarative
> approach
> > and workflows. A component approach is consistent with declarative format
> > as all actions\operations are hidden inside the service. If you want to
> use
> > actions and operations explicitly you will have to add a workflows
> specific
> > language to HOT format. You will need to have some conditions and other
> > control structures.
>
> Please don't confuse the component/resource discussion further by adding
> all these unrelated terms into the mix:
>
> - Resources are declarative, components aren't in any way more declarative
>
> - The resource/component discussion is unrelated to workflows, we're
>   discussing the template level interfaces.
>
> - Adding imperative control-flow interfaces to the template is the opposite
>   of a declarative approach
>
> > I also want to highlight that in most of examples on wiki pages there are
> > actions instead of components. Just check names: install_mysql,
> > configure_app.
>
> Having descriptions of the actions required to configure an application
> is not declarative.  Having a resource define the properties of the
> application is.
>
> > I think you revealed the major difference between resource and component.
> > While the first has a fixed API and Heat already knows how to work with
> it,
>
> A resource doesn't have a fixed API as such - it has flexible,
> user-definable
> interfaces (inputs/properties and outputs/attributes)
>
> > components are not determined and Heat does not know what this component
> > actually does.
>
> Heat doesn't need to know what a resource or component actually does, it
> needs to know what to do with the inputs/properties, and how to obtain the
> outputs/attributes.
>
> > I remember the first draft for Software components and it
> > had a specific examples for yum invocation for package installation. This
> > is a good example of declarative component. When scripts and recipes
> > appeared a component definition was blurred.
>
> This makes no sense, scripts defining platform specific installation
> methods are the exact opposite of a declarative component.
>
> The blurred component definition you refer to is a very good reason not to
> add a new abstraction IMO - we should focus on adding the functionality via
> the existing, well understood interfaces.
>
> Steve
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Support for external authentication (i.e. REMOTE_USER) in Havana

2013-10-29 Thread Alan Sill
+1

(except possibly for the environmental variables portion, which could and 
perhaps should be handled through provisioning).

On Oct 29, 2013, at 8:35 AM, David Chadwick  wrote:

> Whilst on this topic, perhaps we should also expand it to discuss support for 
> external authz as well. I know that Adam at Red Hat is working on adding 
> additional authz attributes as env variables so that these can be used for 
> authorising the user in keystone. It should be the same module in Keystone 
> that handles the incoming request, regardless of whether it has only the 
> remote user env variable, or has this plus a number of authz attribute env 
> variables as well. I should like this module to end by returning the identity 
> of the remote user in a standard internal keystone format (i.e. as a set of 
> identity attributes), which can then be passed to the next phase of 
> processing (which will include attribute mapping). In this way, we can have a 
> common processing pipeline for incoming requests, regardless of how the end 
> user was authenticated, ie. whether the request contains SAML assertions, env 
> variables, OpenID assertions etc. Different endpoints could be used for the 
> different incoming protocols, or a common endpoint could be used, with JSON 
> parameters containing the different protocol information.
> 
> regards
> 
> David
> 
> On 29/10/2013 12:59, Álvaro López García wrote:
>> Hi there,
>> 
>> I've been working on this bug [1,2] related to the pluggable external
>> authentication support in Havana. For those not familiar with it,
>> Keystone can rely on the usage of the REMOTE_USER env variable, assuming
>> that the user has been authenticated upstream (by an httpd server). This
>> REMOTE_USER variable is supposed to store the username information that
>> Keystone is going to use.
>> 
>> In the Havana external authentication plugins, the REMOTE_USER variable
>> is *always* split by the "@" character, assuming that the @ is being
>> used as the domain separator (i.e. REMOTE_USER=username@domain).
>> 
>> Now there are two plugins available:
>> 
>> - ExternalDefault: Only the leftmost part of the REMOTE_USER after the
>>   split is considered. The domain information is obtained from the
>>   default domain configured in keystone.conf.
>> 
>> - ExternalDomain: The rightmost part is considered the domain, and the
>>   leftover is considered the username.
>> 
>> The change in [2] aims to solve this problem: ExternalDefault will not
>> split the username by an "@" since we are going to use the default
>> domain so we assume that no domain will be appended.
>> 
>> However, this will work only if we are using a WSGI filter that is aware
>> of the semantics: the filter should know if ExternalDefault is used so
>> that the domain information is not appended, but append it if
>> ExternalDomain is used. Moreover, if somebody is using directly the
>> REMOTE_USER variable from Apache without any WSGI filter (for example
>> using X509 auth with mod_ssl and the SSLUsername directive [3]) the
>> REMOTE_USER will contain only the username and no domain at all.
>> 
>> Does anybody have any concerns about this? Should we pass down the
>> domain information by any other mean?
>> 
>> [1] https://bugs.launchpad.net/keystone/+bug/1211233
>> [2] https://review.openstack.org/#/c/50362/
>> [3] http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslusername
>> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Proposal for new heat-core member

2013-10-29 Thread Jeff Peeler

On 10/25/2013 03:12 PM, Steven Dake wrote:

I would like to propose Randall Burt for Heat Core.
Please have a vote +1/-1 and take into consideration: 
https://wiki.openstack.org/wiki/Heat/CoreTeam


[1]http://russellbryant.net/openstack-stats/heat-reviewers-180.txt 

[2]http://russellbryant.net/openstack-stats/heat-reviewers-30.txt 


+1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Network topologies

2013-10-29 Thread Tim Schnell
Hi Edgar,

It seems like this blueprint is more about building an API that
manages a network topology than one that needs to build out the
dependencies between resources to help create the network topology. If we
are talking about just an API to "save", "duplicate", and "share" these
network topologies then I would agree that this is not something that Heat
currently does or should do necessarily.

I have been focusing primarily on front-end work for Heat, so I apologize
if these questions have already been answered. How is this API related to
the existing network topology in Horizon? The existing network topology
can already define the relationships and dependencies using Neutron, I'm
assuming, so there is no apparent need to use Heat to gather this
information. I'm a little confused as to the scope of the discussion; is
that something that you are potentially interested in changing?

Steve, Clint and Zane can better answer whether or not Heat wants to be in
the business of managing existing network topologies but from my
perspective I tend to agree with your statement that if you needed Heat to
help describe the relationships between network resources then that might
be duplicated effort, but if you don't need Heat to do that then this blueprint
belongs in Neutron.

Thanks,
Tim





On 10/29/13 1:32 AM, "Steven Hardy"  wrote:

>On Mon, Oct 28, 2013 at 01:19:13PM -0700, Edgar Magana wrote:
>> Hello Folks,
>> 
>> Thank you Zane, Steven and Clint for your input.
>> 
>> Our main goal in this BP is to provide networking users such as Heat (we
>> consider it as a neutron user) a better and consolidated network
>>building
>> block in terms of an API that you could use for orchestration of
>> application-driven requirements. This building block does not add any
>> "intelligence" to the network topology because it does not have it and
>> this is why I think this BP is different from the work that you are
>>doing
>> in Heat.
>
>So how do you propose to handle dependencies between elements in the
>topology, e.g. where things need to be created/deleted in a particular
>order, or where one resource must be in a particular state before another
>can be created?
>
>> The network topologies BP is not related to the Neutron Network Service
>> Insertion BP:
>> 
>>https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion
>>-c
>> haining-steering
>
>So I wasn't saying they were related, only that they both, arguably, may
>have some scope overlap with what Heat is doing.
>
>> I do agree with Steven that the insertion work adds "intelligence"
>> (explicit management of dependencies, state and workflow) to the network
>> orchestration, simply because the user will need to know the insertion
>> mechanism and dependencies between Network Advanced Services. That work
>> is more in the Heat space than the BP that I am proposing, but that is
>> just my opinion.
>
>This seems a good reason to leverage the work we're doing rather than
>reinventing it.  I'm not arguing that Heat should necessarily be the
>primary interface to such functionality, only that Heat could (and
>possibly
>should) be used to do the orchestration aspects.
>
>> However, is there a session where I can discuss this BP with you guys?
>> The session that I proposed in Neutron was rejected because it was
>> considered by the PTL to be overlapping work with the Heat goals,
>> therefore I wanted to know if you want to discuss it, or whether I should
>> just go ahead and start the implementation. I do still believe it can be
>> easily implemented in Neutron and then exposed to Heat, but I am really
>> looking forward to having a broader discussion.
>
>I don't think we have any sessions directly related to Neutron, but we are
>definitely interested in discussing this (and other Neutron BPs which may
>have integration points requiring orchestration).
>
>I suggest we have an informal breakout session with those interested on
>Tuesday or Wednesday, or it could be a topic which Steve Baker may
>consider
>for this placeholder session:
>
>http://summit.openstack.org/cfp/details/360
>
>Steve
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Templates in Trove

2013-10-29 Thread Denis Makogon
Robert, I also have thoughts about templates.

Your suggestion is rather complex. Let me explain why that is:
with new datastore support you would have to update both PackageLoader and
FilesystemLoader with a new filesystem path and package path. I would prefer
an easier configuration and would store the templates this way:
 - templates/configuration/{datastore}.config.template;
 - templates/heat/{datastore}.heat.template.

Heat templates will stay static until instance configuration in trove
becomes super-complex, like Savanna (Hadoop on OpenStack).

What about jinja - ok, I agree to use it, but (!!!) we would not use it
for heat template rendering, because the templates are static. Trove is not so
complex in instance configuration, which is why it doesn't need to
generate/modify heat templates on the fly.

Please take a look at this one https://review.openstack.org/#/c/54315/


2013/10/29 Robert Myers 

> I'm pulling this conversation out of the gerrit review as I think it needs
> more discussion.
>
> https://review.openstack.org/#/c/53499/
>
> I want to discuss the design decision to not use Jinja templates for the
> heat templates. My arguments for using Jinja for heat as well are:
>
> 1. We have to rewrite all the template loading logic. The current
> implementation is pretty simple but in order to make it production worthy
> it will need to handle many more edge cases as we develop this feature.
> The main argument I have heard against using the existing ENV is that the
> path is hard coded. (This can and should be turned into a config flag)
> 2. We are already using Jinja templates for config files so it will be
> less confusing for a new person starting up. Why do these custom templates
> go here but these over here? Having one place to override defaults makes
> sense.
> 3. Looking at the current heat templates I could easily see some areas
> that could take advantage of being a real Jinja template, an admin could
> create a base template and extend that for each different service and just
> change a few values in each.
> 4. The default templates could be packaged with trove (using the Jinja
> PackageLoader) so the initial setup out of the box will work.
>
> If we go this route it would also be a good time to discuss the
> organization of the templates. Currently the templates are just in
>
> - trove/templates/{data_store}.config.template
> - trove/templates/{data_store}.heat.template
>
>
> I suggest that we move this into a folder structure like so:
>
> - trove/template/{data_store}/config.template
> - trove/template/{data_store}/heat.template
> - trove/template/{data_store}/the_next.template
>
> Thanks!
> Robert
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-29 Thread Henry Gessau
Lots of great info and discussion going on here.

One additional thing I would like to mention is regarding PF and VF usage.

Normally VFs will be assigned to instances, and the PF will either not be
used at all, or maybe some agent in the host of the compute node might have
access to the PF for something (management?).

There is a neutron design track around the development of "service VMs".
These are dedicated instances that run neutron services like routers,
firewalls, etc. It is plausible that a service VM would like to use PCI
passthrough and get the entire PF. This would allow it to have complete
control over a physical link, which I think will be wanted in some cases.

-- 
Henry

On Tue, Oct 29, at 10:23 am, Irena Berezovsky  wrote:

> Hi,
> 
> I would like to share some details regarding the support provided by
> Mellanox plugin. It enables networking via SRIOV pass-through devices or
> macvtap interfaces. The plugin is available here:
> https://github.com/openstack/neutron/tree/master/neutron/plugins/mlnx.
> 
> To support both PCI pass-through device and macvtap interface types of
> vNICs, we set neutron port profile:vnic_type according to the required VIF
> type and then use the created port to ‘nova boot’ the VM.
> 
> To overcome the missing scheduler awareness for PCI devices, which was not
> part of the Havana release yet, we have an additional service (embedded
> switch Daemon) that runs on each compute node.
> 
> This service manages the SRIOV resource allocation, answers vNIC
> discovery queries and applies VLAN/MAC configuration using standard Linux
> APIs (code is here: https://github.com/mellanox-openstack/mellanox-eswitchd
> ). The embedded switch Daemon serves as a glue layer between the VIF Driver
> and the Neutron Agent.
> 
> In the Icehouse release, when SRIOV resource allocation is already part of
> Nova, we plan to eliminate the need for the embedded switch daemon service.
> So what is left to figure out is how to tie the neutron port to the PCI
> device and invoke the networking configuration.
> 
>  
> 
> In our case what we have is actually the Hardware VEB, which is not programmed
> via either 802.1Qbg or 802.1Qbh, but configured locally by the Neutron Agent.
> We also support both Ethernet and InfiniBand as the physical network L2
> technology, which means that we apply different configuration commands to set
> the configuration on the VF.
> 
>  
> 
> I guess what we have to figure out is how to support the generic case for
> PCI device networking: the HW VEB, 802.1Qbg and 802.1Qbh cases.
> 
>  
> 
> BR,
> 
> Irena
> 
>  
> 
> From: Robert Li (baoli) [mailto:ba...@cisco.com]
> Sent: Tuesday, October 29, 2013 3:31 PM
> To: Jiang, Yunhong; Irena Berezovsky; prashant.upadhy...@aricent.com;
> chris.frie...@windriver.com; He, Yongli; Itzik Brown
> Cc: OpenStack Development Mailing List; Brian Bowen (brbowen); Kyle
> Mestery (kmestery); Sandhya Dasu (sadasu)
> Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network 
> support
> 
>  
> 
> Hi Yunhong,
> 
>  
> 
> I haven't looked at Mellanox in much detail. I think that we'll get more
> details from Irena down the road. Regarding your question, I can only answer
> based on my experience with Cisco's VM-FEX. In a nutshell:
> 
>  -- a vNIC is connected to an external switch. Once the host is booted
> up, all the PFs and VFs provisioned on the vNIC will be created, as well as
> all the corresponding ethernet interfaces.
> 
>  -- As far as Neutron is concerned, a neutron port can be associated
> with a VF. One way to do so is to specify this requirement in the --nic
> option, providing information such as:
> 
>. PCI alias (this is the same alias as defined in your nova
> blueprints)
> 
>. direct pci-passthrough/macvtap
> 
>. port profileid that is compliant with 802.1Qbh
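> 
> For example, something along these lines (all of the --nic suboption
> names here are hypothetical, just to illustrate the proposal):
> 
>     nova boot --flavor m1.large --image fedora \
>       --nic net-id=<net-uuid>,pci-alias=a1,vnic-type=direct,port-profile=my-port-profile \
>       vm1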
> 
>  -- similar to how you translate the nova flavor with PCI requirements
> to PCI requests for scheduling purposes, Nova API (the nova api component)
> can translate the above into PCI requests as well. I can give
> more detail later on this. 
> 
>  
> 
> Regarding your last question, since the vNIC is already connected with the
> external switch, the vNIC driver will be responsible for communicating the
> port profile to the external switch. As you have already known, libvirt
> provides several ways to specify a VM to be booted up with SRIOV. For
> example, in the following interface definition: 
> 
>   <interface type='hostdev' managed='yes'>
> 
>     <source>
> 
>       <address type='pci' domain='0x0000' bus='0x09' slot='0x00'
>                function='0x01'/>
> 
>     </source>
> 
>     <virtualport type='802.1Qbh'>
> 
>       <parameters profileid='my-port-profile'/>
> 
>     </virtualport>
> 
>   </interface>
> 
>  
> 
> The SRIOV VF (bus 0x09, VF 0x01) will be allocated, and the port profile 
> 'my-port-profile' will be used to provision this VF. Libvirt will be 
> responsible for invoking the vNIC driver to configure this VF with the port 
> profile my-port-profile. The driver will talk to the external switch using 
> the 802.1qbh standard.

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-29 Thread Thomas Spatzier
Hi all,

I have read through all the good discussion around software orchestration and
want to share my view. I see that there are two threads - this one and [1]
- but I am just replying to the one with the more generic topic line.

So on the recent twist to have software components implemented as
resources, I see some pros and cons. I am not actually decided yet what I
like most, so just shedding some thoughts here.
The good thing it would bring is definitely that it would give us a lot of
the things that Lakshmi has asked for in [1] (e.g. feed in properties,
get_attr, state) for free, and it would keep the HOT language smaller.

One of the big differences between the way resources are used today and
what components are meant to be is that components have the semantics of
being declared once and potentially used in many places in a template,
whereas a resource results in one instance at runtime. I think this is a key
issue that needs to be addressed.

One possibility would be the "SoftwareComponent - Deployment - Server"
resource triple that Zane brought up on the other thread. Or the original
component concept ...

With the latter one, the other open issue is how the binding gets defined,
and there are some debates about it. In summary, I think the following
options have been outlined:

(1) a components section with the server
(2) a hosted_on link with the component
(3) some explicit binding object

(1) seems doable, but has the state dependency impacts that Steve Baker
outlined in a recent post. (2) would solve the state dependency problem, but has
the problem that the re-usable thing (the component) points explicitly to
its host. (3) sounds like Zane's "SoftwareComponent - Deployment - Server"
idea.

At the moment I am inclined to (3) because it actually makes the
distinction between a component declaration and its use ("Deployment") very
clear. And I think this would have to be done in any case - whether
software components are resources or a new concept.
Originally, I actually brought up hosted_on in a proposal of my own because
I am coming from the TOSCA world where we have a HostedOn relationship type.
The difference, though, is that in TOSCA we have relationships as objects of
their own, so we avoid "hardcoding" the host in the software component object
by having association objects instead. Again, this seems to go in the
direction of option (3).

BTW: +10 on Angus' request for big whiteboards at the summit!

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-October/017540.html

Regards,
Thomas

Angus Salkeld  wrote on 29.10.2013 02:22:22:
> From: Angus Salkeld 
> To: openstack-dev@lists.openstack.org,
> Date: 29.10.2013 02:25
> Subject: Re: [openstack-dev] [Heat] HOT Software configuration proposal
>
> On 28/10/13 16:26 -0700, Clint Byrum wrote:
>
> [snip]
>
> >> >
> >> My components proposals had no hosted_on, but I've been thinking about
> >> the implications of implementing software configuration as resources,
> >> and one of the natural consequences might be that hosted_on is the
best
> >> way of establishing the relationship with compute and its
> >> configurations. Let me elaborate.
> >>
> >> Lets say that Heat has resource types for software configuration, with
> >> the following behaviours:
> >> * like other resources, a config resource goes into CREATE IN_PROGRESS
> >> as soon as its dependencies are satisfied (these dependencies may be
> >> values signalled from other config resources)
> >> * a config resource goes to state CREATE COMPLETE when it receives a
> >> signal that configuration on the compute resource is complete (by some
> >> mechanism; wait condition, marconi message, whatevs)
> >> * the relationship between config resources and compute resources are
> >> achieved with existing intrinsic functions (get_resource, get_attr)
> >>
> >> This lifecycle behaviour means that a configuration resource can only
> >> run on a single compute resource, and that relationship needs to be
> >> established somehow. Config resources will have a quirk in that they
> >> need to provide 2 sources of configuration data at different times:
> >> 1) cloud-init config-data (or other boot-only mechanism), which must
be
> >> available when the compute resource goes into CREATE IN_PROGRESS
> >> 2) the actual configuration data (oac metadata, puppet manifest, chef
> >> recipe) which the compute resource needs to be able to fetch and
execute
> >> whenever it becomes available.
> >>
> >> The data in 1) implies that the compute needs to depend on the config,
> >> but then all concurrency is lost (this won't matter for a
> >> cloud-init-only config resource).  Either way, the data for 1) will
need
> >> to be available when the config resource is still in state INIT
> >> COMPLETE, which may impose limitations on how that is defined (ie
> >> get_resource, get_attr not allowed).
> >>
> >> So, 2 concrete examples for handling config/compute dependencies:
> >>
> >> hosted_on
> >> -
> >> resources:
> >>   configA

[openstack-dev] [Trove] Templates in Trove

2013-10-29 Thread Robert Myers
I'm pulling this conversation out of the gerrit review as I think it needs
more discussion.

https://review.openstack.org/#/c/53499/

I want to discuss the design decision to not use Jinja templates for the
heat templates. My arguments for using Jinja for heat as well are:

1. We have to rewrite all the template loading logic. The current
implementation is pretty simple, but in order to make it production worthy
it will need to handle many more edge cases as we develop this feature.
The main argument I have heard against using the existing ENV is that the
path is hard coded. (This can and should be turned into a config flag)
2. We are already using Jinja templates for config files so it will be less
confusing for a new person starting up. Why do these custom templates go
here but these over here? Having one place to override defaults makes sense.
3. Looking at the current heat templates I could easily see some areas that
could take advantage of being a real Jinja template, an admin could create
a base template and extend that for each different service and just change
a few values in each.
4. The default templates could be packaged with trove (using the Jinja
PackageLoader) so the initial setup out of the box will work (see the
sketch below).
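
For illustration, the loader setup I have in mind for point 4 is roughly
this (the override path and template name are examples; the jinja2 APIs
are real):

    import jinja2

    loader = jinja2.ChoiceLoader([
        # operator overrides win ...
        jinja2.FileSystemLoader('/etc/trove/templates'),
        # ... and the packaged defaults are the fallback
        jinja2.PackageLoader('trove', 'templates'),
    ])
    env = jinja2.Environment(loader=loader)
    template = env.get_template('mysql/heat.template')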

If we go this route it would also be a good time to discuss the organization
of the templates. Currently the templates are just in

- trove/templates/{data_store}.config.template
- trove/templates/{data_store}.heat.template


I suggest that we move this into a folder structure like so:

- trove/template/{data_store}/config.template
- trove/template/{data_store}/heat.template
- trove/template/{data_store}/the_next.template

Thanks!
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object status and admin_state_up

2013-10-29 Thread Avishay Balderman
Hi
It feels like a driver specific topic.
So I am not sure we can come to a generic solution in the lbaas core code.
Thanks
Avishay

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Tuesday, October 29, 2013 11:19 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] Object status and admin_state_up

Hi folks,

Currently there are two attributes of vips/pools/members that represent a 
status: 'status' and 'admin_state_up'.

The first one is used to represent deployment status and can be PENDING_CREATE, 
ACTIVE, PENDING_DELETE, ERROR.
We also have admin_state_up which could be True or False.

I'd like to know your opinion on how to change 'status' attribute based on 
admin_state_up changes.
For instance, if admin_state_up is updated to be False, how do you think 
'status' should change?

Also, speaking of the reference implementation (HAProxy), changing vip or pool 
admin_state_up to False effectively destroys the balancer (undeploys it), while 
the objects remain in ACTIVE state.
There are two options to fix this discrepancy:
1) Change status of vip/pool to PENDING_CREATE if admin_state_up changes to 
False
2) Don't destroy the load balancer and use HAProxy's capability to disable 
the frontend and backend while leaving vip/pool in ACTIVE state (see the 
sketch below)
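
For reference, option 2 maps naturally onto HAProxy's 'disabled' keyword.
A sketch (section and server names illustrative):

    frontend vip_1
        bind 10.0.0.5:80
        default_backend pool_1
        disabled    # admin_state_up = False: stop serving, keep config

    backend pool_1
        balance roundrobin
        server member_1 10.0.0.10:80 check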

Please share your opinion.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Comments on Steve Baker's Proposal on HOT Software Config

2013-10-29 Thread Lakshminaraya Renganarayana

I am wondering about the execution semantics of these components or software
config resources with respect to restart or re-execution. Coming from an
iterative development and deployment angle, I would like these software
config resources to be idempotent. What are your thoughts?
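
By idempotent I mean safe to re-run: applying a config that is already in
place should be a no-op. A trivial shell sketch of the idea, not tied to
any particular CM tool:

    # only installs when not already present, so re-execution is harmless
    rpm -q mysql-server >/dev/null 2>&1 || yum install -y mysql-server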

Thanks,
LN

Steven Hardy  wrote on 10/29/2013 03:23:14 AM:

> From: Steven Hardy 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 10/29/2013 03:28 AM
> Subject: Re: [openstack-dev] [Heat] Comments on Steve Baker's
> Proposal on HOT Software Config
>
> On Mon, Oct 28, 2013 at 02:34:44PM -0700, Georgy Okrokvertskhov wrote:
> > I believe we had a discussion about difference between declarative approach
> > and workflows. A component approach is consistent with declarative format
> > as all actions\operations are hidden inside the service. If you want to use
> > actions and operations explicitly you will have to add a workflow-specific
> > language to HOT format. You will need to have some conditions and other
> > control structures.
>
> Please don't confuse the component/resource discussion further by adding
> all these unrelated terms into the mix:
>
> - Resources are declarative, components aren't in any way more declarative
>
> - The resource/component discussion is unrelated to workflows, we're
>   discussing the template level interfaces.
>
> - Adding imperative control-flow interfaces to the template is the opposite
>   of a declarative approach
>
> > I also want to highlight that in most of examples on wiki pages there are
> > actions instead of components. Just check names: install_mysql,
> > configure_app.
>
> Having descriptions of the actions required to configure an application
> is not declarative.  Having a resource define the properties of the
> application is.
>
> > I think you revealed the major difference between resource and component.
> > While the first has a fixed API and Heat already knows how to work with it,
>
> A resource doesn't have a fixed API as such - it has flexible, user-definable
> interfaces (inputs/properties and outputs/attributes)
>
> > components are not determined and Heat does not know what this component
> > actually does.
>
> Heat doesn't need to know what a resource or component actually does, it
> needs to know what to do with the inputs/properties, and how to obtain the
> outputs/attributes.
>
> > I remember the first draft for Software components and it
> > had specific examples for yum invocation for package installation. This
> > is a good example of a declarative component. When scripts and recipes
> > appeared the component definition was blurred.
>
> This makes no sense, scripts defining platform specific installation
> methods are the exact opposite of a declarative component.
>
> The blurred component definition you refer to is a very good reason not to
> add a new abstraction IMO - we should focus on adding the functionality via
> the existing, well understood interfaces.
>
> Steve
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

