Re: [openstack-dev] [Nova][Neutron] Neutron + Nova + OVS security group fix

2014-03-25 Thread Akihiro Motoki
Hi Nachi and the teams,

(2014/03/26 9:57), Salvatore Orlando wrote:
> I hope we can sort this out on the mailing list IRC, without having to 
> schedule emergency meetings.
>
> Salvatore
>
> On 25 March 2014 22:58, Nachi Ueno <na...@ntti3.com> wrote:
>
> Hi Nova, Neutron team,
>
> I would like to discuss the issue of the Neutron + Nova + OVS security group fix.
> We had a discussion on IRC today, but the issue is complicated, so we 
> will have
> a conf call tomorrow at 17:00 UTC (10 AM PDT) in #openstack-neutron.
>
> (I'll put the conf call information in IRC)
>
>
> thanks, but I'd prefer you discuss the matter on IRC.
> I won't be available at that time and having IRC logs on eavesdrop will allow 
> me to catch up without having to ask people or waiting for minutes on the 
> mailing list.

I can't join the meeting either; it will be midnight here.

>
> <-- Please let me know if this time won't work with you.
>
> Bug Report
> https://bugs.launchpad.net/neutron/+bug/1297469
>
> Background of this issue:
> The ML2 + OVSDriver + IptablesBasedFirewall combination is the default
> plugin setting in Neutron.
> In this case, we need special handling for the VIF: because Open vSwitch
> does not support iptables, we are
> using a Linux bridge in front of the Open vSwitch bridge. We call this the hybrid 
> driver.
>
>
> The hybrid solution in Neutron has been around for such a long time that I 
> would hardly call it a "special handling".
> To summarize, the VIF is plugged into a linux bridge, which has another leg 
> plugged in the OVS integration bridge.
>
> In another discussion, we generalized the Nova-side VIF plugging into
> the libvirt GenericVIFDriver.
> The idea is to let Neutron tell the GenericDriver the VIF plugging
> configuration details, and the GenericDriver
> takes care of it.
>
>
> The downside of the generic driver is that so far it's assuming local 
> configuration values are sufficient to correctly determine VIF plugging.
> The generic VIF driver will use the hybrid driver if get_firewall_required is 
> true. And this will happen if the firewall driver is anything different from 
> the NoOp driver.
> This was uncovered by a devstack commit (1143f7e). When I previously 
> discussed this issue with the people involved, I was under the impression 
> that the devstack patch introduced the problem.
> Apparently the Generic VIF driver is not, at the moment, taking hints from 
> neutron regarding the driver to use, and therefore, from what I gather, makes 
> a decision based on nova conf flags only.
> So a quick fix would be to tell the Generic VIF driver to always use hybrid 
> plugging when neutron is enabled (which can be gathered by nova conf flags).
> This will fix the issue for ML2, but will either break or insert an 
> unnecessary extra hop for other plugins.

When the generic VIF driver was introduced, the OVS VIF driver and the hybrid VIF
driver were considered the same, since both plug into OVS and the hybrid driver is
implemented as a variation of the OVS driver, but things are not as simple as that
first thought. The hybrid driver solution has been around for such a long time that
IMO the hybrid VIF driver should be considered distinct from the OVS VIF driver.
I am starting to think VIF_TYPE_OVS_HYBRID is a good way forward, as Salvatore
mentioned below.

Another point to be discussed is whether passing VIF security attributes will work 
from now on.
Even when Neutron security groups are enabled, do we still need some port 
security mechanism
(anti-spoofing, etc.) on the nova-compute side (such as libvirt nwfilter) or not?

>
>
> Unfortunately, the HybridDriver was removed before the GenericDriver was ready
> for security groups.
>
>
> The drivers were marked for deprecation in Havana, and if we thought the 
> GenericDriver was not good for neutron security groups we had enough time to 
> scream.
>
> This makes the ML2 + OVSDriver + IptablesBasedFirewall combination 
> non-functional.
> We were working on the real fix, but we can't make it until the Icehouse
> release due to design discussions [1].
>
> # Even the neutron-side patch isn't merged yet.
>
> So we are proposing a workaround fix on the Nova side.
> In this fix, we are adding a special version of the GenericVIFDriver
> which can work with this combination.
> There are two points to this new driver.
> (1) It prevents setting conf.filtername. Because we should use the
> NoopFirewallDriver, conf.filtername needs to be None
> when we use it.
> (2) It uses plug_ovs_hybrid and unplug_ovs_hybrid by forcing
> get_firewall_required to True.

IIUC, the original intention of get_firewall_required() is to control
whether nwfilter is enabled or not, not to control hybrid plugging.
The plan is to change get_firewall_required() to look at a binding attribute
(binding:capability:port_filter, or binding:vif_security:iptables_required
if I use the concept discussed so far).
What we need is a way to determine whether hybrid plugging is required or not.
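
(As a concrete illustration of that last point, here is a minimal, hypothetical
sketch of a generic VIF driver that chooses hybrid plugging from the port's
binding VIF type rather than from local firewall configuration. It is not Nova
code; the VIF_TYPE_OVS_HYBRID constant and the vif dictionary layout are
assumptions made purely for illustration.)

# Hypothetical sketch only -- not the real nova/virt/libvirt/vif.py code.
# It shows a VIF driver choosing between plain OVS and hybrid
# (Linux bridge + OVS) plugging from the Neutron port binding, instead of
# from local nova.conf firewall flags.

VIF_TYPE_OVS = 'ovs'
VIF_TYPE_OVS_HYBRID = 'ovs_hybrid'   # assumed constant, not defined by ML2 today


class SketchGenericVIFDriver(object):
    def plug(self, instance, vif):
        vif_type = vif.get('type')
        if vif_type == VIF_TYPE_OVS_HYBRID:
            # Security groups are implemented with iptables on a Linux
            # bridge in front of br-int, so take the hybrid path and do
            # not set any libvirt nwfilter (filtername).
            self.plug_ovs_hybrid(instance, vif)
        elif vif_type == VIF_TYPE_OVS:
            self.plug_ovs(instance, vif)
        else:
            raise NotImplementedError('unsupported vif type: %s' % vif_type)

    def plug_ovs(self, instance, vif):
        print('plain OVS plugging for vif %s' % vif['id'])

    def plug_ovs_hybrid(self, instance, vif):
        print('Linux bridge + OVS hybrid plugging for vif %s' % vif['id'])


# Example: SketchGenericVIFDriver().plug(None, {'id': 'port-1',
#                                               'type': VIF_TYPE_OVS_HYBRID})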

Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-25 Thread Clint Byrum
Excerpts from Dolph Mathews's message of 2014-03-25 19:01:17 -0700:
> On Tue, Mar 25, 2014 at 5:50 PM, Russell Bryant  wrote:
> 
> > We discussed the deprecation of the v2 keystone API in the cross-project
> > meeting today [1].  This thread is to recap and bring that discussion to
> > some consensus.
> >
> > The issue is that Keystone has marked the v2 API as deprecated in Icehouse:
> >
> > https://blueprints.launchpad.net/keystone/+spec/deprecate-v2-api
> >
> > If you use the API, deployments will get this in their logs:
> >
> > WARNING keystone.openstack.common.versionutils [-] Deprecated: v2 API is
> > deprecated as of Icehouse in favor of v3 API and may be removed in K.
> >
> > The deprecation status is reflected in the API for end users, as well.
> > For example, from the CLI:
> >
> >   $ keystone discover
> >   Keystone found at http://172.16.12.38:5000/v2.0
> > - supports version v2.0 (deprecated) here
> > http://172.16.12.38:5000/v2.0/
> >
> > My proposal is that this deprecation be reverted.  Here's why:
> >
> > First, it seems there isn't a common use of "deprecated".  To me,
> > marking something deprecated means that the deprecated feature:
> >
> >  - has been completely replaced by something else
> >
> 
> >  - end users / deployers should take action to migrate to the
> >new thing immediately.
> >
> 
> >  - The project has provided a documented migration path
> 
> >  - the old thing will be removed at a specific date/release
> >
> 
> Agree on all points. Unfortunately, we have yet to succeed on the
> documentation front:
> 
> 
> https://blueprints.launchpad.net/keystone/+spec/document-v2-to-v3-transition
> 
> >
> > The problem with the above is that most OpenStack projects do not
> > support the v3 API yet.
> >
> > From talking to Dolph in the meeting, it sounds like the intention is:
> >
> >  - fully support v2, just don't add features
> >
> >  - signal to other projects that they should be migrating to v3
> >
> 
> Above all else, this was our primary goal: to raise awareness about our
> path forward, and to identify the non-obvious stakeholders that we needed
> to work with in order to drop support for v2. With today's discussion as
> evidence, I think we've succeeded in that regard :)
> 
> >
> > Given that intention, I believe the proper thing to do is to actually
> > leave the API marked as fully supported / stable.  Keystone should be
> > working with other OpenStack projects to migrate them to v3.  Once that
> > is complete, deprecation can be re-visited.
> >
> 
> Happy to!
> 
> Revert deprecation of the v2 API: https://review.openstack.org/#/c/82963/
> 
> Although I'd prefer to apply this patch directly to milestone-proposed, so
> we can continue into Juno with the deprecation in master.
> 

As somebody maintaining a few master-chasing CD clouds, I'd like to ask
you to please stop the squawking about deprecation until it has a definite
replacement and most if not all OpenStack core projects are using it.

1 out of every 2 API calls on these clouds produces one of these errors
in Keystone. That is just pointless. :-P
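
(An aside for operators in the same position: until this is settled, the noise
can be muted with a standard logging filter attached to the logger named in
the warning above. A minimal sketch only, not an endorsement of hiding
deprecation warnings in general; the logger name is taken from the quoted log
line and may differ per deployment.)

import logging


class DropV2DeprecationWarnings(logging.Filter):
    """Drop the Icehouse 'v2 API is deprecated' warnings (sketch only)."""

    def filter(self, record):
        # Returning False suppresses the record.
        return 'v2 API is deprecated' not in record.getMessage()


logging.getLogger('keystone.openstack.common.versionutils').addFilter(
    DropV2DeprecationWarnings())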

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SR-IOV and IOMMU check

2014-03-25 Thread Gouzongmei
Hi Yi Yang,

I agree with you: IOMMU and SR-IOV need to be checked beforehand.

I think the check should happen before booting an instance with a PCI flavor, 
that is, when the flavor requests normal PCI cards or SR-IOV cards, just 
as when you find pci_requests in the instance system_metadata.

The details are beyond my current knowledge.

Hope this helps.
From: Yang, Yi Y [mailto:yi.y.y...@intel.com]
Sent: Wednesday, March 26, 2014 10:51 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] SR-IOV and IOMMU check

Hi, all

Currently OpenStack can support SR-IOV device pass-through (at least there are 
some patches for this), but the prerequisite is that both IOMMU and SR-IOV are 
enabled correctly. There does not seem to be a robust way to check this in 
OpenStack. I have implemented a way to do this and hope it can be committed 
upstream; it can help find the issue beforehand, instead of letting KVM 
report "no IOMMU found" only once the VM is started. I didn't find an 
appropriate place to put this check. Do you think it is necessary? Where should 
it go? I welcome your advice, and thank you in advance.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] SR-IOV and IOMMU check

2014-03-25 Thread Yang, Yi Y
Hi, all

Currently OpenStack can support SR-IOV device pass-through (at least there are 
some patches for this), but the prerequisite is that both IOMMU and SR-IOV are 
enabled correctly. There does not seem to be a robust way to check this in 
OpenStack. I have implemented a way to do this and hope it can be committed 
upstream; it can help find the issue beforehand, instead of letting KVM 
report "no IOMMU found" only once the VM is started. I didn't find an 
appropriate place to put this check. Do you think it is necessary? Where should 
it go? I welcome your advice, and thank you in advance.
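
(For illustration, a minimal sketch of the kind of host-side pre-check being
described, using only standard Linux sysfs paths: an enabled IOMMU populates
/sys/kernel/iommu_groups, and SR-IOV capable PCI devices expose
sriov_totalvfs. Where such a check should live in OpenStack is exactly the
open question above.)

import glob
import os


def iommu_enabled():
    """IOMMU is active when the kernel has populated at least one group."""
    return len(glob.glob('/sys/kernel/iommu_groups/*')) > 0


def sriov_capable_devices():
    """Return PCI addresses that advertise SR-IOV virtual functions."""
    devices = []
    for path in glob.glob('/sys/bus/pci/devices/*/sriov_totalvfs'):
        with open(path) as f:
            total_vfs = int(f.read().strip())
        if total_vfs > 0:
            devices.append(os.path.basename(os.path.dirname(path)))
    return devices


if __name__ == '__main__':
    if not iommu_enabled():
        raise SystemExit('IOMMU is not enabled (check intel_iommu=on / amd_iommu=on)')
    print('SR-IOV capable devices: %s' % sriov_capable_devices())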
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-25 Thread Jay Faulkner

On 3/25/2014 1:50 PM, Matt Wagner wrote:

This would argue to me that the easiest thing for Ceilometer might be
to query us for IPMI stats, if the credential store is pluggable.
"Fetch these bare metal statistics" doesn't seem too off-course for
Ironic to me. The alternative is that Ceilometer and Ironic would both
have to be configured for the same pluggable credential store. 


There is already a blueprint with a proposed patch here for Ironic to do 
the querying: 
https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.


I think, in terms of credential storage (and, for that matter, metrics 
gathering, as I noted in that blueprint), it's very useful to have things 
pluggable. Ironic, in particular, has many different use cases: bare 
metal private cloud, bare metal public cloud, and TripleO. I could 
easily see all three being different enough to call for different forms 
of credential storage.
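
(As a rough illustration of what the Keystone-backed option could look like,
here is a hedged sketch that stores an IPMI credential through the Keystone v3
/credentials REST endpoint and keeps only the returned credential ID in the
driver info. The endpoint URL, token handling, and blob layout are assumptions
for illustration, not an agreed Ironic design.)

import json

import requests

KEYSTONE_V3 = 'http://keystone.example.com:5000/v3'   # assumed endpoint
TOKEN = 'a-valid-x-auth-token'                        # assumed admin token


def store_ipmi_credential(user_id, project_id, username, password):
    """Create a credential in Keystone and return its ID for the node."""
    body = {'credential': {
        'user_id': user_id,
        'project_id': project_id,
        'type': 'ipmi',   # v3 allows arbitrary types, not only 'ec2'
        'blob': json.dumps({'username': username, 'password': password}),
    }}
    resp = requests.post('%s/credentials' % KEYSTONE_V3,
                         headers={'X-Auth-Token': TOKEN,
                                  'Content-Type': 'application/json'},
                         data=json.dumps(body))
    resp.raise_for_status()
    return resp.json()['credential']['id']


def fetch_ipmi_credential(credential_id):
    """Resolve a stored credential ID back into its username/password blob."""
    resp = requests.get('%s/credentials/%s' % (KEYSTONE_V3, credential_id),
                        headers={'X-Auth-Token': TOKEN})
    resp.raise_for_status()
    return json.loads(resp.json()['credential']['blob'])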


-Jay Faulkner

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Neutron + Nova + OVS security group fix

2014-03-25 Thread Nachi Ueno
Hi Salvatore


2014-03-25 17:57 GMT-07:00 Salvatore Orlando :
> I hope we can sort this out on the mailing list IRC, without having to
> schedule emergency meetings.

Russell requested a conf call on this, so let's let him decide.

> Salvatore
>
> On 25 March 2014 22:58, Nachi Ueno  wrote:
>>
>> Hi Nova, Neutron team,
>>
>> I would like to discuss the issue of the Neutron + Nova + OVS security group fix.
>> We had a discussion on IRC today, but the issue is complicated, so we will
>> have
>> a conf call tomorrow at 17:00 UTC (10 AM PDT) in #openstack-neutron.
>>
>> (I'll put the conf call information in IRC)
>
>
> thanks, but I'd prefer you discuss the matter on IRC.
> I won't be available at that time and having IRC logs on eavesdrop will
> allow me to catch up without having to ask people or waiting for minutes on
> the mailing list.
>
>>
>>
>> <-- Please let me know if this time won't work with you.
>>
>> Bug Report
>> https://bugs.launchpad.net/neutron/+bug/1297469
>>
>> Background of this issue:
>> The ML2 + OVSDriver + IptablesBasedFirewall combination is the default
>> plugin setting in Neutron.
>> In this case, we need special handling for the VIF: because Open vSwitch
>> does not support iptables, we are
>> using a Linux bridge in front of the Open vSwitch bridge. We call this the hybrid
>> driver.
>>
>
> The hybrid solution in Neutron has been around for such a long time that I
> would hardly call it a "special handling".
> To summarize, the VIF is plugged into a linux bridge, which has another leg
> plugged in the OVS integration bridge.
>
>>
>> In another discussion, we generalized the Nova-side VIF plugging into
>> the libvirt GenericVIFDriver.
>> The idea is to let Neutron tell the GenericDriver the VIF plugging
>> configuration details, and the GenericDriver
>> takes care of it.
>
>
> The downside of the generic driver is that so far it's assuming local
> configuration values are sufficient to correctly determine VIF plugging.
> The generic VIF driver will use the hybrid driver if get_firewall_required
> is true. And this will happen if the firewall driver is anything different
> from the NoOp driver.
> This was uncovered by a devstack commit (1143f7e). When I previously
> discussed this issue with the people involved, I was under the impression
> that the devstack patch introduced the problem. Apparently the Generic VIF
> driver is not, at the moment, taking hints from neutron regarding the driver
> to use, and therefore, from what I gather, makes a decision based on nova
> conf flags only.
> So a quick fix would be to tell the Generic VIF driver to always use hybrid
> plugging when neutron is enabled (which can be gathered by nova conf flags).
> This will fix the issue for ML2, but will either break or insert an
> unnecessary extra hop for other plugins.

Setting get_firewall_required = True alone won't fix this issue. We need to make sure
we don't set conf.filtername in this case.

>>
>> Unfortunately, the HybridDriver was removed before the GenericDriver was ready
>> for security groups.
>
>
> The drivers were marked for deprecation in Havana, and if we thought the
> GenericDriver was not good for neutron security groups we had enough time to
> scream.

The reason we missed this issue is that we lack negative tests for
security groups,
so we couldn't catch it.

>> This makes the ML2 + OVSDriver + IptablesBasedFirewall combination
>> non-functional.
>> We were working on the real fix, but we can't make it until the Icehouse
>> release due to design discussions [1].
>>
>> # Even the neutron-side patch isn't merged yet.
>>
>> So we are proposing a workaround fix on the Nova side.
>> In this fix, we are adding a special version of the GenericVIFDriver
>> which can work with this combination.
>> There are two points to this new driver.
>> (1) It prevents setting conf.filtername. Because we should use the
>> NoopFirewallDriver, conf.filtername needs to be None
>> when we use it.
>> (2) It uses plug_ovs_hybrid and unplug_ovs_hybrid by forcing
>> get_firewall_required to True.
>>
>> Here are the patches, with unit tests.
>>
>> Workaround fix:
>> Nova
>> https://review.openstack.org/#/c/82904/
>>
>> Devstack patch for ML2 (Tested with 82904)
>> https://review.openstack.org/#/c/82937/
>
>
> Are there other plugins which need the hybrid driver for sec groups to work?
> I think so.
> And also - the patch does not seem to work according to Jenkins. The
> failures look genuine to me.

I agree with you; however, we should start with a minimal fix for this.
We can address other plugins in another patch.
This patch fails in Jenkins because it needs 82904.

>>
>>
>> We have tested the patch 82904 with following test, and this works.
>>
>> - Launch VM
>> - Assign floating ip
>> - make sure ping to the floating ip is failing from GW
>> - modify security group rule to allow ping from anywhere
>> - make sure ping is working
>
>
> You can actually run your devstack patch with your patch under review in the
> check queue.
> Check what Aaron did here: https://review.openstack.org/#/c/78694/11

Nice hack. Let me try it.

Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-25 Thread Dolph Mathews
On Tue, Mar 25, 2014 at 5:50 PM, Russell Bryant  wrote:

> We discussed the deprecation of the v2 keystone API in the cross-project
> meeting today [1].  This thread is to recap and bring that discussion to
> some consensus.
>
> The issue is that Keystone has marked the v2 API as deprecated in Icehouse:
>
> https://blueprints.launchpad.net/keystone/+spec/deprecate-v2-api
>
> If you use the API, deployments will get this in their logs:
>
> WARNING keystone.openstack.common.versionutils [-] Deprecated: v2 API is
> deprecated as of Icehouse in favor of v3 API and may be removed in K.
>
> The deprecation status is reflected in the API for end users, as well.
> For example, from the CLI:
>
>   $ keystone discover
>   Keystone found at http://172.16.12.38:5000/v2.0
> - supports version v2.0 (deprecated) here
> http://172.16.12.38:5000/v2.0/
>
> My proposal is that this deprecation be reverted.  Here's why:
>
> First, it seems there isn't a common use of "deprecated".  To me,
> marking something deprecated means that the deprecated feature:
>
>  - has been completely replaced by something else
>

>  - end users / deployers should take action to migrate to the
>new thing immediately.
>

>  - The project has provided a documented migration path


>  - the old thing will be removed at a specific date/release
>

Agree on all points. Unfortunately, we have yet to succeed on the
documentation front:


https://blueprints.launchpad.net/keystone/+spec/document-v2-to-v3-transition


>
> The problem with the above is that most OpenStack projects do not
> support the v3 API yet.
>
> From talking to Dolph in the meeting, it sounds like the intention is:
>
>  - fully support v2, just don't add features
>
>  - signal to other projects that they should be migrating to v3
>

Above all else, this was our primary goal: to raise awareness about our
path forward, and to identify the non-obvious stakeholders that we needed
to work with in order to drop support for v2. With today's discussion as
evidence, I think we've succeeded in that regard :)


>
> Given that intention, I believe the proper thing to do is to actually
> leave the API marked as fully supported / stable.  Keystone should be
> working with other OpenStack projects to migrate them to v3.  Once that
> is complete, deprecation can be re-visited.
>

Happy to!

Revert deprecation of the v2 API: https://review.openstack.org/#/c/82963/

Although I'd prefer to apply this patch directly to milestone-proposed, so
we can continue into Juno with the deprecation in master.


>
> In summary, until we have completed v3 support within OpenStack itself,
> it's premature to mark the API deprecated since that's a signal to end
> users and deployers that says action is required.
>
> Thoughts?
>
> [1]
>
> http://eavesdrop.openstack.org/meetings/project/2014/project.2014-03-25-21.01.log.html#l-103
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-25 Thread Zane Bitter

On 21/03/14 18:58, Stan Lagun wrote:

Zane,

I appreciate your explanations on Heat/HOT. This really makes sense.
I didn't mean to say that MuranoPL is better for Heat. Actually HOT is
good for Heat's mission. I completely acknowledge it.
I've tried to avoid comparison between languages and I'm sorry if it
felt that way. This is not productive, as I am not proposing to replace


No problem, I didn't interpret it that way. I just wanted to use it as 
an opportunity to document a bit more information about how Heat works that 
may be helpful to you.



HOT with MuranoPL (although I believe that certain elements of MuranoPL
syntax can be contributed to HOT and be a valuable addition there). Also,
people tend to protect what they have developed and invested in, and to
be fair this is what we did in this thread to a great extent.


Yes, we are all human and this is an unavoidable part of life :)


What I'm trying to achieve is that you and the rest of the Heat team
understand why it was designed the way it is. I don't feel that Murano
can become a full-fledged member of the OpenStack ecosystem without a blessing
from the Heat team. And it would be even better if we agree on a certain
design, join our efforts and contribute to each other for the sake of the
Orchestration program.


Thanks. To be clear, this is what I am looking for. At the beginning of 
this thread I proposed a very simple possible implementation, and I'm 
wanting folks to tell me what is missing from that model that would 
justify a more complicated approach.



I'm sorry for the long mails written in not-so-good English and
appreciate your patience in reading and answering them.

Having said that let me step backward and explain our design decisions.

Cloud administrators are usually technical guys who are capable of
learning HOT and writing YAML templates. They know the exact configuration
of their cloud (what services are available, what version of
OpenStack the cloud is running) and generally understand how OpenStack
works. They also know about the software they intend to install. If such a guy
wants to install Drupal, he knows exactly that he needs a HOT template
describing a Fedora VM with Apache + PHP + MySQL + Drupal itself. It is
not a problem for him to write such a HOT template.


I'm aware that TOSCA has these types of constraints, and in fact I 
suggested to the TOSCA TC that maybe this is where we should draw the 
line between Heat and some TOSCA-compatible service: HOT should be a 
concrete description of exactly what you're going to get, whereas some 
other service (in this case Murano) would act as the constraints solver. 
e.g. something like an image name would not be hardcoded in a Murano 
template, you have some constraints about which operating system and 
what versions should be allowed, and it would pick one and pass it to 
Heat. So I am interested in this approach.


The worst outcome here would be to end up with something that was 
equivalent to TOSCA but not easily translatable to the TOSCA Simple 
Profile YAML format (currently a Working Draft). Where 'easily 
translatable' preferably means 'by just changing some names'. I can't 
comment on whether this is the case as things stand.



Note that such a template would be designed for a very particular
configuration. There are hundreds of combinations that may be used to
install that Drupal - use RHEL/Windows/etc instead of Fedora, use
nginx/IIS/etc instead of Apache, use FastCGI instead of mod_php,
PostgreSQL instead of MySQL. You may choose to have all the software on a
single VM or have one VM for the database and another for Drupal. There are
also constraints on those combinations. For example, you cannot have
Fedora + IIS on the same VM. You cannot have Apache and Drupal on
different VMs.

So the HOT template represents a fixed combination of those software
components. HOT may have input parameters like "username" or
"dbImageName", but the overall structure of the template is fixed. You cannot
have a template that chooses whether to use Windows or Linux based on a
parameter value.


As Thomas mentioned, CloudFormation now has conditionals. If we were to 
add this feature in HOT (which seems likely), you would actually be able 
to do that.


http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html


You cannot write a HOT template that accepts the number of instances
it is allowed to create and then decides what should be installed on each of
them. This is just not needed for Heat users.


The path we've gone down in Heat is to introduce software component 
resources so that the definition of the software component to install is 
separated from the definition of the server it runs on, and may even be 
in a separate template file.


So you can write a template that launches a single server with (e.g.) 
both Wordpress and MySQL installed, or you can have a separate template 
(though with the conditional stuff above it could theoretically be the 
same template even) that launches two servers with Wordpress on one and 
MySQL on t

Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-25 Thread Anne Gentle
On Tue, Mar 25, 2014 at 5:50 PM, Russell Bryant  wrote:

> We discussed the deprecation of the v2 keystone API in the cross-project
> meeting today [1].  This thread is to recap and bring that discussion to
> some consensus.
>
> The issue is that Keystone has marked the v2 API as deprecated in Icehouse:
>
> https://blueprints.launchpad.net/keystone/+spec/deprecate-v2-api
>
> If you use the API, deployments will get this in their logs:
>
> WARNING keystone.openstack.common.versionutils [-] Deprecated: v2 API is
> deprecated as of Icehouse in favor of v3 API and may be removed in K.
>
> The deprecation status is reflected in the API for end users, as well.
> For example, from the CLI:
>
>   $ keystone discover
>   Keystone found at http://172.16.12.38:5000/v2.0
> - supports version v2.0 (deprecated) here
> http://172.16.12.38:5000/v2.0/
>
> My proposal is that this deprecation be reverted.  Here's why:
>
> First, it seems there isn't a common use of "deprecated".  To me,
> marking something deprecated means that the deprecated feature:
>
>  - has been completely replaced by something else
>
>  - end users / deployers should take action to migrate to the
>new thing immediately.
>
>  - The project has provided a documented migration path
>
>  - the old thing will be removed at a specific date/release
>
> The problem with the above is that most OpenStack projects do not
> support the v3 API yet.
>
> From talking to Dolph in the meeting, it sounds like the intention is:
>
>  - fully support v2, just don't add features
>
>  - signal to other projects that they should be migrating to v3
>
> Given that intention, I believe the proper thing to do is to actually
> leave the API marked as fully supported / stable.  Keystone should be
> working with other OpenStack projects to migrate them to v3.  Once that
> is complete, deprecation can be re-visited.
>

Sounds great, thanks for the discussion.

Anne

>
> In summary, until we have completed v3 support within OpenStack itself,
> it's premature to mark the API deprecated since that's a signal to end
> users and deployers that says action is required.
>
> Thoughts?
>
> [1]
>
> http://eavesdrop.openstack.org/meetings/project/2014/project.2014-03-25-21.01.log.html#l-103
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Neutron + Nova + OVS security group fix

2014-03-25 Thread Salvatore Orlando
I hope we can sort this out on the mailing list IRC, without having to
schedule emergency meetings.

Salvatore

On 25 March 2014 22:58, Nachi Ueno  wrote:

> Hi Nova, Neutron team,
>
> I would like to discuss the issue of the Neutron + Nova + OVS security group fix.
> We had a discussion on IRC today, but the issue is complicated, so we will
> have
> a conf call tomorrow at 17:00 UTC (10 AM PDT) in #openstack-neutron.

(I'll put conf call information in IRC)
>

thanks, but I'd prefer you discuss the matter on IRC.
I won't be available at that time and having IRC logs on eavesdrop will
allow me to catch up without having to ask people or waiting for minutes on
the mailing list.


>
> <-- Please let me know if this time won't work with you.
>
> Bug Report
> https://bugs.launchpad.net/neutron/+bug/1297469
>
> Background of this issue:
> The ML2 + OVSDriver + IptablesBasedFirewall combination is the default
> plugin setting in Neutron.
> In this case, we need special handling for the VIF: because Open vSwitch
> does not support iptables, we are
> using a Linux bridge in front of the Open vSwitch bridge. We call this the hybrid
> driver.
>
>
The hybrid solution in Neutron has been around for such a long time that I
would hardly call it a "special handling".
To summarize, the VIF is plugged into a linux bridge, which has another leg
plugged in the OVS integration bridge.


> In another discussion, we generalized the Nova-side VIF plugging into
> the libvirt GenericVIFDriver.
> The idea is to let Neutron tell the GenericDriver the VIF plugging
> configuration details, and the GenericDriver
> takes care of it.
>

The downside of the generic driver is that so far it's assuming local
configuration values are sufficient to correctly determine VIF plugging.
The generic VIF driver will use the hybrid driver if get_firewall_required
is true. And this will happen if the firewall driver is anything different
from the NoOp driver.
This was uncovered by a devstack commit (1143f7e). When I previously
discussed this issue with the people involved, I was under the impression
that the devstack patch introduced the problem. Apparently the Generic VIF
driver is not, at the moment, taking hints from neutron regarding the driver
to use, and therefore, from what I gather, makes a decision based on nova
conf flags only.
So a quick fix would be to tell the Generic VIF driver to always use hybrid
plugging when neutron is enabled (which can be gathered by nova conf flags).
This will fix the issue for ML2, but will either break or insert an
unnecessary extra hop for other plugins.


> Unfortunately, the HybridDriver was removed before the GenericDriver was ready
> for security groups.
>

The drivers were marked for deprecation in Havana, and if we thought the
GenericDriver was not good for neutron security groups we had enough time
to scream.

> This makes the ML2 + OVSDriver + IptablesBasedFirewall combination non-functional.
> We were working on the real fix, but we can't make it until the Icehouse
> release due to design discussions [1].
>
> # Even the neutron-side patch isn't merged yet.
>
> So we are proposing a workaround fix on the Nova side.
> In this fix, we are adding a special version of the GenericVIFDriver
> which can work with this combination.
> There are two points to this new driver.
> (1) It prevents setting conf.filtername. Because we should use the
> NoopFirewallDriver, conf.filtername needs to be None
> when we use it.
> (2) It uses plug_ovs_hybrid and unplug_ovs_hybrid by forcing
> get_firewall_required to True.
>
> Here are the patches, with unit tests.
>
> Workaround fix:
> Nova
> https://review.openstack.org/#/c/82904/
>
> Devstack patch for ML2 (Tested with 82904)
> https://review.openstack.org/#/c/82937/


Are there other plugins which need the hybrid driver for sec groups to
work? I think so.
And also - the patch does not seem to work according to Jenkins. The
failures look genuine to me.


>
> We have tested the patch 82904 with following test, and this works.
>
- Launch VM
> - Assign floating ip
> - make sure ping to the floating ip is failing from GW
> - modify security group rule to allow ping from anywhere
> - make sure ping is working
>

You can actually run your devstack patch with your patch under review in
the check queue.
Check what Aaron did here: https://review.openstack.org/#/c/78694/11

I wonder if instead of putting this bit of tape, we could just leverage the
VIF_TYPE attribute of the port binding extension.
Most plugins use VIF_TYPE_OVS already. It's a pity the ml2 plugin with the
OVS mech driver did not specify VIF_TYPE_OVS_HYBRID.

But, in my opinion, if we fix the relevant constants in the plugins which
require hybrid plugging, and we slightly change the generic VIF driver
logic to make a decision according to the VIF_TYPE binding attribute, we
should be fine, as we'll end up with two small, contained patches which,
IMHO, are not even that ugly.
But again, I'm far from being a subject matter expert when it comes to
nova/neutron integration and the ML2 plugin.


> [1] Real fix: (defered to Juno)
>
>

Re: [openstack-dev] User mailing lists for OpenStack projects

2014-03-25 Thread Stefano Maffulli
Hi Shaunak,

the lists for the OpenStack project are hosted on
http://lists.openstack.org. You can see the full roster on that page.

If you need a special list for the users of OpenStack PHP SDK you can
propose a change to the lists manifest on
openstack-infra/config/modules/openstack_project/manifests/lists.pp.
Follow the instructions on this wiki page:

https://wiki.openstack.org/wiki/Community#Mailing_lists_in_local_languages

Have people interested in the new list vote on the review; I think it would
be a good discussion to have.

Cheers,
stef

On 03/20/2014 09:34 PM, Shaunak Kashyap wrote:
> Hi folks,
> 
> I am relatively new to OpenStack development as one of the developers on the 
> unified PHP SDK for OpenStack [1].
> 
> We were recently discussing a mailing list for the users of this SDK 
> (as opposed to its contributors, who will use openstack-dev@). The purpose of 
> such a mailing list would be for users of the SDK to communicate with the 
> contributors as well as each other. Of course, there would be other avenues 
> for such communication as well (IRC, for instance).
> 
> Specifically, we would like to know whether existing OpenStack projects have 
> mailing lists for their users and, if so, where they are being hosted.
> 
> Thanks,
> 
> Shaunak
> 
> [1] https://wiki.openstack.org/wiki/OpenStack-SDK-PHP
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][reviews] We're falling behind

2014-03-25 Thread Clint Byrum
Excerpts from Robert Collins's message of 2014-03-25 13:17:29 -0700:
> TripleO has just seen an influx of new contributors. \o/. Flip side -
> we're now slipping on reviews /o\.
> 
> In the meeting today we had basically two answers: more cores, and
> more work by cores.
> 
> We're slipping by 2 reviews a day, which given 16 cores is a small amount.
> 
> I'm going to propose some changes to core in the next couple of days -
> I need to go and re-read a bunch of reviews first - but, right now we
> don't have a hard lower bound on the number of reviews we request
> cores commit to (on average).
> 
> We're seeing 39/day from the 16 cores - which isn't enough as we're
> falling behind. Thats 2.5 or so. So - I'd like to ask all cores to
> commit to doing 3 reviews a day, across all of tripleo (e.g. if your
> favourite stuff is all reviewed, find two new things to review even if
> outside comfort zone :)).
> 
> And we always need more cores - so if you're not a core, this proposal
> implies that we'll be asking that you a) demonstrate you can sustain 3
> reviews a day on average as part of stepping up, and b) be willing to
> commit to that.
> 
> Obviously if we have enough cores we can lower the minimum commitment
> - so I don't think this figure should be fixed in stone.
> 
> And now - time for a loose vote - who (who is a tripleo core today)
> supports / disagrees with this proposal - lets get some consensus
> here.
> 
> I'm in favour, obviously :), though it is hard to put reviews ahead of
> direct itch scratching, its the only way to scale the project.
> 

+1

FWIW I think we just haven't made reviews as much of a priority at the
same time that we got an influx of contribution.

We also started getting serious on CI, which I think has slowed the pace
through the review machine, with the bonus that we break ourselves less
often. :)

> -Rob
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-25 Thread Chris Friesen

On 03/25/2014 04:50 PM, Russell Bryant wrote:

We discussed the deprecation of the v2 keystone API in the cross-project
meeting today [1].  This thread is to recap and bring that discussion to
some consensus.





In summary, until we have completed v3 support within OpenStack itself,
it's premature to mark the API deprecated since that's a signal to end
users and deployers that says action is required.

Thoughts?


Makes sense to me.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All][Keystone] Deprecation of the v2 API

2014-03-25 Thread Russell Bryant
We discussed the deprecation of the v2 keystone API in the cross-project
meeting today [1].  This thread is to recap and bring that discussion to
some consensus.

The issue is that Keystone has marked the v2 API as deprecated in Icehouse:

https://blueprints.launchpad.net/keystone/+spec/deprecate-v2-api

If you use the API, deployments will get this in their logs:

WARNING keystone.openstack.common.versionutils [-] Deprecated: v2 API is
deprecated as of Icehouse in favor of v3 API and may be removed in K.

The deprecation status is reflected in the API for end users, as well.
For example, from the CLI:

  $ keystone discover
  Keystone found at http://172.16.12.38:5000/v2.0
- supports version v2.0 (deprecated) here http://172.16.12.38:5000/v2.0/

My proposal is that this deprecation be reverted.  Here's why:

First, it seems there isn't a common use of "deprecated".  To me,
marking something deprecated means that the deprecated feature:

 - has been completely replaced by something else

 - end users / deployers should take action to migrate to the
   new thing immediately.

 - The project has provided a documented migration path

 - the old thing will be removed at a specific date/release

The problem with the above is that most OpenStack projects do not
support the v3 API yet.

From talking to Dolph in the meeting, it sounds like the intention is:

 - fully support v2, just don't add features

 - signal to other projects that they should be migrating to v3

Given that intention, I believe the proper thing to do is to actually
leave the API marked as fully supported / stable.  Keystone should be
working with other OpenStack projects to migrate them to v3.  Once that
is complete, deprecation can be re-visited.

In summary, until we have completed v3 support within OpenStack itself,
it's premature to mark the API deprecated since that's a signal to end
users and deployers that says action is required.

Thoughts?

[1]
http://eavesdrop.openstack.org/meetings/project/2014/project.2014-03-25-21.01.log.html#l-103

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][chaining][policy] Port-oriented Network service chaining

2014-03-25 Thread Carlos Gonçalves
Hi,

Most of the advanced services and group policy sub-team members who attended 
last week’s meeting should remember I promised to start drafting a proposal 
regarding network service chaining. This week I started writing a document 
which is accessible here: 
https://docs.google.com/document/d/1Bk1e8-diE1VnzlbM8l479Mjx2vKliqdqC_3l5S56ITU

It should not be considered a formal blueprint, as it still requires broad 
discussion from the community regarding the validation (or sanity, if you will) of the 
proposed idea.

I will be joining the advanced service IRC meeting tomorrow, and the group 
policy IRC meeting thursday, making myself available to answer any questions 
you may have. In the meantime you can also start discussing in this email 
thread or commenting in the document.

Thanks,
Carlos Goncalves

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Neutron] Neutron + Nova + OVS security group fix

2014-03-25 Thread Nachi Ueno
Hi Nova, Neutron team,

I would like to discuss the issue of the Neutron + Nova + OVS security group fix.
We had a discussion on IRC today, but the issue is complicated, so we will have
a conf call tomorrow at 17:00 UTC (10 AM PDT) in #openstack-neutron.
(I'll put the conf call information in IRC)

<-- Please let me know if this time won't work with you.

Bug Report
https://bugs.launchpad.net/neutron/+bug/1297469

Background of this issue:
The ML2 + OVSDriver + IptablesBasedFirewall combination is the default
plugin setting in Neutron.
In this case, we need special handling for the VIF: because Open vSwitch
does not support iptables, we are
using a Linux bridge in front of the Open vSwitch bridge. We call this the hybrid driver.

In another discussion, we generalized the Nova-side VIF plugging into
the libvirt GenericVIFDriver.
The idea is to let Neutron tell the GenericDriver the VIF plugging
configuration details, and the GenericDriver
takes care of it.

Unfortunately, the HybridDriver was removed before the GenericDriver was ready
for security groups.
This makes the ML2 + OVSDriver + IptablesBasedFirewall combination non-functional.
We were working on the real fix, but we can't make it until the Icehouse
release due to design discussions [1].
# Even the neutron-side patch isn't merged yet.

So we are proposing a workaround fix on the Nova side.
In this fix, we are adding a special version of the GenericVIFDriver
which can work with this combination.
There are two points to this new driver (sketched below):
(1) It prevents setting conf.filtername. Because we should use the
NoopFirewallDriver, conf.filtername needs to be None
when we use it.
(2) It uses plug_ovs_hybrid and unplug_ovs_hybrid by forcing
get_firewall_required to True.
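
(For readers skimming the thread, the shape of the workaround is roughly as
follows. This is an illustrative sketch only, not the code under review in
82904, and the class, method, and attribute names are assumed.)

# Sketch of the workaround's two points only; names are assumed for clarity.

class HybridWorkaroundVIFDriver(object):
    """Variant of the generic VIF driver for ML2/OVS + iptables firewall."""

    def get_firewall_required(self):
        # Point (2): always report that a firewall is required, so the
        # hybrid (Linux bridge + OVS) plug/unplug code paths are used.
        return True

    def get_config(self, instance, vif):
        conf = self._base_config(instance, vif)
        # Point (1): never set a libvirt nwfilter name; filtering is done
        # by Neutron's iptables firewall, so Nova must behave as if the
        # NoopFirewallDriver were configured.
        conf['filtername'] = None
        return conf

    def _base_config(self, instance, vif):
        # Placeholder for whatever configuration the real driver builds here.
        return {'id': vif['id'], 'filtername': 'nova-instance-filter'}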

Here are the patches, with unit tests.

Workaround fix:
Nova
https://review.openstack.org/#/c/82904/

Devstack patch for ML2 (Tested with 82904)
https://review.openstack.org/#/c/82937/

We have tested patch 82904 with the following test, and it works.

- Launch VM
- Assign floating ip
- make sure ping to the floating ip is failing from GW
- modify security group rule to allow ping from anywhere
- make sure ping is working

[1] Real fix: (defered to Juno)

Improve vif attributes related with firewalling
https://review.openstack.org/#/c/21946/

Support binding:vif_security parameter in neutron
https://review.openstack.org/#/c/44596/

--> I'll put latest update on here
https://etherpad.openstack.org/p/neturon_security_group_fix_workaround_icehouse

Best
Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

2014-03-25 Thread Jorge Miramontes
Hey Susanne,

I think it makes sense to group drivers by each LB software. For example, there 
would be a driver for HAProxy, one for Citrix's NetScaler, one for Riverbed's 
Stingray, etc. One important aspect of OpenStack that I don't want us to 
forget, though, is that a tenant should be able to move between cloud providers 
at will (no vendor lock-in). The API contract is what allows this. 
The challenging aspect is ensuring different drivers support the API contract 
in the same way. What components drivers should share is also an interesting 
conversation to be had.

Cheers,
--Jorge

From: Susanne Balle <sleipnir...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, March 25, 2014 6:59 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed 
services"

John, Brandon,

I agree that we cannot have a multitude of drivers doing the same thing or 
close to it, because then we end up in the same situation as we are in today, where we 
have duplicated effort and technical debt.

The goal here would be to be able to build a framework around the drivers that 
would allow for resiliency, failover, etc.

If the differentiators are in higher-level APIs, then we can have a single 
driver (in the best case) for each software LB, e.g. HAProxy, nginx, etc.

Thoughts?

Susanne


On Mon, Mar 24, 2014 at 11:26 PM, John Dewey 
<j...@dewey.ws> wrote:
I have a similar concern.  The underlying driver may support different 
functionality, but the differentiators need exposed through the top level API.

I see the SSL work is well underway, and I am in the process of defining L7 
scripting requirements.  However, I will definitely need L7 scripting prior to 
the API being defined.
Is this where vendor extensions come into play?  I kinda like the route the 
Ironic guys are taking with a “vendor passthru” API.

John

On Monday, March 24, 2014 at 3:17 PM, Brandon Logan wrote:

Creating a separate driver for every new need brings up a concern I have had.  
If we are to implement a separate driver for every need then the permutations 
are endless and may cause a lot drivers and technical debt.  If someone wants 
an ha-haproxy driver then great.  What if they want it to be scalable and/or 
HA, is there supposed to be scalable-ha-haproxy, scalable-haproxy, and 
ha-haproxy drivers?  Then what if instead of spinning up processes on the 
host machine we want a nova VM or a container to house it?  As you can see the 
permutations will begin to grow exponentially.  I'm not sure there is an easy 
answer for this.  Maybe I'm worrying too much about it because hopefully most 
cloud operators will use the same driver that addresses those basic needs, but 
worst case scenarios we have a ton of drivers that do a lot of similar things 
but are just different enough to warrant a separate driver.

From: Susanne Balle [sleipnir...@gmail.com]
Sent: Monday, March 24, 2014 4:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed 
services"

Eugene,

Thanks for your comments,

See inline:

Susanne


On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov 
<enikano...@mirantis.com> wrote:
Hi Susanne,

a couple of comments inline:





We would like to discuss adding the concept of “managed services” to the 
Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA proxy. 
The latter could be a second approach for some of the software load-balancers 
e.g. HA proxy since I am not sure that it makes sense to deploy Libra within 
Devstack on a single VM.



Currently users would have to deal with HA, resiliency, monitoring and managing 
their load-balancers themselves.  As a service provider we are taking a more 
managed service approach allowing our customers to consider the LB as a black 
box and the service manages the resiliency, HA, monitoring, etc. for them.


As far as I understand these two abstracts, you're talking about making LBaaS 
API more high-level than it is right now.
I think that was not on our roadmap because another project (Heat) is taking 
care of more abstracted service.
The LBaaS goal is to provide vendor-agnostic management of load balancing 
capabilities at a quite fine-grained level.
Any higher level APIs/tools can be built on top of that, but are out of LBaaS 
scope.



[Susanne] Yes. Libra currently has some internal APIs that get triggered when 
an action needs to happen. We would like similar functionality in Neutron LBaaS 
so the user doesn’t have to manage the load-balancers but can consider them as 
black-boxes. Would it make sense to maybe consider integrating Neutron LBaaS 
with heat to support some

Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-25 Thread Stefano Maffulli
On 03/25/2014 01:30 AM, Russell Bryant wrote:
> I really don't see why this is so bad.  We're using a tool specifically
> designed for reviewing things that is already working *really* well, 

right, for code reviews.

> not just for code, but also for TC governance documents.

I disagree: that's another hack, needed because there is nothing better
around. The only decent tool that I remember had a decent UX to discuss
text documents was the tool used to discuss online the GPLv3 drafts.
Google Docs has features similar to that... Unfortunately free software
tools somehow got stuck even when they pioneered many features.

> Unless Storyboard plans to re-implement what gerrit does (I sure hope it
> doesn't), I expect we would keep working like this.  Do you expect
> storyboard to have an interface for iterating on text documents, where
> you can provide inline comments, review differences between revisions,
> etc?  What am I missing?

Mainly I think you're missing things related to usability,
discoverability, search, reporting and all sorts of drawbacks related to
having two separate tools to do one thing, loosely coupled. Gerrit's
search is obscure for non-daily gerrit users, indexing by web search
engines is non existent... grep on local filesystem may work for some,
but that's not all the people interested in knowing why a feature was
implemented that way (for example), git is not unfriendly only picky
about who his friends are... and I can go on but I'll stop here: I
understand we have no better solution in the given timeframe, it's
pointless to debate it further.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Jenkins test logs and their retention period

2014-03-25 Thread Brant Knudson
On Mon, Mar 24, 2014 at 5:49 AM, Sean Dague  wrote:

> ...
> Part of the challenge is turning off DEBUG is currently embedded in code
> in oslo log, which makes it kind of awkward to set sane log levels for
> included libraries because it requires an oslo round trip with code to
> all the projects to do it.
>
>
Here's how it's done in Keystone:
https://review.openstack.org/#/c/62068/10/keystone/config.py

It's definitely awkward.
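
(For readers not following the review: the general shape of the approach is to
pin noisy third-party loggers to a saner level up front. A stand-alone sketch
with the standard logging module; the library names and levels below are
examples only and may not match the Keystone patch linked above.)

import logging

# Example third-party loggers that tend to be very chatty at DEBUG.
DEFAULT_LOG_LEVELS = {
    'amqplib': logging.WARNING,
    'qpid.messaging': logging.INFO,
    'stevedore': logging.INFO,
    'iso8601': logging.WARNING,
}


def set_default_log_levels():
    """Raise the level of known-noisy libraries before enabling DEBUG."""
    for name, level in DEFAULT_LOG_LEVELS.items():
        logging.getLogger(name).setLevel(level)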

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-25 Thread Matt Wagner

On 25/03/14 12:23 +, Lucas Alvares Gomes wrote:

Hi,

Right now Ironic is responsible for storing the credentials for the
IPMI and SSH drivers (and potentially other drivers in the future), I
wonder if we should delegate this task to Keystone. The Keystone V3 API now
has a /credentials endpoint which would allow us to specify arbitrary types
(not only ec2 anymore) and use it as a credential store[1].

That would avoid further fragmentation of credentials being stored in
different places in OpenStack, and make the management of the credentials
easier (Think about a situation where many nodes share the same IPMI
username/password and we need to update it, if this is stored in Keystone
it only needs to be updated there once cause nodes will only hold a
reference to it)

It was also pointed out to me that setting a hard dependency on Keystone V3
might significantly raise the bar for integration with existing clouds*.
So perhaps we should make it optional? In the same way we can specify a
username/password or key_filename for the ssh driver we could have a
reference to a credential in Keystone V3?


As others seem to be saying, I think it might make sense to make this
pluggable. Store it in driver metadata, or store it in Keystone, or
store it in Barbican. Of course, that's 3x the effort.

As a relative newcomer -- is it problematic for us to leverage an
incubated service? I suppose that a pluggable approach with Barbican
as one option would alleviate any problems that might exist.

This would argue to me that the easiest thing for Ceilometer might be
to query us for IPMI stats, if the credential store is pluggable.
"Fetch these bare metal statistics" doesn't seem too off-course for
Ironic to me. The alternative is that Ceilometer and Ironic would both
have to be configured for the same pluggable credential store.

Or am I crazy?

-- Matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-25 Thread Dolph Mathews
On Tue, Mar 25, 2014 at 12:49 PM, Jay Pipes  wrote:

> On Tue, 2014-03-25 at 17:39 +, Miller, Mark M (EB SW Cloud - R&D -
> Corvallis) wrote:
> > Why not use Barbican? It stores credentials after encrypting them.
>
> No reason not to add a Barbican driver as well.
>
>
If Keystone's /v3/credentials API has any legs, it should be backed by
barbican, if not superseded by it completely.


> Best,
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] auto-delete in amqp "reply_*" queues in OpenStack

2014-03-25 Thread Dmitry Mescheryakov
Ok, assuming that you've run that query against the 5 stuck queues I
would expect the following results:

 * if an active listener for a queue lives on one of the compute hosts:
that queue was created by a compute service initiating an RPC command.
Since you didn't restart them during the switchover, the compute services
still use the same queues.
 * if a queue does not have a listener: the queue was created by the
controller which was active before the switchover. That queue could
have become stuck not necessarily at the previous switchover, but possibly at
some other switchover that occurred in the past.
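
(To check which case applies to all five queues at once, something like the
following against the RabbitMQ management API should work. It lists every
reply_* queue with its consumer count, similar to the single-queue curl query
quoted below; host, port, and credentials are placeholders.)

import requests

MGMT = 'http://192.168.204.2:15672'   # management API host:port (placeholder)
AUTH = ('guest', 'guest')

# /api/queues returns one JSON object per queue, including a 'consumers'
# count; a reply_* queue with zero consumers is a candidate "stuck" queue.
queues = requests.get('%s/api/queues' % MGMT, auth=AUTH).json()
for q in queues:
    if q['name'].startswith('reply_'):
        consumers = q.get('consumers', 0)
        state = 'stuck (no consumers)' if consumers == 0 else 'in use'
        print('%s  consumers=%s  %s' % (q['name'], consumers, state))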

2014-03-25 0:33 GMT+04:00 Chris Friesen :
> On 03/24/2014 01:27 PM, Dmitry Mescheryakov wrote:
>>
>> I see two possible explanations for these 5 remaining queues:
>>
>>   * They were indeed recreated by 'compute' services. I.e. controller
>> service send some command over rpc and then it was shut down. Its
>> reply queue was automatically deleted, since its only consumer was
>> disconnected. The compute services replied after that and so recreated
>> the queue. According to Rabbit MQ docs, such queue will be stuck alive
>> indefinitely, since it will never have a consumer.
>>
>>   * Possibly there are services on compute nodes which initiate RPC
>> calls themselves. I don't know OpenStack architecture enough to say if
>> services running on compute nodes do so. In that case these 5 queues
>> are still used by compute services.
>>
>> Do Rabbit MQ management tools (web or cli) allow to view active
>> consumers for queues? If yes, then you can find out which of the cases
>> above you encountered. Or it maybe be some third case I didn't account
>> for :-)
>
>
> It appears that the cli tools do not provide a way to print the info, but if
> you query a single queue via the web API it will give the IP address and
> port of the consumers for the queue.  The vhost needs to be URL-encoded, so
> the query looks something like this:
>
> curl -i -u guest:guest
> http://192.168.204.2:15672/api/queues/%2f/reply_08e35acffe2c4078ae4603f08e9d0997
>
>
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][reviews] We're falling behind

2014-03-25 Thread Chris Jones
Hey

+1

3 a day seems pretty sustainable.

Cheers,
--
Chris Jones

> On 25 Mar 2014, at 20:17, Robert Collins  wrote:
> 
> TripleO has just seen an influx of new contributors. \o/. Flip side -
> we're now slipping on reviews /o\.
> 
> In the meeting today we had basically two answers: more cores, and
> more work by cores.
> 
> We're slipping by 2 reviews a day, which given 16 cores is a small amount.
> 
> I'm going to propose some changes to core in the next couple of days -
> I need to go and re-read a bunch of reviews first - but, right now we
> don't have a hard lower bound on the number of reviews we request
> cores commit to (on average).
> 
> We're seeing 39/day from the 16 cores - which isn't enough as we're
> falling behind. That's 2.5 or so. So - I'd like to ask all cores to
> commit to doing 3 reviews a day, across all of tripleo (e.g. if your
> favourite stuff is all reviewed, find two new things to review even if
> outside comfort zone :)).
> 
> And we always need more cores - so if you're not a core, this proposal
> implies that we'll be asking that you a) demonstrate you can sustain 3
> reviews a day on average as part of stepping up, and b) be willing to
> commit to that.
> 
> Obviously if we have enough cores we can lower the minimum commitment
> - so I don't think this figure should be fixed in stone.
> 
> And now - time for a loose vote - who (who is a tripleo core today)
> supports / disagrees with this proposal - lets get some consensus
> here.
> 
> I'm in favour, obviously :), though it is hard to put reviews ahead of
> direct itch scratching, it's the only way to scale the project.
> 
> -Rob
> 
> -- 
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-25 Thread Sangeeta Singh
Hi,

The availability zones filter states that theoretically a compute node can be
part of multiple availability zones. I have a requirement where I need to make
a compute node part of 2 AZs. When I try to create host aggregates with an AZ,
I cannot add the node to two host aggregates that both have an AZ defined.
However, if I create a host aggregate without associating an AZ, then I can add
the compute nodes to it. After doing that I can update the host aggregate and
associate an AZ. This looks like a bug.
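For reference, the sequence I am describing looks roughly like this with
python-novaclient (just a sketch; the credentials, host name and AZ names are
placeholders):

from novaclient.v1_1 import client

nova = client.Client("admin", "secret", "admin",
                     "http://controller:5000/v2.0/")  # placeholder credentials

# Adding the same host to two aggregates that both carry an AZ is rejected:
agg1 = nova.aggregates.create("agg-az1", "AZ1")
nova.aggregates.add_host(agg1.id, "compute1")
agg2 = nova.aggregates.create("agg-az2", "AZ2")
nova.aggregates.add_host(agg2.id, "compute1")   # fails: host already has an AZ

# ...but creating the aggregate without an AZ, adding the host and then
# setting the AZ metadata afterwards is accepted, which is why it looks like a bug:
agg3 = nova.aggregates.create("agg-default", None)
nova.aggregates.add_host(agg3.id, "compute1")
nova.aggregates.set_metadata(agg3.id, {"availability_zone": "default-az"})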

I can see the compute node listed in the 2 AZs with the
availability-zone-list command.

The problem I have is that I still cannot boot a VM on the compute node
when I do not specify the AZ in the command, even though I have set the default
availability zone and the default schedule zone in nova.conf.

I get the error “ERROR: The requested availability zone is not available”

What I am trying to achieve is to have two AZs that the user can select during
boot, but also a default AZ which has the hypervisors from both AZ1 and AZ2, so
that when the user does not specify any AZ in the boot command I scatter my VMs
across both AZs in a balanced way.

Any pointers?

Thanks,
Sangeeta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][reviews] We're falling behind

2014-03-25 Thread Robert Collins
TripleO has just seen an influx of new contributors. \o/. Flip side -
we're now slipping on reviews /o\.

In the meeting today we had basically two answers: more cores, and
more work by cores.

We're slipping by 2 reviews a day, which given 16 cores is a small amount.

I'm going to propose some changes to core in the next couple of days -
I need to go and re-read a bunch of reviews first - but, right now we
don't have a hard lower bound on the number of reviews we request
cores commit to (on average).

We're seeing 39/day from the 16 cores - which isn't enough as we're
falling behind. That's 2.5 or so. So - I'd like to ask all cores to
commit to doing 3 reviews a day, across all of tripleo (e.g. if your
favourite stuff is all reviewed, find two new things to review even if
outside comfort zone :)).

And we always need more cores - so if you're not a core, this proposal
implies that we'll be asking that you a) demonstrate you can sustain 3
reviews a day on average as part of stepping up, and b) be willing to
commit to that.

Obviously if we have enough cores we can lower the minimum commitment
- so I don't think this figure should be fixed in stone.

And now - time for a loose vote - who (who is a tripleo core today)
supports / disagrees with this proposal - lets get some consensus
here.

I'm in favour, obviously :), though it is hard to put reviews ahead of
direct itch scratching, it's the only way to scale the project.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Performance numbers

2014-03-25 Thread Tomasz Janczuk
Hello,


I wonder if any performance measurements have been done with Marconi? Are there 
results available somewhere?


I am generally trying to set my expectations in terms of latency and throughput 
as a function of the size of the deployment, number of queues, number of 
producers/consumers, type of backend, size of backend cluster, guarantees, 
message size etc. Trying to understand how Marconi would do in a large 
multi-tenant deployment.


Any and all data points would be helpful.


Thanks,

Tomasz Janczuk
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Meeting Minutes - Tuesday 25 March

2014-03-25 Thread Brian Curtin
Thanks for another good meeting. More code coming along, and a few
reviews out there to take a look at.

Minutes: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-25-19.01.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-25-19.01.txt

Log: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-25-19.01.log.html

We're all in #openstack-sdks in Freenode

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] addition to requirement wiki

2014-03-25 Thread Jorge Miramontes
Thanks Itsuro,

Good requirement since Neutron LBaaS is an asynchronous API.

Cheers,
--Jorge




On 3/24/14 7:27 PM, "Itsuro ODA"  wrote:

>Hi LBaaS developers,
>
>I added 'Status Indication' to requirement Wiki.
>It may be independent of the object model discussion,
>but I think this is an item which should not be forgotten.
>
>Thanks.
>-- 
>Itsuro ODA 
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] should there be an audit to clear the REBOOTING task_state?

2014-03-25 Thread Chris Friesen


I've reported a bug (https://bugs.launchpad.net/nova/+bug/1296967) where 
we got stuck with a task_state of REBOOTING due to what seem to be RPC 
issues.


Regardless of how we got there, currently there is no audit that will 
clear the task_state if it gets stuck.  Because of this, once we got 
into this state it required manual action to get out of it.


This doesn't seem like a good design...is there a way to do some sort of 
audit to ensure that the task_state is actually valid?
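For what it's worth, the kind of audit I have in mind is essentially the
following, written here as a standalone sketch against the nova database
rather than as real nova code (the connection string and the 30-minute
threshold are made up):

import datetime

from sqlalchemy import MetaData, Table, and_, create_engine

engine = create_engine("mysql://nova:secret@controller/nova")  # placeholder
meta = MetaData(bind=engine)
instances = Table("instances", meta, autoload=True)

# Anything sitting in REBOOTING for longer than the threshold is assumed to
# have lost its RPC message, so clear the task_state and make it usable again.
cutoff = datetime.datetime.utcnow() - datetime.timedelta(minutes=30)
stuck = and_(instances.c.task_state == "rebooting",
             instances.c.updated_at < cutoff,
             instances.c.deleted == 0)
engine.execute(instances.update().where(stuck).values(task_state=None))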


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nodeless Vendor Passthru API

2014-03-25 Thread Russell Haering
On Tue, Mar 25, 2014 at 6:56 AM, Lucas Alvares Gomes
wrote:

>
> Hi Russell,
>
> Ironic allows drivers to expose a "vendor passthru" API on a Node. This
>> basically serves two purposes:
>>
>> 1. Allows drivers to expose functionality that hasn't yet been
>> standardized in the Ironic API. For example, the Seamicro driver exposes
>> "attach_volume", "set_boot_device" and "set_node_vlan_id" passthru methods.
>> 2. Vendor passthru is also used by the PXE deploy driver as an internal
>> RPC callback mechanism. The deploy ramdisk makes calls to the passthru API
>> to signal for a deployment to continue once a server has booted.
>>
>> For the purposes of this discussion I want to focus on case #2. Case #1
>> is certainly worth a separate discussion - we started this in
>> #openstack-ironic on Friday.
>>
>> In the new agent we are working on, we want to be able to look up what
>> node the agent is running on, and eventually to be able to register a new
>> node automatically. We will perform an inventory of the server and submit
>> that to Ironic, where it can be used to map the agent to an existing Node
>> or to create a new one. Once the agent knows what node it is on, it will
>> check in with a passthru API much like that used by the PXE driver - in
>> some configurations this might trigger an immediate "continue" of an
>> ongoing deploy, in others it might simply register the agent as available
>> for new deploys in the future.
>>
>
> Maybe another way to look up which node the agent is running on would be by
> looking at the MAC address of that node. Having it on hand, you could then
> GET /ports/detail and find which port has that MAC associated with it;
> after you find the port you can look at the node_uuid field, which holds the
> UUID of the node that the port belongs to (all ports have a node_uuid,
> it's mandatory). So, right now you would need to list all the ports and
> find that MAC address there, but I have a review up that might help you with
> this by allowing you to get a port using its address as input:
> https://review.openstack.org/#/c/82773/ (GET /ports/detail?address=).
>
> What do you think?
>

Right, we discussed this possibility as well. Internally that's actually
how the first iteration of our lookup call will work (mapping MAC addresses
to ports to nodes).

This can definitely be made to work, but in my mind it has a few
limitations:

1. It limits how we can do lookups. In the future I'd like to be able to
consider serial numbers, hardware profiles, etc when trying to map an agent
to a node. Needing to expose an API for each of these is going to be
infeasible.
2. It limits how we do enrollment.
3. It forces us to expose more of the API to agents.

Items 1 and 2 seem like things that someone deploying Ironic is especially
likely to want to customize. For example, I might have some business
process around replacing NICs where I want to customize how a server is
uniquely identified (or even hit an external source to do it), or I might
want want to override enrollment to hook into Fuel.

While we can probably find a way to solve our current problem, how can we
generically solve the need for an agent to talk to a driver (either in
order to centralize orchestration, or because access to the database is
needed) without needing a Node UUID?

Thanks,
Russell
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Swift storage policies in Icehouse

2014-03-25 Thread John Dickinson

On Mar 25, 2014, at 12:11 PM, Kurt Griffiths  
wrote:

>> As a quick review, storage policies allow objects to be stored across a
>> particular subset of hardware...and with a particular storage algorithm
> 
> Having worked on backup software in the past, this sounds interesting. :D
> 
> What is the scope of these policies? Are they per-object, per-container,
> and/or per-project? Or do they not work like that?

A storage policy is set on a container when it is created. So, for example, 
create your "photos" container with a global 3-replica scheme and also a 
"thumbnails-west" with 2 replicas in your West Coast region and 
"thumbnails-east" with 2 replicas in your East Coast region. Then make a 
container for "server-backups" that is erasure coded and stored in the EU. And 
all of that is stored and managed in the same logical Swift cluster.

So you can see that this feature set gives deployers and users a ton of 
flexibility.
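From the client side, picking a policy would look something like this with
python-swiftclient (a sketch only; the auth details and policy names are
placeholders, and the X-Storage-Policy header reflects the in-progress design
rather than a released API):

from swiftclient import client as swift

conn = swift.Connection(
    authurl="http://swift.example.com/auth/v1.0",  # placeholder auth
    user="test:tester", key="testing")

# The policy is picked exactly once, at container creation time...
conn.put_container("thumbnails-west",
                   headers={"X-Storage-Policy": "west-2-replica"})
conn.put_container("server-backups",
                   headers={"X-Storage-Policy": "ec-eu"})

# ...and every object written to the container inherits that policy.
conn.put_object("server-backups", "db-dump.tar.gz",
                contents=open("db-dump.tar.gz", "rb"))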

How will storage policies be exposed? I'm glad you asked... A deployer (i.e. the 
cluster operator) will configure the storage policies (including which is the 
default). At that point, an end-user can create containers with a particular 
storage policy and start saving objects there. What about automatically moving 
data between storage policies? This is something that is explicitly not in 
scope for this set of work. Maybe someday, but in the meantime, I fully expect 
the Swift ecosystem to create and support tools to help with data lifecycle 
management. For now, that doesn't belong in Swift.

--John




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Swift storage policies in Icehouse

2014-03-25 Thread Kurt Griffiths
> As a quick review, storage policies allow objects to be stored across a
>particular subset of hardware...and with a particular storage algorithm

Having worked on backup software in the past, this sounds interesting. :D

What is the scope of these policies? Are they per-object, per-container,
and/or per-project? Or do they not work like that?

---
Kurt G. | @kgriffs

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder FFE] Request for HDS FFE

2014-03-25 Thread John Griffith
On Tue, Mar 25, 2014 at 10:19 AM, Russell Bryant  wrote:

> On 03/25/2014 10:42 AM, Steven Sonnenberg wrote:
> > I just want to point out, there were no changes required to pass the
> tests. We were running those tests in Brazil and tunneling NFS and iSCSI
> across the Internet which explain timeout issues. Those are the same tests
> that passed a month earlier before we went into the cycle of
> review/fix/format etc.
>
> I think the key point is the current timing.  We're aiming to do RC1 for
> projects this week if possible.  FFEs were really only allowed weeks ago.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Hey Steven,

As we discussed last night, my issue is that it's just very late at this
point.  Yes, your patch has been in review for 45 days, however even then
it was pushing it.  By the way 45 days at this stage of the release is not
a long time.  My question is if people have been running this since
HongKong why did you wait until February before submitting it?

It is minimal risk to the core code, no doubt.  However, I've taken a
pretty hard stance with other 3rd party drivers that have missed the
cut-off dates and I don't see a compelling reason to make an exception here
based just on principle.

I'd also like to point out that contributions help when it comes to FFE's.
 In other words I don't see any activity other than this driver for the
last 4 months (reviews or otherwise).  When somebody comes in with a patch
past a date and asks for an exception the first thing I consider is whether
they've been active for the cycle or if they're just racing the clock to
get a driver in for the next release.

Something else I consider is if current code is maintained, in other words
you have a driver in the code base currently and it hasn't been maintained
since August (again last minute fixes before RC).  Now during RC you have
another driver that you want added.  If there was active involvement and
maintenance of the code and I didn't see a pattern here (pattern of
late/last minute submission) I likely would have a different opinion.

I'm still a -1 on the exception, even if it inherently doesn't introduce
significant risk.  It's not the code or the driver itself at this point but
the point regarding dates, process etc.

[1] History of maintenance for existing HDS code in Cinder
[2] Commit history for Erlon (author of the current patch/driver)
[3] Commit history for the author of the previous driver including the last
minute updates

Thanks,
John

[1]:
https://github.com/openstack/cinder/commits/master/cinder/volume/drivers/hds
[2]:
https://review.openstack.org/#/dashboard/10058
[3]:
https://review.openstack.org/#/dashboard/7447
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Neutron Routers and LLAs

2014-03-25 Thread Collins, Sean
During the review [0] of the patch that only allows RAs from known
addresses, Robert Li brought up a bug in Neutron where an
IPv6 subnet could be created with a link-local address for the gateway;
creating the Neutron router then failed because the IP address that
would be assigned to the router's port was a link-local
address that was not on the subnet.

This may or may not have been run before the force_gateway_on_subnet flag was
introduced. Robert - if you can tell us what version of Neutron you were
running, that would be helpful.

Here's the full text of what Robert posted in the review, which shows
the bug, which was later filed[1].

>> This is what I've tried, creating a subnet with a LLA gateway address: 
 
>> neutron subnet-create --ip-version 6 --name myipv6sub --gateway fe80::2001:1 
>> mynet :::/64
>>
>> Created a new subnet: 
>> +------------------+------------------------------------------------+
>> | Field            | Value                                          |
>> +------------------+------------------------------------------------+
>> | allocation_pools | {"start": ":::1", "end": "::::::fffe"}        |
>> | cidr             | :::/64                                         |
>> | dns_nameservers  |                                                |
>> | enable_dhcp      | True                                           |
>> | gateway_ip       | fe80::2001:1                                   |
>> | host_routes      |                                                |
>> | id               | a1513aa7-fb19-4b87-9ce6-25fd238ce2fb           |
>> | ip_version       | 6                                              |
>> | name             | myipv6sub                                      |
>> | network_id       | 9c25c905-da45-4f97-b394-7299ec586cff           |
>> | tenant_id        | fa96d90f267b4a93a5198c46fc13abd9               |
>> +------------------+------------------------------------------------+
>> 
>> openstack@devstack-16:~/devstack$ neutron router-list

>> +--------------------------------------+---------+------------------------------------------------------------------------------+
>> | id                                   | name    | external_gateway_info                                                        |
>> +--------------------------------------+---------+------------------------------------------------------------------------------+
>> | 7cf084b4-fafd-4da2-9b15-0d25a3e27e67 | router1 | {"network_id": "02673c3c-35c3-40a9-a5c2-9e5c093aca48", "enable_snat": true} |
>> +--------------------------------------+---------+------------------------------------------------------------------------------+
>>
>> openstack@devstack-16:~/devstack$ neutron router-interface-add 
>> 7cf084b4-fafd-4da2-9b15-0d25a3e27e67 myipv6sub
>>
>> 400-{u'NeutronError': {u'message': u'Invalid input for operation: IP address 
>> fe80::2001:1 is not a valid IP for the defined subnet.', u'type': 
>> u'InvalidInput', u'detail': u''}}
>>

During last week's meeting, we had a bit of confusion near the end [2]
about the following bug and the fix [3].

If I am not mistaken - the fix is so that when you create a v6 Subnet
with a link local address, then create a Neutron router to serve as the
gateway for that subnet - the operation will successfully complete and a
router will be created.

We may need to take a look at the code that creates a router - to ensure
that only one gateway port is created, and that the link-local address
from the subnet's 'gateway' attribute is used as the address.

This is at least my understanding of the problem as it stands today -
this bug and fix do not involve any external gateways or
physical devices that Neutron does not control - this is exclusively
about Neutron routers.


[0]: https://review.openstack.org/#/c/72252/

[1]: https://bugs.launchpad.net/neutron/+bug/1284518

[2]: 
http://eavesdrop.openstack.org/meetings/neutron_ipv6/2014/neutron_ipv6.2014-03-18-14.02.log.html

[3]: https://review.openstack.org/#/c/76125/


-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Requirements for dropping / skipping a Tempest test

2014-03-25 Thread Sean Dague
Because this has come up a few times during freeze, it's probably worth
actively advertising the policy we've developed on dropping / skipping a
Tempest test.

Tempest tests encode the behavior of the system (to some degree), which
means that once we know the behavior of the system, if code in a core
project can only land if we skip or drop a tempest test, that's clearly
a behavior change.

We want to be really deliberate about this, so in these situations we
require:
 * an extremely clear commit message about why this is required (and why
backwards compatible behavior is not an option)
 * a link in the commit message to the dependent review (you can just
put the idempotent change id in there)
 * a +2 on the dependent review in the project by a core member

The 3rd part is important, because incompatible behavior changes are
something we want to make sure the core team for any given project really
wants before we drop or skip a test and let those changes come
into the project.
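For illustration only, a skip done in this spirit might look like the
following, with the same reasoning repeated in the commit message (the test,
bug number and change id here are hypothetical, and the plain testtools skip
decorator is just one way to spell it):

import testtools


class ServerResizeTest(testtools.TestCase):

    @testtools.skip("Intentional behavior change: resize now rejects this "
                    "flavor (bug 1234567, hypothetical). Depends on change "
                    "Iabc123 in nova, which already has a +2 from a nova core.")
    def test_resize_rejected_flavor(self):
        pass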

This system isn't perfect, but it's trying to strike a balance. My hope
is in the future we could implement cross project dependencies in zuul
so that you could gather test results for a project change - assuming
the tempest change was applied, that would prevent all these changes
from being -1ed until the tempest change is landed.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-25 Thread Douglas Mendizabal
Yes, this is exactly the use case we’re trying to address with Barbican. I
think this is something that definitely belongs in Barbican, especially
now that we are an incubated project.  We’d love to help out with any
integration questions you may have.

-Doug Mendizabal


On 3/25/14, 12:49 PM, "Jay Pipes"  wrote:

>On Tue, 2014-03-25 at 17:39 +, Miller, Mark M (EB SW Cloud - R&D -
>Corvallis) wrote:
>> Why not use Barbican? It stores credentials after encrypting them.
>
>No reason not to add a Barbican driver as well.
>
>Best,
>-jay
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]

2014-03-25 Thread Vijay B
Hi Oleg!

Thanks for the confirmation, and for the link to the services-in-service-VMs
blueprint. I went through it, and while the concept of using service VMs is
one way to go (CloudStack uses that approach, for example), I am not inclined
to take it, for a few reasons. Having a service image essentially means
forcing customers/deployers to either go with a specific OS version or to
maintain multiple formats (qcow/ova/vhd etc.) for different hypervisors. It
will mean putting packaging code into OpenStack to build these VMs with every
build. Upgrading from one OpenStack release to the next will eventually force
upgrades of these service VMs as well. We will need to track service VM
versions in Glance or elsewhere and prevent deployment across versions if
required. Since we're talking about VMs and not just modules, downtime also
increases during upgrades. It adds a whole level of complexity to handle all
these scenarios, and we will need to do that in OpenStack code. I am very
averse to doing that. Also, as an aside, when service VMs come into the
picture, deployment (depending on the hypervisor) can take quite a long time,
and scalability issues can be hit. Take VMware for example: we can create
full or linked clones on it, and full clone creation is quite costly. Storage
migration is another area. The issues will add up with time. From a
developer's point of view, debugging can also get tricky with service VMs.

The issues brought up in the blueprint are definitely valid and must be
addressed, but we'll need to have a discussion on what the optimal and
cleanest approach would be.

There is another interesting discussion going on regarding LBaaS APIs for
Libra and HAProxy between Susanne Balle/Eugene and others - I'll chip in
with my 2 cents on that.

Thanks a lot again!
Regards,
Vijay


On Tue, Mar 25, 2014 at 1:02 AM, Oleg Bondarev wrote:

> Hi Vijay,
> Currently Neutron LBaaS supports only a namespace-based implementation for
> HAProxy.
> You can however run the LBaaS agent on a host other than the network
> controller node - in that
> case HAProxy processes will be running on that host but still in
> namespaces.
>
> Also there is an effort in Neutron regarding adding support of advanced
> services in VMs [1].
> After it is completed I hope it will be possible to adopt it in LBaaS and
> run HAProxy in such a service VM.
>
> [1] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
>
> Thanks,
> Oleg
>
>
> On Tue, Mar 25, 2014 at 1:39 AM, Vijay B  wrote:
>
>> Hi Eugene,
>>
>> Thanks for the reply! How/where is the agent configuration done for
>> HAProxy? If I don't want to go with a network namespace based HAProxy
>> process, but want to deploy my own HAProxy instance on a host outside of
>> the network controller node, and make neutron deploy pools/VIPs on that
>> HAProxy instance, does neutron currently support this scenario? If so, what
>> are the configuration steps I will need to carry out to deploy HAProxy on a
>> separate host (for example, where do I specify the ip address of the
>> haproxy host, etc)?
>>
>> Regards,
>> Vijay
>>
>>
>> On Mon, Mar 24, 2014 at 2:04 PM, Eugene Nikanorov <
>> enikano...@mirantis.com> wrote:
>>
>>> Hi,
>>>
>>> The HAProxy driver has not been removed from trunk; instead it became a base
>>> for the agent-based driver, so the only haproxy-specific thing in the plugin
>>> driver is device driver name. Namespace driver is a device driver on the
>>> agent side and it was there from the beginning.
>>> The reason for the change is mere refactoring: it seems that solutions
>>> that employ agents could share the same code with only device driver being
>>> specific.
>>>
>>> So, everything is in place, HAProxy continues to be the default
>>> implementation of Neutron LBaaS service. It supports spawning haproxy
>>> processes on any host that runs lbaas agent.
>>>
>>> Thanks,
>>> Eugene.
>>>
>>>
>>>
>>> On Tue, Mar 25, 2014 at 12:33 AM, Vijay B  wrote:
>>>
 Hi,

 I'm looking at HAProxy support in Neutron, and I observe that the
 drivers/haproxy/plugin_driver.py file in the stable/havana release has been
 effectively removed from trunk (master), in that the plugin driver in the
 master simply points to the namespace driver. What was the reason to do
 this? Was the plugin driver in havana tested and documented? I can't seem
 to get hold of any relevant documentation that describes how to configure
 HAProxy LBs installed on separate boxes (and not brought up in network
 namespaces) - can anyone please point me to the same?

 Also, are there any plans to bring back the HAProxy plugin driver to
 talk to remote HAProxy instances?

 Thanks,
 Regards,
 Vijay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron] Using Python-Neutronclient from Python - docstrings needed?

2014-03-25 Thread Collins, Sean
On Fri, Mar 21, 2014 at 08:35:05PM EDT, Rajdeep Dua wrote:
> Sean,
> If you can point me to the project file in github which needs to be modified 
> , i will include these docs
> 
> Thanks
> Rajdeep

I imagine inside the openstack-manuals git repo

https://github.com/openstack/openstack-manuals

Possibly inside the doc/user-guide tree.

Although others may have better suggestions.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat][TC] MuranoPL questions?

2014-03-25 Thread Steven Dake

On 03/25/2014 09:55 AM, Ruslan Kamaldinov wrote:

Murano folks,

I guess we should stop our debates with the Heat team about MuranoPL. What
we should do instead is to carefully read this thread a couple of times and
collect and process all the feedback we've got. Steve and Zane did a very good
job helping us to find a way to align with Heat and the Orchestration program. Let
me outline the most important (imho) quotes:

Ruslan,

+2!

The plan forward at the end of your email seems reasonable as well, but
Zane is probably in a bit better position to clarify if any suggested
changes would be recommended.


As a heat-core member, I would be supportive of ALM scope expansion 
happening within the Orchestration program.  I also think from a 
technical perspective ALM fits nicely within the scope of OpenStack.  
Just to head off possible questions about ALM and Solum, I do not see 
them as competitive.  I am only one voice among many, and think we need 
some broad general agreement with heat-core and the TC that suggested 
scope expansion makes sense and we won't get pushback 6-12 months down 
the road.  Perhaps the TC may think it belongs in a new program.  I 
honestly don't know.  I'm not quite sure how to get that conversation 
started, so I added [TC] to the subject tags in an attempt to get the 
ball rolling.


Regards,
-steve


# START quotes

Steven Dake:


I see no issue with HOT remaining simple and tidy focused entirely on
orchestration (taking a desired state and converting that into reality) with
some other imperative language layered on top to handle workflow and ALM.  I
believe this separation of concerns is best for OpenStack and should be the
preferred development path.


Zane Bitter:


I do think we should implement the hooks I mentioned at the start of this
thread to allow tighter integration between Heat and a workflow engine
(i.e. Mistral).

So building a system on top of Heat is good. Building it on top of Mistral as
well would also be good, and that was part of the feedback from the TC.

To me, building on top means building on top of the languages (which users
will have to invest a lot of work in learning) as well, rather than having a
completely different language and only using the underlying implementation(s).

To me that implies that Murano should be a relatively thin wrapper that ties
together HOT and Mistral's DSL.


Steve Dake:
---

I don't think HOT can do these things and I don't think we want HOT to do
these things.  I am ok with that, since I don't see the pushback on having
two languages for two different things in OpenStack.  I got from gokrove on
iRC today that the rationale for the pushback was the TC wanted Murano folks
to explore how to integrate better with Heat and possibly the orchestration
program.  I don't see HOT as a place where there is an opportunity for scope
expansion.  I see instead Murano creating HOT blobs and feeding them to Heat.


Zane Bitter:


Because there have to be some base types that you can use as building blocks.
In the case of Heat, those base types are the set of things that you can
create by talking to OpenStack APIs authenticated with the user's token.
In the case of Mistral, I would expect it to be the set of actions that you
can take by talking to OpenStack APIs authenticated with the user's token.
And in the case of Murano, I would expect it to be the union of those two.


Everything is a combination of existing resources, because the set of existing
resources is the set of things which the operator provides as-a-Service. The
set of things that the operator provides as a service plus the set of things
that you can implement yourself on your own server (virtual or not) covers
the entire universe of things. What you appear to be suggesting is that
OpenStack must provide *Everything*-as-a-Service by allowing users to write
their own services and have the operator execute them as-a-Service. This
would be a breathtakingly ambitious undertaking, and I don't mean that in
a good way.

When the TC said "Murano is slightly too far up the stack at this point to
meet the "measured progression of openstack as a whole" requirement", IMO
one of the major things they meant was that you're inventing your own
workflow thing, leading to duplication of effort between this and Workflow
as a Service. (And Mistral folks are in turn doing the same thing by not
using the same workflow library, taskflow, as the rest of OpenStack.)
Candidly, I'm surprised that this is not the #1 thing on your priority list
because IMO it's the #1 thing that will delay getting the project incubated.

# END quotes

Also I should quote what Georgy Okrokvertkhov said:

Having this aligned I see a Murano package as an archive with all necessary
definitions and resources and Murano service will just properly pass them to
related services like Heat and Mistral. I think then Murano DSL will be much
more simple and probably will be closer to declarative format with some
specific data operations.

Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-25 Thread Jay Pipes
On Tue, 2014-03-25 at 17:39 +, Miller, Mark M (EB SW Cloud - R&D -
Corvallis) wrote:
> Why not use Barbican? It stores credentials after encrypting them.

No reason not to add a Barbican driver as well.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-25 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Why not use Barbican? It stores credentials after encrypting them.

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Tuesday, March 25, 2014 9:50 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to
> Keystone
> 
> On Tue, 2014-03-25 at 12:23 +, Lucas Alvares Gomes wrote:
> > Hi,
> >
> > Right now Ironic is responsible for storing the credentials for
> > the IPMI and SSH drivers (and potentially other drivers in the
> > future), I wonder if we should delegate this task to Keystone. The
> > Keystone V3 API now has a /credentials endpoint which would allow us
> > to specify arbitrary types (not only ec2 anymore) and use it as a
> > credential store[1].
> >
> > That would avoid further fragmentation of credentials being stored in
> > different places in OpenStack, and make the management of the
> > credentials easier (Think about a situation where many nodes share the
> > same IPMI username/password and we need to update it, if this is
> > stored in Keystone it only needs to be updated there once cause nodes
> > will only hold a reference to it)
> >
> > It was also pointed out to me that setting a hard dependency on Keystone
> > V3 might significantly raise the bar for integration with existing
> > clouds*. So perhaps we should make it optional? In the same way we can
> > specify a username/password or key_filename for the ssh driver we
> > could have a reference to a credential in Keystone V3?
> 
> I think the idea of using Keystone for keypair management in Nova is a good
> one. There is already precedent in Nova for doing this kind of thing ... it's
> already been done for images, volumes, and network.
> 
> One problem with the Keystone v3 credentials API, though, is that it does not
> have support for unique names of keypairs per project, as that is how the
> Nova API /keypairs resource endpoint works.
> 
> > What you guys think about the idea? What are the cloud
> > operators/sysadmins view on that?
> 
> As long as the functionality was enabled using the standard driver-based
> setup (as was done for glance, nova, cinder, and neutron integration), I don't
> see any issues for deployers. Of course, you'd need a migration script, but
> that's not a huge deal.
> 
> Best,
> -jay
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-25 Thread Ruslan Kamaldinov
Murano folks,

I guess we should stop our debates with the Heat team about MuranoPL. What
we should do instead is to carefully read this thread a couple of times and
collect and process all the feedback we've got. Steve and Zane did a very good
job helping us to find a way to align with Heat and the Orchestration program. Let
me outline the most important (imho) quotes:

# START quotes

Steven Dake:

> I see no issue with HOT remaining simple and tidy focused entirely on
> orchestration (taking a desired state and converting that into reality) with
> some other imperative language layered on top to handle workflow and ALM.  I
> believe this separation of concerns is best for OpenStack and should be the
> preferred development path.


Zane Bitter:

> I do think we should implement the hooks I mentioned at the start of this
> thread to allow tighter integration between Heat and a workflow engine
> (i.e. Mistral).
>
> So building a system on top of Heat is good. Building it on top of Mistral as
> well would also be good, and that was part of the feedback from the TC.
>
> To me, building on top means building on top of the languages (which users
> will have to invest a lot of work in learning) as well, rather than having a
> completely different language and only using the underlying implementation(s).
>
> To me that implies that Murano should be a relatively thin wrapper that ties
> together HOT and Mistral's DSL.


Steve Dake:
---
> I don't think HOT can do these things and I don't think we want HOT to do
> these things.  I am ok with that, since I don't see the pushback on having
> two languages for two different things in OpenStack.  I got from gokrove on
> iRC today that the rationale for the pushback was the TC wanted Murano folks
> to explore how to integrate better with Heat and possibly the orchestration
> program.  I don't see HOT as a place where there is an opportunity for scope
> expansion.  I see instead Murano creating HOT blobs and feeding them to Heat.


Zane Bitter:

> Because there have to be some base types that you can use as building blocks.
> In the case of Heat, those base types are the set of things that you can
> create by talking to OpenStack APIs authenticated with the user's token.
> In the case of Mistral, I would expect it to be the set of actions that you
> can take by talking to OpenStack APIs authenticated with the user's token.
> And in the case of Murano, I would expect it to be the union of those two.
>
>
> Everything is a combination of existing resources, because the set of existing
> resources is the set of things which the operator provides as-a-Service. The
> set of things that the operator provides as a service plus the set of things
> that you can implement yourself on your own server (virtual or not) covers
> the entire universe of things. What you appear to be suggesting is that
> OpenStack must provide *Everything*-as-a-Service by allowing users to write
> their own services and have the operator execute them as-a-Service. This
> would be a breathtakingly ambitious undertaking, and I don't mean that in
> a good way.
>
> When the TC said "Murano is slightly too far up the stack at this point to
> meet the "measured progression of openstack as a whole" requirement", IMO
> one of the major things they meant was that you're inventing your own
> workflow thing, leading to duplication of effort between this and Workflow
> as a Service. (And Mistral folks are in turn doing the same thing by not
> using the same workflow library, taskflow, as the rest of OpenStack.)
> Candidly, I'm surprised that this is not the #1 thing on your priority list
> because IMO it's the #1 thing that will delay getting the project incubated.

# END quotes

Also I should quote what Georgy Okrokvertkhov said:
> Having this aligned I see a Murano package as an archive with all necessary
> definitions and resources and Murano service will just properly pass them to
> related services like Heat and Mistral. I think then Murano DSL will be much
> more simple and probably will be closer to declarative format with some
> specific data operations.


Summary:

I would like to propose a further path for Murano's evolution, where it evolves as
an OpenStack project, aligned with others and developed in agreement between
all the interested sides. Here is the plan:

* Step forward and implement hooks (for workflow customization) in Heat. We
  will register a blueprint, discuss it with the Heat team and implement it
* Use our cross-project session on ATL summit to set clear goals and
  expectations
* Build a pilot in Murano which:
   a. Uses new HOT Software components to do VM side software deployments
   b. Uses Mistral DSL to describe workflow. It'll require more focused
  discussion with Mistral team
   c. Bundles all that into an application package using new Murano DSL
   d. Allows users to combine applications in a single environment
* Continuously align with the He

Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-25 Thread Jay Pipes
On Tue, 2014-03-25 at 12:23 +, Lucas Alvares Gomes wrote:
> Hi,
> 
> Right now Ironic is responsible for storing the credentials for
> the IPMI and SSH drivers (and potentially other drivers in the
> future), I wonder if we should delegate this task to Keystone. The
> Keystone V3 API now has a /credentials endpoint which would allow us
> to specify arbitrary types (not only ec2 anymore) and use it as a
> credential store[1].
> 
> That would avoid further fragmentation of credentials being stored in
> different places in OpenStack, and make the management of the
> credentials easier (Think about a situation where many nodes share the
> same IPMI username/password and we need to update it, if this is
> stored in Keystone it only needs to be updated there once because nodes
> will only hold a reference to it)
> 
> It was also pointed out to me that setting a hard dependency on Keystone
> V3 might significantly raise the bar for integration with existing
> clouds*. So perhaps we should make it optional? In the same way we can
> specify a username/password or key_filename for the ssh driver we
> could have a reference to a credential in Keystone V3?

I think the idea of using Keystone for keypair management in Nova is a
good one. There is already precedent in Nova for doing this kind of
thing ... it's already been done for images, volumes, and network.

One problem with the Keystone v3 credentials API, though, is that it
does not have support for unique names of keypairs per project, as that
is how the Nova API /keypairs resource endpoint works.
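As a concrete illustration of the proposal above, storing a shared IPMI
credential through the v3 /credentials endpoint and referencing it from a node
might look roughly like this (a sketch with python-keystoneclient; the 'ipmi'
type, the blob layout and the client setup are assumptions, not an agreed
design):

import json

from keystoneclient.v3 import client

keystone = client.Client(token="ADMIN_TOKEN",                 # placeholder auth
                         endpoint="http://keystone:35357/v3")

# Store the shared IPMI username/password once as an arbitrary-typed blob...
cred = keystone.credentials.create(
    user="ironic-service-user-id",                            # placeholder
    type="ipmi",
    blob=json.dumps({"username": "admin", "password": "secret"}))

# ...and let each node's driver_info carry only a reference to it.
driver_info = {"ipmi_address": "10.0.0.42",
               "ipmi_credential_id": cred.id}
print(driver_info)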

> What you guys think about the idea? What are the cloud
> operators/sysadmins view on that?

As long as the functionality was enabled using the standard driver-based
setup (as was done for glance, nova, cinder, and neutron integration), I
don't see any issues for deployers. Of course, you'd need a migration
script, but that's not a huge deal.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][all] persisting dump tables after migration

2014-03-25 Thread Gordon Chung
in ceilometer we have a bug regarding residual dump tables left after 
migration: https://bugs.launchpad.net/ceilometer/+bug/1259724

basically, in a few prior migrations, when adding missing constraints, a
dump table was created to back up values which didn't fit into the new
constraints. i raised the initial bug because i believe there is very
little value to these tables, as i would expect any administrator capturing
data of some importance to back up their data before any migration to
begin with.  i noticed that Nova also cleans up its dump tables, but i
wanted to raise this on the mailing list so that everyone is aware of the
issue before i add a patch which blows away these dump tables. :)

i'd be interested if anyone actually finds value in having these dump
tables persist, just so we can see if your use case can be handled without
the tables.

for reference, the dump tables are created in:
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/sqlalchemy/migrate_repo/versions/012_add_missing_foreign_keys.py
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/sqlalchemy/migrate_repo/versions/027_remove_alarm_fk_constraints.py
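for reference, the cleanup patch would basically just be one more migration
that drops whatever dump tables exist. a rough sketch (the table names below
are placeholders for whatever 012 and 027 actually generated):

from sqlalchemy import MetaData, Table
from sqlalchemy.exc import NoSuchTableError

# placeholder names -- the real ones come from migrations 012 and 027 above
DUMP_TABLES = ["dump_meter", "dump_alarm"]


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    for name in DUMP_TABLES:
        try:
            Table(name, meta, autoload=True).drop()
        except NoSuchTableError:
            pass  # nothing to clean up


def downgrade(migrate_engine):
    # nothing to restore: the dump tables only held rows that violated the
    # constraints added by the earlier migrations
    pass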

cheers,
gordon chung
openstack, ibm software standards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder FFE] Request for HDS FFE

2014-03-25 Thread Russell Bryant
On 03/25/2014 10:42 AM, Steven Sonnenberg wrote:
> I just want to point out, there were no changes required to pass the tests. 
> We were running those tests in Brazil and tunneling NFS and iSCSI across the 
> Internet which explain timeout issues. Those are the same tests that passed a 
> month earlier before we went into the cycle of review/fix/format etc.

I think the key point is the current timing.  We're aiming to do RC1 for
projects this week if possible.  FFEs were really only allowed weeks ago.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][TaskFlow] Long running actions

2014-03-25 Thread Adam Young

On 03/21/2014 12:33 AM, W Chan wrote:
Can the long running task be handled by putting the target task in the 
workflow in a persisted state until either an event triggers it or 
timeout occurs?  An event (human approval or trigger from an external 
system) sent to the transport will rejuvenate the task.  The timeout 
is configurable by the end user up to a certain time limit set by the 
mistral admin.


Based on the TaskFlow examples, it seems like the engine instance
managing the workflow will be in memory until the flow is completed.
Unless there are other options to schedule tasks in TaskFlow, if we
have too many of these workflows with long running tasks, it seems like
it'll become a memory issue for mistral...


Look into the "Trusts" capability of Keystone for Authorization support 
on long running tasks.
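Roughly, a trust lets the workflow service re-authenticate on the user's
behalf long after the original token has expired. A sketch with
python-keystoneclient v3 (the trustee id, role name and endpoints are
placeholders):

from keystoneclient.v3 import client

# The end user authenticates normally...
keystone = client.Client(username="workflow-user", password="secret",
                         project_name="demo",
                         auth_url="http://keystone:5000/v3")  # placeholders

# ...and delegates a role on this project to the service user running the
# long-lived workflow. The service can later exchange the trust for a fresh
# token and resume the task with the user's authorization.
trust = keystone.trusts.create(
    trustor_user=keystone.auth_ref.user_id,
    trustee_user="mistral-service-user-id",   # placeholder
    project=keystone.auth_ref.project_id,
    role_names=["Member"],
    impersonation=True)
print(trust.id)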





On Thu, Mar 20, 2014 at 3:07 PM, Dmitri Zimine > wrote:




For the 'asynchronous manner' discussion see
http://tinyurl.com/n3v9lt8; I'm still not sure why u would want
to make is_sync/is_async a primitive concept in a workflow
system, shouldn't this be only up to the entity running the
workflow to decide? Why is a task allowed to be sync/async, that
has major side-effects for state-persistence, resumption (and to
me is a incorrect abstraction to provide) and general workflow
execution control, I'd be very careful with this (which is why I
am hesitant to add it without much much more discussion).


Let's remove the confusion caused by "async". All tasks [may] run
async from the engine standpoint, agreed.

"Long running tasks" - that's it.

Examples: wait_5_days, run_hadoop_job, take_human_input.
The Task doesn't do the job: it delegates to an external system.
The flow execution needs to wait (5 days passed, hadoop job
finished with data x, user inputs y), and then continue with the
received results.

The requirement is to survive a restart of any WF component
without losing the state of the long running operation.

Does TaskFlow already have a way to do it? Or ongoing ideas,
considerations? If yes let's review. Else let's brainstorm together.

I agree,

that has major side-effects for state-persistence, resumption
(and to me is a incorrect abstraction to provide) and general
workflow execution control, I'd be very careful with this

But these requirements come from customers' use cases:
wait_5_days - lifecycle management workflow; long running external
system - Murano requirements; user input - workflows for operations
automation with control gate checks, provisions which require
'approval' steps, etc.

DZ>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS 2.1.0 is available but not the Neutron ARP responder

2014-03-25 Thread Édouard Thuleau
Hi all,

As promised, here is the blog post [1] about running devstack in containers.

[1]
http://dev.cloudwatt.com/en/blog/running-devstack-into-linux-containers.html

Regards,
Edouard.
On 21 March 2014 14:12, "Kyle Mestery"  wrote:

> Getting this type of functional testing into the gate would be pretty
> phenomenal.
> Thanks for your continued efforts here Mathieu! If there is anything I can
> do to
> help here, let me know. One other concern here is that the infra team may
> have
> issues running a version of OVS which isn't packaged into Ubuntu/CentOS.
> Keep
> that in mind as well.
>
> Edouard, I look forward to your blog, please share it here once you've
> written it!
>
> Thanks,
> Kyle
>
>
>
> On Fri, Mar 21, 2014 at 6:15 AM, Édouard Thuleau wrote:
>
>> Thanks Mathieu for your support and work on CI to enable multi-node.
>>
>> I wrote a blog post about how to run a devstack development environment
>> with LXC.
>> I hope it will be publish next week.
>>
>> Just to add a pointer: OVS has supported network namespaces since 2 years
>> ago now [1].
>>
>> [1]
>> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=2a4999f3f33467f4fa22ed6e5b06350615fb2dac
>>
>> Regards,
>> Édouard.
>>
>>
>> On Fri, Mar 21, 2014 at 11:31 AM, Mathieu Rohon 
>> wrote:
>>
>>> Hi edouard,
>>>
>>> thanks for the information. I would love to see your patch getting
>>> merged to have l2-population MD fully functional with an OVS based
>>> deployment. Moreover, this patch has a minimal impact on neutron,
>>> since the code is used only if l2-population MD is used in the ML2
>>> plugin.
>>>
>>> markmcclain was concerned that no functional testing is done, but the
>>> L2-population MD needs a multinode deployment to be tested.
>>> based on a single VM won't create overlay tunnels, which is a
>>> mandatory technology to have l2-population activated.
>>> The OpenStack CI is not able, for the moment, to run jobs based on a
>>> multi-node deployment. We proposed an evolution of devstack to have a
>>> multinode deployment based on a single VM which launches compute nodes
>>> in LXC containers [1], but this evolution has been refused by
>>> OpenStack CI since there are other ways to run a multinode setup with
>>> devstack, and LXC containers are not compatible with iSCSI and probably
>>> OVS [2][3].
>>>
>>> One way to have functional tests for this feature would be to deploy a
>>> 3rd party testing environment, but it would be a pity to have to
>>> maintain a 3rd party to test some functionality which is not based
>>> on 3rd party equipment. So we are currently learning about the
>>> OpenStack CI tools to propose some evolutions to have a multinode setup
>>> inside the gate [4]. There are a lot of ways to implement it
>>> (node-pools evolution, usage of TripleO, of Heat [5]), and we don't
>>> know which one would be the easiest, and so which one we have to work on
>>> to have the multinode feature available ASAP.
>>>
>>> This feature looks very important for Neutron, at least to test
>>> overlay tunneling. I think it's very important for nova too, to test
>>> live-migration.
>>>
>>>
>>> [1]https://blueprints.launchpad.net/devstack/+spec/lxc-computes
>>> [2]https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
>>> [3]
>>> http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-18-19.01.log.html
>>> [4]
>>> https://www.mail-archive.com/openstack-infra@lists.openstack.org/msg00968.html
>>> [5]
>>> http://lists.openstack.org/pipermail/openstack-infra/2013-July/000128.html
>>>
>>> On Fri, Mar 21, 2014 at 10:08 AM, Édouard Thuleau 
>>> wrote:
>>> > Hi,
>>> >
>>> > Just to inform you that the new OVS release 2.1.0 was done yesterday
>>> [1].
>>> > This release contains new features and significant performance
>>> improvements
>>> > [2].
>>> >
>>> > And in that new features, one [3] was use to add local ARP responder
>>> with
>>> > OVS agent and the plugin ML2 with the MD l2-pop [4]. Perhaps, it's
>>> time to
>>> > reconsider that review?
>>> >
>>> > [1] https://www.mail-archive.com/discuss@openvswitch.org/msg09251.html
>>> > [2] http://openvswitch.org/releases/NEWS-2.1.0
>>> > [3]
>>> >
>>> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=f6c8a6b163af343c66aea54953553d84863835f7
>>> > [4] https://review.openstack.org/#/c/49227/
>>> >
>>> > Regards,
>>> > Édouard.
>>> >
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> _

Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-03-25 Thread Asselin, Ramy
Hi Shlomi,

Another solution to consider is to create a subclass per transport (iSCSI,
iSER) which references the same shared common code (see the sketch below).
This is the solution used for the 3PAR iSCSI & FC transports. See these for 
reference:
cinder/volume/drivers/san/hp/hp_3par_common.py
cinder/volume/drivers/san/hp/hp_3par_fc.py
cinder/volume/drivers/san/hp/hp_3par_iscsi.py
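In other words, something along these lines (hypothetical driver names, just
to show the shared-common-code layout):

from cinder.volume import driver


class MyVendorCommon(object):
    """Transport-agnostic logic shared by all of the vendor's drivers."""

    def __init__(self, configuration):
        self.configuration = configuration

    def create_volume(self, volume):
        # vendor-specific array calls go here
        pass


class MyVendorISCSIDriver(driver.ISCSIDriver):
    """Thin wrapper exposing the common code over iSCSI/TCP."""

    def __init__(self, *args, **kwargs):
        super(MyVendorISCSIDriver, self).__init__(*args, **kwargs)
        self.common = MyVendorCommon(self.configuration)

    def create_volume(self, volume):
        return self.common.create_volume(volume)


class MyVendorISERDriver(driver.ISERDriver):
    """The same common code, advertised over iSER/RDMA."""

    def __init__(self, *args, **kwargs):
        super(MyVendorISERDriver, self).__init__(*args, **kwargs)
        self.common = MyVendorCommon(self.configuration)

    def create_volume(self, volume):
        return self.common.create_volume(volume)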

Hope this helps.

Ramy

From: Shlomi Sasson [mailto:shlo...@mellanox.com]
Sent: Tuesday, March 25, 2014 8:07 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other 
iSCSI transports besides TCP

Hi,

I want to share with the community the following challenge:
Currently, vendors who have their own iSCSI driver and want to add an RDMA
transport (iSER) cannot leverage their existing plug-in, which inherits from the
iSCSI driver, and must either modify their driver or create an additional
plug-in driver which inherits from the iSER driver and copies the exact same code.

Instead I believe a simpler approach is to add a new attribute to ISCSIDriver 
to support other iSCSI transports besides TCP, which will allow minimal changes 
to support iSER.
The existing ISERDriver code will be removed; this will eliminate significant
code and class duplication, and will work with all the iSCSI vendors who
support both TCP and RDMA without the need to modify their plug-in drivers.

To achieve that, both cinder & nova require slight changes:
For cinder, I wish to add a parameter called "transport" (defaulting to iscsi) to
distinguish between the transports, and use the existing "iscsi_ip_address"
parameter for any transport type connection.
For nova, I wish to add a parameter called "default_rdma" (defaulting to false) to
enable the initiator side.
The outcome will avoid code duplication and the need to add more classes.

I am not sure what the right approach to handle this will be. I already have
the code - should I open a bug or a blueprint to track this issue?

Best Regards,
Shlomi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting

2014-03-25 Thread Peter Pouliot
Hi All,

We have numerous people travelling today, and therefore we need to cancel the 
meeting for today.   We will resume next week.

p

Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Nodeless Vendor Passthru API

2014-03-25 Thread Lucas Alvares Gomes
Hi Russell,

Ironic allows drivers to expose a "vendor passthru" API on a Node. This
> basically serves two purposes:
>
> 1. Allows drivers to expose functionality that hasn't yet been
> standardized in the Ironic API. For example, the Seamicro driver exposes
> "attach_volume", "set_boot_device" and "set_node_vlan_id" passthru methods.
> 2. Vendor passthru is also used by the PXE deploy driver as an internal
> RPC callback mechanism. The deploy ramdisk makes calls to the passthru API
> to signal for a deployment to continue once a server has booted.
>
> For the purposes of this discussion I want to focus on case #2. Case #1 is
> certainly worth a separate discussion - we started this in
> #openstack-ironic on Friday.
>
> In the new agent we are working on, we want to be able to look up what
> node the agent is running on, and eventually to be able to register a new
> node automatically. We will perform an inventory of the server and submit
> that to Ironic, where it can be used to map the agent to an existing Node
> or to create a new one. Once the agent knows what node it is on, it will
> check in with a passthru API much like that used by the PXE driver - in
> some configurations this might trigger an immediate "continue" of an
> ongoing deploy, in others it might simply register the agent as available
> for new deploys in the future.
>

Maybe another way to look up which node the agent is running on would be by
the MAC address of that node. With the MAC address in hand, you could GET
/ports/detail and find which port has that MAC associated with it; once you
find the port, its node_uuid field holds the UUID of the node that the port
belongs to (all ports have a node_uuid, it's mandatory). So, right now you
would need to list all the ports and find that MAC address there, but I have
a review up that might help with this by allowing you to get a port using its
address as input:
https://review.openstack.org/#/c/82773/ (GET /ports/detail?address=).
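
For illustration only, a minimal sketch of that lookup from the agent side,
assuming the proposed address filter lands (the endpoint URL and token handling
below are placeholders, not real deployment values):

    # Sketch: resolve an Ironic node UUID from a local MAC address via the
    # proposed GET /ports/detail?address=<mac> filter.
    import requests

    IRONIC_URL = "http://ironic.example.com:6385/v1"  # hypothetical endpoint
    TOKEN = "..."                                      # a valid Keystone token

    def node_uuid_for_mac(mac):
        resp = requests.get("%s/ports/detail" % IRONIC_URL,
                            params={"address": mac},
                            headers={"X-Auth-Token": TOKEN})
        resp.raise_for_status()
        ports = resp.json().get("ports", [])
        # Every port carries a mandatory node_uuid pointing at its node.
        return ports[0]["node_uuid"] if ports else None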

What do you think?


>
> The point here is that we need a way for the agent driver to expose a
> top-level "lookup" API, which doesn't require a Node UUID in the URL.
>
> I've got a review (https://review.openstack.org/#/c/81919/) up which
> explores one possible implementation of this. It basically routes POSTs to
> /drivers//vendor_passthru/ to a new method on the
> vendor interface.
>
> Importantly, I don't believe that this is a useful way for vendors to
> implement new consumer-facing functionality. If we decide to take this
> approach, we should reject drivers try to do so. It is intended *only* for
> internal communications with deploy agents.
>
> Another possibility is that we could create a new API service intended
> explicitly to serve use case #2 described above, which doesn't include most
> of the existing public paths. In our environment I expect us to allow
> agents whitelisted access to only two specific paths (lookup and checkin),
> but this might be a better way to achieve that.
>
> Thoughts?
>
> Thanks,
> Russell
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2][Ml2Plugin] Setting _original_network in NetworkContext:

2014-03-25 Thread Nader Lahouti
Hi All,

In the current Ml2Plugin code, 'create_network' looks roughly like this:

    def create_network(self, context, network):
        net_data = network['network']
        ...
        session = context.session
        with session.begin(subtransactions=True):
            self._ensure_default_security_group(context, tenant_id)
            result = super(Ml2Plugin, self).create_network(context, network)
            ...
            mech_context = driver_context.NetworkContext(self, context,
                                                          result)
            self.mechanism_manager.create_network_precommit(mech_context)
            ...



the original_network parameter is not set (the default is None) when
instantiating NetworkContext, and as a result the mech_context only has the
value of the network object returned from super(Ml2Plugin,
self).create_network().

This causes an issue when a mechanism driver needs to use the original network
parameters (as given to create_network), especially when extensions are used
for the network resources.

(The 'result' only has the network attributes without the extensions, and it is
what is used to set '_network' in the NetworkContext object.)

Even using extension function registration via
db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(...) won't
help, as the network object that is passed to the registered function does
not include the extension parameters.

Is there any reason that original_network is not set when initializing
the NetworkContext? Would it cause any issue to set it to 'net_data', so
that any mechanism driver can use the original network parameters as they are?


Appreciate your comments.


Thanks,

Nader.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-03-25 Thread Shlomi Sasson
Hi,

I want to share with the community the following challenge:
Currently, vendors who have their own iSCSI driver and want to add RDMA transport
(iSER) cannot leverage their existing plug-in, which inherits from ISCSIDriver,
and must either modify their driver or create an additional plug-in driver which
inherits from ISERDriver and copies the exact same code.

Instead, I believe a simpler approach is to add a new attribute to ISCSIDriver
to support other iSCSI transports besides TCP, which would allow minimal changes
to support iSER. The existing ISERDriver code would then be removed; this would
eliminate significant code and class duplication, and it would work for all the
iSCSI vendors who support both TCP and RDMA without the need to modify their
plug-in drivers.

To achieve that, both cinder & nova require slight changes:
For cinder, I wish to add a parameter called "transport" (defaulting to iscsi) to
distinguish between the transports, and to use the existing "iscsi_ip_address"
parameter for any transport type connection.
For nova, I wish to add a parameter called "default_rdma" (defaulting to false) to
enable the initiator side.
The outcome will avoid code duplication and the need to add more classes.

I am not sure what the right approach to handle this is; I already have
the code. Should I open a bug or a blueprint to track this issue?
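
To make the proposal a bit more concrete, here is a hedged sketch of how the
proposed "transport" option might look inside the driver. The option name and
default come from the paragraph above; everything else is illustrative and not
merged anywhere:

    # Sketch only: honour a proposed "transport" option instead of requiring a
    # separate ISERDriver class. Option name/default are the ones proposed above.
    from oslo.config import cfg

    transport_opt = cfg.StrOpt('transport', default='iscsi',
                               help='iSCSI transport: "iscsi" (TCP) or "iser" (RDMA)')
    CONF = cfg.CONF
    CONF.register_opts([transport_opt])

    def _storage_protocol():
        # The same iscsi_ip_address option would be reused for either transport;
        # only the reported protocol / initiator side changes.
        return 'iSER' if CONF.transport == 'iser' else 'iSCSI'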

Best Regards,
Shlomi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][PCI] one use case make the flavor/extra-info based solution to be right choice

2014-03-25 Thread Jay Pipes
On Tue, 2014-03-25 at 14:21 +0800, yongli he wrote:
> On 2014-03-21 03:18, Jay Pipes wrote:
> > On Thu, 2014-03-20 at 13:50 +, Robert Li (baoli) wrote:
> >> Hi Yongli,
> >>
> >> I'm very glad that you bring this up and relive our discussion on PCI
> >> passthrough and its application on networking. The use case you brought up
> >> is:
> >>
> >> user wants a FASTER NIC from INTEL to join a virtual
> >> networking.
> >>
> >> By FASTER, I guess that you mean that the user is allowed to select a
> >> particular vNIC card. Therefore, the above statement can be translated
> >> into the following requests for a PCI device:
> >>  . Intel vNIC
> >>  . 1G or 10G or ?
> >>  . network to join
> >>
> >> First of all, I'm not sure in a cloud environment, a user would care about
> >> the vendor or card type.
> > Correct. Nor would/should a user of the cloud know what vendor or card
> > type is in use on a particular compute node. At most, all a user of the
> > cloud would be able to select from is an instance type (flavor) that
> > listed some capability like "high_io_networking" or something like that,
> > and the mapping of what "high_io_networking" meant on the back end of
> > Nova would need to be done by the operator (i.e. if the tag
> > "high_io_networking" is on a flavor a user has asked to launch a server
> > with, then that tag should be translated into a set of capabilities that
> > is passed to the scheduler and used to determine where the instance can
> > be scheduled by looking at which compute nodes support that set of
> > capabilities.
> >
> > This is what I've been babbling about with regards to "leaking
> > implementation through the API". What happens if, say, the operator
> > decides to use IBM cards (instead of or in addition to Intel ones)? If
> > you couple the implementation with the API, like the example above shows
> > ("user wants a FASTER NIC from INTEL"), then you have to add more
> > complexity to the front-end API that a user deals with, instead of just
> > adding a capabilities mapping for new compute nodes that says
> > "high_io_networking" tag can match to these new compute nodes with IBM
> > cards.
> Jay,
> 
> Thank you, and sorry for the late reply.
> 
> In this use case the user indeed might not care about the vendor id/product id.
> But for a specific image, the product's model (which is related to the
> vendor id/product id) might be something the user cares about, because the
> image might not support a new device; the vendor_id and product_id could then
> be used to eliminate unsupported devices.
> 
> Anyway, even without the product/vendor id, multiple extra tags are still
> needed. Consider also the case of accelerator cards for encryption,
> decryption and hashing: there are many supported features, and different
> PCI cards will most likely support different feature sets, e.g. md5, DES,
> 3DES, AES, RSA, SHA-x, IDEA, RC4/5/6. The way to select such a device is by
> its feature set rather than by one or two attributes, so extra information
> about a PCI card is needed, in a flexible way.

Hi Yongli,

I'm all for enabling users to take advantage of technology. I just will
push to make sure that the public user-facing API hides things like
vendor or product information as much as possible.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

2014-03-25 Thread Geoff Arnold
There are (at least) two ways of expressing differentiation:
- through an API extension, visible to the tenant
- though an internal policy mechanism, with specific policies inferred from 
tenant or network characteristics

Both have their place. Please don't fall into the trap of thinking that 
differentiation requires API extension. 

Sent from my iPhone - please excuse any typos or "creative" spelling 
corrections! 

> On Mar 25, 2014, at 1:36 PM, Eugene Nikanorov  wrote:
> 
> Hi John,
> 
> 
>> On Tue, Mar 25, 2014 at 7:26 AM, John Dewey  wrote:
>> I have a similar concern.  The underlying driver may support different 
>> functionality, but the differentiators need exposed through the top level 
>> API.
> Not really; the whole point of the service is to abstract the user from the
> specifics of the backend implementation. So for any feature there is a common
> API, not specific to any implementation.
> 
> There could be some exceptions to this guideline in the area of the admin
> API, but that's yet to be discussed.
>> 
>> I see the SSL work is well underway, and I am in the process of defining L7 
>> scripting requirements.  However, I will definitely need L7 scripting prior 
>> to the API being defined.
>> Is this where vendor extensions come into play?  I kinda like the route the 
>> Ironic guys are taking with a "vendor passthru" API. 
> I may say that core team has rejected 'vendor extensions' idea due to 
> potential non-uniform user API experience. That becomes even worse with 
> flavors introduced, because users don't know what vendor is backing up the 
> service they have created.
> 
> Thanks,
> Eugene.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Moving tripleo-ci towards the gate

2014-03-25 Thread Derek Higgins
On 24/03/14 22:58, Joe Gordon wrote:
> 
> 
> 
> On Fri, Mar 21, 2014 at 6:29 AM, Derek Higgins wrote:
> 
> Hi All,
>I'm trying to get a handle on what needs to happen before getting
> tripleo-ci(toci) into the gate, I realize this may take some time but
> I'm trying to map out how to get to the end goal of putting multi node
> tripleo based deployments in the gate which should cover a lot of uses
> cases that devstact-gate doesn't. Here are some of the stages I think we
> need to achieve before being in the gate along with some questions where
> people may be able to fill in the blanks.
> 
> Stage 1: check - tripleo projects
>This is what we currently have running, 5 separate jobs running non
> voting checks against tripleo projects
> 
> Stage 2 (a). reliability
>Obviously keeping the reliability of both the results and the ci
> system is a must and we should always aim towards 0% false test results,
> but is there an acceptable number of false negatives for example that
> would be acceptable to infa, what are the numbers on the gate at the
> moment? should we aim to match those at the very least (Maybe we already
> have). And for how long do we need to maintain those levels before
> considering the system proven?
> 
> 
> I cannot come up with a specific number for this, perhaps someone else
> can. I see the results and CI system reliability as two very different
> things, for the CI system it should ideally never go down for very long
> (although this is less critical while tripleo is non-voting check only,
> like all other 3rd party systems).  As for false negatives in the
> results, they should be on par with devstack-gate jobs especially once
> you start running tempest.

Yup, that would seem like a reasonable/fair target.

>  
> 
> 
> Stage 2 (b). speedup
>How long can the longest jobs take? We have plans in place to speed
> up our current jobs but what should the target be?
> 
> 
> Gate jobs currently take up to a little over an hour [0][1]
> 
> [0]
> https://jenkins01.openstack.org/job/check-tempest-dsvm-postgres-full/buildTimeTrend
> [1] 
> https://jenkins02.openstack.org/job/check-tempest-dsvm-postgres-full/buildTimeTrend

Our overcloud job is currently just under 90 minutes, I'm confident we
can get below an hour (of course then we have to run tempest and
whatever else we add which will bring us back up)

> 
>  
> 
> 3. More Capacity
> 
> 
> If you wanted to run tripleo-check everywhere a
> 'check-tempest-dsvm-full' job is run, that is over 600 jobs in a 24
> hour period.

Looks like I was a little short in my guesstimate, and presumably it
won't be 600 this time next year.

> 
> [3] graphite query:
>    color(alias(hitcount(sum(stats.zuul.pipeline.check.job.check-tempest-dsvm-full.{SUCCESS,FAILURE}),'24hours'),
>          'check-tempest-dsvm-full hits over 24 hours'),'orange')
> 
>    I'm going to talk about RAM here as it's probably the resource where
> we will hit our infrastructure limits first.
>    Each time a suite of toci jobs is kicked off we currently kick off 5
> jobs (which will double once Fedora is added[1]).
>    In total these jobs spawn 15 VMs consuming 80G of RAM (it's actually
> 120G to work around a bug we should soon have fixed[2]); we also
> have plans that will reduce this 80G further, but let's stick with it for
> the moment.
>    Some of these jobs complete after about 30 minutes, but let's say our
> target is an overall average of 45 minutes.
> 
>    With Fedora that means each run will tie up 160G for 45 minutes; in other
> words, 160G can provide us with 32 runs (each including 10 jobs) per day.
> 
>    So to kick off 500 (I made this number up) runs per day, we would
> need
>    (500 / 32.0) * 160G = 2500G of RAM
> 
>    We then need to double this number to allow for redundancy, so that's
> 5000G of RAM.
> 
>    We probably have about 3/4 of this available to us at the moment, but
> it's not evenly balanced between the 2 clouds so we're not covered from a
> redundancy point of view.
> 
>    So we need more hardware (either by expanding the clouds we have or
> adding new clouds). I'd like for us to start a separate effort to map out
>

Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-25 Thread Steven Dake

On 03/25/2014 03:32 AM, Thomas Herve wrote:

Hi Thomas,

I think we went to the second loop of the discussion about generic language
concepts. Murano does not use a new language for the sole purpose of having
parameters, constraints and polymorphism. These are generic concepts which
are common for different languages, so keeping arguing about these generic
concepts is just like a holy war like Python vs. C. Keeping these arguments
is just like to say that we don't need Python as functions and parameters
already exists in C which is used under the hood in Python.

Yes Murano DSL have some generic concepts similar to HOT. I think this is a
benefit as user will see the familiar syntax constructions and it will be a
lower threshold for him to start using Murano DSL.

In a simplified view Murano uses DSL for application definition to solve
several particular problems:
a) control UI rendering of Application Catalog
b) control HOT template generation

These aspects are not covered in HOT and probably should not be covered. I
don't like an idea of expressing HOT template generation in HOT as it sounds
like a creation another Lisp like language :-)

I'm not saying that HOT will cover all your needs. I think it will cover a 
really good portion. And I'm saying that for the remaining part, you can use an 
existing language and not create a new one.


I don't think that your statement that most of the people in the community
are against new DSL is a right summary. There are some disagreements how it
should look like and what are the goals. You will be probably surprised but
we are not the first who use DSL for HOT templates generation. Here is an
e-mail thread right about Ruby based DSL used in IBM for the same purpose:
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026606.html

The term "Orchestration" is quite generic. Saying that orchestration should
be Heat job sounds like a well know Henry Ford's phrase "You can have any
colour as long as it's black.".

That worked okay for him :).


I think this is again a lack of understanding of the difference between
Orchestration program and Heat project. There are many aspects of
Orchestration and OpenStack has the Orchestration program for the projects
which are focused on some aspects of orchestration. Heat is one of the
project inside Orchestration program but it does not mean that Heat should
cover everything. That is why we discussed in this thread how workflows
aspects should be aligned and how they should be placed into this
Orchestration program.

Well, today Heat is the one and only program in the Orchestration program. If 
and when you have orchestration needs not covered, we are there to make sure 
Heat is not the best place to handle them. The answer will probably not Heat 
forever, but we need good use cases to delegate those needs to another project.



Thomas,

I see a natural expansion of the Orchestration codebase in spinning our 
autoscaling work, which is tightly integrated and woven into Heat, out into a 
separate repo under the orchestration program.  Note this is not an 
expansion in scope, as autoscaling is already part of the orchestration 
program's scope.


I could also see how workflow could fit into the orchestration program, 
and if it did, it would definitely need to be a different code base than 
Heat proper.  IMO the autoscaling built into Heat makes Heat a bit more 
difficult to understand and maintain.  I don't think we really want to 
complicate that with more stuff like workflows and imperative 
programming.  Having been a champion of the HOT DSL, I suspect 
you are not keen to jam imperative things into it, given how nice and 
tidy it is at present :)


Now RE the multiple DSLs, I have heard some folks mention they don't 
want multiple DSLs for different jobs in the orchestration program.  I 
have provided a cost-benefit analysis of one DSL vs. multiple DSLs in 
several previous threads.  I found the cost of a unified DSL to be 
unacceptable to the mission of Heat.  If folks really feel differently, 
please feel free to refute my points :)


Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

2014-03-25 Thread Susanne Balle
On Tue, Mar 25, 2014 at 9:36 AM, Eugene Nikanorov
wrote:

> Hi John,
>
>
> On Tue, Mar 25, 2014 at 7:26 AM, John Dewey  wrote:
>
>>  I have a similar concern.  The underlying driver may support different
>> functionality, but the differentiators need exposed through the top level
>> API.
>>
> Not really; the whole point of the service is to abstract the user from the
> specifics of the backend implementation. So for any feature there is a common
> API, not specific to any implementation.
>
> There could be some exceptions to this guideline in the area of the admin
> API, but that's yet to be discussed.
>

Admin APIs would make sense.


>
>> I see the SSL work is well underway, and I am in the process of defining
>> L7 scripting requirements.  However, I will definitely need L7 scripting
>> prior to the API being defined.
>> Is this where vendor extensions come into play?  I kinda like the route
>> the Ironic guys are taking with a "vendor passthru" API. 
>>
> I may say that core team has rejected 'vendor extensions' idea due to
> potential non-uniform user API experience. That becomes even worse with
> flavors introduced, because users don't know what vendor is backing up the
> service they have created.
>
> Thanks,
> Eugene.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Devstack. Fail to boot an instance if more than 1 network is defined

2014-03-25 Thread Avishay Balderman
Anyone else is facing this bug? https://bugs.launchpad.net/nova/+bug/1296808

Thanks

Avishay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

2014-03-25 Thread Eugene Nikanorov
Hi John,


On Tue, Mar 25, 2014 at 7:26 AM, John Dewey  wrote:

>  I have a similar concern.  The underlying driver may support different
> functionality, but the differentiators need exposed through the top level
> API.
>
Not really; the whole point of the service is to abstract the user from the
specifics of the backend implementation. So for any feature there is a common
API, not specific to any implementation.

There could be some exceptions to this guideline in the area of the admin
API, but that's yet to be discussed.

>
> I see the SSL work is well underway, and I am in the process of defining
> L7 scripting requirements.  However, I will definitely need L7 scripting
> prior to the API being defined.
> Is this where vendor extensions come into play?  I kinda like the route
> the Ironic guys are taking with a "vendor passthru" API.
>
I may say that core team has rejected 'vendor extensions' idea due to
potential non-uniform user API experience. That becomes even worse with
flavors introduced, because users don't know what vendor is backing up the
service they have created.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

2014-03-25 Thread Susanne Balle
On Tue, Mar 25, 2014 at 9:24 AM, Eugene Nikanorov
wrote:

>
>
>>

>>> That for sure can be implemented. I only would recommend to implement
>>> such kind of management system out of Neutron/LBaaS tree, e.g. to only have
>>> client within Libra driver that will communicate with the management
>>> backend.
>>>
>>
>> [Susanne] Again this would only be a short term solution since as we move
>> forward and want to contribute new features it would result in duplication
>> of efforts because the features might need to be done in Libra and not
>> Neutron LBaaS.
>>
>
> That seems to be a way other vendors are taking right now. Regarding the
> features, could you point to description of those?
>

Our end goal is to be able to move to just using Neutron LBaaS. For example,
SSL termination is not in Libra and we don't want to have to implement it
when it is already in Neutron LBaaS; the same goes for L7 policies.

Having the service be resilient beyond just a pair of HA proxies is a biggy
for us. We cannot expect our customers to manage the LB themselves.

Susanne



>
> Thanks,
> Eugene.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

2014-03-25 Thread Eugene Nikanorov
Hi Brandon,


On Tue, Mar 25, 2014 at 2:17 AM, Brandon Logan
wrote:

>  Creating a separate driver for every new need brings up a concern I have
> had.  If we are to implement a separate driver for every need then the
> permutations are endless and may cause a lot of drivers and technical debt.
>  If someone wants an ha-haproxy driver then great.  What if they want it to
> be scalable and/or HA, is there supposed to be scalable-ha-haproxy,
> scalable-haproxy, and ha-haproxy drivers?  Then what if instead of doing
> spinning up processes on the host machine we want a nova VM or a container
> to house it?  As you can see the permutations will begin to grow
> exponentially.  I'm not sure there is an easy answer for this.  Maybe I'm
> worrying too much about it because hopefully most cloud operators will use
> the same driver that addresses those basic needs, but worst case scenarios
> we have a ton of drivers that do a lot of similar things but are just
> different enough to warrant a separate driver.
>
The driver is what implements communication with a particular
device/appliance and translates the logical service configuration into a
backend-specific configuration. I never said the driver is per feature. But
different drivers may implement different features in their own way; the
general requirement is that user expectations should be properly satisfied.

Thanks,
Eugene.


>  --
> *From:* Susanne Balle [sleipnir...@gmail.com]
> *Sent:* Monday, March 24, 2014 4:59 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and
> "managed services"
>
>   Eugene,
>
>  Thanks for your comments,
>
>  See inline:
>
>  Susanne
>
>
>  On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov <
> enikano...@mirantis.com> wrote:
>
>> Hi Susanne,
>>
>>  a couple of comments inline:
>>
>>
>>
>>>
>>> We would like to discuss adding the concept of "managed services" to the
>>> Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
>>> proxy. The latter could be a second approach for some of the software
>>> load-balancers e.g. HA proxy since I am not sure that it makes sense to
>>> deploy Libra within Devstack on a single VM.
>>>
>>>
>>>
>>> Currently users would have to deal with HA, resiliency, monitoring and
>>> managing their load-balancers themselves.  As a service provider we are
>>> taking a more managed service approach allowing our customers to consider
>>> the LB as a black box and the service manages the resiliency, HA,
>>> monitoring, etc. for them.
>>>
>>
>
>>   As far as I understand these two abstracts, you're talking about
>> making LBaaS API more high-level than it is right now.
>> I think that was not on our roadmap because another project (Heat) is
>> taking care of more abstracted service.
>> The LBaaS goal is to provide vendor-agnostic management of load balancing
>> capabilities at quite a fine-grained level.
>> Any higher level APIs/tools can be built on top of that, but are out of
>> LBaaS scope.
>>
>>
>  [Susanne] Yes. Libra currently has some internal APIs that get triggered
> when an action needs to happen. We would like similar functionality in
> Neutron LBaaS so the user doesn't have to manage the load-balancers but can
> consider them as black-boxes. Would it make sense to maybe consider
> integrating Neutron LBaaS with heat to support some of these use cases?
>
>
>>
>>>
>>> We like where Neutron LBaaS is going with regards to L7 policies and SSL
>>> termination support which Libra is not currently supporting and want to
>>> take advantage of the best in each project.
>>>
>>> We have a draft on how we could make Neutron LBaaS take advantage of
>>> Libra in the back-end.
>>>
>>> The details are available at:
>>> https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft
>>>
>>
>>  I looked at the proposal briefly, it makes sense to me. Also it seems
>> to be the simplest way of integrating LBaaS and Libra - create a Libra
>> driver for LBaaS.
>>
>
>  [Susanne] Yes that would be the short team solution to get us where we
> need to be. But We do not want to continue to enhance Libra. We would like
> move to Neutron LBaaS and not have duplicate efforts.
>
>
>>
>>
>>>  While this would allow us to fill a gap short term we would like to
>>> discuss the longer term strategy since we believe that everybody would
>>> benefit from having such "managed services" artifacts built directly into
>>> Neutron LBaaS.
>>>
>>  I'm not sure about building it directly into LBaaS, although we can
>> discuss it.
>>
>
>  [Susanne] The idea behind the "managed services" aspect/extensions would
> be reusable for other software LB.
>
>
>>   For instance, HA is definitely on roadmap and everybody seems to agree
>> that HA should not require user/tenant to do any specific configuration
>> other than choosing HA capability of LBaaS service. So as far as I see it,
>> requirements for HA

Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

2014-03-25 Thread Eugene Nikanorov
>
> One example where the managed service approach for the HA proxy load
>>> balancer is different from the current Neutron LBaaS roadmap is around HA
>>> and resiliency. The 2 LB HA setup proposed (
>>> https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy) isn't
>>> appropriate for service providers in that users would have to pay for the
>>> extra load-balancer even though it is not being actively used.
>>>
>> One important idea of the HA is that its implementation is
>> vendor-specific, so each vendor or cloud provider can implement it in the
>> way that suits their needs. So I don't see why particular HA solution for
>> haproxy should be considered as a common among other vendors/providers.
>>
>
> [Susanne] Are you saying that we should create a driver that would be a
> peer to the current loadbalancer/ha-proxy driver? So, for example,
> loadbalancer/managed-ha-proxy (please don't get hung up on the name I
> picked) would be a driver we would implement to get our interaction with a
> pool of stand-by, preconfigured load balancers instead of the 2 LB
> HA servers? And it would be part of the Neutron LBaaS branch?
>
No, I mean that the haproxy driver would do it the way HA for haproxy is
typically set up, and the driver for Libra would do it in the way you think is
best for your deployment. The user just asks for the HA capability of the
service. If we need a better distinction between HA methods, we could introduce
it into 'flavors' (see https://wiki.openstack.org/wiki/Neutron/FlavorFramework ).
Having different devices in HA pairs is not planned.


>
>
>>
>
>> That for sure can be implemented. I only would recommend to implement
>> such kind of management system out of Neutron/LBaaS tree, e.g. to only have
>> client within Libra driver that will communicate with the management
>> backend.
>>
>
> [Susanne] Again this would only be a short term solution since as we move
> forward and want to contribute new features it would result in duplication
> of efforts because the features might need to be done in Libra and not
> Neutron LBaaS.
>

That seems to be a way other vendors are taking right now. Regarding the
features, could you point to description of those?

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-25 Thread Eoghan Glynn


> Hi,
> 
> Right now Ironic is responsible for storing the credentials for the
> IPMI and SSH drivers (and potentially other drivers in the future). I wonder
> if we should delegate this task to Keystone. The Keystone V3 API now has a
> /credentials endpoint which would allow us to specify arbitrary types (not
> only ec2 anymore) and use it as a credential store[1].
> 
> That would avoid further fragmentation of credentials being stored in
> different places in OpenStack, and would make the management of the credentials
> easier. (Think about a situation where many nodes share the same IPMI
> username/password and we need to update it; if this is stored in Keystone it
> only needs to be updated there once, because nodes will only hold a reference
> to it.)
> 
> It was also pointed out to me that setting a hard dependency on Keystone V3 might
> significantly raise the bar for integration with existing clouds*. So
> perhaps we should make it optional? In the same way we can specify a
> username/password or key_filename for the ssh driver, we could have a
> reference to a credential in Keystone V3.
> 
> What do you guys think about the idea?

Hi Lucas,

At a high level, this sounds like an excellent idea to me.

IIUC the major blocker to ceilometer taking point on controlling the
IPMI polling cycle has been secure access to these credentials. If these
were available to ceilometer in a controlled way via keystone, then the
IPMI polling cycle could be managed in a very similar way to the ceilo
polling activity on the hypervisor and SNMP daemons.

However, I'm a little fuzzy on the detail of enabling this via keystone
v3, so it would be great to drill down into the detail either on the ML
or at summit. 

For example, would it be in the guise of a trust that delegates limited
privilege to allow the ceilometer user call GET /credentials to retrieve
the IPMI user/pass?

Or would the project_id parameter to POST /credentials suffice to limit
access to IPMI credentials to the ceilometer tenant only? (as opposed to
allowing any other OpenStack service to access these creds)

In that case, would we need to also decouple the ceilometer user from
the generic service tenant?

Cheers,
Eoghan

> What are the cloud operators/sysadmins
> view on that?
> 
> * There's also some ongoing thoughts about using v3 for other things in
> Ironic (e.g signed url's) but that's kinda out of the topic.
> 
> 
> [1]
> https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#create-credential-post-credentials
> Ironic bp (discussion):
> https://blueprints.launchpad.net/ironic/+spec/credentials-keystone-v3
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Dependency freeze coming up (EOD Tuesday March 25)

2014-03-25 Thread Sergey Lukjanov
We'd like to bump python-saharaclient min version to the next major
version (>=0.6.0 -> >= 0.7.0) and remove python-savannaclient.

On Tue, Mar 25, 2014 at 1:47 PM, Thierry Carrez  wrote:
> Sergey Lukjanov wrote:
>> RE Sahara, we'll need one more version bump to remove all backward
>> compat code added for smooth transition. What's the deadline for doing
>> it? Personally, I'd like to do it next week. Is it ok?
>
> If you are only *removing* dependencies I think next week is fine :)
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-25 Thread Lucas Alvares Gomes
Hi,

Right now Ironic is responsible for storing the credentials for the
IPMI and SSH drivers (and potentially other drivers in the future). I
wonder if we should delegate this task to Keystone. The Keystone V3 API now
has a /credentials endpoint which would allow us to specify arbitrary types
(not only ec2 anymore) and use it as a credential store[1].

That would avoid further fragmentation of credentials being stored in
different places in OpenStack, and would make the management of the credentials
easier. (Think about a situation where many nodes share the same IPMI
username/password and we need to update it; if this is stored in Keystone
it only needs to be updated there once, because nodes will only hold a
reference to it.)

It was also pointed out to me that setting a hard dependency on Keystone V3
might significantly raise the bar for integration with existing clouds*.
So perhaps we should make it optional? In the same way we can specify a
username/password or key_filename for the ssh driver, we could have a
reference to a credential in Keystone V3.
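
As a purely illustrative sketch of the idea: the "ipmi" credential type and the
blob layout below are made up, and the endpoint/token values are placeholders;
only the v3 /credentials call itself comes from the API documented in [1]:

    # Store IPMI credentials in Keystone V3 and keep only a reference in Ironic.
    import json
    import requests

    KEYSTONE = "http://keystone.example.com:5000/v3"  # hypothetical endpoint
    TOKEN = "..."                                      # a valid admin token

    body = {"credential": {
        "type": "ipmi",                                # arbitrary type, made up here
        "user_id": "<ironic-service-user-id>",
        "project_id": "<ironic-service-project-id>",
        "blob": json.dumps({"username": "admin", "password": "secret"}),
    }}
    resp = requests.post("%s/credentials" % KEYSTONE,
                         headers={"X-Auth-Token": TOKEN,
                                  "Content-Type": "application/json"},
                         data=json.dumps(body))
    credential_id = resp.json()["credential"]["id"]
    # A node's driver_info could then hold just a reference such as
    # {"ipmi_credential_id": credential_id} instead of the raw secrets.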

What do you guys think about the idea? What are the cloud operators'/sysadmins'
views on that?

* There's also some ongoing thoughts about using v3 for other things in
Ironic (e.g signed url's) but that's kinda out of the topic.

[1]
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#create-credential-post-credentials
Ironic bp (discussion):
https://blueprints.launchpad.net/ironic/+spec/credentials-keystone-v3
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and "managed services"

2014-03-25 Thread Susanne Balle
John, Brandon,

I agree that we cannot have a multitude of drivers doing the same thing, or
close to it, because then we end up in the same situation as we are in today,
with duplicate effort and technical debt.

The goal here would be to be able to build a framework around the drivers
that would allow for resiliency, failover, etc.

If the differentiators are in higher-level APIs then we can have a
single driver (in the best case) for each software LB, e.g. HAProxy, nginx,
etc.

Thoughts?

Susanne


On Mon, Mar 24, 2014 at 11:26 PM, John Dewey  wrote:

>  I have a similar concern.  The underlying driver may support different
> functionality, but the differentiators need exposed through the top level
> API.
>
> I see the SSL work is well underway, and I am in the process of defining
> L7 scripting requirements.  However, I will definitely need L7 scripting
> prior to the API being defined.
> Is this where vendor extensions come into play?  I kinda like the route
> the Ironic guys are taking with a "vendor passthru" API.
>
> John
>
> On Monday, March 24, 2014 at 3:17 PM, Brandon Logan wrote:
>
>  Creating a separate driver for every new need brings up a concern I have
> had.  If we are to implement a separate driver for every need then the
> permutations are endless and may cause a lot of drivers and technical debt.
>  If someone wants an ha-haproxy driver then great.  What if they want it to
> be scalable and/or HA, is there supposed to be scalable-ha-haproxy,
> scalable-haproxy, and ha-haproxy drivers?  Then what if instead of doing
> spinning up processes on the host machine we want a nova VM or a container
> to house it?  As you can see the permutations will begin to grow
> exponentially.  I'm not sure there is an easy answer for this.  Maybe I'm
> worrying too much about it because hopefully most cloud operators will use
> the same driver that addresses those basic needs, but worst case scenarios
> we have a ton of drivers that do a lot of similar things but are just
> different enough to warrant a separate driver.
>  --
> *From:* Susanne Balle [sleipnir...@gmail.com]
> *Sent:* Monday, March 24, 2014 4:59 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS, Libra and
> "managed services"
>
>   Eugene,
>
>  Thanks for your comments,
>
>  See inline:
>
>  Susanne
>
>
>  On Mon, Mar 24, 2014 at 4:01 PM, Eugene Nikanorov <
> enikano...@mirantis.com> wrote:
>
> Hi Susanne,
>
>  a couple of comments inline:
>
>
>
>
> We would like to discuss adding the concept of "managed services" to the
> Neutron LBaaS either directly or via a Neutron LBaaS plug-in to Libra/HA
> proxy. The latter could be a second approach for some of the software
> load-balancers e.g. HA proxy since I am not sure that it makes sense to
> deploy Libra within Devstack on a single VM.
>
>
>
> Currently users would have to deal with HA, resiliency, monitoring and
> managing their load-balancers themselves.  As a service provider we are
> taking a more managed service approach allowing our customers to consider
> the LB as a black box and the service manages the resiliency, HA,
> monitoring, etc. for them.
>
>
>
>   As far as I understand these two abstracts, you're talking about making
> LBaaS API more high-level than it is right now.
> I think that was not on our roadmap because another project (Heat) is
> taking care of more abstracted service.
> The LBaaS goal is to provide vendor-agnostic management of load balancing
> capabilities at quite a fine-grained level.
> Any higher level APIs/tools can be built on top of that, but are out of
> LBaaS scope.
>
>
>  [Susanne] Yes. Libra currently has some internal APIs that get triggered
> when an action needs to happen. We would like similar functionality in
> Neutron LBaaS so the user doesn't have to manage the load-balancers but can
> consider them as black-boxes. Would it make sense to maybe consider
> integrating Neutron LBaaS with heat to support some of these use cases?
>
>
>
>
> We like where Neutron LBaaS is going with regards to L7 policies and SSL
> termination support which Libra is not currently supporting and want to
> take advantage of the best in each project.
>
> We have a draft on how we could make Neutron LBaaS take advantage of Libra
> in the back-end.
>
> The details are available at:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/LBaaS%2Band%2BLibra%2Bintegration%2BDraft
>
>
>  I looked at the proposal briefly, it makes sense to me. Also it seems to
> be the simplest way of integrating LBaaS and Libra - create a Libra driver
> for LBaaS.
>
>
>  [Susanne] Yes that would be the short team solution to get us where we
> need to be. But We do not want to continue to enhance Libra. We would like
> move to Neutron LBaaS and not have duplicate efforts.
>
>
>
>
>  While this would allow us to fill a gap short term we would like to
> discuss the longer term strategy since w

Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-25 Thread Malini Kamalambal



We are talking about different levels of testing,

1. Unit tests - which everybody agrees should be in the individual project
itself
2. System Tests - 'System' referring to (& limited to), all the components
that make up the project. These are also the functional tests for the
project.
3. Integration Tests - This is to verify that the OS components interact
well and don't break other components -Keystone being the most obvious
example. This is where I see getting the maximum mileage out of Tempest.

"Its not easy to detect what the integration points with other projects are, 
any project can use any stable API from any other project. Because of this all 
OpenStack APIs should fit into this category. "

Any project can use any stable API –but that does not make all API tests , 
Integration Tests.
A test becomes Integration test when it has two or more projects interacting in 
the test.

Individual projects should be held accountable to make sure that their API's 
work – no matter who consumes them.
We should be able to treat the project as a complete system, make API calls and 
validate that the response matches the API definitions.
Identifying issues earlier in the pipeline reduces the Total Cost of Quality.

I agree that Integration Testing is hard. It is complicated because it requires 
knowledge of how systems interact with each other – and knowledge comes from a 
lot of time spent on analysis.
It requires people with project expertise to talk to each other & identify 
possible test cases.
openstack-qa is the ideal forum to do this.
Holding projects responsible for their functionality will help the QA team 
focus on complicated integration tests.

"Having a second group writing tests to Nova's public APIs has been really 
helpful in keeping us honest as well."

Sounds like a testimonial for more project level testing :)


I see value in projects taking ownership of the System Tests - because if
the project is not 'functionally ready', it is not ready to integrate with
other components of Openstack.

"What do you mean by not ready?"

'Functionally Ready' - The units that make up a project can work together as a
system, and all APIs have been exercised with positive & negative test cases by
treating the project as a complete system.
There are no known critical bugs. The point here is to identify as many issues
as possible, earlier in the game.

But for this approach to be successful, projects should have diversity in
the team composition - we need more testers who focus on creating these
tests.
This will keep the teams honest in their quality standards.

As long as individual projects cannot guarantee functional test coverage,
we will need more tests in Tempest.
But that will shift focus away from Integration Testing, which can be done
ONLY in Tempest.

Regardless of whatever we end up deciding, it will be good to have these
discussions sooner than later.
This will help at least the new projects to move in the right direction.

-Malini








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-25 Thread Stan Lagun
On Tue, Mar 25, 2014 at 2:27 PM, Thomas Herve wrote:

>
> >> What I can say is that I'm not convinced. The only use-case for a DSL
> would
> >> be if you have to upload user-written code, but what you mentioned is a
> Web
> >> interface, where the user doesn't use the DSL, and the cloud provider
> is the
> >> developer. There is no reason in this case to have a secure environment
> for
> >> the code.
> >
> > I didn't say that. There are at least 2 different roles application
> > developers/publishers and application users. Application developer is not
> > necessary cloud provider. The whole point of AppCatalog is to support
> > scenario when anyone can create and package some application and that
> > package can be uploaded by user alone. Think Apple AppStore or Google
> Play.
> > Some cloud providers may configure ACLs so that user be allowed to
> consume
> > applications they decided while others may permit to upload applications
> to
> > some configurable scope (e.g. apps that would be visible to all cloud
> users,
> > to particular tenant or be private to the user). We also think to have
> some
> > of peer relations so that it would be possible to have application
> upload in
> > one catalog to become automatically available in all connected catalogs.
> >
> > This is similar to how Linux software repos work - AppCatalog is repo,
> Murano
> > package is what DEB/RPMs are to repo and DSL is what DEB/RPMs manifests
> are
> > to packages. Just that is run on cloud and designed to handle complex
> > multi-node apps as well as trivial ones in which case this may be
> narrowed
> > to actual installation of DEB/RPM
>
> I'm glad that you bring packages up. This is a really good example of why
> you don't need a new programming language. Packages uses whatever
> technology they prefer to handle their scripting needs. They then have an
> declarative interface which hides the imperative parts behind.
>
>
The same is true for Murano. MuranoPL is not used to express what should be
deployed. In Murano there is an object model that describes the view of the
world. It serves the same purpose as HOT does in Heat, but it is simpler because
it says just what needs to be deployed, not how that should be accomplished, as
that information is already contained in the application definitions. There is
a REST API to edit/submit the object model, which again has nothing to do with
MuranoPL. The UI dashboard talks to the AppCatalog to see what
applications/classes are available, and the AppCatalog also knows what the
properties of those classes are. This is needed for the UI so that it can ask
the user for appropriate input. This is similar to how Horizon asks the user to
input parameters that are declared in a HOT template. But all the imperative
stuff is hidden inside Murano packages and is not used for anything outside the
Murano engine. MuranoPL is not a replacement for scripting languages. You still
use bash/puppet/PowerShell/whatever you like for the actual deployment. No
MuranoPL code is executed on the VM side. So the analogy with RPM manifests is
valid.




> You trust OpenStack developers with their code, you trust package
> developers with their code, why not trust catalog developers?
>

They do trust catalog developers (hopefully). But catalog developers have
nothing to do with catalog contents. Anyone can create and upload an
application to the App Catalog, the same way anyone can upload an
application to Google Play. The fact that I trust Google doesn't mean that
I trust all applications in Google Play. The fact that I trust catalog
developers doesn't mean that I (as a cloud operator) am going to allow
execution of untrusted code in that catalog, unless that code is sandboxed
by design. Similarly, I can navigate to any web site Google points me
to and let it execute any JavaScript it wants, as long as it is JavaScript and
not a browser plugin or desktop application.



>
> --
> Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-25 Thread Georgy Okrokvertskhov
On Tue, Mar 25, 2014 at 3:32 AM, Thomas Herve wrote:

>
> > Hi Thomas,
> >
> > I think we went to the second loop of the discussion about generic
> language
> > concepts. Murano does not use a new language for the sole purpose of
> having
> > parameters, constraints and polymorphism. These are generic concepts
> which
> > are common for different languages, so keeping arguing about these
> generic
> > concepts is just like a holy war like Python vs. C. Keeping these
> arguments
> > is just like to say that we don't need Python as functions and parameters
> > already exists in C which is used under the hood in Python.
> >
> > Yes Murano DSL have some generic concepts similar to HOT. I think this
> is a
> > benefit as user will see the familiar syntax constructions and it will
> be a
> > lower threshold for him to start using Murano DSL.
> >
> > In a simplified view Murano uses DSL for application definition to solve
> > several particular problems:
> > a) control UI rendering of Application Catalog
> > b) control HOT template generation
> >
> > These aspects are not covered in HOT and probably should not be covered.
> I
> > don't like an idea of expressing HOT template generation in HOT as it
> sounds
> > like a creation another Lisp like language :-)
>
> I'm not saying that HOT will cover all your needs. I think it will cover a
> really good portion. And I'm saying that for the remaining part, you can
> use an existing language and not create a new one.
>

Since a user can't run arbitrary Python code in OpenStack, we used Python
to create a new API for the remaining parts. This API service
accepts a YAML-based description of what should be done. There is no
intention to create a new generic programming language. We followed the usual
OpenStack approach and created a service for specific functions around
Application Catalog features. Due to the dynamic nature of applications we had
to add a bit of dynamism to the service input, for the same reason that Heat
uses templates.



> > I don't think that your statement that most of the people in the
> community
> > are against new DSL is a right summary. There are some disagreements how
> it
> > should look like and what are the goals. You will be probably surprised
> but
> > we are not the first who use DSL for HOT templates generation. Here is an
> > e-mail thread right about Ruby based DSL used in IBM for the same
> purpose:
> >
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/026606.html
> >
> > The term "Orchestration" is quite generic. Saying that orchestration
> should
> > be Heat job sounds like a well know Henry Ford's phrase "You can have any
> > colour as long as it's black.".
>
> That worked okay for him :).
>

Not really. The world acknowledged his inventions and new approaches. Other
manufacturers adopted his ideas and moved forward, providing more variety,
while Ford stuck with his Model T, which was nonetheless very successful.
History shows that variety won the battle over a single approach, and now we
have different colors, shapes and engines :-)

>
> > I think this is again a lack of understanding of the difference between
> > Orchestration program and Heat project. There are many aspects of
> > Orchestration and OpenStack has the Orchestration program for the
> projects
> > which are focused on some aspects of orchestration. Heat is one of the
> > project inside Orchestration program but it does not mean that Heat
> should
> > cover everything. That is why we discussed in this thread how workflows
> > aspects should be aligned and how they should be placed into this
> > Orchestration program.
>
> Well, today Heat is the one and only program in the Orchestration program.
> If and when you have orchestration needs not covered, we are there to make
> sure Heat is not the best place to handle them. The answer will probably
> not Heat forever, but we need good use cases to delegate those needs to
> another project.
>
>
That is exactly the reason why we have these discussions :-) We have the
use cases for new functionality and we are trying to find a place for it.


>
> --
> Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Spec repos for blueprint development and review

2014-03-25 Thread Mark McClain

On Mar 24, 2014, at 2:23 PM, Sean Dague <s...@dague.net> wrote:

On 03/24/2014 02:05 PM, James E. Blair wrote:
Russell Bryant <rbry...@redhat.com> writes:

On 03/24/2014 12:34 PM, James E. Blair wrote:
Hi,

So recently we started this experiment with the compute and qa programs
to try using Gerrit to review blueprints.  Launchpad is deficient in
this area, and while we hope Storyboard will deal with it much better,
it's not ready yet.

This seems to be a point of confusion.  My view is that Storyboard isn't
intended to implement what gerrit provides.  Given that, it seems like
we'd still be using this whether the tracker is launchpad or storyboard.

I don't think it's intended to implement what Gerrit provides, however,
I'm not sure what Gerrit provides is _exactly_ what's needed here.  I do
agree that Gerrit is a much better tool than launchpad for collaborating
on some kinds of blueprints.


Agreed, but the current blueprint system is broken enough that a half step 
with an imperfect tool is better than keeping the status quo.

However, one of the reasons we're creating StoryBoard is so that we have
a tool that is compatible with our workflow and meets our requirements.
It's not just about tracking work items, it should be a tool for
creating, evaluating, and progressing changes to projects (stories),
across all stages.

I don't envision the end-state for storyboard to be that we end up
copying data back and forth between it and Gerrit.  Since we're
designing a system from scratch, we might as well design it to do what
we want.

One of our early decisions was to say that UX and code stories have
equally important use cases in StoryBoard.  Collaboration around UX
style blueprints (especially those with graphical mock-ups) sets a
fairly high bar for the kind of interaction we will support.

Gerrit is a great tool for reviewing code and other text media.  But
somehow it is even worse than launchpad for collaborating when visual
media are involved.  Quite a number of blueprints could benefit from
better support for that (not just UI mockups but network diagrams, etc).
We can learn a lot from the experiment of using Gerrit for blueprint
review, and I think it's going to help make StoryBoard a lot better for
all of our use cases.

Diagram handling was one of the first questions I asked Russell when I saw the 
repo creation proposal.  Diagrams are very helpful and while gerrit is not 
ideal for handling diagrams, Sphinx should allow us to incorporate them in a 
basic way for now.  I view this as an improvement over the 5 different formats 
blueprints are submitted in now.
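
For example (just a sketch of what "basic" support could mean, with a made-up
file name), a spec can pull in a diagram committed next to it using a standard
Sphinx/docutils directive:

  .. figure:: diagrams/api-flow.png
     :alt: Request flow between the affected services

     Components involved in the proposed change.

Not as nice as commenting inline on the picture itself, but the diagram is at
least reviewed and versioned together with the text.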


I think that's fine if long term this whole thing is optimized. I just
do very much worry that StoryBoard keeps undergoing progressive scope
creep before we've managed to ship the base case. That's a dangerous
situation to be in, as it means it's evolving without a feedback loop.

I'd much rather see Storyboard get us off launchpad ASAP across all the
projects, and then work on solving the things launchpad doesn't do.


+1000 I don’t want to see Storyboard changing scope to the point where it 
doesn’t adequately deliver because it is trying to solve too many problems in 
the first iteration.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-25 Thread Sean Dague
In fairness, there were also copious IRC conversations on the topic. I
think because there were so few people in this thread, and all
those people were participating in the review and irc conversations,
updating the thread just fell off the list.

My bad. When conversations jump media it's sometimes hard to remember
that each one might have had different people watching passively.

On 03/25/2014 06:36 AM, Mark McLoughlin wrote:
> FYI, allowing 0.9 recently merged into openstack/requirements:
> 
>   https://review.openstack.org/79817
> 
> This is a good example of how we should be linking gerrit and mailing
> list discussions together more. I don't think the gerrit review was
> linked in this thread nor was the mailing list discussion linked in the
> gerrit review.
> 
> Mark.
> 
> On Thu, 2014-03-13 at 22:45 -0700, Roman Podoliaka wrote:
>> Hi all,
>>
>> I think it's actually not that hard to fix the errors we have when
>> using SQLAlchemy 0.9.x releases.
>>
>> I uploaded two changes to Nova to fix unit tests:
>> - https://review.openstack.org/#/c/80431/ (this one should also fix
>> the Tempest test run error)
>> - https://review.openstack.org/#/c/80432/
>>
>> Thanks,
>> Roman
>>
>> On Thu, Mar 13, 2014 at 7:41 PM, Thomas Goirand  wrote:
>>> On 03/14/2014 02:06 AM, Sean Dague wrote:
 On 03/13/2014 12:31 PM, Thomas Goirand wrote:
> On 03/12/2014 07:07 PM, Sean Dague wrote:
>> Because of where we are in the freeze, I think this should wait until
>> Juno opens to fix. Icehouse will only be compatible with SQLA 0.8, which
>> I think is fine. I expect the rest of the issues can be addressed during
>> Juno 1.
>>
>> -Sean
>
> Sean,
>
> No, it's not fine for me. I'd like things to be fixed so we can move
> forward. Debian Sid has SQLA 0.9, and Jessie (the next Debian stable)
> will be released with SQLA 0.9 and Icehouse, not Juno.

 We're past freeze, and this requires deep changes in Nova DB to work. So
 it's not going to happen. Nova provably does not work with SQLA 0.9, as
 seen in Tempest tests.

   -Sean
>>>
>>> It'd be nice if we considered more the fact that OpenStack, at some
>>> point, gets deployed on top of distributions... :/
>>>
>>> Anyway, if we can't do it because of the freeze, then I will have to
>>> carry the patch in the Debian package. Nevertheless, someone will have
>>> to work and fix it. If you know how to help, it'd be very nice if you
>>> proposed a patch, even if we don't accept it before Juno opens.
>>>
>>> Thomas Goirand (zigo)
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-25 Thread Mark McLoughlin
FYI, allowing 0.9 recently merged into openstack/requirements:

  https://review.openstack.org/79817

This is a good example of how we should be linking gerrit and mailing
list discussions together more. I don't think the gerrit review was
linked in this thread nor was the mailing list discussion linked in the
gerrit review.

Mark.

On Thu, 2014-03-13 at 22:45 -0700, Roman Podoliaka wrote:
> Hi all,
> 
> I think it's actually not that hard to fix the errors we have when
> using SQLAlchemy 0.9.x releases.
> 
> I uploaded two changes to Nova to fix unit tests:
> - https://review.openstack.org/#/c/80431/ (this one should also fix
> the Tempest test run error)
> - https://review.openstack.org/#/c/80432/
> 
> Thanks,
> Roman
> 
> On Thu, Mar 13, 2014 at 7:41 PM, Thomas Goirand  wrote:
> > On 03/14/2014 02:06 AM, Sean Dague wrote:
> >> On 03/13/2014 12:31 PM, Thomas Goirand wrote:
> >>> On 03/12/2014 07:07 PM, Sean Dague wrote:
>  Because of where we are in the freeze, I think this should wait until
>  Juno opens to fix. Icehouse will only be compatible with SQLA 0.8, which
>  I think is fine. I expect the rest of the issues can be addressed during
>  Juno 1.
> 
>  -Sean
> >>>
> >>> Sean,
> >>>
> >>> No, it's not fine for me. I'd like things to be fixed so we can move
> >>> forward. Debian Sid has SQLA 0.9, and Jessie (the next Debian stable)
> >> will be released with SQLA 0.9 and Icehouse, not Juno.
> >>
> >> We're past freeze, and this requires deep changes in Nova DB to work. So
> >> it's not going to happen. Nova provably does not work with SQLA 0.9, as
> >> seen in Tempest tests.
> >>
> >>   -Sean
> >
> > It'd be nice if we considered more the fact that OpenStack, at some
> > point, gets deployed on top of distributions... :/
> >
> > Anyway, if we can't do it because of the freeze, then I will have to
> > carry the patch in the Debian package. Nevertheless, someone will have
> > to work and fix it. If you know how to help, it'd be very nice if you
> > proposed a patch, even if we don't accept it before Juno opens.
> >
> > Thomas Goirand (zigo)
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-25 Thread Thomas Herve

> Hi Thomas,
> 
> I think we went to the second loop of the discussion about generic language
> concepts. Murano does not use a new language for the sole purpose of having
> parameters, constraints and polymorphism. These are generic concepts which
> are common for different languages, so keeping arguing about these generic
> concepts is just like a holy war like Python vs. C. Keeping these arguments
> is just like to say that we don't need Python as functions and parameters
> already exists in C which is used under the hood in Python.
> 
> Yes Murano DSL have some generic concepts similar to HOT. I think this is a
> benefit as user will see the familiar syntax constructions and it will be a
> lower threshold for him to start using Murano DSL.
> 
> In a simplified view Murano uses DSL for application definition to solve
> several particular problems:
> a) control UI rendering of Application Catalog
> b) control HOT template generation
> 
> These aspects are not covered in HOT and probably should not be covered. I
> don't like an idea of expressing HOT template generation in HOT as it sounds
> like a creation another Lisp like language :-)

I'm not saying that HOT will cover all your needs. I think it will cover a 
really good portion. And I'm saying that for the remaining part, you can use an 
existing language and not create a new one.

> I don't think that your statement that most of the people in the community
> are against new DSL is a right summary. There are some disagreements how it
> should look like and what are the goals. You will be probably surprised but
> we are not the first who use DSL for HOT templates generation. Here is an
> e-mail thread right about Ruby based DSL used in IBM for the same purpose:
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/026606.html
> 
> The term "Orchestration" is quite generic. Saying that orchestration should
> be Heat job sounds like a well know Henry Ford's phrase "You can have any
> colour as long as it's black.".

That worked okay for him :).

> I think this is again a lack of understanding of the difference between
> Orchestration program and Heat project. There are many aspects of
> Orchestration and OpenStack has the Orchestration program for the projects
> which are focused on some aspects of orchestration. Heat is one of the
> project inside Orchestration program but it does not mean that Heat should
> cover everything. That is why we discussed in this thread how workflows
> aspects should be aligned and how they should be placed into this
> Orchestration program.

Well, today Heat is the one and only project in the Orchestration program. If 
and when you have orchestration needs that are not covered, we are there to make 
sure Heat is not the best place to handle them. The answer will probably not be 
Heat forever, but we need good use cases to delegate those needs to another project.


-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multiple patches in one review

2014-03-25 Thread Mark McLoughlin
On Mon, 2014-03-24 at 10:49 -0400, Russell Bryant wrote:
> Gerrit support for a patch series could certainly be better.

There has long been talk of gerrit getting "topic review"
functionality, whereby you could e.g. approve a whole series of patches
from a "topic view".

See:

  https://code.google.com/p/gerrit/issues/detail?id=51
  https://groups.google.com/d/msg/repo-discuss/5oRra_tLKMA/rxwU7pPAQE8J

My understanding is there's a fork of gerrit out there with this
functionality that some projects are using successfully.

Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-25 Thread Thomas Herve
 
>> What I can say is that I'm not convinced. The only use-case for a DSL would
>> be if you have to upload user-written code, but what you mentioned is a Web
>> interface, where the user doesn't use the DSL, and the cloud provider is the
>> developer. There is no reason in this case to have a secure environment for
>> the code.
> 
> I didn't say that. There are at least 2 different roles application
> developers/publishers and application users. Application developer is not
> necessary cloud provider. The whole point of AppCatalog is to support
> scenario when anyone can create and package some application and that
> package can be uploaded by user alone. Think Apple AppStore or Google Play.
> Some cloud providers may configure ACLs so that user be allowed to consume
> applications they decided while others may permit to upload applications to
> some configurable scope (e.g. apps that would be visible to all cloud users,
> to particular tenant or be private to the user). We also think to have some
> of peer relations so that it would be possible to have application upload in
> one catalog to become automatically available in all connected catalogs.
> 
> This is similar to how Linux software repos work - AppCatalog is repo, Murano
> package is what DEB/RPMs are to repo and DSL is what DEB/RPMs manifests are
> to packages. Just that is run on cloud and designed to handle complex
> multi-node apps as well as trivial ones in which case this may be narrowed
> to actual installation of DEB/RPM

I'm glad that you bring packages up. This is a really good example of why you 
don't need a new programming language. Packages use whatever technology they 
prefer to handle their scripting needs. They then have a declarative interface 
which hides the imperative parts behind it.

You trust OpenStack developers with their code, you trust package developers 
with their code, why not trust catalog developers?

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-25 Thread Thierry Carrez
Russell Bryant wrote:
> On 03/24/2014 11:42 AM, Stefano Maffulli wrote:
>> At this point I'd like to get a fair assessment of storyboard's status
>> and timeline: it's clear that Launchpad blueprints need to be abandoned
>> lightspeed fast and I (and others) have been sold the idea that
>> Storyboard is a *thing* that will happen *soon*. I also was told that
>> spec reviews are an integral part of Storyboard use case scenarios, not
>> just defects.
> 
> Another critical point of clarification ... we are *not* moving out of
> blueprints at all.  We're still using them for tracking, just as before.
>  We are *adding* the use of gerrit for reviewing the design.

Yes, there is a clear misunderstanding here. There never was any kind of
spec review system in Launchpad. Launchpad tracks feature completion,
not design specs. It supported a link and behind that link was a set of
tools (wiki pages, etherpads, google docs) where the spec review would
hopefully happen.

This proposal is *just* about replacing all those "design documents"
with a clear change and using Gerrit to iterate and track approvals on it.

This is not "moving off Launchpad blueprints" nor is it "bypassing
StoryBoard". It's adding a bit of formal process around something that
randomly lived out of our tooling up to now.


About StoryBoard progress now:

People got very excited with the Django proof-of-concept because it
looked like it was almost ready to replace Launchpad. But I always made
clear this was just a proof-of-concept and it would take a *lot* of time
and effort to make it real. And my usual amount of free time would
probably not allow me to help that much.

It also took us more time than we expected to set up a basic team to
work on it (many thanks to HP and Mirantis for jumping on it with
significant resources), to align that team on clear goals, to rewrite
the main data server as an OpenStack-style API server, to write from
scratch a JavaScript webclient and get it properly tested at the gate, etc.

We now have the base infrastructure in place, we continuously deploy,
and the minimal viable product is almost completed. We expect the
infrastructure team to start dogfooding it in Juno and hopefully we'll
iterate faster next cycle... to make it a viable Launchpad alternative
for adventurous projects in the K cycle.

So it's not vaporware, it exists for real. But there is still a lot of
work needed on it to be generally usable, so it shouldn't be used as an
argument to stall everything else.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-25 Thread Georgy Okrokvertskhov
Hi Thomas,

I think we have gone into a second loop of the discussion about generic language
concepts. Murano does not use a new language for the sole purpose of
having parameters, constraints and polymorphism. These are generic concepts
which are common to different languages, so continuing to argue about these
generic concepts is just a holy war, like Python vs. C. Keeping up these
arguments is like saying that we don't need Python because functions and
parameters already exist in C, which is used under the hood in Python.

Yes, the Murano DSL has some generic concepts similar to HOT. I think this is a
benefit, as users will see familiar syntax constructions and the threshold for
starting to use the Murano DSL will be lower.

In a simplified view, Murano uses the DSL for application definitions to solve
several particular problems:
a) control UI rendering of Application Catalog
b) control HOT template generation

These aspects are not covered in HOT and probably should not be. I
don't like the idea of expressing HOT template generation in HOT itself, as it
sounds like creating another Lisp-like language :-)

I don't think your statement that most of the people in the community
are against a new DSL is a fair summary. There are some disagreements about how
it should look and what the goals are. You will probably be surprised, but
we are not the first to use a DSL for HOT template generation. Here is an
e-mail thread about a Ruby-based DSL used at IBM for the same purpose:
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026606.html

The term "Orchestration" is quite generic. Saying that orchestration should
be Heat job sounds like a well know Henry Ford's phrase "You can have any
colour as long as it's black.".
I think this is again a lack of understanding of the difference between
Orchestration program and Heat project. There are many aspects of
Orchestration and OpenStack has the Orchestration program for the projects
which are focused on some aspects of orchestration. Heat is one of the
project inside Orchestration program but it does not mean that Heat should
cover everything. That is why we discussed in this thread how workflows
aspects should be aligned and how they should be placed into this
Orchestration program.

Thanks
Georgy


On Mon, Mar 24, 2014 at 8:28 AM, Dmitry  wrote:

> MuranoPL is supposed to provide a solution for the real need to manage
> services in a centralized manner and to allow cloud provider customers to
> create their own services.
> An application catalog similar to AppDirect (www.appdirect.com), natively
> supported by OpenStack, is a huge step forward.
> Think about Amazon, which provides different services for different
> needs: Amazon Cloud Formation, Amazon OpsWorks and Amazon Beanstalk.
> Following similar logic (which fully makes sense to me), OpenStack
> should provide resource reservation and orchestration (Heat and Climate),
> an Application Catalog (Murano) and PaaS (Solum).
> Every project can live in harmony with the others and contribute to the
> completeness of the cloud service provider's offering.
> This is my opinion and I would be happy to use Murano in our internal
> solution.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


On Mon, Mar 24, 2014 at 5:13 AM, Thomas Herve wrote:

> Hi Stan,
>
> Comments inline.
>
> > Zane,
> >
> > I appreciate your explanations on Heat/HOT. This really makes sense.
> > I didn't mean to say that MuranoPL is better for Heat. Actually HOT is
> good
> > for Heat's mission. I completely acknowledge it.
> > I've tried to avoid comparison between languages and I'm sorry if it felt
> > that way. This is not productive as I don't offer you to replace HOT with
> > MuranoPL (although I believe that certain elements of MuranoPL syntax
> can be
> > contributed to HOT and be valuable addition there). Also people tend to
> > protect what they have developed and invested into and to be fair this is
> > what we did in this thread to great extent.
> >
> > What I'm trying to achieve is that you and the rest of Heat team
> understand
> > why it was designed the way it is. I don't feel that Murano can become
> > full-fledged member of OpenStack ecosystem without a bless from Heat
> team.
> > And it would be even better if we agree on certain design, join our
> efforts
> > and contribute to each other for sake of Orchestration program.
>
> Note that I feel that most people outside of the Murano project are
> against the idea of using a DSL. You should feel that it could block the
> integration in OpenStack.
>
> > I'm sorry for long mail texts written in not-so-good English and
> appreciate
> > you patience reading and answering them.
> >
> > Having said that let me s

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-25 Thread Miguel Angel Ajo



On 03/24/2014 07:23 PM, Yuriy Taraday wrote:

On Mon, Mar 24, 2014 at 9:51 PM, Carl Baldwin  wrote:

Don't discard the first number so quickly.

For example, say we use a timeout mechanism for the daemon running
inside namespaces to avoid using too much memory with a daemon in
every namespace.  That means we'll pay the startup cost repeatedly but
in a way that amortizes it down.

Even if it is really a one time cost, then if you collect enough
samples then the outlier won't have much affect on the mean anyway.


It actually affects all numbers but mean (e.g. deviation is gross).



Carl is right; I thought of it later in the evening: once the timeout
mechanism is in place we must consider that number.



I'd say keep it in there.


+1 I agree.



Carl

On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo  wrote:
 >
 >
 > It's the first call starting the daemon / loading config files, etc?,
 >
 > May be that first sample should be discarded from the mean for
all processes
 > (it's an outlier value).


I thought about cutting max from counting deviation and/or showing
second-max value. But I don't think it matters much, and there are not many
people here who are analyzing deviation. It's pretty clear what happens
with the longest run in this case, and I think we can leave it as is.
It's the mean value that matters most here.


Yes, I agree, but as Carl said, with timeouts in place in a practical
environment the mean will be shifted too.

Timeouts are needed within namespaces, to avoid excessive memory
consumption. But it could be OK, as we'd be cutting out the ip netns
delay.  Or, if we find a simpler "setns" mechanism sufficient for our
needs, maybe we don't need to care about short timeouts in ip netns
at all...
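
To illustrate the point about the first sample, here is a quick sketch with
made-up numbers (not real measurements):

  import statistics

  # hypothetical per-call timings in seconds; the first call pays the
  # daemon startup / config loading cost
  samples = [1.85, 0.11, 0.12, 0.10, 0.13, 0.11, 0.12, 0.10]

  def report(label, data):
      print("%s mean=%.3f stdev=%.3f max=%.3f"
            % (label, statistics.mean(data), statistics.stdev(data), max(data)))

  report("all samples:    ", samples)      # the outlier inflates mean and stdev
  report("first discarded:", samples[1:])  # closer to the steady-state cost

If a timeout keeps killing idle daemons, that startup cost is paid again and
again, so the "all samples" numbers are the ones that describe real behaviour.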


Best,
Miguel Ángel.




--

Kind regards, Yuriy.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-25 Thread Alexander Tivelkov
Hi,

> I suggest to move all needed Powershell scripts and etc. to the main
repository 'murano' in the separate folder.

+1 on this. The scripts will not go inside the PyPi package, they will be
just grouped in a subfolder.

Completely agree on the repo-reorganization topic in general. However
> And I personally will do everything to prevent creation of new repo for
Murano.

Well, this may be unavoidable :)
We may face a need to create a "murano-contrib" repository where Murano
users will be able to contribute sources of their own murano packages,
improve the core library etc.
Given that we get rid of murano-conductor, murano-repository,
murano-metadataclient, murano-common, murano-tests and, probably,
murano-deployment, we are probably ok with having one more. Technically, we
may reuse murano-repository for this. But this can be discussed right
after the 0.5 release.


--
Regards,
Alexander Tivelkov


On Tue, Mar 25, 2014 at 12:09 PM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Dmitry,
>
> I suggest to move all needed Powershell scripts and etc. to the main
> repository 'murano' in the separate folder.
>
>
> On Tue, Mar 25, 2014 at 11:38 AM, Dmitry Teselkin 
> wrote:
>
>> Ruslan,
>>
>> What about murano-deployment repo? The most important part of it are
>> PowerSheel scripts, Windows Image Builder, package manifests, and some
>> other scripts that better to keep somewhere. Where do we plan to move them?
>>
>>
>> On Mon, Mar 24, 2014 at 10:29 PM, Ruslan Kamaldinov <
>> rkamaldi...@mirantis.com> wrote:
>>
>>> On Mon, Mar 24, 2014 at 10:08 PM, Joshua Harlow 
>>> wrote:
>>> > Seeing that the following repos already exist, maybe there is some
>>> need for
>>> > cleanup?
>>> >
>>> > - https://github.com/stackforge/murano-agent
>>> > - https://github.com/stackforge/murano-api
>>> > - https://github.com/stackforge/murano-common
>>> > - https://github.com/stackforge/murano-conductor
>>> > - https://github.com/stackforge/murano-dashboard
>>> > - https://github.com/stackforge/murano-deployment
>>> > - https://github.com/stackforge/murano-docs
>>> > - https://github.com/stackforge/murano-metadataclient
>>> > - https://github.com/stackforge/murano-repository
>>> > - https://github.com/stackforge/murano-tests
>>> > ...(did I miss others?)
>>> >
>>> > Can we maybe not have more git repositories and instead figure out a
>>> way to
>>> > have 1 repository (perhaps with submodules?) ;-)
>>> >
>>> > It appears like murano is already exploding all over stackforge which
>>> makes
>>> > it hard to understand why yet another repo is needed. I understand why
>>> from
>>> > a code point of view, but it doesn't seem right from a code
>>> organization
>>> > point of view to continue adding repos. It seems like murano
>>> > (https://github.com/stackforge/murano) should just have 1 repo, with
>>> > sub-repos (tests, docs, api, agent...) for its own organizational usage
>>> > instead of X repos that expose others to murano's internal
>>> organizational
>>> > details.
>>> >
>>> > -Josh
>>>
>>>
>>> Joshua,
>>>
>>> I agree that this huge number of repositories is confusing for
>>> newcomers. I've
>>> spent some time to understand mission of each of these repos. That's why
>>> we
>>> already did the cleanup :) [0]
>>>
>>> And I personally will do everything to prevent creation of new repo for
>>> Murano.
>>>
>>> Here is the list of repositories targeted for the next Murano release
>>> (Apr 17):
>>> * murano-api
>>> * murano-agent
>>> * python-muranoclient
>>> * murano-dashboard
>>> * murano-docs
>>>
>>> The rest of these repos will be deprecated right after the release.
>>>  Also we
>>> will rename murano-api to just "murano". murano-api will include all the
>>> Murano services, functionaltests for Tempest, Devstack scripts,
>>> developer docs.
>>> I guess we already can update README files in deprecated repos to avoid
>>> further
>>> confusion.
>>>
>>> I wouldn't agree that there should be just one repo. Almost every
>>> OpenStack
>>> project has it's own repo for python client. All the user docs are kept
>>> in a
>>> separate repo. Guest agent code should live in it's own repository to
>>> keep
>>> number of dependencies as low as possible. I'd say there should be
>>> required/comfortable minimum of repositories per project.
>>>
>>>
>>> And one more nit correction:
>>> OpenStack has its own git repository [1]. We should avoid referring to
>>> github
>>> since it's just a convenient mirror, while [1] is an official
>>> OpenStack repository.
>>>
>>> [0]
>>> https://blueprints.launchpad.net/murano/+spec/repository-reorganization
>>> [1] http://git.openstack.org/cgit/
>>>
>>>
>>>
>>> Thanks,
>>> Ruslan
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Thanks,
>> Dmitry Teselkin
>> Deployment Engineer
>> Mirantis
>> http://www.mirantis.com
>>
>> ___

Re: [openstack-dev] [depfreeze] Dependency freeze coming up (EOD Tuesday March 25)

2014-03-25 Thread Thierry Carrez
Sergey Lukjanov wrote:
> RE Sahara, we'll need one more version bump to remove all backward
> compat code added for smooth transition. What's the deadline for doing
> it? Personally, I'd like to do it next week. Is it ok?

If you are only *removing* dependencies I think next week is fine :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Meeting Wednesday, 26 March @ 1800 UTC

2014-03-25 Thread Nikhil Manchanda

Just a quick reminder for the weekly Trove meeting.
https://wiki.openstack.org/wiki/Meetings#Trove_.28DBaaS.29_meeting

Date/Time: Wednesday 26 March - 1800 UTC / 1100 PDT / 1300 CDT
IRC channel: #openstack-meeting-alt

Meeting Agenda (https://wiki.openstack.org/wiki/Meetings/TroveMeeting):

1. Data Store abstraction start/stop/status/control
https://blueprints.launchpad.net/trove/+spec/trove-guest-agent-datastore-control

2. Point in time recovery
https://wiki.openstack.org/wiki/Trove/PointInTimeRecovery

3. Data volume snapshot
https://wiki.openstack.org/wiki/Trove/volume-data-snapshot-design

Cheers,
Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-25 Thread Maru Newby

On Mar 21, 2014, at 9:01 AM, David Kranz  wrote:

> On 03/20/2014 04:19 PM, Rochelle.RochelleGrober wrote:
>> 
>>> -Original Message-
>>> From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]
>>> Sent: Thursday, March 20, 2014 12:13 PM
>>> 
>>> 'project specific functional testing' in the Marconi context is
>>> treating
>>> Marconi as a complete system, making Marconi API calls & verifying the
>>> response - just like an end user would, but without keystone. If one of
>>> these tests fails, it is because there is a bug in the Marconi code,
>>> and
>>> not because its interaction with Keystone caused it to fail.
>>> 
>>> "That being said there are certain cases where having a project
>>> specific
>>> functional test makes sense. For example swift has a functional test
>>> job
>>> that
>>> starts swift in devstack. But, those things are normally handled on a
>>> per
>>> case
>>> basis. In general if the project is meant to be part of the larger
>>> OpenStack
>>> ecosystem then Tempest is the place to put functional testing. That way
>>> you know
>>> it works with all of the other components. The thing is in openstack
>>> what
>>> seems
>>> like a project isolated functional test almost always involves another
>>> project
>>> in real use cases. (for example keystone auth with api requests)
>>> 
>>> "



>>> 
>>> One of the concerns we heard in the review was 'having the functional
>>> tests elsewhere (I.e within the project itself) does not count and they
>>> have to be in Tempest'.
>>> This has made us as a team wonder if we should migrate all our
>>> functional
>>> tests to Tempest.
>>> But from Matt's response, I think it is reasonable to continue in our
>>> current path & have the functional tests in Marconi coexist  along with
>>> the tests in Tempest.
>>> 
>> I think that what is being asked, really is that the functional tests could 
>> be a single set of tests that would become a part of the tempest repository 
>> and that these tests would have an ENV variable as part of the configuration 
>> that would allow either "no Keystone" or "Keystone" or some such, if that is 
>> the only configuration issue that separates running the tests isolated vs. 
>> integrated.  The functional tests need to be as much as possible a single 
>> set of tests to reduce duplication and remove the likelihood of two sets 
>> getting out of sync with each other/development.  If they only run in the 
>> integrated environment, that's ok, but if you want to run them isolated to 
>> make debugging easier, then it should be a configuration option and a 
>> separate test job.
>> 
>> So, if my assumptions are correct, QA only requires functional tests for 
>> integrated runs, but if the project QAs/Devs want to run isolated for dev 
>> and devtest purposes, more power to them.  Just keep it a single set of 
>> functional tests and put them in the Tempest repository so that if a failure 
>> happens, anyone can find the test and do the debug work without digging into 
>> a separate project repository.
>> 
>> Hopefully, the tests as designed could easily take a new configuration 
>> directive and a short bit of work with OS QA will get the integrated FTs 
>> working as well as the isolated ones.
>> 
>> --Rocky
> This issue has been much debated. There are some active members of our 
> community who believe that all the functional tests should live outside of 
> tempest in the projects, albeit with the same idea that such tests could be 
> run either as part of today's "real" tempest runs or mocked in various ways 
> to allow component isolation or better performance. Maru Newby posted a patch 
> with an example of one way to do this but I think it expired and I don't have 
> a pointer.

I think the best place for functional api tests to be maintained is in the 
projects themselves.  The domain expertise required to write api tests is 
likely to be greater among project resources, and they should be tasked with 
writing api tests pre-merge.  The current 'merge-first, test-later' procedure 
of maintaining api tests in the Tempest repo makes that impossible.  Worse, the 
cost of developing functional api tests is higher in the integration 
environment that is the Tempest default.

The patch in question [1] proposes allowing pre-merge functional api test 
maintenance and test reuse in an integration environment.


m.

1: https://review.openstack.org/#/c/72585/

> IMO there are valid arguments on both sides, but I hope every one could agree 
> that functional tests should not be arbitrarily split between projects and 
> tempest as they are now. The Tempest README states a desire for "complete 
> coverage of the OpenStack API" but Tempest is not close to that. We have been 
> discussing and then ignoring this issue for some time but I think the recent 
> action to say that Tempest will be used to determine if something can use the 
> OpenStack trademark will force more completeness on tempest (more tests, that 
> is). I

Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-25 Thread Russell Bryant
On 03/25/2014 12:01 AM, Stefano Maffulli wrote:
> On 03/24/2014 09:20 AM, Russell Bryant wrote:
>> Another critical point of clarification ... we are *not* moving out of
>> blueprints at all.  We're still using them for tracking, just as before.
>>  We are *adding* the use of gerrit for reviewing the design.
> 
> That changes things, thank you for the clarification. If I understand
> correctly, pages like
> https://wiki.openstack.org/wiki/HowToContribute#Feature_development will
> still be valid at pointing at Launchpad Blueprints as the tool we use
> for the design, roadmap, tracking of progress. What changes is that for
> Nova all blueprints will have to have a URL added to the specifications,
> filed in a gerrit repository.

Correct.

> I can live with that, even though I personally think this is a horrible
> hack and a major tech debt we need to solve properly in the shortest
> amount of time.

I really don't see why this is so bad.  We're using a tool specifically
designed for reviewing things that is already working *really* well, not
just for code, but also for TC governance documents.

Unless Storyboard plans to re-implement what gerrit does (I sure hope it
doesn't), I expect we would keep working like this.  Do you expect
storyboard to have an interface for iterating on text documents, where
you can provide inline comments, review differences between revisions,
etc?  What am I missing?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [neutron] Neutron Full Parallel job - Last 4 days failures

2014-03-25 Thread Salvatore Orlando
Inline

Salvatore


On 24 March 2014 23:01, Matthew Treinish  wrote:

> On Mon, Mar 24, 2014 at 09:56:09PM +0100, Salvatore Orlando wrote:
> > Thanks a lot!
> >
> > We now need to get on these bugs, and define with QA an acceptable
> failure
> > rate criterion for switching the full job to voting.
> > It would be good to have a chance to only run the tests against code
> which
> > is already in master.
> > To this aim we might push a dummy patch, and keep it spinning in the
> check
> > queue.
>
> Honestly, there isn't really a number. I had a thread trying to get
> consensus on
> that back when I first made tempest run in parallel. What I ended up doing
> back
> then and what we've done since for this kind of change is to just pick a
> slower
> week for the gate and just green light it, of course after checking to
> make sure
> if it blows up we're not blocking anything critical.


Then I guess the ideal period would be after RC2s are cut.
Also, we'd need to run at least a postgres flavour of the job as well,
meaning that the probability of a patch passing the gate is actually the
combined probability of two jobs completing successfully.
On another note, we noticed that the duplicated jobs currently executed for
redundancy in Neutron all seem to point to the same build id.
I'm not sure whether we're actually executing each job twice or just
duplicating lines in the Jenkins report.


> If it looks like it's
> passing at roughly the same rate as everything else and you guys think it's
> ready. 25% is definitely too high, for comparison when I looked at a
> couple of
> min. ago at the numbers for the past 4 days on the equivalent job with
> nova-network it only failed 4% of the time. (12 out of 300) But that
> number does
> fluctuate quite a bit for example looking at the past week the number
> grows to
> 11.6%. (171 out of 1480)


Even with 11.6% I would not enable it.
Running both the mysql and pg jobs, this gives us a combined success rate of
78.1%, which pretty much means the chance of successfully clearing a
5-deep queue in the gate will be a mere 29%. My "gut" metric is that we
should achieve a pass rate which allows us to clear a 10-deep
gate queue with a 50% success rate. That translates to a 3.5% failure rate
per job, which is indeed in line with what's currently observed for
nova-network.
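
For reference, the arithmetic behind those numbers, as a quick sketch using the
failure rates quoted in this thread:

  per_job_failure = 0.116                   # observed failure rate of one job
  patch_pass = (1 - per_job_failure) ** 2   # mysql + pg flavours must both pass
  print(patch_pass)                         # ~0.781
  print(patch_pass ** 5)                    # ~0.29: clearing a 5-deep queue

  # target: clear a 10-deep gate queue with 50% success
  target_patch_pass = 0.5 ** (1.0 / 10)     # per-patch success rate needed
  target_job_failure = 1 - target_patch_pass ** 0.5  # with two jobs per patch
  print(target_job_failure)                 # ~0.034, i.e. roughly the 3.5% above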

Doing it this way doesn't seem like the best, but until it's gating things
> really don't get the attention they deserve and more bugs will just slip in
> while you wait. There will most likely be initial pain after it merges,
> but it's
> the only real way to lock it down and make forward progress.
>

> -Matt Treinish
>
> >
> >
> > On 24 March 2014 21:45, Rossella Sblendido  wrote:
> >
> > > Hello all,
> > >
> > > here is an update regarding the Neutron full parallel job.
> > > I used the following Logstash query [1]  that checks the failures of
> the
> > > last
> > > 4 days (the last bug fix related with the full job was merged 4 days
> ago).
> > > These are the results:
> > >
> > > 123 failure (25% of the total)
> > >
> > > I took a sample of 50 failures and I obtained the following:
> > >
> > > 22% legitimate failures (they are due to the code change introduced by
> the
> > > patch)
> > > 22% infra issues
> > > 12% https://bugs.launchpad.net/openstack-ci/+bug/1291611
> > > 12% https://bugs.launchpad.net/tempest/+bug/1281969
> > > 8% https://bugs.launchpad.net/tempest/+bug/1294603
> > > 3% https://bugs.launchpad.net/neutron/+bug/1283522
> > > 3% https://bugs.launchpad.net/neutron/+bug/1291920
> > > 3% https://bugs.launchpad.net/nova/+bug/1290642
> > > 3% https://bugs.launchpad.net/tempest/+bug/1252971
> > > 3% https://bugs.launchpad.net/horizon/+bug/1257885
> > > 3% https://bugs.launchpad.net/tempest/+bug/1292242
> > > 3% https://bugs.launchpad.net/neutron/+bug/1277439
> > > 3% https://bugs.launchpad.net/neutron/+bug/1283599
> > >
> > > cheers,
> > >
> > > Rossella
> > >
> > > [1] http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOi
> > > BcImNoZWNrLXRlbXBlc3QtZHN2bS1uZXV0cm9uLWZ1bGxcIiBBTkQgbWVzc2
> > > FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIgQU5EIHRhZ3M6Y29uc29sZSIsIm
> > > ZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiY3VzdG9tIiwiZ3
> > > JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7ImZyb20iOiIyMDE0LTAzLTIwVD
> > > EzOjU0OjI1KzAwOjAwIiwidG8iOiIyMDE0LTAzLTI0VDEzOjU0OjI1KzAwOj
> > > AwIiwidXNlcl9pbnRlcnZhbCI6IjAifSwibW9kZSI6IiIsImFuYWx5emVfZm
> > > llbGQiOiIiLCJzdGFtcCI6MTM5NTY3MDY2ODc0OX0=
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lis

Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-25 Thread Timur Nurlygayanov
Dmitry,

I suggest moving all the needed PowerShell scripts etc. to the main
'murano' repository, in a separate folder.


On Tue, Mar 25, 2014 at 11:38 AM, Dmitry Teselkin wrote:

> Ruslan,
>
> What about murano-deployment repo? The most important part of it are
> PowerSheel scripts, Windows Image Builder, package manifests, and some
> other scripts that better to keep somewhere. Where do we plan to move them?
>
>
> On Mon, Mar 24, 2014 at 10:29 PM, Ruslan Kamaldinov <
> rkamaldi...@mirantis.com> wrote:
>
>> On Mon, Mar 24, 2014 at 10:08 PM, Joshua Harlow 
>> wrote:
>> > Seeing that the following repos already exist, maybe there is some need
>> for
>> > cleanup?
>> >
>> > - https://github.com/stackforge/murano-agent
>> > - https://github.com/stackforge/murano-api
>> > - https://github.com/stackforge/murano-common
>> > - https://github.com/stackforge/murano-conductor
>> > - https://github.com/stackforge/murano-dashboard
>> > - https://github.com/stackforge/murano-deployment
>> > - https://github.com/stackforge/murano-docs
>> > - https://github.com/stackforge/murano-metadataclient
>> > - https://github.com/stackforge/murano-repository
>> > - https://github.com/stackforge/murano-tests
>> > ...(did I miss others?)
>> >
>> > Can we maybe not have more git repositories and instead figure out a
>> way to
>> > have 1 repository (perhaps with submodules?) ;-)
>> >
>> > It appears like murano is already exploding all over stackforge which
>> makes
>> > it hard to understand why yet another repo is needed. I understand why
>> from
>> > a code point of view, but it doesn't seem right from a code organization
>> > point of view to continue adding repos. It seems like murano
>> > (https://github.com/stackforge/murano) should just have 1 repo, with
>> > sub-repos (tests, docs, api, agent...) for its own organizational usage
>> > instead of X repos that expose others to murano's internal
>> organizational
>> > details.
>> >
>> > -Josh
>>
>>
>> Joshua,
>>
>> I agree that this huge number of repositories is confusing for newcomers.
>> I've
>> spent some time to understand mission of each of these repos. That's why
>> we
>> already did the cleanup :) [0]
>>
>> And I personally will do everything to prevent creation of new repo for
>> Murano.
>>
>> Here is the list of repositories targeted for the next Murano release
>> (Apr 17):
>> * murano-api
>> * murano-agent
>> * python-muranoclient
>> * murano-dashboard
>> * murano-docs
>>
>> The rest of these repos will be deprecated right after the release.  Also
>> we
>> will rename murano-api to just "murano". murano-api will include all the
>> Murano services, functionaltests for Tempest, Devstack scripts, developer
>> docs.
>> I guess we already can update README files in deprecated repos to avoid
>> further
>> confusion.
>>
>> I wouldn't agree that there should be just one repo. Almost every
>> OpenStack
>> project has it's own repo for python client. All the user docs are kept
>> in a
>> separate repo. Guest agent code should live in it's own repository to keep
>> number of dependencies as low as possible. I'd say there should be
>> required/comfortable minimum of repositories per project.
>>
>>
>> And one more nit correction:
>> OpenStack has its own git repository [1]. We should avoid referring to
>> github
>> since it's just a convenient mirror, while [1] is an official
>> OpenStack repository.
>>
>> [0]
>> https://blueprints.launchpad.net/murano/+spec/repository-reorganization
>> [1] http://git.openstack.org/cgit/
>>
>>
>>
>> Thanks,
>> Ruslan
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Thanks,
> Dmitry Teselkin
> Deployment Engineer
> Mirantis
> http://www.mirantis.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]

2014-03-25 Thread Oleg Bondarev
Hi Vijay,
Currently Neutron LBaaS supports only a namespace-based implementation for
HAProxy.
You can, however, run the LBaaS agent on a host other than the network
controller node - in that
case the HAProxy processes will be running on that host, but still in namespaces.
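
Roughly - this is a sketch from memory, so the option values and paths should be
double-checked against your release - that means installing the agent on the
other host and pointing it at the usual driver pair in lbaas_agent.ini:

  [DEFAULT]
  # spawns haproxy processes in namespaces on whatever host runs this agent
  device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
  interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

  # started with:
  # neutron-lbaas-agent --config-file /etc/neutron/neutron.conf \
  #                     --config-file /etc/neutron/lbaas_agent.ini

You don't configure the host's IP anywhere on the server side: the agent
registers itself over RPC and pools get scheduled to it.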

Also, there is an effort in Neutron to add support for advanced
services in VMs [1].
Once it is completed I hope it will be possible to adopt it in LBaaS and
run HAProxy in such a service VM.

[1] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

Thanks,
Oleg


On Tue, Mar 25, 2014 at 1:39 AM, Vijay B  wrote:

> Hi Eugene,
>
> Thanks for the reply! How/where is the agent configuration done for
> HAProxy? If I don't want to go with a network namespace based HAProxy
> process, but want to deploy my own HAProxy instance on a host outside of
> the network controller node, and make neutron deploy pools/VIPs on that
> HAProxy instance, does neutron currently support this scenario? If so, what
> are the configuration steps I will need to carry out to deploy HAProxy on a
> separate host (for example, where do I specify the ip address of the
> haproxy host, etc)?
>
> Regards,
> Vijay
>
>
> On Mon, Mar 24, 2014 at 2:04 PM, Eugene Nikanorov  wrote:
>> Hi,
>>
>> The HAProxy driver has not been removed from trunk; instead it became the base
>> for the agent-based driver, so the only haproxy-specific thing in the plugin
>> driver is the device driver name. The namespace driver is a device driver on the
>> agent side and it was there from the beginning.
>> The reason for the change is mere refactoring: it seems that solutions
>> that employ agents could share the same code with only device driver being
>> specific.
>>
>> So, everything is in place, HAProxy continues to be the default
>> implementation of Neutron LBaaS service. It supports spawning haproxy
>> processes on any host that runs lbaas agent.
>>
>> Thanks,
>> Eugene.
>>
>>
>>
>> On Tue, Mar 25, 2014 at 12:33 AM, Vijay B  wrote:
>>
>>> Hi,
>>>
>>> I'm looking at HAProxy support in Neutron, and I observe that the
>>> drivers/haproxy/plugin_driver.py file in the stable/havana release has been
>>> effectively removed from trunk (master), in that the plugin driver in the
>>> master simply points to the namespace driver. What was the reason to do
>>> this? Was the plugin driver in havana tested and documented? I can't seem
>>> to get hold of any relevant documentation that describes how to configure
>>> HAProxy LBs installed on separate boxes (and not brought up in network
>>> namespaces) - can anyone please point me to the same?
>>>
>>> Also, are there any plans to bring back the HAProxy plugin driver to
>>> talk to remote HAProxy instances?
>>>
>>> Thanks,
>>> Regards,
>>> Vijay
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-25 Thread Dmitry Teselkin
Ruslan,

What about the murano-deployment repo? The most important parts of it are the
PowerShell scripts, the Windows Image Builder, package manifests, and some
other scripts that are better kept somewhere. Where do we plan to move them?


On Mon, Mar 24, 2014 at 10:29 PM, Ruslan Kamaldinov <
rkamaldi...@mirantis.com> wrote:

> On Mon, Mar 24, 2014 at 10:08 PM, Joshua Harlow 
> wrote:
> > Seeing that the following repos already exist, maybe there is some need
> for
> > cleanup?
> >
> > - https://github.com/stackforge/murano-agent
> > - https://github.com/stackforge/murano-api
> > - https://github.com/stackforge/murano-common
> > - https://github.com/stackforge/murano-conductor
> > - https://github.com/stackforge/murano-dashboard
> > - https://github.com/stackforge/murano-deployment
> > - https://github.com/stackforge/murano-docs
> > - https://github.com/stackforge/murano-metadataclient
> > - https://github.com/stackforge/murano-repository
> > - https://github.com/stackforge/murano-tests
> > ...(did I miss others?)
> >
> > Can we maybe not have more git repositories and instead figure out a way
> to
> > have 1 repository (perhaps with submodules?) ;-)
> >
> > It appears like murano is already exploding all over stackforge which
> makes
> > it hard to understand why yet another repo is needed. I understand why
> from
> > a code point of view, but it doesn't seem right from a code organization
> > point of view to continue adding repos. It seems like murano
> > (https://github.com/stackforge/murano) should just have 1 repo, with
> > sub-repos (tests, docs, api, agent...) for its own organizational usage
> > instead of X repos that expose others to murano's internal organizational
> > details.
> >
> > -Josh
>
>
> Joshua,
>
> I agree that this huge number of repositories is confusing for newcomers.
> I've
> spent some time to understand mission of each of these repos. That's why we
> already did the cleanup :) [0]
>
> And I personally will do everything to prevent creation of new repo for
> Murano.
>
> Here is the list of repositories targeted for the next Murano release (Apr
> 17):
> * murano-api
> * murano-agent
> * python-muranoclient
> * murano-dashboard
> * murano-docs
>
> The rest of these repos will be deprecated right after the release.  Also
> we
> will rename murano-api to just "murano". murano-api will include all the
> Murano services, functionaltests for Tempest, Devstack scripts, developer
> docs.
> I guess we already can update README files in deprecated repos to avoid
> further
> confusion.
>
> I wouldn't agree that there should be just one repo. Almost every OpenStack
> project has it's own repo for python client. All the user docs are kept in
> a
> separate repo. Guest agent code should live in it's own repository to keep
> number of dependencies as low as possible. I'd say there should be
> required/comfortable minimum of repositories per project.
>
>
> And one more nit correction:
> OpenStack has its own git repository [1]. We should avoid referring to
> github
> since it's just a convenient mirror, while [1] is an official
> OpenStack repository.
>
> [0]
> https://blueprints.launchpad.net/murano/+spec/repository-reorganization
> [1] http://git.openstack.org/cgit/
>
>
>
> Thanks,
> Ruslan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,
Dmitry Teselkin
Deployment Engineer
Mirantis
http://www.mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Separating our Murano PL core in own package

2014-03-25 Thread Serg Melikyan
Joshua, I was talking about a simple Python sub-package inside the existing
repository, in the existing package. I am suggesting adding a
muranoapi.engine sub-package, and nothing more.


On Mon, Mar 24, 2014 at 10:29 PM, Ruslan Kamaldinov <
rkamaldi...@mirantis.com> wrote:

> On Mon, Mar 24, 2014 at 10:08 PM, Joshua Harlow 
> wrote:
> > Seeing that the following repos already exist, maybe there is some need
> for
> > cleanup?
> >
> > - https://github.com/stackforge/murano-agent
> > - https://github.com/stackforge/murano-api
> > - https://github.com/stackforge/murano-common
> > - https://github.com/stackforge/murano-conductor
> > - https://github.com/stackforge/murano-dashboard
> > - https://github.com/stackforge/murano-deployment
> > - https://github.com/stackforge/murano-docs
> > - https://github.com/stackforge/murano-metadataclient
> > - https://github.com/stackforge/murano-repository
> > - https://github.com/stackforge/murano-tests
> > ...(did I miss others?)
> >
> > Can we maybe not have more git repositories and instead figure out a way
> to
> > have 1 repository (perhaps with submodules?) ;-)
> >
> > It appears like murano is already exploding all over stackforge which
> makes
> > it hard to understand why yet another repo is needed. I understand why
> from
> > a code point of view, but it doesn't seem right from a code organization
> > point of view to continue adding repos. It seems like murano
> > (https://github.com/stackforge/murano) should just have 1 repo, with
> > sub-repos (tests, docs, api, agent...) for its own organizational usage
> > instead of X repos that expose others to murano's internal organizational
> > details.
> >
> > -Josh
>
>
> Joshua,
>
> I agree that this huge number of repositories is confusing for newcomers.
> I've
> spent some time to understand mission of each of these repos. That's why we
> already did the cleanup :) [0]
>
> And I personally will do everything to prevent creation of new repo for
> Murano.
>
> Here is the list of repositories targeted for the next Murano release (Apr
> 17):
> * murano-api
> * murano-agent
> * python-muranoclient
> * murano-dashboard
> * murano-docs
>
> The rest of these repos will be deprecated right after the release.  Also
> we
> will rename murano-api to just "murano". murano-api will include all the
> Murano services, functionaltests for Tempest, Devstack scripts, developer
> docs.
> I guess we already can update README files in deprecated repos to avoid
> further
> confusion.
>
> I wouldn't agree that there should be just one repo. Almost every OpenStack
> project has it's own repo for python client. All the user docs are kept in
> a
> separate repo. Guest agent code should live in it's own repository to keep
> number of dependencies as low as possible. I'd say there should be
> required/comfortable minimum of repositories per project.
>
>
> And one more nit correction:
> OpenStack has its own git repository [1]. We should avoid referring to
> github
> since it's just a convenient mirror, while [1] is an official
> OpenStack repository.
>
> [0]
> https://blueprints.launchpad.net/murano/+spec/repository-reorganization
> [1] http://git.openstack.org/cgit/
>
>
>
> Thanks,
> Ruslan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Code review

2014-03-25 Thread Timur Nurlygayanov
Hi Murano team,

can we review this commit asap?
https://bugs.launchpad.net/murano/+bug/1291968
This is a fix for critical bug #1291968 in release 0.5.

Thanks!

-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev