[openstack-dev] [Heat][Horizon] Liberty horizon and get_file workaround?

2016-04-21 Thread Jason Pascucci
Hi,

I wanted to add my YAML as new resources (via 
/etc/heat/environment.d/default.yaml), but we use some external files in the 
OS::Nova::Server personality section.

It looks like the Heat CLI handles that when you pass YAML to it directly, but I 
couldn't get it to work either through Horizon, or even through the Heat CLI, 
when the get_file is inside one of the new resources.
I can see why file:// might not work, but I sort of expected 
that at least http://blah would still work within Horizon (if so, I could just 
stick it in Swift somewhere, but alas, no soup).

What's the fastest path to a workaround?
I was thinking of making a new resource plugin that reads the 
path and returns the contents so it could be consumed via get_attr, essentially 
cribbing the code from the Heat command-line processing (rough sketch below).
Is there a better/sane way?
Is there some conceptual thing I'm missing that makes this moot?
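
For concreteness, here is a minimal sketch of the kind of plugin I mean. It is
untested, and the property/attribute/resource-type names are just placeholders;
the plugin interface is what I understand the Heat resource API to look like,
so treat it as an assumption rather than working code:

    # Hypothetical plugin dropped into a heat-engine plugin_dirs path.
    # Illustrative only -- untested, names are placeholders.
    from heat.engine import attributes
    from heat.engine import properties
    from heat.engine import resource


    class FileContents(resource.Resource):
        """Expose the contents of a file readable by heat-engine via get_attr."""

        properties_schema = {
            'path': properties.Schema(
                properties.Schema.STRING,
                'Path readable by the heat-engine host (assumed property name).',
                required=True,
            ),
        }

        attributes_schema = {
            'contents': attributes.Schema('The raw file contents.'),
        }

        def handle_create(self):
            # Nothing to provision; the attribute is resolved on demand.
            self.resource_id_set(self.physical_resource_name())

        def _resolve_attribute(self, name):
            if name == 'contents':
                with open(self.properties['path']) as f:
                    return f.read()


    def resource_mapping():
        # Hypothetical resource type name.
        return {'JRP::Util::FileContents': FileContents}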

Thanks in advance,

JRPascucci
Juniper Networks



Re: [openstack-dev] [nova][neutron] os-vif status report

2016-04-21 Thread Angus Lees
In case it wasn't already assumed, anyone is welcome to contact me directly
(irc: gus, email, or in Austin) if they have questions or want help with
privsep integration work.  It's early days still and the docs aren't
extensive (ahem).
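
For anyone who wants a head start before the docs catch up, the basic shape of
a privsep'd helper is roughly the following. This is only an illustrative
sketch: the context name, capabilities and the function itself are made up for
this example, so check the oslo.privsep docs and an already-converted project
like os-brick for the real thing.

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # One privileged context per consumer, configured from its own conf section.
    ctx = priv_context.PrivContext(
        'vif_plug_example',                       # assumed prefix
        cfg_section='vif_plug_example_privileged',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_DAC_OVERRIDE],
    )


    @ctx.entrypoint
    def set_device_mtu(dev, mtu):
        # Runs inside the privileged daemon; arguments and return values
        # are serialized across the privsep channel.
        with open('/sys/class/net/%s/mtu' % dev, 'w') as f:
            f.write(str(mtu))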

os-brick privsep change just recently merged (yay), and I have the bulk of
the neutron ip_lib conversion almost ready for review, so os-vif is a good
candidate to focus on for this cycle.

 - Gus

On Thu, 14 Apr 2016 at 01:52 Daniel P. Berrange  wrote:

> I won't be present at the forthcoming Austin summit, so to prepare other
> people in case there are f2f discussions, this is a rough status report
> on the os-vif progress
>
>
> os-vif core
> ---
>
> NB by os-vif core, I mean the python packages in the os_vif/ namespace.
>
> The object model for describing the various different VIF backend
> configurations is defined well enough that it should cover all the
> VIF types currently used by Nova libvirt driver, and probably all
> those needed by other virt drivers. The only exception is that we
> do not have a representation for the vmware 'dvs' VIF type. There's
> no real reason why not, other than the fact that we're concentrating
> on converting the libvirt nova driver first. These are dealt with
> by the os_vif.objects.VIFBase object and its subclasses.
>
>
> We now have an object model for describing client host capabilities.
> This is dealt with by the os_vif.objects.HostInfo versioned object.
> Currently this object provides details of all the os-vif plugins
> that are installed on the host, and which VIF config objects each
> supports.  The intent is that the HostInfo
> object is serialized to JSON, and passed to Neutron by Nova when
> creating a port.  This allows Neutron to dynamically decide which
> plugin and which VIF config it wants to use for creating the port.
>
>
> The os_vif.PluginBase class which all plugins must inherit from
> has been enhanced so that plugins can declare configuration
> parameters they wish to support. This allows config options for
> the plugins to be included directly in the nova.conf file in
> a dedicated section per plugin. For example, the linux bridge
> plugin will have its parameters in a "[os_vif_linux_bridge]"
> section in nova.conf.  This lets us set up the deprecations
> correctly, so that when upgrading from older Nova, existing
> settings in nova.conf still apply to the plugins provided
> by os-vif.
>
>
> os-vif reference plugins
> 
>
> Originally the intention was that all plugins would live outside
> of the os-vif package. During discussions at the Nova mid-cycle
> meeting there was a strong preference to have the linux bridge
> and openvswitch plugin implementations be distributed as part of
> the os-vif package directly.
>
> As such we now have 'vif_plug_linux_bridge' and 'vif_plug_ovs'
> python packages as part of the os-vif module. Note that these
> are *not* under the os_vif python namespace, as the intention
> was to keep their code structured as if they were separate,
> so we can easily split them out again in future if we need to.
>
> Both the linux bridge and ovs plugins have now been converted
> over to use oslo.privsep instead of rootwrap for all the places
> where they need to run privileged commands.
>
>
> os-vif extra plugins
> 
>
> Jay has had GIT repositories created to hold the plugins for all
> the other VIF types the libvirt driver needs to support to have
> feature parity with Mitaka and earlier. AFAIK, no one has done
> any work to actually get the code for these working. This is not
> a blocker, since the way the Nova integration is written allows
> us to incrementally convert each VIF type over to use os-vif, so
> we avoid the need for a "big bang".
>
>
> os-vif Nova integration
> ---
>
> I have a patch up for review against Nova that converts the libvirt
> driver to use os-vif. It only does the conversion for linux bridge
> and openvswitch; all other vif types fall back to using the current
> code, as mentioned above.  The unit tests for this pass locally,
> but I've not been able to verify it's working correctly when run for
> real. There are almost certainly privsep-related integration tasks to
> shake out - possibly as little as just installing the rootwrap filter
> needed to allow use of privsep. My focus right now is ironing this
> out so that I can verify linux bridge + ovs work with os-vif.
>
>
> There is a new job defined in the experimental queue that can verify
> Nova against os-vif git master, so we can get forewarning
> if something in os-vif will cause Nova to break. This should also
> let us verify that the integration is actually working in Nova CI
> before allowing it to actually merge.
>
>
> os-vif Neutron integration
> --
>
> As mentioned earlier we now have a HostInfo versioned object defined
> in os-vif which Nova will populate. We need to extend 

Re: [openstack-dev] [magnum] Seek advices for a licence issue

2016-04-21 Thread Jay Lau
I got confirmation from Mesosphere that we can use the open source DC/OS in
Magnum now, so it is a good time to enhance the Mesos bay with open source DC/OS.

From Mesosphere
DC/OS software is licensed under the Apache License, so you should feel
free to use it within the terms of that license.
---

Thanks.

On Thu, Apr 21, 2016 at 5:35 AM, Hongbin Lu  wrote:

> Hi Mark,
>
>
>
> I have gone through the announcement in detail. From my point of view, it
> seems to resolve the license issue that was blocking us before. I have
> included the Magnum team in the ML to see if our team members have any comments.
>
>
>
> Thanks for the support from the foundation.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Mark Collier [mailto:m...@openstack.org]
> *Sent:* April-19-16 12:36 PM
> *To:* Hongbin Lu
> *Cc:* foundat...@lists.openstack.org; Guang Ya GY Liu
> *Subject:* Re: [OpenStack Foundation] [magnum] Seek advices for a licence
> issue
>
>
>
> Hopefully today’s news that Mesosphere is open sourcing major components
> of DC/OS under an Apache 2.0 license will make things easier:
>
>
>
> https://mesosphere.com/blog/2016/04/19/open-source-dcos/
>
>
>
> I’ll be interested to hear your take after you have time to look at it in
> more detail, Hongbin.
>
>
>
> Mark
>
>
>
>
>
>
>
> On Apr 9, 2016, at 10:02 AM, Hongbin Lu  wrote:
>
>
>
> Hi all,
>
>
>
> A brief introduction to myself. I am the Magnum Project Team Lead (PTL).
> Magnum is the OpenStack container service. I wrote this email because the
> Magnum team is seeking clarification on a licence issue around shipping
> third-party software (DC/OS [1] in particular) and I was advised to consult
> the OpenStack Board of Directors in this regard.
>
>
>
> Before getting into the question, I think it is better to provide some
> background information. A feature provided by Magnum is to provision a
> container management tool on top of a set of Nova instances. One of the
> container management tools Magnum supports is Apache Mesos [2]. Generally
> speaking, Magnum ships Mesos by providing a custom cloud image with the
> necessary packages pre-installed. So far, all the shipped components are
> open source with appropriate licenses, so we are good.
>
>
>
> Recently, one of our contributors suggested extending the Mesos support to
> DC/OS [3]. The Magnum team is unclear whether there is a license issue with
> shipping DC/OS, which looks like a closed-source product but has a community
> version on Amazon Web Services [4]. I want to know what appropriate actions
> the Magnum team should take in this pursuit, or whether we should stop pursuing
> this direction further. Advice is greatly appreciated. Please let us know
> if we need to provide further information. Thanks.
>
>
>
> [1] https://docs.mesosphere.com/
>
> [2] http://mesos.apache.org/
>
> [3] https://blueprints.launchpad.net/magnum/+spec/mesos-dcos
>
> [4]
> https://docs.mesosphere.com/administration/installing/installing-community-edition/
>
>
>
> Best regards,
>
> Hongbin
>
>
>
>
>
>
>
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] [release] release hiatus

2016-04-21 Thread Morgan Fainberg
Safe travels! See you in Austin.

On Thu, Apr 21, 2016 at 4:22 PM, Tony Breeds 
wrote:

> On Thu, Apr 21, 2016 at 02:13:15PM -0400, Doug Hellmann wrote:
> > The release team is preparing for and traveling to the summit, just as
> > many of you are. With that in mind, we are going to hold off on
> > releasing anything until 2 May, unless there is some sort of critical
> > issue or gate blockage. Please feel free to submit release requests to
> > openstack/releases, but we'll only plan on processing any that indicate
> > critical issues in the commit messages.
>
> What's your preferred way of indicating to the release team that something
> is urgent?
>
> There's always the option of posting the review and then jumping on IRC to
> ping y'all.  Just wondering if you have a preference for something else.
>
> Yours Tony.
>
>
>


Re: [openstack-dev] [magnum] Link to the latest atomic image

2016-04-21 Thread Eli Qiao


@hongbin,

FYI, there is a patch from yolanda to use fedora atomic images built 
in our mirrors: https://review.openstack.org/#/c/306283/



On 2016-04-22 10:41, Hongbin Lu wrote:


Hi team,

Based on a request, I created a link to the latest atomic image that 
Magnum is using: 
https://fedorapeople.org/groups/magnum/fedora-atomic-latest.qcow2 . We 
plan to keep this link pointing to the newest atomic image so that we 
can avoid updating the name of the image for every image upgrade. A 
ticket was created for updating the docs accordingly: 
https://bugs.launchpad.net/magnum/+bug/1573361 .


Best regards,

Hongbin




--
Best Regards, Eli Qiao (乔立勇)
Intel OTC China


[openstack-dev] [magnum] Link to the latest atomic image

2016-04-21 Thread Hongbin Lu
Hi team,

Based on a request, I created a link to the latest atomic image that Magnum is 
using: https://fedorapeople.org/groups/magnum/fedora-atomic-latest.qcow2 . We 
plan to keep this link pointing to the newest atomic image so that we can avoid 
updating the name of the image for every image upgrade. A ticket was created 
for updating the docs accordingly: 
https://bugs.launchpad.net/magnum/+bug/1573361 .

Best regards,
Hongbin


Re: [openstack-dev] More on the topic of DELIMITER, the Quota Management Library proposal

2016-04-21 Thread Amrith Kumar
Inline below ... thread is too long, will catch you in Austin.

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Thursday, April 21, 2016 8:08 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] More on the topic of DELIMITER, the Quota
> Management Library proposal
> 
> Hmm, where do I start... I think I will just cut to the two primary
> disagreements I have. And I will top-post because this email is way too
> big.
> 
> 1) On serializable isolation level.
> 
> No, you don't need it at all to prevent races in claiming. Just use a
> compare-and-update with retries strategy. Proof is here:
> 
> https://github.com/jaypipes/placement-bench/blob/master/placement.py#L97-
> L142
> 
> Works great and prevents multiple writers from oversubscribing any
> resource without relying on any particular isolation level at all.
> 
> The `generation` field in the inventories table is what allows multiple
> writers to ensure a consistent view of the data without needing to rely on
> heavy lock-based semantics and/or RDBMS-specific isolation levels.
> 

[amrith] This works for what it is doing; we can definitely do this, and it will 
work at any isolation level, yes. I didn't want to go this route because it is 
still going to require an insert into another table recording what the actual 
'thing' is that is claiming the resource, and that insert is going to be in a 
different transaction; managing those two transactions was what I wanted to 
avoid. I was hoping to avoid having two tables tracking claims, one showing the 
currently claimed quota and another holding the things that claimed that quota. 
I have to think again about whether that is possible.

> 2) On reservations.
> 
> The reason I don't believe reservations are necessary to be in a quota
> library is because reservations add a concept of a time to a claim of some
> resource. You reserve some resource to be claimed at some point in the
> future and release those resources at a point further in time.
> 
> Quota checking doesn't look at what the state of some system will be at
> some point in the future. It simply returns whether the system *right
> now* can handle a request *right now* to claim a set of resources.
> 
> If you want reservation semantics for some resource, that's totally cool,
> but IMHO, a reservation service should live outside of the service that is
> actually responsible for providing resources to a consumer.
> Merging right-now quota checks and future-based reservations into the same
> library just complicates things unnecessarily IMHO.
> 

[amrith] extension of the above ...

> 3) On resizes.
> 
> Look, I recognize some users see some value in resizing their resources.
> That's fine. I personally think expand operations are fine, and that
> shrink operations are really the operations that should be prohibited in
> the API. But, whatever, I'm fine with resizing of requested resource
> amounts. My big point is if you don't have a separate table that stores
> quota_usages and instead only have a single table that stores the actual
> resource usage records, you don't have to do *any* quota check operations
> at all upon deletion of a resource. For modifying resource amounts (i.e. a
> resize) you merely need to change the calculation of requested resource
> amounts to account for the already-consumed usage amount.
> 
> Bottom line for me: I really won't support any proposal for a complex
> library that takes the resource claim process out of the hands of the
> services that own those resources. The simpler the interface of this
> library, the better.
> 

[amrith] My proposal would not, but this email thread has got too long. Yes, 
simpler interface; will catch you in Austin.

> Best,
> -jay
> 
> On 04/19/2016 09:59 PM, Amrith Kumar wrote:
> >> -Original Message-
> >> From: Jay Pipes [mailto:jaypi...@gmail.com]
> >> Sent: Monday, April 18, 2016 2:54 PM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] More on the topic of DELIMITER, the
> >> Quota Management Library proposal
> >>
> >> On 04/16/2016 05:51 PM, Amrith Kumar wrote:
> >>> If we therefore assume that this will be a Quota Management Library,
> >>> it is safe to assume  that quotas are going to be managed on a
> >>> per-project basis, where participating projects will use this library.
> >>> I believe that it stands to reason that any data persistence will
> >>> have to be in a location decided by the individual project.
> >>
> >> Depends on what you mean by "any data persistence". If you are
> >> referring to the storage of quota values (per user, per tenant,
> >> global, etc) I think that should be done by the Keystone service.
> >> This data is essentially an attribute of the user or the tenant or the
> service endpoint itself (i.e.
> >> global defaults). This data also rarely changes and logically belongs
> >> to the service that manages users, tenants, and service endpoints:
> Keystone.
> >>
> >> If you 

[openstack-dev] Reply: Re: Reply: [probably forged email] Re: Reply: [probably forged email] Re: [openstack-dev] [vitrage] vitrage alarms list

2016-04-21 Thread dong . wenjuan
Hi,
I got the latest sources, deleted /etc/vitrage/vitrage.conf, and ran 
unstack.sh and stack.sh.
But the deploy failed. The local.conf is the same as before.
Is there any Vitrage configuration I missed?
Using the openstack user create CLI to create nova, glance, etc. is successful.
Thanks for your help. :)

Here is the error log:

2016-04-22 02:00:01.646 | Initializing Vitrage
++/opt/stack/vitrage/devstack/plugin.sh:source:255  init_vitrage
++/opt/stack/vitrage/devstack/plugin.sh:init_vitrage:179 
_vitrage_create_accounts
++/opt/stack/vitrage/devstack/plugin.sh:_vitrage_create_accounts:69 
is_service_enabled vitrage-api
++functions-common:is_service_enabled:2047  local xtrace
+++functions-common:is_service_enabled:2048  set +o
+++functions-common:is_service_enabled:2048  grep xtrace
++functions-common:is_service_enabled:2048  xtrace='set -o xtrace'
++functions-common:is_service_enabled:2049  set +o xtrace
++functions-common:is_service_enabled:2077  return 0
++/opt/stack/vitrage/devstack/plugin.sh:_vitrage_create_accounts:71 
create_service_user vitrage admin
++lib/keystone:create_service_user:449  local role=admin
++lib/keystone:create_service_user:451  get_or_create_user vitrage 
stack Default
++functions-common:get_or_create_user:798   local user_id
++functions-common:get_or_create_user:799   [[ ! -z '' ]]
++functions-common:get_or_create_user:802   local email=
+++functions-common:get_or_create_user:816   openstack user create vitrage 
--password stack --domain=Default --or-show -f value -c id
Discovering versions from the identity service failed when creating the 
password plugin. Attempting to determine version from URL.
Could not determine a suitable URL for the plugin
++functions-common:get_or_create_user:814   user_id=
+functions-common:get_or_create_user:1 exit_trap
+./stack.sh:exit_trap:474  local r=1
++./stack.sh:exit_trap:475  jobs -p
+./stack.sh:exit_trap:475  jobs=
+./stack.sh:exit_trap:478  [[ -n '' ]]
+./stack.sh:exit_trap:484  kill_spinner
+./stack.sh:kill_spinner:370   '[' '!' -z '' ']'
+./stack.sh:exit_trap:486  [[ 1 -ne 0 ]]
+./stack.sh:exit_trap:487  echo 'Error on exit'



Wenjuan Dong
Controller Dept IV / Wireless Product Operation

D3, ZTE, No. 889, Bibo Rd., Pudong New District, Shanghai
T: +86 021 85922  M: +86 13661996389
E: dong.wenj...@zte.com.cn
www.ztedevice.com





Eyal B  
2016-04-20 16:34

To: dong.wenj...@zte.com.cn
Cc: "OpenStack Development Mailing List (not for usage questions)" 
Subject: Re: Reply: [probably forged email] Re: [openstack-dev] Reply: [probably forged email] Re: [openstack-dev] [vitrage] vitrage alarms list






Hi,

Can you send the vitrage-graph.log ?
Is the vitrage-graph process running ?

We fixed some bugs with the devstack installation
Do you have the latest sources from the vitrage git repo ?

Can you get the latest sources (just do git pull)
delete the /etc/vitrage/vitrage.conf
do unstack.sh and stack.sh

and let me know if it works

Eyal

On Wed, Apr 20, 2016 at 10:35 AM,  wrote:

Hi, 
I executed all the vitrage CLIs and got the same error. 
The log error is from /var/log/apache2/vitrage.log. 
Here is my config file. Thanks for your help~ 

  


BR 
dwj 




Eyal B  
2016-04-20 14:10

Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Cc:
Subject: [probably forged email] Re: [openstack-dev] Reply: [probably forged email] Re: [openstack-dev] [vitrage] vitrage alarms list








Hi, 

Can you send the local.conf file in your devstack folder ? 
Can you send the /etc/vitrage/vitrage.conf file ? 
If you do vitrage topology show do you get an error ? 
the log that you sent is it vitrage-api.log or vitrage.log ? 

Thanks 
Eyal 



On Wed, Apr 20, 2016 at 5:34 AM,  wrote: 

Hi, 
Thanks for your response. 
There is no error about the alarms list request in the vitrage-graph log. 
Here is the vitrage-api log showing the alarms error. 

Thank you for your help~ :) 

2016-04-20 09:45:34.599585 2016-04-20 09:45:34.599 3700 DEBUG 
vitrage.service [-] static_physical.transformer= 
vitrage.datasources.static_physical.transformer.StaticPhysicalTransformer 
log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2525 
2016-04-20 09:45:34.599844 2016-04-20 09:45:34.599 3700 DEBUG 
vitrage.service [-] 

 
log_opt_values 
/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2527 
2016-04-20 09:45:34.797687 2016-04-20 09:45:34.797 3700 INFO root [-] 
Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt 
2016-04-20 09:45:34.848659 

Re: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what does it test?

2016-04-21 Thread Matt Riedemann

On 3/31/2016 7:31 AM, Znoinski, Waldemar wrote:


[WZ] See comments below about the full/small wiki, but would the below be 
enough, or would you want to see more:
- networking-ci runs (with exceptions):
tempest.api.network
tempest.scenario.test_network_basic_ops

- nfv-ci runs (with exceptions):
tempest.api.compute on standard flavors with NFV features enabled
tempest.scenario (including intel-nfv-ci-tests - 
https://github.com/openstack/intel-nfv-ci-tests) on standard flavors with NFV 
features enabled


In a recent run of the Intel-NFV-CI job I don't see those custom 
scenario tests being run:


http://intel-openstack-ci-logs.ovh/compute-ci/refs/changes/68/267768/2/compute-nfv-flavors/20160116_231609/tempest/scenario-on-nfv-flavors/testr_results.html






 > From the wiki it looks like the Intel Networking CI tests ovs-dpdk but only 
for
 >Neutron. Could that be expanded to also test on Nova changes that hit a sub-
 >set of the nova tree?
[WZ] Yes, Networking CI is for neutron to test ovs-dpdk. It was also configured 
to trigger on openstack/nova changes when they affect nova/virt/libvirt/vif.py. 
It's currently disabled due to an issue with a Jenkins plugin that we're seeing 
when two jobs point at the same project simultaneously, which causes missed 
comments. Example [5]. We're still investigating one last option to get it 
working properly with the current setup. Even if we fail, we're currently 
migrating to a new CI setup (OpenStack Infra's downstream-ci suite) and we'll 
re-enable the ovs-dpdk testing on nova changes once we're migrated, 6-8 weeks 
from now.
Is there more you feel we should be testing ovs-dpdk against when it comes to 
nova changes?


I'd think it should run on any change in nova that is under 
nova/network, or at least nova/network/linux_net.py and 
nova/network/model.py.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what does it test?

2016-04-21 Thread Matt Riedemann

On 3/30/2016 8:47 PM, yongli he wrote:

Hi, mriedem

Shaohe is on vacation. The Intel SRIOV CI comments on Neutron, running
the macvtap vnic SRIOV testing plus the required neutron smoke tests.

[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI

Regards
Yongli He




On 2016-03-30 23:21, Matt Riedemann wrote:

Intel has a few third party CIs in the third party systems wiki [1].

I was talking with Moshe Levi today about expanding coverage for
mellanox CI in nova, today they run an SRIOV CI for vnic type
'direct'. I'd like them to also start running their 'macvtap' CI on
the same nova changes (that job only runs in neutron today I think).

I'm trying to see what we have for coverage on these different NFV
configurations, and because of limited resources to run NFV CI, don't
want to duplicate work here.

So I'm wondering what the various Intel NFV CI jobs run, specifically
the Intel Networking CI [2], Intel NFV CI [3] and Intel SRIOV CI [4].

From the wiki it looks like the Intel Networking CI tests ovs-dpdk but
only for Neutron. Could that be expanded to also test on Nova changes
that hit a sub-set of the nova tree?

I really don't know what the latter two jobs test as far as
configuration is concerned, the descriptions in the wikis are pretty
empty (please update those to be more specific).

Please also include in the wiki the recheck method for each CI so I
don't have to dig through Gerrit comments to find one.

[1] https://wiki.openstack.org/wiki/ThirdPartySystems
[2] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-Networking-CI
[3] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-NFV-CI
[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI






Mellanox is running an SRIOV job with macvtap on Nova now so we probably 
don't need the Intel one running on Nova changes too, unless there are 
differences in the environment that would warrant running that on Nova also.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] More on the topic of DELIMITER, the Quota Management Library proposal

2016-04-21 Thread Monty Taylor

On 04/21/2016 07:07 PM, Jay Pipes wrote:

Hmm, where do I start... I think I will just cut to the two primary
disagreements I have. And I will top-post because this email is way too
big.

1) On serializable isolation level.

No, you don't need it at all to prevent races in claiming. Just use a
compare-and-update with retries strategy. Proof is here:

https://github.com/jaypipes/placement-bench/blob/master/placement.py#L97-L142


Works great and prevents multiple writers from oversubscribing any
resource without relying on any particular isolation level at all.

The `generation` field in the inventories table is what allows multiple
writers to ensure a consistent view of the data without needing to rely
on heavy lock-based semantics and/or RDBMS-specific isolation levels.

2) On reservations.

The reason I don't believe reservations are necessary to be in a quota
library is because reservations add a concept of a time to a claim of
some resource. You reserve some resource to be claimed at some point in
the future and release those resources at a point further in time.

Quota checking doesn't look at what the state of some system will be at
some point in the future. It simply returns whether the system *right
now* can handle a request *right now* to claim a set of resources.

If you want reservation semantics for some resource, that's totally
cool, but IMHO, a reservation service should live outside of the service
that is actually responsible for providing resources to a consumer.
Merging right-now quota checks and future-based reservations into the
same library just complicates things unnecessarily IMHO.

3) On resizes.

Look, I recognize some users see some value in resizing their resources.
That's fine. I personally think expand operations are fine, and that
shrink operations are really the operations that should be prohibited in
the API. But, whatever, I'm fine with resizing of requested resource
amounts. My big point is if you don't have a separate table that stores
quota_usages and instead only have a single table that stores the actual
resource usage records, you don't have to do *any* quota check
operations at all upon deletion of a resource. For modifying resource
amounts (i.e. a resize) you merely need to change the calculation of
requested resource amounts to account for the already-consumed usage
amount.

Bottom line for me: I really won't support any proposal for a complex
library that takes the resource claim process out of the hands of the
services that own those resources. The simpler the interface of this
library, the better.


I agree with every word that Jay has written here. I especially agree 
with point 1, and in fact have been in favor of that approach over the 
current system of table locks in the nova quota code for about as long 
as there has been nova quota code. But all three points are spot on.



On 04/19/2016 09:59 PM, Amrith Kumar wrote:

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, April 18, 2016 2:54 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] More on the topic of DELIMITER, the Quota
Management Library proposal

On 04/16/2016 05:51 PM, Amrith Kumar wrote:

If we therefore assume that this will be a Quota Management Library,
it is safe to assume  that quotas are going to be managed on a
per-project basis, where participating projects will use this library.
I believe that it stands to reason that any data persistence will have
to be in a location decided by the individual project.


Depends on what you mean by "any data persistence". If you are referring
to the storage of quota values (per user, per tenant, global, etc) I
think
that should be done by the Keystone service. This data is essentially an
attribute of the user or the tenant or the service endpoint itself (i.e.
global defaults). This data also rarely changes and logically belongs to
the service that manages users, tenants, and service endpoints:
Keystone.

If you are referring to the storage of resource usage records, yes, each
service project should own that data (and frankly, I don't see a need to
persist any quota usage data at all, as I mentioned in a previous
reply to
Attila).



[amrith] You make a distinction that I had made implicitly, and it is
important
to highlight it. Thanks for pointing it out. Yes, I meant both of the
above, and as stipulated. Global defaults in keystone (somehow, TBD) and
usage records, on a per-service basis.



That may not be a very interesting statement but the corollary is, I
think, a very significant statement; it cannot be assumed that the
quota management information for all participating projects is in the
same database.


It cannot be assumed that this information is even in a database at
all...



[amrith] I don't follow. If the service in question is to be scalable,
I think it
stands to reason that there must be some mechanism by which instances of
the service can share usage records (as you refer to them, 

Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-21 Thread Jay Pipes

On 04/20/2016 06:40 PM, Matt Riedemann wrote:

Note that I think the only time Nova gets details about ports in the API
during a server create request is when doing the network request
validation, and that's only if there is a fixed IP address or specific
port(s) in the request, otherwise Nova just gets the networks. [1]

[1]
https://github.com/openstack/nova/blob/ee7a01982611cdf8012a308fa49722146c51497f/nova/network/neutronv2/api.py#L1123


Actually, nova.network.neutronv2.api.API.allocate_for_instance() is 
*never* called by the Compute API service (though, strangely, 
deallocate_for_instance() *is* called by the Compute API service).


allocate_for_instance() is *only* ever called in the nova-compute service:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/compute/manager.py#L1388

I was actually on a hangout today with Carl, Miguel and Dan Smith 
talking about just this particular section of code with regards to 
routed networks IPAM handling.


What I believe we'd like to do is move to a model where we call out to 
Neutron here in the conductor:


https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L397

and ask Neutron to give us as much information about available subnet 
allocation pools and segment IDs as it can *before* we end up calling 
the scheduler here:


https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L415

Not only will the segment IDs allow us to more properly use network 
affinity in placement decisions, but doing this kind of "probing" for 
network information in the conductor is inherently more scalable than 
doing this all in allocate_for_instance() on the compute node while 
holding the giant COMPUTE_NODE_SEMAPHORE lock.
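
To make that concrete, here is a hand-wavy sketch of the shape such a 
conductor-side probe might take. This is not proposed Nova code: the helper 
name and how its results get attached to the request spec are assumptions, 
and segment_id only appears on subnets if the routed-networks work exposes it; 
only the python-neutronclient calls themselves are real:

    from neutronclient.v2_0 import client as neutron_client


    def probe_network_info(session, requested_network_ids):
        """Collect subnet allocation pools (and, for routed networks,
        segment ids) before the scheduler is called."""
        neutron = neutron_client.Client(session=session)
        info = {}
        for net_id in requested_network_ids:
            subnets = neutron.list_subnets(network_id=net_id)['subnets']
            info[net_id] = [
                {
                    'subnet_id': s['id'],
                    'allocation_pools': s.get('allocation_pools', []),
                    # Present only on routed (segmented) networks.
                    'segment_id': s.get('segment_id'),
                }
                for s in subnets
            ]
        return info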


Best,
-jay



Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-21 Thread Cathy Zhang
I like Malini’s suggestion of meeting for lunch to get to know each other, 
then continue on Thursday.
So let’s meet at "Salon C" for lunch from 12:30pm~1:50pm on Wednesday and then 
continue the discussion at Room 400 at 3:10pm Thursday.
Since Salon C is a big room, I will put a sign “Common Flow Classifier and OVS 
Agent Extension” on the table.

I have created an etherpad for the discussion. 
https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit

Thanks,
Cathy


From: Stephen Wong [mailto:stephen.kf.w...@gmail.com]
Sent: Thursday, April 21, 2016 1:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Cathy Zhang; Miguel Angel Ajo; Reedip
Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

+1 on Wednesday lunch

On Thu, Apr 21, 2016 at 12:02 PM, Ihar Hrachyshka 
> wrote:
Cathy Zhang > wrote:
Hi everyone,

We have room 400 at 3:10pm on Thursday available for discussion of the two 
topics.
Another option is to use the common room with roundtables in "Salon C" during 
Monday or Wednesday lunch time.

Room 400 at 3:10pm is a closed room while the Salon C is a big open room which 
can host 500 people.

I am Ok with either option. Let me know if anyone has a strong preference.

On Monday, I have two talks to do. First one is 2:50-3:30pm, second one is 
4:40-5:20pm. But lunch time should probably be fine if it leaves time for the 
actual lunch...

Thursday at 3:10pm also works for me.


Ihar



Re: [openstack-dev] More on the topic of DELIMITER, the Quota Management Library proposal

2016-04-21 Thread Jay Pipes
Hmm, where do I start... I think I will just cut to the two primary 
disagreements I have. And I will top-post because this email is way too big.


1) On serializable isolation level.

No, you don't need it at all to prevent races in claiming. Just use a 
compare-and-update with retries strategy. Proof is here:


https://github.com/jaypipes/placement-bench/blob/master/placement.py#L97-L142

Works great and prevents multiple writers from oversubscribing any 
resource without relying on any particular isolation level at all.


The `generation` field in the inventories table is what allows multiple 
writers to ensure a consistent view of the data without needing to rely 
on heavy lock-based semantics and/or RDBMS-specific isolation levels.
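
(For readers who haven't looked at the linked code, the pattern is roughly the 
following. This is a minimal sketch of compare-and-update with retries, not the 
placement-bench code itself; the table and column names are assumptions.)

    import sqlalchemy as sa

    inventories = sa.Table(
        'inventories', sa.MetaData(),
        sa.Column('resource_id', sa.Integer, primary_key=True),
        sa.Column('used', sa.Integer, nullable=False),
        sa.Column('total', sa.Integer, nullable=False),
        sa.Column('generation', sa.Integer, nullable=False),
    )


    def claim(conn, resource_id, amount, max_retries=10):
        for _ in range(max_retries):
            row = conn.execute(
                sa.select([inventories]).where(
                    inventories.c.resource_id == resource_id)).fetchone()
            if row.used + amount > row.total:
                return False  # over quota; no retry needed
            # The WHERE clause on generation is the compare-and-update: if
            # another writer got in first, zero rows match and we retry with
            # a fresh read.
            res = conn.execute(inventories.update().where(
                sa.and_(inventories.c.resource_id == resource_id,
                        inventories.c.generation == row.generation)
            ).values(used=row.used + amount, generation=row.generation + 1))
            if res.rowcount == 1:
                return True
        raise Exception('too much contention, giving up')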


2) On reservations.

The reason I don't believe reservations are necessary to be in a quota 
library is because reservations add a concept of a time to a claim of 
some resource. You reserve some resource to be claimed at some point in 
the future and release those resources at a point further in time.


Quota checking doesn't look at what the state of some system will be at 
some point in the future. It simply returns whether the system *right 
now* can handle a request *right now* to claim a set of resources.


If you want reservation semantics for some resource, that's totally 
cool, but IMHO, a reservation service should live outside of the service 
that is actually responsible for providing resources to a consumer. 
Merging right-now quota checks and future-based reservations into the 
same library just complicates things unnecessarily IMHO.


3) On resizes.

Look, I recognize some users see some value in resizing their resources. 
That's fine. I personally think expand operations are fine, and that 
shrink operations are really the operations that should be prohibited in 
the API. But, whatever, I'm fine with resizing of requested resource 
amounts. My big point is if you don't have a separate table that stores 
quota_usages and instead only have a single table that stores the actual 
resource usage records, you don't have to do *any* quota check 
operations at all upon deletion of a resource. For modifying resource 
amounts (i.e. a resize) you merely need to change the calculation of 
requested resource amounts to account for the already-consumed usage amount.
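
(Again just to illustrate the point, a sketch with an assumed schema: usage is 
derived from the actual resource records at claim time, so a DELETE of a 
resource needs no accompanying quota bookkeeping at all.)

    import sqlalchemy as sa

    allocations = sa.Table(
        'allocations', sa.MetaData(),
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('project_id', sa.String(36), nullable=False),
        sa.Column('resource_class', sa.String(64), nullable=False),
        sa.Column('amount', sa.Integer, nullable=False),
    )


    def used(conn, project_id, resource_class):
        # Usage is always computed from the real records; there is no separate
        # quota_usages table that could drift out of sync.
        return conn.execute(
            sa.select([sa.func.coalesce(sa.func.sum(allocations.c.amount), 0)])
            .where(sa.and_(allocations.c.project_id == project_id,
                           allocations.c.resource_class == resource_class))
        ).scalar()


    def within_quota(conn, project_id, resource_class, requested, limit,
                     already_consumed=0):
        # For a resize, pass the resource's current amount as already_consumed
        # and its new amount as requested.
        current = used(conn, project_id, resource_class)
        return current - already_consumed + requested <= limit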


Bottom line for me: I really won't support any proposal for a complex 
library that takes the resource claim process out of the hands of the 
services that own those resources. The simpler the interface of this 
library, the better.


Best,
-jay

On 04/19/2016 09:59 PM, Amrith Kumar wrote:

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, April 18, 2016 2:54 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] More on the topic of DELIMITER, the Quota
Management Library proposal

On 04/16/2016 05:51 PM, Amrith Kumar wrote:

If we therefore assume that this will be a Quota Management Library,
it is safe to assume  that quotas are going to be managed on a
per-project basis, where participating projects will use this library.
I believe that it stands to reason that any data persistence will have
to be in a location decided by the individual project.


Depends on what you mean by "any data persistence". If you are referring
to the storage of quota values (per user, per tenant, global, etc) I think
that should be done by the Keystone service. This data is essentially an
attribute of the user or the tenant or the service endpoint itself (i.e.
global defaults). This data also rarely changes and logically belongs to
the service that manages users, tenants, and service endpoints: Keystone.

If you are referring to the storage of resource usage records, yes, each
service project should own that data (and frankly, I don't see a need to
persist any quota usage data at all, as I mentioned in a previous reply to
Attila).



[amrith] You make a distinction that I had made implicitly, and it is important
to highlight it. Thanks for pointing it out. Yes, I meant both of the
above, and as stipulated. Global defaults in keystone (somehow, TBD) and
usage records, on a per-service basis.



That may not be a very interesting statement but the corollary is, I
think, a very significant statement; it cannot be assumed that the
quota management information for all participating projects is in the
same database.


It cannot be assumed that this information is even in a database at all...



[amrith] I don't follow. If the service in question is to be scalable, I think 
it
stands to reason that there must be some mechanism by which instances of
the service can share usage records (as you refer to them, and I like
that term). I think it stands to reason that there must be some
database, no?


A hypothetical service consuming the Delimiter library provides
requesters with some widgets, and wishes to track the widgets that it
has provisioned both on a per-user basis, and on the 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Fox, Kevin M
Yup, Murano indeed may be part of the solution. The problem is much larger than 
any single OpenStack project though, so it's good to have the discussions 
with the various projects to see where the pieces best fit. If Magnum at the 
end of the day decides that a COE abstraction is not in Magnum's best 
interest, some other OpenStack project will probably pick it up. But hopefully 
Magnum will commit to at least not having any hurdles that stand in the way of 
the workflow actually being implementable somewhere in the stack.

Thanks,
Kevin

From: Monty Taylor [mord...@inaugust.com]
Sent: Thursday, April 21, 2016 1:43 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

I believe you just described Murano.

On 04/21/2016 03:31 PM, Fox, Kevin M wrote:
> There are a few reasons, but the primary one that affects me is the 
> app-catalog use case.
>
> To gain user support for a product like OpenStack, you need users. The easier 
> you make it to use, the more users you can potentially get.  Traditional 
> Operating Systems learned this a while back. Rather than make each OS user 
> have to be a developer and custom deploy every app they want to run, they 
> split the effort in such a way that Developers can provide software through 
> channels that Users who are not skilled Developers can consume and deploy. 
> The "App" culture in the mobile space is the epitome of that at the moment. 
> My grandmother fires up the app store on her phone, clicks install on 
> something interesting, and starts using it.
>
> Right now, that's incredibly difficult in OpenStack. You have to find the 
> software you're interested in, figure out which components you're going to 
> consume (nova, magnum, which COE, etc.), then use those APIs to launch some 
> resource. Then after that resource is up, you have to switch tools and 
> use those tools to further launch things, ansible or kubectl or 
> whatever, and then further deploy things.
>
> What I'm looking for is a unified enough API that a user can go into 
> horizon, go to the app catalog, find an interesting app, click install/run, 
> and then get a link to a service they can click on and start consuming the 
> app they wanted in the first place. The number of users that could use such an 
> interface and consume OpenStack resources is several orders of magnitude 
> greater than the number that can manually deploy something a la the procedure 
> in the previous paragraph. More of that is good for Users, Developers, and 
> Operators.
>
> Does that help?
>
> Thanks,
> Kevin
>
>
> 
> From: Keith Bray [keith.b...@rackspace.com]
> Sent: Thursday, April 21, 2016 1:10 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
> abstraction for all COEs
>
> If you don't want a user to have to choose a COE, can't we just offer an
> option for the operator to mark a particular COE as the "Default COE" that
> could be defaulted to if one isn't specified in the Bay create call?  If
> the operator didn't specify a default one, then the CLI/UI must submit one
> in the bay create call otherwise it would fail.
>
> Kevin, can you clarify Why you have to write scripts to deploy a container
> to the COE?   It can be made easy for the user to extract all the
> runtime/env vars needed for a user to just do "docker run ..."  and poof,
> container running on Swarm on a Magnum bay.  Can you help me understand
> the script part of it?   I don't believe container users want an
> abstraction between them and their COE CLI... but, what I believe isn't
> important.  What I do think is important is that we not require OpenStack
> operators to run that abstraction layer to be running a "magnum compliant"
> service.  It should either be an "optional" API add-on or a separate API
> or separate project.  If some folks want an abstraction layer, then great,
> feel free to build it and even propose it under the OpenStack ecosystem..
> But, that abstraction would be a "proxy API" over the COEs, and doesn't
> need to be part of Magnum's offering, as it would be targeted at the COE
> interactions and not the bay interactions (which is where Magnum scope is
> best focused).  I don't think Magnum should play in both these distinct
> domains (Bay interaction vs. COE interaction).  The former (bay
> interaction) is an infrastructure cloud thing (fits well with OpenStack),
> the latter (COE interaction) is an obfuscation of emerging technologies,
> which gets in to the Trap that Adrian mentioned.  The abstraction layer
> API will forever and always be drastically behind in trying to keep up
> with the COE innovation.
>
> In summary, an abstraction over the COEs would be best served as a
> different effort.  Magnum would be best focused on bay interactions 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Fox, Kevin M
+1. That's a very good list. Thanks for writing it up. :)

Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, April 21, 2016 4:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

Hi Monty,

I respect your position, but I want to point out that it is not only one 
human who wants this. There is a group of people who want this. I have been 
working on Magnum for about a year and a half. Along the way, I have been 
researching how to attract users to Magnum. My observation is there are two 
groups of potential users. The first group of users are generally in the domain 
of individual COEs and they want to use the native COE APIs. The second group of 
users are generally outside that domain and they want an OpenStack way to manage 
containers. Below are the specific use cases:
* Some people want to migrate the workload from VM to container
* Some people want to support hybrid deployment (VMs & containers) of their 
application
* Some people want to bring containers (in Magnum bays) to a Heat template, and 
enable connections between containers and other OpenStack resources
* Some people want to bring containers to Horizon
* Some people want to send container metrics to Ceilometer
* Some people want a portable experience across COEs
* Some people just want a container and don't want the complexities of others 
(COEs, bays, baymodels, etc.)

I think we need to research how large the second group of users is. Then, based 
on the data, we can decide if the LCD APIs should be part of Magnum, a Magnum 
plugin, or it should not exist. Thoughts?

Best regards,
Hongbin

> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: April-21-16 4:42 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
> On 04/21/2016 03:18 PM, Fox, Kevin M wrote:
> > Here's where we disagree.
>
> We may have to agree to disagree.
>
> > Your speaking for everyone in the world now, and all you need is one
> > counter example. I'll be that guy. Me. I want a common abstraction
> for
> > some common LCD stuff.
>
> We also disagree on this. Just because one human wants something does
> not make implementing that feature a good idea. In fact, good design is
> largely about appropriately and selectively saying no.
>
> Now I'm not going to pretend that we're good at design around here...
> we seem to very easily fall into the trap that your assertion presents.
> But in almost every one of those cases, having done so winds up having
> been a mistake.
>
> > Both Sahara and Trove have LCD abstractions for very common things.
> > Magnum should too.
> >
> > You are falsely assuming that if an LCD abstraction is provided, then
> > users cant use the raw api directly. This is false. There is no
> > either/or. You can have both. I would be against it too if they were
> > mutually exclusive. They are not.
>
> I'm not assuming that at all. I'm quite clearly asserting that the
> existence of an OpenStack LCD is a Bad Idea. This is a thing we
> disagree about.
>
> I think it's unfriendly to the upstreams in question. I think it does
> not provide significant enough value to the world to justify that
> unfriendliness. And also, https://xkcd.com/927/
>
> > Thanks, Kevin  From: Monty
> > Taylor [mord...@inaugust.com] Sent: Thursday, April 21, 2016 10:22 AM
> > To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
> > [magnum][app-catalog][all] Build unified abstraction for all COEs
> >
> > On 04/21/2016 11:03 AM, Tim Bell wrote:
> >>
> >>
> >> On 21/04/16 17:38, "Hongbin Lu"  wrote:
> >>
> >>>
> >>>
>  -Original Message- From: Adrian Otto
>  [mailto:adrian.o...@rackspace.com] Sent: April-21-16 10:32 AM
>  To: OpenStack Development Mailing List (not for usage
>  questions) Subject: Re: [openstack-dev] [magnum][app-catalog][all]
>  Build unified abstraction for all COEs
> 
> 
> > On Apr 20, 2016, at 2:49 PM, Joshua Harlow 
>  wrote:
> >
> > Thierry Carrez wrote:
> >> Adrian Otto wrote:
> >>> This pursuit is a trap. Magnum should focus on making
> >>> native container APIs available. We should not wrap APIs
> >>> with leaky abstractions. The lowest common denominator of
> >>> all COEs is an remarkably low value API that adds
> >>> considerable complexity to
>  Magnum
> >>> that will not strategically advance OpenStack. If we
> >>> instead focus our effort on making the COEs work better
> >>> on OpenStack, that would be a winning strategy. Support
> >>> and compliment our various COE
>  ecosystems.
> >
> > So I'm all for avoiding 'wrap APIs with leaky abstractions'
> > and 'making 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Fox, Kevin M
Amrith,

Very well thought out. Thanks. :)

I agree a nova driver that lets you treat containers the same way as VMs, bare 
metal, and lxc containers would be a great thing, and if it could plug into 
magnum-managed clusters well, that would be awesome.

I think a bit of the conversation around it gets muddy when you start talking 
about the philosophical differences between lxc and docker containers. They are 
very different. lxc containers are typically heavyweight. You think of them more 
as a vm without the kernel. Multiple daemons run in them, you have a regular init 
system, etc. This isn't bad. It has some benefits. But it also has some 
drawbacks.

Docker's philosophy of software deployment has typically been much different 
from that, and doesn't lend itself to launching that way with nova. With 
docker, each individual service gets its own container, and they are 
co-scheduled: not at the ip level but even lower, at the unix socket/filesystem 
level.

For example, with Trove, architected with the docker philosophy, you might have 
two containers, one for mysql which exports its unix socket to a second 
container for the guest agent, which talks to mysql over the shared socket 
(rough sketch below). The benefit with this is that you only need one guest 
agent container for all of your different types of databases (mysql, postgres, 
mongodb, etc). Your db and your guest agent can even be different distros and 
it will still work. It's then also very easy to upgrade just the guest agent 
container without affecting the db container at all. You just delete/recreate 
that container, leaving the other container alone.
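
Something like this, using the docker python SDK purely as an illustration 
(the image names, volume name and paths are made up; the point is just the 
shared volume carrying the unix socket between the two containers):

    import docker

    client = docker.from_env()

    # A named volume that exists only to carry the mysqld unix socket.
    client.volumes.create(name='db-socket')

    # Database container: mysqld writes its socket under /var/run/mysqld.
    client.containers.run(
        'mysql:5.7',                      # made-up choice of image/tag
        detach=True, name='trove-db',
        environment={'MYSQL_ALLOW_EMPTY_PASSWORD': 'yes'},
        volumes={'db-socket': {'bind': '/var/run/mysqld', 'mode': 'rw'}})

    # Guest-agent container: sees the same socket, no TCP involved.
    client.containers.run(
        'my-guest-agent:latest',          # hypothetical image
        detach=True, name='trove-agent',
        volumes={'db-socket': {'bind': '/var/run/mysqld', 'mode': 'rw'}})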

So, when you architect docker containers using this philosophy, you can't 
really use nova as-is as an abstraction. You can't share unix sockets between 
container instances... But this kind of functionality is very common in all 
the COEs and should be easy to turn into an abstraction that all 
current COEs can easily launch. Hence my thinking that it might be best put in 
Magnum. A nova extension may work too... not sure. But it seems more natural in 
Magnum to me.

Thanks,
Kevin


From: Amrith Kumar [amr...@tesora.com]
Sent: Thursday, April 21, 2016 2:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

As I was preparing some thoughts for the Board/TC meeting on Sunday that will 
discuss this topic, I made the notes below and was going to post them on a 
topic specific etherpad but I didn't find one.

I want to represent the view point of a consumer of compute services in 
OpenStack. Trove is a consumer of many OpenStack services, and one of the 
things I sometimes (infrequently) get asked is whether Trove supports 
containers. I have wondered about the utility of running databases in 
containers and after quizzing people who asked for container support, I was 
able to put them into three buckets and ranked them roughly by frequency.

2. containers are a very useful packaging construct; unionfs for VM's would be 
a great thing
3. containers launch faster than VM's
4. container performance is in some cases better than VM's

That's weird, what is #1, you may ask. Well, that was

1. containers are cool, it is currently the highest grossing buzzword

OK, so I ignored #1 and focused on #2-#4 and these are very relevant for Trove, 
I think.

While I realize that containers offer many capabilities, from the perspective 
of Trove, I have not found a compelling reason to treat it differently from any 
other compute capability. As a matter of fact, Trove works fine with bare metal 
(using the ironic driver) and with VM's using the various VM drivers. I even 
had all of Trove working with containers using nova-docker. I had to make some 
specific choices on my docker images but I got it all to work as a prototype.

My belief is that there are a group of use-cases where a common compute 
abstraction would be beneficial. In an earlier message on one of these threads, 
Adrian made a very good point[1] that "I suppose you were imagining an LCD 
approach. If that's what you want, just use the existing Nova API, and load 
different compute drivers on different host aggregates. A single Nova client 
can produce VM, BM (Ironic), and Container (libvirt-lxc) instances all with a 
common API (Nova) if it's configured in this way. That's what we do. Flavors 
determine which compute type you get."

He then went on to say, "If what you meant is that you could tap into the power 
of all the unique characteristics of each of the various compute types (through 
some modular extensibility framework) you'll likely end up with complexity in 
Trove that is comparable to integrating with the native upstream APIs, along 
with the disadvantage of waiting for OpenStack to continually catch up to the 
pace of change of the various upstream systems on which it depends. This is a 
recipe for disappointment."

I've pondered this 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Fox, Kevin M
That's cool. Hopefully something great will come of it. :)

Thanks for sharing the link. :)

Kevin

From: Joshua Harlow [harlo...@fastmail.com]
Sent: Thursday, April 21, 2016 2:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

I thought this was also what the goal of https://cncf.io/ was starting
to be? Maybe it's too early to tell whether that standardization will be a real
outcome vs. just an imagined outcome :-P

-Josh

Fox, Kevin M wrote:
> The COEs are under pressure not to standardize their APIs across competing 
> COEs. If you can lock a user into your API, then they can't go to your 
> competitor.
>
> The standard API really needs to come from those invested in not being locked 
> in. OpenStack's been largely about that since the beginning. It may not 
> belong in Magnum, but I do believe it belongs in OpenStack.
>
> Thanks,
> Kevin
> 
> From: Steve Gordon [sgor...@redhat.com]
> Sent: Thursday, April 21, 2016 6:39 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
> abstraction for all COEs
>
> - Original Message -
>> From: "Hongbin Lu"
>> To: "OpenStack Development Mailing List (not for usage 
>> questions)"
>>> -Original Message-
>>> From: Keith Bray [mailto:keith.b...@rackspace.com]
>>> Sent: April-20-16 6:13 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>>> abstraction for all COEs
>>>
>>> Magnum doesn't have to preclude tight integration for single COEs you
>>> speak of.  The heavy lifting of tight integration of the COE in to
>>> OpenStack (so that it performs optimally with the infra) can be modular
>>> (where the work is performed by plug-in models to Magnum, not performed
>>> by Magnum itself. The tight integration can be done by leveraging
>>> existing technologies (Heat and/or choose your DevOps tool of choice:
>>> Chef/Ansible/etc). This allows interested community members to focus on
>>> tight integration of whatever COE they want, focusing specifically on
>> I agree that tight integration can be achieved by a plugin, but I think the
>> key question is who will do the work. If tight integration needs to be done,
>> I wonder why it is not part of the Magnum efforts.
>
> Why does the integration belong in Magnum though? To me it belongs in the 
> COEs themselves (e.g. their in-tree network/storage plugins) such that 
> someone can leverage them regardless of their choices regarding COE 
> deployment tooling (and yes that means Magnum should be able to leverage them 
> too)? I guess the issue is that in the above conversation we are overloading 
> the term "integration" which can be taken to mean different things...
>
> -Steve
>
>>  From my point of view,
>> pushing the work out doesn't seem to address the original pain, which is
>> some users don't want to explore the complexities of individual COEs.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] release hiatus

2016-04-21 Thread Tony Breeds
On Thu, Apr 21, 2016 at 02:13:15PM -0400, Doug Hellmann wrote:
> The release team is preparing for and traveling to the summit, just as
> many of you are. With that in mind, we are going to hold off on
> releasing anything until 2 May, unless there is some sort of critical
> issue or gate blockage. Please feel free to submit release requests to
> openstack/releases, but we'll only plan on processing any that indicate
> critical issues in the commit messages.

What's your preferred way of indicating to the release team that something is
urgent?

There's always the post-review option of jumping on IRC and pinging y'all.  Just wondering if
you have a preference for something else.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Hongbin Lu
Hi Monty,

I respect your position, but I want to point out that it is not only one 
human who wants this; there is a group of people who want it. I have been working 
on Magnum for about a year and a half. Along the way, I have been researching 
how to attract users to Magnum. My observation is that there are two groups of 
potential users. The first group is generally in the domain of the individual 
COEs and wants to use the native COE APIs. The second group is generally 
outside that domain and wants an OpenStack way to manage 
containers. Below are the specific use cases:
* Some people want to migrate the workload from VM to container
* Some people want to support hybrid deployment (VMs & containers) of their 
application
* Some people want to bring containers (in Magnum bays) to a Heat template, and 
enable connections between containers and other OpenStack resources
* Some people want to bring containers to Horizon
* Some people want to send container metrics to Ceilometer
* Some people want a portable experience across COEs
* Some people just want a container and don't want the complexities of others 
(COEs, bays, baymodels, etc.)

I think we need to research how large the second group of users is. Then, based 
on the data, we can decide whether the LCD APIs should be part of Magnum, a Magnum 
plugin, or not exist at all. Thoughts?

Best regards,
Hongbin 

> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: April-21-16 4:42 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> On 04/21/2016 03:18 PM, Fox, Kevin M wrote:
> > Here's where we disagree.
> 
> We may have to agree to disagree.
> 
> > Your speaking for everyone in the world now, and all you need is one
> > counter example. I'll be that guy. Me. I want a common abstraction
> for
> > some common LCD stuff.
> 
> We also disagree on this. Just because one human wants something does
> not make implementing that feature a good idea. In fact, good design is
> largely about appropriately and selectively saying no.
> 
> Now I'm not going to pretend that we're good at design around here...
> we seem to very easily fall into the trap that your assertion presents.
> But in almost every one of those cases, having done so winds up having
> been a mistake.
> 
> > Both Sahara and Trove have LCD abstractions for very common things.
> > Magnum should too.
> >
> > You are falsely assuming that if an LCD abstraction is provided, then
> > users cant use the raw api directly. This is false. There is no
> > either/or. You can have both. I would be against it too if they were
> > mutually exclusive. They are not.
> 
> I'm not assuming that at all. I'm quite clearly asserting that the
> existence of an OpenStack LCD is a Bad Idea. This is a thing we
> disagree about.
> 
> I think it's unfriendly to the upstreams in question. I think it does
> not provide significant enough value to the world to justify that
> unfriendliness. And also, https://xkcd.com/927/
> 
> > Thanks, Kevin  From: Monty
> > Taylor [mord...@inaugust.com] Sent: Thursday, April 21, 2016 10:22 AM
> > To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
> > [magnum][app-catalog][all] Build unified abstraction for all COEs
> >
> > On 04/21/2016 11:03 AM, Tim Bell wrote:
> >>
> >>
> >> On 21/04/16 17:38, "Hongbin Lu"  wrote:
> >>
> >>>
> >>>
>  -Original Message- From: Adrian Otto
>  [mailto:adrian.o...@rackspace.com] Sent: April-21-16 10:32 AM
>  To: OpenStack Development Mailing List (not for usage
>  questions) Subject: Re: [openstack-dev] [magnum][app-catalog][all]
>  Build unified abstraction for all COEs
> 
> 
> > On Apr 20, 2016, at 2:49 PM, Joshua Harlow 
>  wrote:
> >
> > Thierry Carrez wrote:
> >> Adrian Otto wrote:
> >>> This pursuit is a trap. Magnum should focus on making
> >>> native container APIs available. We should not wrap APIs
> >>> with leaky abstractions. The lowest common denominator of
> >>> all COEs is an remarkably low value API that adds
> >>> considerable complexity to
>  Magnum
> >>> that will not strategically advance OpenStack. If we
> >>> instead focus our effort on making the COEs work better
> >>> on OpenStack, that would be a winning strategy. Support
> >>> and compliment our various COE
>  ecosystems.
> >
> > So I'm all for avoiding 'wrap APIs with leaky abstractions'
> > and 'making COEs work better on OpenStack' but I do dislike
> > the part
>  about COEs (plural) because it is once again the old
>  non-opinionated problem that we (as a community) suffer from.
> >
> > Just my 2 cents, but I'd almost rather we pick one COE and
> > integrate that deeply/tightly with openstack, and yes if this

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Fox, Kevin M
Yeah, it's good to disagree and talk through it. Sometimes there just isn't a 
way to see eye to eye on something; that's fine too. I was just objecting to the 
assertion:

"I do not believe anyone in the world wants us to build an
> abstraction layer on top of the _use_ of swarm/k8s/mesos. People who
> want to use those technologies know what they want."

This felt very wrong to me in the context of the conversation. I don't believe I'm alone in 
the desire for a basic abstraction, since multiple people are involved on the 
pro-COE-abstraction side of the conversation. Please don't try to dismiss the 
conversation that way, or assume you know the only ways folks will want to use 
the technology.

I'm perfectly fine having the conversation about whether what folks are asking for might 
not be a good idea for the community to support. That's a fair thing to talk about. 
Let's continue that conversation.

Thanks,
Kevin


From: Monty Taylor [mord...@inaugust.com]
Sent: Thursday, April 21, 2016 1:41 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

On 04/21/2016 03:18 PM, Fox, Kevin M wrote:
> Here's where we disagree.

We may have to agree to disagree.

> Your speaking for everyone in the world now, and all you need is one
> counter example. I'll be that guy. Me. I want a common abstraction
> for some common LCD stuff.

We also disagree on this. Just because one human wants something does
not make implementing that feature a good idea. In fact, good design is
largely about appropriately and selectively saying no.

Now I'm not going to pretend that we're good at design around here... we
seem to very easily fall into the trap that your assertion presents. But
in almost every one of those cases, having done so winds up having been
a mistake.

> Both Sahara and Trove have LCD abstractions for very common things.
> Magnum should too.
>
> You are falsely assuming that if an LCD abstraction is provided, then
> users cant use the raw api directly. This is false. There is no
> either/or. You can have both. I would be against it too if they were
> mutually exclusive. They are not.

I'm not assuming that at all. I'm quite clearly asserting that the
existence of an OpenStack LCD is a Bad Idea. This is a thing we disagree
about.

I think it's unfriendly to the upstreams in question. I think it does
not provide significant enough value to the world to justify that
unfriendliness. And also, https://xkcd.com/927/

> Thanks, Kevin  From: Monty
> Taylor [mord...@inaugust.com] Sent: Thursday, April 21, 2016 10:22
> AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
> [magnum][app-catalog][all] Build unified abstraction for all COEs
>
> On 04/21/2016 11:03 AM, Tim Bell wrote:
>>
>>
>> On 21/04/16 17:38, "Hongbin Lu"  wrote:
>>
>>>
>>>
 -Original Message- From: Adrian Otto
 [mailto:adrian.o...@rackspace.com] Sent: April-21-16 10:32 AM
 To: OpenStack Development Mailing List (not for usage
 questions) Subject: Re: [openstack-dev]
 [magnum][app-catalog][all] Build unified abstraction for all
 COEs


> On Apr 20, 2016, at 2:49 PM, Joshua Harlow
> 
 wrote:
>
> Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making
>>> native container APIs available. We should not wrap APIs
>>> with leaky abstractions. The lowest common denominator of
>>> all COEs is an remarkably low value API that adds
>>> considerable complexity to
 Magnum
>>> that will not strategically advance OpenStack. If we
>>> instead focus our effort on making the COEs work better
>>> on OpenStack, that would be a winning strategy. Support
>>> and compliment our various COE
 ecosystems.
>
> So I'm all for avoiding 'wrap APIs with leaky abstractions'
> and 'making COEs work better on OpenStack' but I do dislike
> the part
 about COEs (plural) because it is once again the old
 non-opinionated problem that we (as a community) suffer from.
>
> Just my 2 cents, but I'd almost rather we pick one COE and
> integrate that deeply/tightly with openstack, and yes if this
> causes some part of the openstack community to be annoyed,
> meh, to bad. Sadly I have a feeling we are hurting ourselves
> by continuing to try to be
 everything
> and not picking anything (it's a general thing we, as a
> group, seem
 to
> be good at, lol). I mean I get the reason to just support all
> the things, but it feels like we as a community could just
> pick something, work together on figuring out how to pick
> one, using all these bright leaders we have to help make that
> possible (and yes this might piss some people off, to bad).
> Then work toward making that 

Re: [openstack-dev] [kolla] deploy kolla on ppc64

2016-04-21 Thread Michał Rostecki
On Thu, Apr 21, 2016 at 11:29 PM, Franck Barillaud  wrote:
> I've been using Kola to deploy Mitaka on x86 and it works great. Now I would
> like to do the same thing on IBM Power8 systems (ppc64). I've setup a local
> registry with an Ubuntu image.
> I've docker and a local registry running on a Power8 system. When I issue
> the following command:
>
> kolla-build --base ubuntu --type source --registry  :4000
> --push
>
> I get an 'exec format error' message. It seems that the build process pulls
> the Ubuntu amd64 image from the public registry and not the ppc64 image from
> the local registry. Is there a configuration parameter I can setup to force
> to pull the image from the local registry ?
>

You can provide the address of your registry as part of the image
name in the --base option, e.g. <registry address>:5000/ubuntu. It
should work.
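
For example (just a sketch of the command from your mail, with a hypothetical
registry address in front of the base image; the other flags are unchanged):

kolla-build --base registry.example.com:4000/ubuntu --type source \
    --registry registry.example.com:4000 --push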

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [releases] Behavior change in the bot for jenkins merge?

2016-04-21 Thread Nikhil Komawar
Thanks! Somehow I missed it earlier.

On 4/11/16 9:53 PM, Clark Boylan wrote:
> On Mon, Apr 11, 2016, at 06:18 PM, Nikhil Komawar wrote:
>> Hi,
>>
>> I noticed on a recent merge to glance [1] that the bot updated the bug
>> [2] with comment from "in progress" to "fix released" vs. earlier
>> behavior "fix committed". Is that behavior on purpose or issue with the
>> bot?
>>
>> [1] https://review.openstack.org/#/c/304184/
>> [2] https://bugs.launchpad.net/glance/+bug/1568894
> This was an intentional behavior change. See
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081612.html
>
> Clark
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] deploy kolla on ppc64

2016-04-21 Thread Franck Barillaud
I've been using Kolla to deploy Mitaka on x86 and it works great. Now I 
would like to do the same thing on IBM Power8 systems (ppc64). I've set up 
a local registry with an Ubuntu image. 
I have docker and a local registry running on a Power8 system. When I issue 
the following command:

kolla-build --base ubuntu --type source --registry  :4000 
--push

I get an 'exec format error' message. It seems that the build process 
pulls the Ubuntu amd64 image from the public registry and not the ppc64 
image from the local registry. Is there a configuration parameter I can 
set to force it to pull the image from the local registry?


Regards,
Franck Barillaud
Cloud Architect
Master Inventor
Ext Phone: (512) 286-5242Tie Line: 363-5242
e-mail: fbari...@us.ibm.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Amrith Kumar
As I was preparing some thoughts for the Board/TC meeting on Sunday that will 
discuss this topic, I made the notes below and was going to post them on a 
topic specific etherpad but I didn't find one.

I want to represent the viewpoint of a consumer of compute services in 
OpenStack. Trove is a consumer of many OpenStack services, and one of the 
things I sometimes (infrequently) get asked is whether Trove supports 
containers. I have wondered about the utility of running databases in 
containers and after quizzing people who asked for container support, I was 
able to put them into three buckets and ranked them roughly by frequency.

2. containers are a very useful packaging construct; unionfs for VM's would be 
a great thing
3. containers launch faster than VM's
4. container performance is in some cases better than VM's

That's weird, what is #1, you may ask. Well, that was

1. containers are cool, it is currently the highest grossing buzzword

OK, so I ignored #1 and focused on #2-#4 and these are very relevant for Trove, 
I think.

While I realize that containers offer many capabilities, from the perspective 
of Trove, I have not found a compelling reason to treat it differently from any 
other compute capability. As a matter of fact, Trove works fine with bare metal 
(using the ironic driver) and with VM's using the various VM drivers. I even 
had all of Trove working with containers using nova-docker. I had to make some 
specific choices on my docker images but I got it all to work as a prototype.

My belief is that there is a group of use-cases where a common compute 
abstraction would be beneficial. In an earlier message on one of these threads, 
Adrian made a very good point[1] that "I suppose you were imagining an LCD 
approach. If that's what you want, just use the existing Nova API, and load 
different compute drivers on different host aggregates. A single Nova client 
can produce VM, BM (Ironic), and Container (libvirt-lxc) instances all with a 
common API (Nova) if it's configured in this way. That's what we do. Flavors 
determine which compute type you get."

He then went on to say, "If what you meant is that you could tap into the power 
of all the unique characteristics of each of the various compute types (through 
some modular extensibility framework) you'll likely end up with complexity in 
Trove that is comparable to integrating with the native upstream APIs, along 
with the disadvantage of waiting for OpenStack to continually catch up to the 
pace of change of the various upstream systems on which it depends. This is a 
recipe for disappointment."

I've pondered this for a while and it is still my belief that there is a class of 
use-cases, and I submit to you that Trove is one of them, where 
the LCD is sufficient in the area of compute. I say this knowing full well that 
in the area of storage this is likely not the case, and we are discussing how we 
can better integrate with storage in a manner akin to what Adrian says later in 
his reply [1].

I submit to you that there are likely other situations where an LCD approach is 
sufficient, and there are most definitely situations where an LCD approach is 
not sufficient, and one would benefit from "tap[ping] into the power of all the 
unique characteristics of each of the various compute types".

I'm not proposing that we must have only one or the other.

I believe that OpenStack should provide both. It should provide Magnum, 
a mechanism to tap into all the finer aspects of containers, should one want 
it, and equally a common compute abstraction through some means whereby a user 
could get an LCD.

I don't believe that Magnum can (or intends to) allow a user to provision VMs 
or bare-metal servers (nor should it). But I believe that a common compute API 
that provides the LCD and determines in some way (potentially through 
flavors) whether the request should be a bare-metal server, a VM, or a 
container has value too.
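
(As an aside, a rough sketch of the flavor/host-aggregate routing Adrian describes, using
the stock Nova CLI; the aggregate and flavor names are hypothetical, and it assumes the
AggregateInstanceExtraSpecsFilter is enabled in the scheduler:)

# hypothetical aggregate of hosts running the libvirt-lxc (container) driver
nova aggregate-create lxc-hosts
nova aggregate-set-metadata lxc-hosts compute_type=lxc
# hypothetical flavor pinned to that aggregate via a scoped extra spec
nova flavor-create m1.container auto 2048 20 2
nova flavor-key m1.container set aggregate_instance_extra_specs:compute_type=lxc
# the flavor now determines which compute type the instance lands on
nova boot --flavor m1.container --image <some-image> my-db-instance

The same pattern, with different aggregates and flavors, would cover the VM and
bare-metal cases.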

Specifically, what I'm wondering is why there isn't interest in a driver/plugin 
for Nova that will provide an LCD container capability from Magnum. I am sure 
there's a good reason for this; that's one of the things I was 
definitely looking to learn in the course of the Board/TC meeting.

Thanks, and my apologies for writing long emails.

-amrith

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-April/091982.html



> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: Thursday, April 21, 2016 4:42 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> On 04/21/2016 03:18 PM, Fox, Kevin M wrote:
> > Here's where we disagree.
> 
> We may have to agree to disagree.
> 
> > Your speaking for everyone in the world now, and all you need is one
> > counter example. I'll be that guy. Me. I want a common abstraction for
> > some common LCD stuff.
> 
> We 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Amrith Kumar


> -Original Message-
> From: Keith Bray [mailto:keith.b...@rackspace.com]
> Sent: Thursday, April 21, 2016 5:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> 100% agreed on all your points… with the addition that the level of
> functionality you are asking for doesn’t need to be baked into an API
> service such as Magnum.  I.e., Magnum doesn’t have to be the thing
> providing the easy-button app deployment — Magnum isn’t and shouldn’t be a
> Docker Hub alternative, a Tutum alternative, etc.  A Horizon UI, App
> Catalog UI, or OpenStack CLI on top of Heat, Murano, Solum, Magnum, etc.
> etc. can all provide this by pulling together the underlying API
> services/technologies to give users the easy app deployment buttons.   I
> don’t think Magnum should do everything (or next thing we know we’ll be
> trying to make Magnum a PaaS, or make it a CircleCI, or … Ok, I’ve gotten
> carried away).  Hopefully my position is understood, and no problem if
> folks disagree with me.  I’d just rather compartmentalize domain concerns
> and scope Magnum to something focused, achievable, agnostic, and easy for
> operators to adopt first. User traction will not be helped by increasing
> service/operator complexity.  I’ll have to go look at the latest Trove and
> Sahara APIs to see how LCD is incorporated, and would love feedback from
> Trove and Sahara operators on the value vs. customer confusion or operator
> overhead they get from those LCDs if they are required parts of the
> services.

[amrith] Keith, I'm happy to chat with you about how Trove does all that. I 
work on Trove; I'm not an operator myself.

> 
> Thanks,
> -Keith
> 
> On 4/21/16, 3:31 PM, "Fox, Kevin M"  wrote:
> 
> >There are a few reasons, but the primary one that affects me is Its
> >from the app-catalog use case.
> >
> >To gain user support for a product like OpenStack, you need users. The
> >easier you make it to use, the more users you can potentially get.
> >Traditional Operating Systems learned this a while back. Rather then
> >make each OS user have to be a developer and custom deploy every app
> >they want to run, they split the effort in such a way that Developers
> >can provide software through channels that Users that are not skilled
> >Developers can consume and deploy. The "App" culture in the mobile
> >space it the epitome of that at the moment. My grandmother fires up the
> >app store on her phone, clicks install on something interesting, and
> starts using it.
> >
> >Right now, Thats incredibly difficult in OpenStack. You have to find
> >the software your interested in, figure out which components your going
> >to consume (nova, magnum, which COE, etc) then use those api's to
> >launch some resource. Then after that resource is up, then you have to
> >switch tools and then use those tools to further launch things, ansible
> >or kubectl or whatever, then further deploy things.
> >
> >What I'm looking for, is a unified enough api, that a user can go into
> >horizon, go to the app catalog, find an interesting app, click
> >install/run, and then get a link to a service they can click on and
> >start consuming the app they want in the first place. The number of
> >users that could use such an interface, and consume OpenStack resources
> >are several orders of magnitude greater then the numbers that can
> >manually deploy something ala the procedure in the previous paragraph.
> >More of that is good for Users, Developers, and Operators.
> >
> >Does that help?
> >
> >Thanks,
> >Kevin
> >
> >
> >
> >From: Keith Bray [keith.b...@rackspace.com]
> >Sent: Thursday, April 21, 2016 1:10 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> >abstraction for all COEs
> >
> >If you don¹t want a user to have to choose a COE, can¹t we just offer
> >an option for the operator to mark a particular COE as the ³Default
> >COE² that could be defaulted to if one isn¹t specified in the Bay
> >create call?  If the operator didn¹t specify a default one, then the
> >CLI/UI must submit one in the bay create call otherwise it would fail.
> >
> >Kevin, can you clarify Why you have to write scripts to deploy a
> container
> >to the COE?   It can be made easy for the user to extract all the
> >runtime/env vars needed for a user to just do ³docker run Š²  and poof,
> >container running on Swarm on a Magnum bay.  Can you help me understand
> >the script part of it?   I don¹t believe container users want an
> >abstraction between them and their COE CLIŠ but, what I believe isn¹t
> >important.  What I do think is important is that we not require
> >OpenStack operators to run that abstraction layer to be running a
> >³magnum compliant² service.  It should either be an 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Keith Bray
100% agreed on all your points… with the addition that the level of
functionality you are asking for doesn’t need to be baked into an API
service such as Magnum.  I.e., Magnum doesn’t have to be the thing
providing the easy-button app deployment — Magnum isn’t and shouldn’t be a
Docker Hub alternative, a Tutum alternative, etc.  A Horizon UI, App
Catalog UI, or OpenStack CLI on top of Heat, Murano, Solum, Magnum, etc.
etc. can all provide this by pulling together the underlying API
services/technologies to give users the easy app deployment buttons.   I
don’t think Magnum should do everything (or next thing we know we’ll be
trying to make Magnum a PaaS, or make it a CircleCI, or … Ok, I’ve gotten
carried away).  Hopefully my position is understood, and no problem if
folks disagree with me.  I’d just rather compartmentalize domain concerns
and scope Magnum to something focused, achievable, agnostic, and easy for
operators to adopt first. User traction will not be helped by increasing
service/operator complexity.  I’ll have to go look at the latest Trove and
Sahara APIs to see how LCD is incorporated, and would love feedback from
Trove and Sahara operators on the value vs. customer confusion or operator
overhead they get from those LCDs if they are required parts of the
services.

Thanks,
-Keith

On 4/21/16, 3:31 PM, "Fox, Kevin M"  wrote:

>There are a few reasons, but the primary one that affects me is Its from
>the app-catalog use case.
>
>To gain user support for a product like OpenStack, you need users. The
>easier you make it to use, the more users you can potentially get.
>Traditional Operating Systems learned this a while back. Rather then make
>each OS user have to be a developer and custom deploy every app they want
>to run, they split the effort in such a way that Developers can provide
>software through channels that Users that are not skilled Developers can
>consume and deploy. The "App" culture in the mobile space it the epitome
>of that at the moment. My grandmother fires up the app store on her
>phone, clicks install on something interesting, and starts using it.
>
>Right now, Thats incredibly difficult in OpenStack. You have to find the
>software your interested in, figure out which components your going to
>consume (nova, magnum, which COE, etc) then use those api's to launch
>some resource. Then after that resource is up, then you have to switch
>tools and then use those tools to further launch things, ansible or
>kubectl or whatever, then further deploy things.
>
>What I'm looking for, is a unified enough api, that a user can go into
>horizon, go to the app catalog, find an interesting app, click
>install/run, and then get a link to a service they can click on and start
>consuming the app they want in the first place. The number of users that
>could use such an interface, and consume OpenStack resources are several
>orders of magnitude greater then the numbers that can manually deploy
>something ala the procedure in the previous paragraph. More of that is
>good for Users, Developers, and Operators.
>
>Does that help?
>
>Thanks,
>Kevin
>
>
>
>From: Keith Bray [keith.b...@rackspace.com]
>Sent: Thursday, April 21, 2016 1:10 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>abstraction for all COEs
>
>If you don¹t want a user to have to choose a COE, can¹t we just offer an
>option for the operator to mark a particular COE as the ³Default COE² that
>could be defaulted to if one isn¹t specified in the Bay create call?  If
>the operator didn¹t specify a default one, then the CLI/UI must submit one
>in the bay create call otherwise it would fail.
>
>Kevin, can you clarify Why you have to write scripts to deploy a container
>to the COE?   It can be made easy for the user to extract all the
>runtime/env vars needed for a user to just do ³docker run Š²  and poof,
>container running on Swarm on a Magnum bay.  Can you help me understand
>the script part of it?   I don¹t believe container users want an
>abstraction between them and their COE CLIŠ but, what I believe isn¹t
>important.  What I do think is important is that we not require OpenStack
>operators to run that abstraction layer to be running a ³magnum compliant²
>service.  It should either be an ³optional² API add-on or a separate API
>or separate project.  If some folks want an abstraction layer, then great,
>feel free to build it and even propose it under the OpenStack ecosystem..
>But, that abstraction would be a ³proxy API" over the COEs, and doesn¹t
>need to be part of Magnum¹s offering, as it would be targeted at the COE
>interactions and not the bay interactions (which is where Magnum scope is
>best focused).  I don¹t think Magnum should play in both these distinct
>domains (Bay interaction vs. COE interaction).  The former (bay
>interaction) is an infrastructure cloud thing (fits well 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Joshua Harlow
I thought this was also what the goal of https://cncf.io/ was starting 
to be? Maybe it's too early to tell whether that standardization will be a real 
outcome vs. just an imagined outcome :-P


-Josh

Fox, Kevin M wrote:

The COEs are under pressure not to standardize their APIs across competing 
COEs. If you can lock a user into your API, then they can't go to your 
competitor.

The standard API really needs to come from those invested in not being locked 
in. OpenStack's been largely about that since the beginning. It may not belong 
in Magnum, but I do believe it belongs in OpenStack.

Thanks,
Kevin

From: Steve Gordon [sgor...@redhat.com]
Sent: Thursday, April 21, 2016 6:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

- Original Message -

From: "Hongbin Lu"
To: "OpenStack Development Mailing List (not for usage 
questions)"

-Original Message-
From: Keith Bray [mailto:keith.b...@rackspace.com]
Sent: April-20-16 6:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
abstraction for all COEs

Magnum doesn't have to preclude tight integration for single COEs you
speak of.  The heavy lifting of tight integration of the COE in to
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in models to Magnum, not performed
by Magnum itself. The tight integration can be done by leveraging
existing technologies (Heat and/or choose your DevOps tool of choice:
Chef/Ansible/etc). This allows interested community members to focus on
tight integration of whatever COE they want, focusing specifically on

I agree that tight integration can be achieved by a plugin, but I think the
key question is who will do the work. If tight integration needs to be done,
I wonder why it is not part of the Magnum efforts.


Why does the integration belong in Magnum though? To me it belongs in the COEs themselves 
(e.g. their in-tree network/storage plugins) such that someone can leverage them 
regardless of their choices regarding COE deployment tooling (and yes that means Magnum 
should be able to leverage them too)? I guess the issue is that in the above conversation 
we are overloading the term "integration" which can be taken to mean different 
things...

-Steve


 From my point of view,
pushing the work out doesn't seem to address the original pain, which is
some users don't want to explore the complexities of individual COEs.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-21 Thread Stephen Wong
+1 on Wednesday lunch

On Thu, Apr 21, 2016 at 12:02 PM, Ihar Hrachyshka 
wrote:

> Cathy Zhang  wrote:
>
> Hi everyone,
>>
>> We have room 400 at 3:10pm on Thursday available for discussion of the
>> two topics.
>> Another option is to use the common room with roundtables in "Salon C"
>> during Monday or Wednesday lunch time.
>>
>> Room 400 at 3:10pm is a closed room while the Salon C is a big open room
>> which can host 500 people.
>>
>> I am Ok with either option. Let me know if anyone has a strong preference.
>>
>
> On Monday, I have two talks to do. First one is 2:50-3:30pm, second one is
> 4:40-5:20pm. But lunch time should probably be fine if it leaves time for
> the actual lunch...
>
> Thursday at 3:10pm also works for me.
>
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Monty Taylor

On 04/21/2016 03:18 PM, Fox, Kevin M wrote:

Here's where we disagree.


We may have to agree to disagree.


Your speaking for everyone in the world now, and all you need is one
counter example. I'll be that guy. Me. I want a common abstraction
for some common LCD stuff.


We also disagree on this. Just because one human wants something does 
not make implementing that feature a good idea. In fact, good design is 
largely about appropriately and selectively saying no.


Now I'm not going to pretend that we're good at design around here... we 
seem to very easily fall into the trap that your assertion presents. But 
in almost every one of those cases, having done so winds up having been 
a mistake.



Both Sahara and Trove have LCD abstractions for very common things.
Magnum should too.

You are falsely assuming that if an LCD abstraction is provided, then
users cant use the raw api directly. This is false. There is no
either/or. You can have both. I would be against it too if they were
mutually exclusive. They are not.


I'm not assuming that at all. I'm quite clearly asserting that the 
existence of an OpenStack LCD is a Bad Idea. This is a thing we disagree 
about.


I think it's unfriendly to the upstreams in question. I think it does 
not provide significant enough value to the world to justify that 
unfriendliness. And also, https://xkcd.com/927/



Thanks, Kevin  From: Monty
Taylor [mord...@inaugust.com] Sent: Thursday, April 21, 2016 10:22
AM To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
[magnum][app-catalog][all] Build unified abstraction for all COEs

On 04/21/2016 11:03 AM, Tim Bell wrote:



On 21/04/16 17:38, "Hongbin Lu"  wrote:





-Original Message- From: Adrian Otto
[mailto:adrian.o...@rackspace.com] Sent: April-21-16 10:32 AM
To: OpenStack Development Mailing List (not for usage
questions) Subject: Re: [openstack-dev]
[magnum][app-catalog][all] Build unified abstraction for all
COEs



On Apr 20, 2016, at 2:49 PM, Joshua Harlow


wrote:


Thierry Carrez wrote:

Adrian Otto wrote:

This pursuit is a trap. Magnum should focus on making
native container APIs available. We should not wrap APIs
with leaky abstractions. The lowest common denominator of
all COEs is an remarkably low value API that adds
considerable complexity to

Magnum

that will not strategically advance OpenStack. If we
instead focus our effort on making the COEs work better
on OpenStack, that would be a winning strategy. Support
and compliment our various COE

ecosystems.


So I'm all for avoiding 'wrap APIs with leaky abstractions'
and 'making COEs work better on OpenStack' but I do dislike
the part

about COEs (plural) because it is once again the old
non-opinionated problem that we (as a community) suffer from.


Just my 2 cents, but I'd almost rather we pick one COE and
integrate that deeply/tightly with openstack, and yes if this
causes some part of the openstack community to be annoyed,
meh, to bad. Sadly I have a feeling we are hurting ourselves
by continuing to try to be

everything

and not picking anything (it's a general thing we, as a
group, seem

to

be good at, lol). I mean I get the reason to just support all
the things, but it feels like we as a community could just
pick something, work together on figuring out how to pick
one, using all these bright leaders we have to help make that
possible (and yes this might piss some people off, to bad).
Then work toward making that something

great

and move on…


The key issue preventing the selection of only one COE is that
this area is moving very quickly. If we would have decided what
to pick at the time the Magnum idea was created, we would have
selected Docker. If you look at it today, you might pick
something else. A few months down the road, there may be yet
another choice that is more compelling. The fact that a cloud
operator can integrate services with OpenStack, and have the
freedom to offer support for a selection of COE’s is a form of
insurance against the risk of picking the wrong one. Our
compute service offers a choice of hypervisors, our block
storage service offers a choice of storage hardware drivers,
our networking service allows a choice of network drivers.
Magnum is following the same pattern of choice that has made
OpenStack compelling for a very diverse community. That design
consideration was intentional.

Over time, we can focus the majority of our effort on deep
integration with COEs that users select the most. I’m convinced
it’s still too early to bet the farm on just one choice.


If Magnum want to avoid the risk of picking the wrong COE, that
mean the risk is populated to all our users. They might pick a
COE and explore the its complexities. Then they find out another
COE is more compelling and their integration work is wasted. I
wonder if we can do better by taking the risk and provide
insurance for our users? I am trying 

Re: [openstack-dev] [Fuel] snapshot tool

2016-04-21 Thread Dmitry Sutyagin
Team,

A "bicycle" will have to be present anyway, as code that interacts with
Ansible, because as far as I understand Ansible on its own cannot provide
all the functionality in one go, so a wrapper for it will be needed
regardless.

I think Alexander and I will look into converting Timmy into an
Ansible-based tool. One way to go would be to make Ansible a backend option
for Timmy (ssh being the alternative).

I agree that the folder-driven structure is not easy to manipulate, but you
don't want to put all your scripts inside Ansible playbooks either; that would
also be a mess. Something in between would work well: a folder structure for the
available scripts, and playbooks which link to them via -script: ,
generated statically (default) or dynamically if need be, as sketched below.
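
(A minimal sketch of that linkage using Ansible's stock script module, run ad-hoc from
the shell; the inventory file and script name are hypothetical:)

# copy a script from the local scripts/ folder to every node over ssh and run it there
ansible all -i nodes.ini -m script -a "scripts/collect_diag.sh"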

Also, I imagine some functions might not be directly possible with Ansible,
such as parallel delivery of binary data over stdout into separate files (Timmy
pulls logs compressed on the fly on the node side through ssh, to avoid
using any unnecessary disk space on the environment nodes and the local machine).
So again, for maximum efficiency and specific tasks a separate tool might be
required, apart from Ansible.
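
(Roughly the kind of per-node pull Timmy does, sketched as a plain ssh one-liner with a
hypothetical hostname; the logs are compressed on the remote side and only the compressed
stream crosses the wire, so nothing extra lands on the node's disk:)

ssh node-1.example.com "tar czf - /var/log" > node-1-logs.tar.gz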



On Wed, Apr 20, 2016 at 5:36 PM, Dmitriy Novakovskiy <
dnovakovs...@mirantis.com> wrote:

> There's a thread on openstack-dev, but
> - nobody replied there (I checked this morning)
> - I can't link PROD tickets there :)
>
>
> On Thursday, April 21, 2016, Mike Scherbakov 
> wrote:
>
>> Guys,
>> how did it turn into openstack-dev from mos-dev, without any tags and
>> original messages... ?
>>
>> Please be careful when replying... There is a different email thread
>> started in OpenStack dev, with [Fuel] in subject..
>>
>> On Wed, Apr 20, 2016 at 10:08 AM Dmitry Nikishov 
>> wrote:
>>
>>> Dmitry,
>>>
>>> I mean, currently shotgun fetches services' configuration along with
>>> astute.yaml. These files contain passwords, keys, tokens. I believe these
>>> should be sanitized. Or, better yet, there should be an option to sanitize
>>> sensitive data from fetched files.
>>>
>>>
>>> Aleksandr,
>>>
>>> Currently Fuel has a service non-root account with passwordless sudo
>>> enabled. This may change in the future (the passwordless part), however,
>>> now I don't see an issue there.
>>> Additionally, it is possible for users to configure sudo for the
>>> user-facing account however they like.
>>>
>>> In regards to have this tool to use a non-root accounts, there are 2
>>> items:
>>> - execute commands, that require elevated privileges (the easy part --
>>> user has to be able to execute these commands with sudo and without
>>> password)
>>> - copy files, that this user doesn't have read privileges for.
>>>
>>> For the second item, there are 2 possible solutions:
>>> 1. Give the non-root user read privileges for these files.
>>> Pros:
>>> - More straightforward, generally acceptable way
>>> Cons:
>>> - Requires additional implementation to give permissions to the user
>>> - (?) Not very extensible: to allow copying a new file, we'd have to
>>> first add it to the tool's config, and somehow implement adding read
>>> permissions
>>>
>>> 2. Somehow allow to copy these files with sudo.
>>> Pros:
>>> - More simple implementation: we'll just need to make sure that the user
>>> can do passwordless sudo
>>> - Extensible: to add more files, it's enough to just specify them in the
>>> tool's configuration.
>>> Cons:
>>> - Non-obvious, obscure way
>>> - Relies on having to be able to do something like "sudo cat
>>> /path/to/file", which is not much better that just giving the user read
>>> privileges. In fact, the only difference between this and giving the user
>>> the read rights is that it is possible to allow "sudo cat" for files, that
>>> don't yet exist, whereas giving permissions requires that these files
>>> already are on the filesystem.
>>>
>>> What way do you think is more appropriate?
>>>
>>>
>>> On Wed, Apr 20, 2016 at 5:28 AM, Aleksandr Dobdin 
>>> wrote:
>>>
 Dmitry,

 You can create a non-root user account without root privileges but you
 need to add it to appropriate groups and configure sudo permissions (even
 though you add this user to root group, it will fail with iptables command
 for example) to get config files and launch requested commands.I
 suppose that it is possible to note this possibility in the documentation
 and provide a customer with detailed instructions on how to setup this user
 account.There are some logs that will also be missing from the
 snapshot with the message permission denied (only the root user has
 access to some files with 0600 mask)
 This user account could be specified into config.yaml (ssh -> opts
 option)

 Sincerely yours,
 Aleksandr Dobdin
 Senior Operations Engineer
 Mirantis
 ​Inc.​



 __

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Monty Taylor

I believe you just described Murano.

On 04/21/2016 03:31 PM, Fox, Kevin M wrote:

There are a few reasons, but the primary one that affects me is Its from the 
app-catalog use case.

To gain user support for a product like OpenStack, you need users. The easier you make it 
to use, the more users you can potentially get.  Traditional Operating Systems learned 
this a while back. Rather then make each OS user have to be a developer and custom deploy 
every app they want to run, they split the effort in such a way that Developers can 
provide software through channels that Users that are not skilled Developers can consume 
and deploy. The "App" culture in the mobile space it the epitome of that at the 
moment. My grandmother fires up the app store on her phone, clicks install on something 
interesting, and starts using it.

Right now, Thats incredibly difficult in OpenStack. You have to find the 
software your interested in, figure out which components your going to consume 
(nova, magnum, which COE, etc) then use those api's to launch some resource. 
Then after that resource is up, then you have to switch tools and then use 
those tools to further launch things, ansible or kubectl or whatever, then 
further deploy things.

What I'm looking for, is a unified enough api, that a user can go into horizon, 
go to the app catalog, find an interesting app, click install/run, and then get 
a link to a service they can click on and start consuming the app they want in 
the first place. The number of users that could use such an interface, and 
consume OpenStack resources are several orders of magnitude greater then the 
numbers that can manually deploy something ala the procedure in the previous 
paragraph. More of that is good for Users, Developers, and Operators.

Does that help?

Thanks,
Kevin



From: Keith Bray [keith.b...@rackspace.com]
Sent: Thursday, April 21, 2016 1:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

If you don't want a user to have to choose a COE, can't we just offer an
option for the operator to mark a particular COE as the "Default COE" that
could be defaulted to if one isn't specified in the Bay create call?  If
the operator didn't specify a default one, then the CLI/UI must submit one
in the bay create call otherwise it would fail.

Kevin, can you clarify why you have to write scripts to deploy a container
to the COE?   It can be made easy for the user to extract all the
runtime/env vars needed for a user to just do "docker run ..."  and poof,
container running on Swarm on a Magnum bay.  Can you help me understand
the script part of it?   I don't believe container users want an
abstraction between them and their COE CLI... but, what I believe isn't
important.  What I do think is important is that we not require OpenStack
operators to run that abstraction layer to be running a "magnum compliant"
service.  It should either be an "optional" API add-on or a separate API
or separate project.  If some folks want an abstraction layer, then great,
feel free to build it and even propose it under the OpenStack ecosystem.
But, that abstraction would be a "proxy API" over the COEs, and doesn't
need to be part of Magnum's offering, as it would be targeted at the COE
interactions and not the bay interactions (which is where Magnum scope is
best focused).  I don't think Magnum should play in both these distinct
domains (Bay interaction vs. COE interaction).  The former (bay
interaction) is an infrastructure cloud thing (fits well with OpenStack),
the latter (COE interaction) is an obfuscation of emerging technologies,
which gets into the Trap that Adrian mentioned.  The abstraction layer
API will forever and always be drastically behind in trying to keep up
with the COE innovation.

In summary, an abstraction over the COEs would be best served as a
different effort.  Magnum would be best focused on bay interactions and
should not try to pick a COE winner or require an operator to run a
lowest-common-denominator API abstraction.

Thanks for listening to my soap-box.
-Keith



On 4/21/16, 2:36 PM, "Fox, Kevin M"  wrote:


I agree with that, and thats why providing some bare minimum abstraction
will help the users not have to choose a COE themselves. If we can't
decide, why can they? If all they want to do is launch a container, they
should be able to script up "magnum launch-container foo/bar:latest" and
get one. That script can then be relied upon.

Today, they have to write scripts to deploy to the specific COE they have
chosen. If they chose Docker, and something better comes out, they have
to go rewrite a bunch of stuff to target the new, better thing. This puts
a lot of work on others.

Do I think we can provide an abstraction that prevents them from ever
having to rewrite scripts? No. There are a lot of features in the COE

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Fox, Kevin M
There are a few reasons, but the primary one that affects me comes from the 
app-catalog use case.

To gain user support for a product like OpenStack, you need users. The easier 
you make it to use, the more users you can potentially get.  Traditional 
Operating Systems learned this a while back. Rather than make each OS user have 
to be a developer and custom deploy every app they want to run, they split the 
effort in such a way that Developers can provide software through channels that 
Users who are not skilled Developers can consume and deploy. The "App" culture 
in the mobile space is the epitome of that at the moment. My grandmother fires 
up the app store on her phone, clicks install on something interesting, and 
starts using it.

Right now, that's incredibly difficult in OpenStack. You have to find the 
software you're interested in, figure out which components you're going to consume 
(nova, magnum, which COE, etc.), then use those APIs to launch some resource. 
Then, after that resource is up, you have to switch tools and use those tools 
(ansible or kubectl or whatever) to launch and deploy things further.

What I'm looking for is an API unified enough that a user can go into Horizon, 
go to the app catalog, find an interesting app, click install/run, and then get 
a link to a service they can click on and start consuming the app they wanted in 
the first place. The number of users that could use such an interface, and 
consume OpenStack resources, is several orders of magnitude greater than the 
number that can manually deploy something a la the procedure in the previous 
paragraph. More of that is good for Users, Developers, and Operators.

Does that help?

Thanks,
Kevin



From: Keith Bray [keith.b...@rackspace.com]
Sent: Thursday, April 21, 2016 1:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

If you don't want a user to have to choose a COE, can't we just offer an
option for the operator to mark a particular COE as the "Default COE" that
could be defaulted to if one isn't specified in the Bay create call?  If
the operator didn't specify a default one, then the CLI/UI must submit one
in the bay create call otherwise it would fail.

Kevin, can you clarify why you have to write scripts to deploy a container
to the COE?   It can be made easy for the user to extract all the
runtime/env vars needed for a user to just do "docker run ..."  and poof,
container running on Swarm on a Magnum bay.  Can you help me understand
the script part of it?   I don't believe container users want an
abstraction between them and their COE CLI... but, what I believe isn't
important.  What I do think is important is that we not require OpenStack
operators to run that abstraction layer to be running a "magnum compliant"
service.  It should either be an "optional" API add-on or a separate API
or separate project.  If some folks want an abstraction layer, then great,
feel free to build it and even propose it under the OpenStack ecosystem.
But, that abstraction would be a "proxy API" over the COEs, and doesn't
need to be part of Magnum's offering, as it would be targeted at the COE
interactions and not the bay interactions (which is where Magnum scope is
best focused).  I don't think Magnum should play in both these distinct
domains (Bay interaction vs. COE interaction).  The former (bay
interaction) is an infrastructure cloud thing (fits well with OpenStack),
the latter (COE interaction) is an obfuscation of emerging technologies,
which gets into the Trap that Adrian mentioned.  The abstraction layer
API will forever and always be drastically behind in trying to keep up
with the COE innovation.

In summary, an abstraction over the COEs would be best served as a
different effort.  Magnum would be best focused on bay interactions and
should not try to pick a COE winner or require an operator to run a
lowest-common-denominator API abstraction.

Thanks for listening to my soap-box.
-Keith



On 4/21/16, 2:36 PM, "Fox, Kevin M"  wrote:

>I agree with that, and that's why providing some bare minimum abstraction
>will help the users not have to choose a COE themselves. If we can't
>decide, why can they? If all they want to do is launch a container, they
>should be able to script up "magnum launch-container foo/bar:latest" and
>get one. That script can then be relied upon.
>
>Today, they have to write scripts to deploy to the specific COE they have
>chosen. If they chose Docker, and something better comes out, they have
>to go rewrite a bunch of stuff to target the new, better thing. This puts
>a lot of work on others.
>
>Do I think we can provide an abstraction that prevents them from ever
>having to rewrite scripts? No. There are a lot of features in the COE
>world in flight right now and we don't want to solidify an API around 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Fox, Kevin M
Here's where we disagree.

You're speaking for everyone in the world now, and all you need is one counter 
example. I'll be that guy. Me. I want a common abstraction for some common LCD 
stuff.

Both Sahara and Trove have LCD abstractions for very common things. Magnum 
should too.

You are falsely assuming that if an LCD abstraction is provided, then users 
can't use the raw API directly. This is false. There is no either/or. You can 
have both. I would be against it too if they were mutually exclusive. They are 
not.

Thanks,
Kevin

From: Monty Taylor [mord...@inaugust.com]
Sent: Thursday, April 21, 2016 10:22 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

On 04/21/2016 11:03 AM, Tim Bell wrote:
>
>
> On 21/04/16 17:38, "Hongbin Lu"  wrote:
>
>>
>>
>>> -Original Message-
>>> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
>>> Sent: April-21-16 10:32 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>>> abstraction for all COEs
>>>
>>>
 On Apr 20, 2016, at 2:49 PM, Joshua Harlow 
>>> wrote:

 Thierry Carrez wrote:
> Adrian Otto wrote:
>> This pursuit is a trap. Magnum should focus on making native
>> container APIs available. We should not wrap APIs with leaky
>> abstractions. The lowest common denominator of all COEs is a
>> remarkably low value API that adds considerable complexity to
>>> Magnum
>> that will not strategically advance OpenStack. If we instead focus
>> our effort on making the COEs work better on OpenStack, that would
>> be a winning strategy. Support and complement our various COE
>>> ecosystems.

 So I'm all for avoiding 'wrap APIs with leaky abstractions' and
 'making COEs work better on OpenStack' but I do dislike the part
>>> about COEs (plural) because it is once again the old non-opinionated
>>> problem that we (as a community) suffer from.

 Just my 2 cents, but I'd almost rather we pick one COE and integrate
 that deeply/tightly with openstack, and yes if this causes some part
 of the openstack community to be annoyed, meh, too bad. Sadly I have a
 feeling we are hurting ourselves by continuing to try to be
>>> everything
 and not picking anything (it's a general thing we, as a group, seem
>>> to
 be good at, lol). I mean I get the reason to just support all the
 things, but it feels like we as a community could just pick something,
 work together on figuring out how to pick one, using all these bright
 leaders we have to help make that possible (and yes this might piss
 some people off, too bad). Then work toward making that something
>>> great
 and move on…
>>>
>>> The key issue preventing the selection of only one COE is that this
>>> area is moving very quickly. If we had decided what to pick at
>>> the time the Magnum idea was created, we would have selected Docker. If
>>> you look at it today, you might pick something else. A few months down
>>> the road, there may be yet another choice that is more compelling. The
>>> fact that a cloud operator can integrate services with OpenStack, and
>>> have the freedom to offer support for a selection of COE’s is a form of
>>> insurance against the risk of picking the wrong one. Our compute
>>> service offers a choice of hypervisors, our block storage service
>>> offers a choice of storage hardware drivers, our networking service
>>> allows a choice of network drivers. Magnum is following the same
>>> pattern of choice that has made OpenStack compelling for a very diverse
>>> community. That design consideration was intentional.
>>>
>>> Over time, we can focus the majority of our effort on deep integration
>>> with COEs that users select the most. I’m convinced it’s still too
>>> early to bet the farm on just one choice.
>>
>> If Magnum wants to avoid the risk of picking the wrong COE, that means the 
>> risk is propagated to all our users. They might pick a COE and explore 
>> its complexities. Then they find out another COE is more compelling and 
>> their integration work is wasted. I wonder if we can do better by taking the 
>> risk and providing insurance for our users? I am trying to understand the 
>> rationale that prevents us from improving the integration between COEs and 
>> OpenStack. Personally, I don't like to end up with a situation where "this is 
>> the pain from our users, but we cannot do anything".
>
> We’re running Magnum and have requests from our user communities for 
> Kubernetes, Docker Swarm and Mesos. The use cases are significantly different 
> and can justify the selection of different technologies. We’re offering 
> Kubernetes and Docker Swarm now and adding Mesos. If I was only to offer one, 
> they’d build their own at 

Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-21 Thread Boden Russell


On 4/21/16 1:38 PM, Joshua Harlow wrote:
> This might be harder in retrying, but I think I can help u make
> something that will work, since retrying has a way to provide a custom
> delay function.

Thanks for that. My question was whether this might be useful as a new
backoff in retrying (vs. a custom delay function in oslo or something).


> https://github.com/rholder/retrying/blob/master/retrying.py#L65
> 
> 'wait_exponential_max'?

When expressed in milliseconds; bingo!
Thanks for doing some of my work :)
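
For concreteness, a capped exponential backoff along these lines might look
roughly like the sketch below (the argument values are illustrative only;
times are in milliseconds):

    import random

    from retrying import retry


    @retry(wait_exponential_multiplier=1000,  # 2s, 4s, 8s, ... between tries
           wait_exponential_max=10000,        # but never sleep more than 10s
           stop_max_attempt_number=8)         # give up after 8 attempts
    def flaky_call():
        # stand-in for the RPC/API call being retried
        if random.random() < 0.8:
            raise IOError("transient failure")
        return "ok"


    print(flaky_call())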

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Sean Dague
On 04/21/2016 04:04 PM, Monty Taylor wrote:
> On 04/21/2016 02:08 PM, Devananda van der Veen wrote:
>> The first cross-project design summit tracks were held at the following
>> summit, in Atlanta, though I recall it lacking the necessary
>> participation to be successful. Today, we have many more avenues to
>> discuss important topics affecting all (or more than one) projects. The
>> improved transparency into those discussions is beneficial to everyone;
>> the perceived exclusivity of the "core party" is helpful to no one.
>>
>> So, in summary, I believe this party served a good purpose in Hong Kong
>> and Atlanta. While it provided some developers with a quiet evening for
>> discussions to happen in Paris, Vancouver, and Tokyo, we now have other
>> (better) venues for the discussions this party once facilitated, and it
>> has outlived its purpose.
>>
>> For what it's worth, I would be happy to see it replaced with smaller
>> gatherings around cross-project initiatives. I continue to believe that
>> one of the most important aspects of our face-to-face gatherings, as a
>> community, is building the camaraderie and social connections between
>> developers, both within and across corporate and project boundaries.
> 
> I was in the middle of an email that said some of this, but this says it
> better.
> 
> So, ++

Agree. I'd like to thank Mark for this contribution to our community. I
know that I had interactions with folks at these events that I probably
wouldn't have otherwise, that led to understanding different parts of
our project space and culture.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Keith Bray
If you don't want a user to have to choose a COE, can't we just offer an
option for the operator to mark a particular COE as the "Default COE" that
could be defaulted to if one isn't specified in the Bay create call?  If
the operator didn't specify a default one, then the CLI/UI must submit one
in the bay create call, otherwise it would fail.

Kevin, can you clarify why you have to write scripts to deploy a container
to the COE?   It can be made easy for the user to extract all the
runtime/env vars needed to just do "docker run …" and poof,
container running on Swarm on a Magnum bay.  Can you help me understand
the script part of it?   I don't believe container users want an
abstraction between them and their COE CLI… but what I believe isn't
important.  What I do think is important is that we not require OpenStack
operators to run that abstraction layer to be running a "magnum compliant"
service.  It should either be an "optional" API add-on or a separate API
or separate project.  If some folks want an abstraction layer, then great,
feel free to build it and even propose it under the OpenStack ecosystem.
But that abstraction would be a "proxy API" over the COEs, and doesn't
need to be part of Magnum's offering, as it would be targeted at the COE
interactions and not the bay interactions (which is where Magnum scope is
best focused).  I don't think Magnum should play in both these distinct
domains (bay interaction vs. COE interaction).  The former (bay
interaction) is an infrastructure cloud thing (fits well with OpenStack);
the latter (COE interaction) is an obfuscation of emerging technologies,
which gets into the trap that Adrian mentioned.  The abstraction layer
API will forever and always be drastically behind in trying to keep up
with the COE innovation.

In summary, an abstraction over the COEs would be best served as a
different effort.  Magnum would be best focused on bay interactions and
should not try to pick a COE winner or require an operator to run a
lowest-common-denominator API abstraction.

Thanks for listening to my soap-box.
-Keith



On 4/21/16, 2:36 PM, "Fox, Kevin M"  wrote:

>I agree with that, and that's why providing some bare minimum abstraction
>will help the users not have to choose a COE themselves. If we can't
>decide, why can they? If all they want to do is launch a container, they
>should be able to script up "magnum launch-container foo/bar:latest" and
>get one. That script can then be relied upon.
>
>Today, they have to write scripts to deploy to the specific COE they have
>chosen. If they chose Docker, and something better comes out, they have
>to go rewrite a bunch of stuff to target the new, better thing. This puts
>a lot of work on others.
>
>Do I think we can provide an abstraction that prevents them from ever
>having to rewrite scripts? No. There are a lot of features in the COE
>world in flight right now and we don't want to solidify an API around them
>yet. We shouldn't even try that. But can we cover a few common things
>now? Yeah.
>
>Thanks,
>Kevin
>
>From: Adrian Otto [adrian.o...@rackspace.com]
>Sent: Thursday, April 21, 2016 7:32 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>abstraction for all COEs
>
>> On Apr 20, 2016, at 2:49 PM, Joshua Harlow 
>>wrote:
>>
>> Thierry Carrez wrote:
>>> Adrian Otto wrote:
 This pursuit is a trap. Magnum should focus on making native container
 APIs available. We should not wrap APIs with leaky abstractions. The
 lowest common denominator of all COEs is a remarkably low value API
 that adds considerable complexity to Magnum that will not
 strategically advance OpenStack. If we instead focus our effort on
 making the COEs work better on OpenStack, that would be a winning
 strategy. Support and complement our various COE ecosystems.
>>
>> So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
>> COEs work better on OpenStack' but I do dislike the part about COEs
>>(plural) because it is once again the old non-opinionated problem that
>>we (as a community) suffer from.
>>
>> Just my 2 cents, but I'd almost rather we pick one COE and integrate
>>that deeply/tightly with openstack, and yes if this causes some part of
>>the openstack community to be annoyed, meh, too bad. Sadly I have a
>>feeling we are hurting ourselves by continuing to try to be everything
>>and not picking anything (it's a general thing we, as a group, seem to
>>be good at, lol). I mean I get the reason to just support all the
>>things, but it feels like we as a community could just pick something,
>>work together on figuring out how to pick one, using all these bright
>>leaders we have to help make that possible (and yes this might piss some
>>people off, too bad). Then work toward making that something great and
>>move on…
>
>The 

Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Monty Taylor

On 04/21/2016 02:08 PM, Devananda van der Veen wrote:

On Thu, Apr 21, 2016 at 9:07 AM, Michael Krotscheck
> wrote:

Hey everyone-

So, HPE is seeking sponsors to continue the core party. The reasons
are varied - internal sponsors have moved to other projects, the Big
Tent has drastically increased the # of cores, and the upcoming
summit format change creates quite a bit of uncertainty on
everything surrounding the summit.

Furthermore, the existence of the Core party has been...
contentious. Some believe it's exclusionary, others think it's
inappropriate, yet others think it's a good way to thank those of
us who agree to be constantly pestered for code reviews.

I'm writing this message for two reasons - mostly, to kick off a
discussion on whether the party is worthwhile.


The rationale for the creation of the first "core party" in Hong Kong
was to facilitate a setting for informal discussions that could bring
about a consensus on potentially-contentious cross-project topics, when
there was no other time or location that brought together all the TC
members, PTLs, and project core reviewers -- many of whom did not yet
know each other. Note that Hong Kong was the first summit where the
Technical Committee was composed of elected members, not just PTLs, and
we did not have a separate day at the design summit to discuss
cross-project issues.

The first cross-project design summit tracks were held at the following
summit, in Atlanta, though I recall it lacking the necessary
participation to be successful. Today, we have many more avenues to
discuss important topics affecting all (or more than one) projects. The
improved transparency into those discussions is beneficial to everyone;
the perceived exclusivity of the "core party" is helpful to no one.

So, in summary, I believe this party served a good purpose in Hong Kong
and Atlanta. While it provided some developers with a quiet evening for
discussions to happen in Paris, Vancouver, and Tokyo, we now have other
(better) venues for the discussions this party once facilitated, and it
has outlived its purpose.

For what it's worth, I would be happy to see it replaced with smaller
gatherings around cross-project initiatives. I continue to believe that
one of the most important aspects of our face-to-face gatherings, as a
community, is building the camaraderie and social connections between
developers, both within and across corporate and project boundaries.


I was in the middle of an email that said some of this, but this says it 
better.


So, ++


-Devananda


Secondly, to signal to other organizations that this promotional
opportunity is available.

Personally, I appreciate being thanked for my work. I do not
necessarily need to be thanked in this fashion, however as the past
venues have been far more subdued than the Tuesday night events
(think cocktail party), it's a welcome mid-week respite for this
overwhelmed little introvert. I don't want to see it go, but I will
understand if it does.

Some numbers, for those who like them (Thanks to Mark Atwood for
providing them):

Total repos: 1010
Total approvers: 1085
Repos for official teams: 566
OpenStack repo approvers: 717
Repos under release management: 90
Managed release repo approvers: 281

Michael

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Fox, Kevin M
I agree with that, and that's why providing some bare minimum abstraction will 
help the users not have to choose a COE themselves. If we can't decide, why can 
they? If all they want to do is launch a container, they should be able to 
script up "magnum launch-container foo/bar:latest" and get one. That script can 
then be relied upon.

Today, they have to write scripts to deploy to the specific COE they have 
chosen. If they chose Docker, and something better comes out, they have to go 
rewrite a bunch of stuff to target the new, better thing. This puts a lot of 
work on others.

Do I think we can provide an abstraction that prevents them from ever having to 
rewrite scripts? No. There are a lot of features in the COE world in flight 
right now and we don't want to solidify an API around them yet. We shouldn't 
even try that. But can we cover a few common things now? Yeah.

Thanks,
Kevin

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Thursday, April 21, 2016 7:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

> On Apr 20, 2016, at 2:49 PM, Joshua Harlow  wrote:
>
> Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
>>> lowest common denominator of all COEs is a remarkably low value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on OpenStack, that would be a winning
>>> strategy. Support and complement our various COE ecosystems.
>
> So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
> COEs work better on OpenStack' but I do dislike the part about COEs (plural) 
> because it is once again the old non-opinionated problem that we (as a 
> community) suffer from.
>
> Just my 2 cents, but I'd almost rather we pick one COE and integrate that 
> deeply/tightly with openstack, and yes if this causes some part of the 
> openstack community to be annoyed, meh, too bad. Sadly I have a feeling we are 
> hurting ourselves by continuing to try to be everything and not picking 
> anything (it's a general thing we, as a group, seem to be good at, lol). I 
> mean I get the reason to just support all the things, but it feels like we as 
> a community could just pick something, work together on figuring out how to 
> pick one, using all these bright leaders we have to help make that possible 
> (and yes this might piss some people off, too bad). Then work toward making 
> that something great and move on…

The key issue preventing the selection of only one COE is that this area is 
moving very quickly. If we had decided what to pick at the time the 
Magnum idea was created, we would have selected Docker. If you look at it 
today, you might pick something else. A few months down the road, there may be 
yet another choice that is more compelling. The fact that a cloud operator can 
integrate services with OpenStack, and have the freedom to offer support for a 
selection of COE’s is a form of insurance against the risk of picking the wrong 
one. Our compute service offers a choice of hypervisors, our block storage 
service offers a choice of storage hardware drivers, our networking service 
allows a choice of network drivers. Magnum is following the same pattern of 
choice that has made OpenStack compelling for a very diverse community. That 
design consideration was intentional.

Over time, we can focus the majority of our effort on deep integration with 
COEs that users select the most. I’m convinced it’s still too early to bet the 
farm on just one choice.

Adrian

>> I'm with Adrian on that one. I've attended a lot of container-oriented
>> conferences over the past year and my main takeaway is that this new
>> crowd of potential users is not interested (at all) in an
>> OpenStack-specific lowest common denominator API for COEs. They want to
>> take advantage of the cool features in Kubernetes API or the versatility
>> of Mesos. They want to avoid caring about the infrastructure provider
>> bit (and not deploy Mesos or Kubernetes themselves).
>>
>> Let's focus on the infrastructure provider bit -- that is what we do and
>> what the ecosystem wants us to provide.
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-21 Thread Joshua Harlow

Boden Russell wrote:

I haven't spent much time on this, so the answers below are a first
approximation based on a quick visual inspection (e.g. subject to change
when I get a chance to hack on some code).

On 4/21/16 12:10 PM, Salvatore Orlando wrote:

Can you share more details on the "few things we need" that
retrying is lacking?


(a) Some of our existing code uses a 'stepping' scheme (first N attempts
with timeout T, next M attempts with timeout U, etc.). For example [1].
This could also be tackled using chaining.


This might be harder in retrying, but I think I can help u make 
something that will work, since retrying has a way to provide a custom 
delay function.
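
As a rough sketch of that custom-delay route (the wait_func keyword name and
its signature are taken from a reading of the retrying source and should be
double-checked before relying on them), a "stepping" wait could be expressed
as:

    from retrying import retry


    def stepped_wait(attempt_number, delay_since_first_attempt_ms):
        # First 10 attempts: retry quickly (1s); after that back off to 10s,
        # similar in spirit to the stepping scheme in nova/conductor/api.py.
        return 1000 if attempt_number <= 10 else 10000


    # wait_func is assumed here to be the custom delay hook mentioned above.
    @retry(wait_func=stepped_wait, stop_max_attempt_number=15)
    def wait_for_service():
        # stand-in for the call that may fail until the service is up
        raise RuntimeError("service not ready yet")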



(b) It doesn't appear retrying supports capping (ceiling) exponential
sleep times as we do in [2].


https://github.com/rholder/retrying/blob/master/retrying.py#L65

'wait_exponential_max'?




Do you think oslo_messaging would be a good target? Or do you think it
should go somewhere else?


My initial thought was to implement it as a subclass of oslo_messaging's
RPCClient [3] with a nice way for consumers to configure the
backoff/retry magic. If consumers want a backing off client, then they
use the new subclass.


[1] https://github.com/openstack/nova/blob/master/nova/conductor/api.py#L147
[2] https://review.openstack.org/#/c/280595/
[3]
https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/rpc/client.py#L208

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Fox, Kevin M
The COEs have pressure not to standardize their APIs between competing 
COEs. If you can lock a user into your API, then they can't go to your 
competitor.

The standard API really needs to come from those invested in not being locked 
in. OpenStack's been largely about that since the beginning. It may not belong 
in Magnum, but I do believe it belongs in OpenStack.

Thanks,
Kevin

From: Steve Gordon [sgor...@redhat.com]
Sent: Thursday, April 21, 2016 6:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

- Original Message -
> From: "Hongbin Lu" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> > -Original Message-
> > From: Keith Bray [mailto:keith.b...@rackspace.com]
> > Sent: April-20-16 6:13 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> > abstraction for all COEs
> >
> > Magnum doesn't have to preclude tight integration for single COEs you 
> > speak of.  The heavy lifting of tight integration of the COE in to
> > OpenStack (so that it performs optimally with the infra) can be modular
> > (where the work is performed by plug-in models to Magnum, not performed
> > by Magnum itself. The tight integration can be done by leveraging
> > existing technologies (Heat and/or choose your DevOps tool of choice:
> > Chef/Ansible/etc). This allows interested community members to focus on
> > tight integration of whatever COE they want, focusing specifically on
>
> I agree that tight integration can be achieved by a plugin, but I think the
> key question is who will do the work. If tight integration needs to be done,
> I wonder why it is not part of the Magnum efforts.

Why does the integration belong in Magnum though? To me it belongs in the COEs 
themselves (e.g. their in-tree network/storage plugins) such that someone can 
leverage them regardless of their choices regarding COE deployment tooling (and 
yes that means Magnum should be able to leverage them too)? I guess the issue 
is that in the above conversation we are overloading the term "integration" 
which can be taken to mean different things...

-Steve

> From my point of view,
> pushing the work out doesn't seem to address the original pain, which is
> some users don't want to explore the complexities of individual COEs.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Fox, Kevin M
+1.

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, April 21, 2016 7:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

> -Original Message-
> From: Steve Gordon [mailto:sgor...@redhat.com]
> Sent: April-21-16 9:39 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
> - Original Message -
> > From: "Hongbin Lu" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > > -Original Message-
> > > From: Keith Bray [mailto:keith.b...@rackspace.com]
> > > Sent: April-20-16 6:13 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build
> > > unified abstraction for all COEs
> > >
> > > Magnum doesn't have to preclude tight integration for single COEs
> > > you speak of.  The heavy lifting of tight integration of the COE in
> > > to OpenStack (so that it performs optimally with the infra) can be
> > > modular (where the work is performed by plug-in models to Magnum,
> > > not performed by Magnum itself. The tight integration can be done
> by
> > > leveraging existing technologies (Heat and/or choose your DevOps
> tool of choice:
> > > Chef/Ansible/etc). This allows interested community members to
> focus
> > > on tight integration of whatever COE they want, focusing
> > > specifically on
> >
> > I agree that tight integration can be achieved by a plugin, but I
> > think the key question is who will do the work. If tight integration
> > needs to be done, I wonder why it is not part of the Magnum efforts.
>
> Why does the integration belong in Magnum though? To me it belongs in
> the COEs themselves (e.g. their in-tree network/storage plugins) such
> that someone can leverage them regardless of their choices regarding
> COE deployment tooling (and yes that means Magnum should be able to
> leverage them too)? I guess the issue is that in the above conversation
> we are overloading the term "integration" which can be taken to mean
> different things...

I can clarify. I mean to introduce abstractions to allow tight integration 
between COEs and OpenStack. For example,

$ magnum container-create --volume= --net= ...

I agree with you that such integration should be supported by the COEs 
themselves. If it is, Magnum will leverage it (anyone can leverage it as well, 
regardless of whether they are using Magnum or not). If it isn't (the reality), 
Magnum could add support for that via its abstraction layer. As for your question 
about why such integration belongs in Magnum, my answer is that the work needs 
to be done in one place so that everyone can leverage it instead of 
re-inventing their own solutions. Magnum is the OpenStack container service, so 
it is natural for Magnum to take it on, IMHO.

>
> -Steve
>
> > From my point of view,
> > pushing the work out doesn't seem to address the original pain, which
> > is some users don't want to explore the complexities of individual
> COEs.
>
>
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-04-21 Thread Hongbin Lu
Ricardo,

That is great! It is good to hear Magnum works well in your side.

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: April-21-16 1:48 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> The thread is a month old, but I sent a shorter version of this to
> Daneyon before with some info on the things we dealt with to get Magnum
> deployed successfully. We wrapped it up in a post (there's a video
> linked there with some demos at the end):
> 
> http://openstack-in-production.blogspot.ch/2016/04/containers-and-cern-
> cloud.html
> 
> Hopefully the pointers to the relevant blueprints for some of the
> issues we found will be useful for others.
> 
> Cheers,
>   Ricardo
> 
> On Fri, Mar 18, 2016 at 3:42 PM, Ricardo Rocha 
> wrote:
> > Hi.
> >
> > We're running a Magnum pilot service - which means it's being
> > maintained just like all other OpenStack services and running on the
> > production infrastructure, but only available to a subset of tenants
> > for a start.
> >
> > We're learning a lot in the process and will happily report on this
> in
> > the next couple weeks.
> >
> > The quick summary is that it's looking good and stable with a few
> > hiccups in the setup, which are handled by patches already under review.
> > The one we need the most is the trustee user (USER_TOKEN in the bay
> > heat params is preventing scaling after the token expires), but with
> > the review in good shape we look forward to try it very soon.
> >
> > Regarding barbican we'll keep you posted, we're working on the
> missing
> > puppet bits.
> >
> > Ricardo
> >
> > On Fri, Mar 18, 2016 at 2:30 AM, Daneyon Hansen (danehans)
> >  wrote:
> >> Adrian/Hongbin,
> >>
> >> Thanks for taking the time to provide your input on this matter.
> After reviewing your feedback, my takeaway is that Magnum is not ready
> for production without implementing Barbican or some other future
> feature such as the Keystone option Adrian provided.
> >>
> >> All,
> >>
> >> Is anyone using Magnum in production? If so, I would appreciate your
> input.
> >>
> >> -Daneyon Hansen
> >>
> >>> On Mar 17, 2016, at 6:16 PM, Adrian Otto 
> wrote:
> >>>
> >>> Hongbin,
> >>>
> >>> One alternative we could discuss as an option for operators that
> have a good reason not to use Barbican, is to use Keystone.
> >>>
> >>> Keystone credentials store:
> >>> http://specs.openstack.org/openstack/keystone-
> specs/api/v3/identity-
> >>> api-v3.html#credentials-v3-credentials
> >>>
> >>> The contents are stored in plain text in the Keystone DB, so we
> would want to generate an encryption key per bay, encrypt the
> certificate and store it in keystone. We would then use the same key to
> decrypt it upon reading the key back. This might be an acceptable
> middle ground for clouds that will not or can not run Barbican. This
> should work for any OpenStack cloud since Grizzly. The total amount of
> code in Magnum would be small, as the API already exists. We would need
> a library function to encrypt and decrypt the data, and ideally a way
> to select different encryption algorithms in case one is judged weak at
> some point in the future, justifying the use of an alternate.
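
As a purely illustrative sketch of that per-bay key idea (the message above
deliberately leaves the algorithm choice open; Fernet from the cryptography
package, and where the key itself would live, are assumptions here), the
encrypt/decrypt helpers could look something like:

    from cryptography.fernet import Fernet

    # Hypothetical per-bay symmetric key; where the key is kept
    # (e.g. the Magnum DB) is left open in the proposal above.
    bay_key = Fernet.generate_key()


    def encrypt_cert(pem_bytes, key=bay_key):
        # Returns an opaque blob that could be stored as the contents of a
        # Keystone v3 credential and retrieved later.
        return Fernet(key).encrypt(pem_bytes)


    def decrypt_cert(blob, key=bay_key):
        # Reverses encrypt_cert() using the same per-bay key.
        return Fernet(key).decrypt(blob)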
> >>>
> >>> Adrian
> >>>
>  On Mar 17, 2016, at 4:55 PM, Adrian Otto
>  wrote:
> 
>  Hongbin,
> 
> > On Mar 17, 2016, at 2:25 PM, Hongbin Lu 
> wrote:
> >
> > Adrian,
> >
> > I think we need a broader set of inputs in this matter, so I
> moved the discussion from whiteboard back to here. Please check my
> replies inline.
> >
> >> I would like to get a clear problem statement written for this.
> >> As I see it, the problem is that there is no safe place to put
> certificates in clouds that do not run Barbican.
> >> It seems the solution is to make it easy to add Barbican such
> that it's included in the setup for Magnum.
> > No, the solution is to explore an non-Barbican solution to store
> certificates securely.
> 
>  I am seeking more clarity about why a non-Barbican solution is
> desired. Why is there resistance to adopting both Magnum and Barbican
> together? I think the answer is that people think they can make Magnum
> work with really old clouds that were set up before Barbican was
> introduced. That expectation is simply not reasonable. If there were a
> way to easily add Barbican to older clouds, perhaps this reluctance
> would melt away.
> 
> >> Magnum should not be in the business of credential storage when
> there is an existing service focused on that need.
> >>
> >> Is there an issue with running Barbican on older clouds?
> >> Anyone can choose to use the builtin option with Magnum if they
> don't have Barbican.
> >> A known limitation of 

Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Devananda van der Veen
On Thu, Apr 21, 2016 at 9:07 AM, Michael Krotscheck 
wrote:

> Hey everyone-
>
> So, HPE is seeking sponsors to continue the core party. The reasons are
> varied - internal sponsors have moved to other projects, the Big Tent has
> drastically increased the # of cores, and the upcoming summit format change
> creates quite a bit of uncertainty on everything surrounding the summit.
>
> Furthermore, the existence of the Core party has been... contentious. Some
> believe it's exclusionary, others think it's inappropriate, yet others
> think it's a good way to thank those of us who agree to be constantly
> pestered for code reviews.
>
> I'm writing this message for two reasons - mostly, to kick off a
> discussion on whether the party is worthwhile.
>

The rationale for the creation of the first "core party" in Hong Kong was
to facilitate a setting for informal discussions that could bring about a
consensus on potentially-contentious cross-project topics, when there was
no other time or location that brought together all the TC members, PTLs,
and project core reviewers -- many of whom did not yet know each other.
Note that Hong Kong was the first summit where the Technical Committee was
composed of elected members, not just PTLs, and we did not have a separate
day at the design summit to discuss cross-project issues.

The first cross-project design summit tracks were held at the following
summit, in Atlanta, though I recall it lacking the necessary participation
to be successful. Today, we have many more avenues to discuss important
topics affecting all (or more than one) projects. The improved transparency
into those discussions is beneficial to everyone; the perceived exclusivity
of the "core party" is helpful to no one.

So, in summary, I believe this party served a good purpose in Hong Kong and
Atlanta. While it provided some developers with a quiet evening for
discussions to happen in Paris, Vancouver, and Tokyo, we now have other
(better) venues for the discussions this party once facilitated, and it has
outlived its purpose.

For what it's worth, I would be happy to see it replaced with smaller
gatherings around cross-project initiatives. I continue to believe that one
of the most important aspects of our face-to-face gatherings, as a
community, is building the camaraderie and social connections between
developers, both within and across corporate and project boundaries.

-Devananda




> Secondly, to signal to other organizations that this promotional
> opportunity is available.
>
> Personally, I appreciate being thanked for my work. I do not necessarily
> need to be thanked in this fashion, however as the past venues have been
> far more subdued than the Tuesday night events (think cocktail party), it's
> a welcome mid-week respite for this overwhelmed little introvert. I don't
> want to see it go, but I will understand if it does.
>
> Some numbers, for those who like them (Thanks to Mark Atwood for providing
> them):
>
> Total repos: 1010
> Total approvers: 1085
> Repos for official teams: 566
> OpenStack repo approvers: 717
> Repos under release management: 90
> Managed release repo approvers: 281
>
> Michael
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-21 Thread Ihar Hrachyshka

Cathy Zhang  wrote:


Hi everyone,

We have room 400 at 3:10pm on Thursday available for discussion of the  
two topics.
Another option is to use the common room with roundtables in "Salon C"  
during Monday or Wednesday lunch time.


Room 400 at 3:10pm is a closed room while the Salon C is a big open room  
which can host 500 people.


I am Ok with either option. Let me know if anyone has a strong preference.


On Monday, I have two talks to do. First one is 2:50-3:30pm, second one is  
4:40-5:20pm. But lunch time should probably be fine if it leaves time for  
the actual lunch...


Thursday at 3:10pm also works for me.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-21 Thread Boden Russell
I haven't spent much time on this, so the answers below are a first
approximation based on a quick visual inspection (e.g. subject to change
when I get a chance to hack on some code).

On 4/21/16 12:10 PM, Salvatore Orlando wrote:
> Can you share more details on the "few things we need" that
> retrying is lacking?

(a) Some of our existing code uses a 'stepping' scheme (first N attempts
with timeout T, next M attempts with timeout U, etc.). For example [1].
This could also be tackled using chaining.
(b) It doesn't appear retrying supports capping (ceiling) exponential
sleep times as we do in [2].

> Do you think oslo_messaging would be a good target? Or do you think it
> should go somewhere else?

My initial thought was to implement it as a subclass of oslo_messaging's
RPCClient [3] with a nice way for consumers to configure the
backoff/retry magic. If consumers want a backing off client, then they
use the new subclass.


[1] https://github.com/openstack/nova/blob/master/nova/conductor/api.py#L147
[2] https://review.openstack.org/#/c/280595/
[3]
https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/rpc/client.py#L208
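
As a very rough sketch of that backing-off idea (this is not the actual change
proposed in [1]; the class name, constructor arguments, and backoff policy are
made up for illustration), a wrapper around an existing RPCClient might look
like:

    import oslo_messaging as messaging


    class BackoffClientWrapper(object):
        """Hypothetical wrapper that retries RPC calls with a growing timeout."""

        def __init__(self, client, start_timeout=10, max_timeout=120):
            self._client = client   # an existing oslo_messaging.RPCClient
            self._start = start_timeout
            self._max = max_timeout

        def call(self, ctxt, method, **kwargs):
            timeout = self._start
            while True:
                try:
                    # prepare() lets us override the timeout per attempt
                    return self._client.prepare(timeout=timeout).call(
                        ctxt, method, **kwargs)
                except messaging.MessagingTimeout:
                    if timeout >= self._max:
                        raise
                    # back off: double the timeout, capped at max_timeout
                    timeout = min(timeout * 2, self._max)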

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-21 Thread Bhandaru, Malini K
I vote for Monday to get the ball rolling, meet the interested parties, and 
continue on Thursday at 3:10 in a quieter setting ... so we leave with some 
consensus.
Thanks Cathy!
Malini

-Original Message-
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com] 
Sent: Thursday, April 21, 2016 11:43 AM
To: Cathy Zhang ; OpenStack Development Mailing List 
(not for usage questions) ; Ihar Hrachyshka 
; Vikram Choudhary ; Sean M. 
Collins ; Haim Daniel ; Mathieu Rohon 
; Shaughnessy, David ; 
Eichberger, German ; Henry Fourie 
; arma...@gmail.com; Miguel Angel Ajo 
; Reedip ; Thierry 
Carrez 
Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Hi everyone,

We have room 400 at 3:10pm on Thursday available for discussion of the two 
topics. 
Another option is to use the common room with roundtables in "Salon C" during 
Monday or Wednesday lunch time.

Room 400 at 3:10pm is a closed room while the Salon C is a big open room which 
can host 500 people.

I am Ok with either option. Let me know if anyone has a strong preference. 

Thanks,
Cathy


-Original Message-
From: Cathy Zhang
Sent: Thursday, April 14, 2016 1:23 PM
To: OpenStack Development Mailing List (not for usage questions); 'Ihar 
Hrachyshka'; Vikram Choudhary; 'Sean M. Collins'; 'Haim Daniel'; 'Mathieu 
Rohon'; 'Shaughnessy, David'; 'Eichberger, German'; Cathy Zhang; Henry Fourie; 
'arma...@gmail.com'
Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Thanks for everyone's reply! 

Here is the summary based on the replies I received: 

1.  We should have a meet-up for these two topics. The "to" list is the people 
who have an interest in these topics. 
    I am thinking around lunch time on Tuesday or Wednesday since some of 
us will fly back on Friday morning/noon. 
If this time is OK with everyone, I will find a place and let you know 
where and what time to meet. 

2.  There is a bug opened for the QoS Flow Classifier 
https://bugs.launchpad.net/neutron/+bug/1527671
We can either change the bug title and modify the bug details or start with a 
new one for the common FC which provides info on all requirements needed by all 
relevant use cases. There is a bug opened for OVS agent extension 
https://bugs.launchpad.net/neutron/+bug/1517903

3.  There is some very rough ("ugly", as Sean put it :-)) and preliminary work on a 
common FC at https://github.com/openstack/neutron-classifier which we can see how 
to leverage. There is also an SFC API spec which covers the FC API for SFC usage 
https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst,
the following is the CLI version of the Flow Classifier for your reference:

neutron flow-classifier-create [-h]
[--description ]
[--protocol ]
[--ethertype ]
[--source-port :]
[--destination-port :]
[--source-ip-prefix ]
[--destination-ip-prefix ]
[--logical-source-port ]
[--logical-destination-port ]
[--l7-parameters ] FLOW-CLASSIFIER-NAME

The corresponding code is here 
https://github.com/openstack/networking-sfc/tree/master/networking_sfc/extensions

4.  We should come up with a formal Neutron spec for FC and another one for OVS 
Agent extension and get everyone's review and approval. Here is the etherpad 
catching our previous requirement discussion on OVS agent (Thanks David for the 
link! I remember we had this discussion before) 
https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion


More inline. 

Thanks,
Cathy


-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
Sent: Thursday, April 14, 2016 3:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Cathy Zhang  wrote:

> Hi everyone,
> Per Armando’s request, Louis and I are looking into the following 
> features for Newton cycle.
> · Neutron Common FC used for SFC, QoS, Tap as a service etc.,
> · OVS Agent extension
> Some of you might know that we already developed a FC in 
> networking-sfc project and QoS also has a FC. It makes sense that we 
> have one common FC in Neutron that could be shared by SFC, QoS, Tap as a 
> service etc.
> features in Neutron.

I don’t actually know of any classifier in QoS. It’s only planned to emerge, 
but there are no specs or anything specific to the feature.

Anyway, I agree that classifier API belongs to core neutron and should be 
reused by all 

Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Shamail Tahir
On Thu, Apr 21, 2016 at 2:43 PM, Tim Bell  wrote:

>
> On 21/04/16 19:40, "Doug Hellmann"  wrote:
>
> >Excerpts from Thierry Carrez's message of 2016-04-21 18:22:53 +0200:
> >> Michael Krotscheck wrote:
> >>
> >>
> >> So.. while I understand the need for calmer parties during the week, I
> >> think the general trends is to have less parties and more small group
> >> dinners. I would be fine with HPE sponsoring more project team dinners
> >> instead :)
> >
> >That fits my vision of the new event, which is less focused on big
> >glitzy events and more on small socializing opportunities.
>
> At OSCON, I remember some very useful discussions where tables had signs
> showing
> the topics for socializing. While I have appreciated the core reviewers
> (and others)
> events, I think there are better formats given the massive expansion of
> the projects
> and ecosystem, which reduce the chances for informal discussions.
>
+1

This is a great idea. The topics could even be sourced similarly to how
lightning talks are determined.  If there is a topic that you are
interested in, then post/write it down and others can +1 it.

>
> I remember in the OpenStack Boston summit when there was a table marked
> ‘Puppet’ which was one of the most
> productive discussions I have had in the OpenStack summits (Thanks Dan :-)
>
> Tim
>
> >
> >Doug
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Tim Bell

On 21/04/16 19:40, "Doug Hellmann"  wrote:

>Excerpts from Thierry Carrez's message of 2016-04-21 18:22:53 +0200:
>> Michael Krotscheck wrote:
>>
>> 
>> So.. while I understand the need for calmer parties during the week, I 
>> think the general trends is to have less parties and more small group 
>> dinners. I would be fine with HPE sponsoring more project team dinners 
>> instead :)
>
>That fits my vision of the new event, which is less focused on big
>glitzy events and more on small socializing opportunities.

At OSCON, I remember some very useful discussions where tables had signs 
showing 
the topics for socializing. While I have appreciated the core reviewers (and 
others)
events, I think there are better formats given the massive expansion of the 
projects
and ecosystem, which reduce the chances for informal discussions.

I remember in the OpenStack Boston summit when there was a table marked 
‘Puppet’ which was one of the most
productive discussions I have had in the OpenStack summits (Thanks Dan :-)

Tim

>
>Doug
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-21 Thread Cathy Zhang
Hi everyone,

We have room 400 at 3:10pm on Thursday available for discussion of the two 
topics. 
Another option is to use the common room with roundtables in "Salon C" during 
Monday or Wednesday lunch time.

Room 400 at 3:10pm is a closed room while the Salon C is a big open room which 
can host 500 people.

I am Ok with either option. Let me know if anyone has a strong preference. 

Thanks,
Cathy


-Original Message-
From: Cathy Zhang 
Sent: Thursday, April 14, 2016 1:23 PM
To: OpenStack Development Mailing List (not for usage questions); 'Ihar 
Hrachyshka'; Vikram Choudhary; 'Sean M. Collins'; 'Haim Daniel'; 'Mathieu 
Rohon'; 'Shaughnessy, David'; 'Eichberger, German'; Cathy Zhang; Henry Fourie; 
'arma...@gmail.com'
Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Thanks for everyone's reply! 

Here is the summary based on the replies I received: 

1.  We should have a meet-up for these two topics. The "to" list is the people 
who have an interest in these topics. 
    I am thinking around lunch time on Tuesday or Wednesday since some of 
us will fly back on Friday morning/noon. 
If this time is OK with everyone, I will find a place and let you know 
where and what time to meet. 

2.  There is a bug opened for the QoS Flow Classifier 
https://bugs.launchpad.net/neutron/+bug/1527671
We can either change the bug title and modify the bug details or start with a 
new one for the common FC which provides info on all requirements needed by all 
relevant use cases. There is a bug opened for OVS agent extension 
https://bugs.launchpad.net/neutron/+bug/1517903

3.  There is some very rough ("ugly", as Sean put it :-)) and preliminary work on a 
common FC at https://github.com/openstack/neutron-classifier which we can see how 
to leverage. There is also an SFC API spec which covers the FC API for SFC usage 
https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst,
the following is the CLI version of the Flow Classifier for your reference:

neutron flow-classifier-create [-h]
[--description ]
[--protocol ]
[--ethertype ]
[--source-port :]
[--destination-port :]
[--source-ip-prefix ]
[--destination-ip-prefix ]
[--logical-source-port ]
[--logical-destination-port ]
[--l7-parameters ] FLOW-CLASSIFIER-NAME

The corresponding code is here 
https://github.com/openstack/networking-sfc/tree/master/networking_sfc/extensions

4.  We should come up with a formal Neutron spec for FC and another one for OVS 
Agent extension and get everyone's review and approval. Here is the etherpad 
catching our previous requirement discussion on OVS agent (Thanks David for the 
link! I remember we had this discussion before) 
https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion


More inline. 

Thanks,
Cathy


-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
Sent: Thursday, April 14, 2016 3:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Cathy Zhang  wrote:

> Hi everyone,
> Per Armando’s request, Louis and I are looking into the following 
> features for Newton cycle.
> · Neutron Common FC used for SFC, QoS, Tap as a service etc.,
> · OVS Agent extension
> Some of you might know that we already developed a FC in 
> networking-sfc project and QoS also has a FC. It makes sense that we 
> have one common FC in Neutron that could be shared by SFC, QoS, Tap as a 
> service etc.
> features in Neutron.

I don’t actually know of any classifier in QoS. It’s only planned to emerge, 
but there are no specs or anything specific to the feature.

Anyway, I agree that classifier API belongs to core neutron and should be 
reused by all interested subprojects from there.

> Different features may extend OVS agent and add different new OVS flow 
> tables to support their new functionality. A mechanism is needed to 
> ensure consistent OVS flow table modification when multiple features 
> co-exist. AFAIK, there is some preliminary work on this, but it is not 
> a complete solution yet.

I think there is no formal spec or anything, just some emails around there.

That said, I don’t follow why it’s a requirement for SFC to switch to l2 agent 
extension mechanism. Even today, with SFC maintaining its own agent, there are 
no clear guarantees for flow priorities that would avoid all possible conflicts.

Cathy> There is no requirement for SFC to switch. My understanding is that the 
current L2 agent extension does not solve the conflicting-entry issue if two 
features inject a table entry with the same priority. I think this new L2 agent 
effort is trying to come up with a mechanism to resolve this issue. Of course, if 
each feature (SFC or QoS) uses its own agent, then there is no 

Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-21 Thread Joshua Harlow

Boden Russell wrote:

On 4/20/16 3:29 PM, Doug Hellmann wrote:

Yes, please, let's try to make that work and contribute upstream if we
need minor modifications, before we create something new.


We can leverage the 'retrying' module (already in global requirements).
It lacks a few things we need, but those can be implemented using its
existing "hooks" today, or, working with the module owner(s) to push a
few changes that we need (the later probably provides the "greatest good").

Assuming we'll leverage 'retrying', I was thinking the initial goals
here are:
(a) Ensure 'retrying' supports the behaviors we need for our usages in
neutron + nova (see [1] - [5] on my initial note) today. Implementation
details TBD.
(b) Implement a "Backing off RPC client" in oslo, inspired by [1].
(c) Update nova + neutron to use the "common implementation(s)" rather
than 1-offs.

This sounds fun and I'm happy to take it on. However, I probably won't
make much progress until after the summit for obvious reasons. I'll plan
to lead with code; if an RFE/spec/other is needed please let me know.



I'm fine with either RFE/spec/code, whatever makes you happy. I'd rather 
have a contributor work in a way that makes them feel happy than force 
a process on them that makes them feel unhappy, especially IMHO for something 
like this.



Additional comments welcomed.


Thanks

[1] https://review.openstack.org/#/c/280595

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-21 Thread Joshua Harlow

Salvatore Orlando wrote:


On 21 April 2016 at 16:54, Boden Russell > wrote:

On 4/20/16 3:29 PM, Doug Hellmann wrote:
>  Yes, please, let's try to make that work and contribute upstream if we
>  need minor modifications, before we create something new.

We can leverage the 'retrying' module (already in global requirements).
It lacks a few things we need, but those can be implemented using its
existing "hooks" today, or, working with the module owner(s) to push a
few changes that we need (the later probably provides the "greatest
good").


Retrying (even if mostly a 1-man effort) already has a history of
contribution from different sources, including a few OpenStack
contributors as well.
It hasn't had many commits in the past 12 months, but this does not mean
new PRs won't be accepted.
Starting a new library for something like this really feels like NIH.



Yes please (as a person that has contributed to that library); I know 
the retrying library isn't perfect, but let's IMHO do our due diligence 
there before we go off and make something else. I know that's not always 
an easy proposition (or sometimes even the shortest path) but I think it 
is our responsibility to at least try (the library isn't that huge, and 
it is pretty targeted at doing a small thing, so it's not like there is a 
massive amount of code or a massive amount of history...)



As for hooks vs contributions this really depends on what you need to
add. Can you share more details on the "few things we need" that
retrying is lacking?
(and I apologise if you shared them earlier in this thread - I did not
read all of it)


Assuming we'll leverage 'retrying', I was thinking the initial goals
here are:
(a) Ensure 'retrying' supports the behaviors we need for our usages in
neutron + nova (see [1] - [5] on my initial note) today. Implementation
details TBD.
(b) Implement a "Backing off RPC client" in oslo, inspired by [1].


Do you think oslo_messaging would be a good target? Or do you think it
should go somewhere else?

(c) Update nova + neutron to use the "common implementation(s)" rather
than 1-offs.

This sounds fun and I'm happy to take it on. However, I probably won't
make much progress until after the summit for obvious reasons. I'll plan
to lead with code; if an RFE/spec/other is needed please let me know.


Additional comments welcomed.


Thanks

[1] https://review.openstack.org/#/c/280595

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-21 Thread Matt Riedemann

On 4/11/2016 3:49 PM, Matt Riedemann wrote:

A few people have been asking about planning for the nova midcycle for
newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work
the best. R-14 is close to the US July 4th holiday, R-13 is during the
week of the US July 4th holiday, and R-12 is the week of the n-2 milestone.

R-16 is too close to the summit IMO, and R-10 is pushing it out too far
in the release. I'd be open to R-14 though but don't know what other
people's plans are.

As far as a venue is concerned, I haven't heard any offers from
companies to host yet. If no one brings it up by the summit, I'll see if
hosting in Rochester, MN at the IBM site is a possibility.

[1] http://releases.openstack.org/newton/schedule.html



We discussed this in the nova meeting last week [1] but I never replied 
to this ML thread.


We agreed to have the Nova midcycle at Intel in Hillsboro, OR on July 19-21.

I'll be working with Intel on the details and will be posting a form for 
people to sign up so we can get a rough headcount.


[1] 
http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-04-14-21.00.log.html


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2016-04-21 17:54:37 +:
> On 2016-04-21 13:40:15 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > I didn't realize the tag was being used that way. I agree it's
> > completely inappropriate, and I wish someone had asked.
> [...]
> 
> It's likely seen by some as a big-tent proxy for the old integrated
> vs. incubated distinction.

I'm sure that's it, even though that's not really what it's about. I
hoped to deprecate that tag anyway, so this is just another reason to do
so.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] release hiatus

2016-04-21 Thread Doug Hellmann
The release team is preparing for and traveling to the summit, just as
many of you are. With that in mind, we are going to hold off on
releasing anything until 2 May, unless there is some sort of critical
issue or gate blockage. Please feel free to submit release requests to
openstack/releases, but we'll only plan on processing any that indicate
critical issues in the commit messages.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-21 Thread Salvatore Orlando
On 21 April 2016 at 16:54, Boden Russell  wrote:

> On 4/20/16 3:29 PM, Doug Hellmann wrote:
> > Yes, please, let's try to make that work and contribute upstream if we
> > need minor modifications, before we create something new.
>
> We can leverage the 'retrying' module (already in global requirements).
> It lacks a few things we need, but those can be implemented using its
> existing "hooks" today, or, working with the module owner(s) to push a
> few changes that we need (the later probably provides the "greatest good").
>

Retrying (even if mostly a 1-man effort) already has a history of
contribution from different sources, including a few OpenStack contributors
as well.
It hasn't had many commits in the past 12 months, but this does not mean
new PRs won't be accepted.
Starting a new library for something like this really feels like NIH.

As for hooks vs contributions this really depends on what you need to add.
Can you share more details on the "few things we need" that retrying is
lacking?
(and I apologise if you shared them earlier in this thread - I did not read
all of it)


>
> Assuming we'll leverage 'retrying', I was thinking the initial goals
> here are:
> (a) Ensure 'retrying' supports the behaviors we need for our usages in
> neutron + nova (see [1] - [5] on my initial note) today. Implementation
> details TBD.
> (b) Implement a "Backing off RPC client" in oslo, inspired by [1].
>

Do you think oslo_messaging would be a good target? Or do you think it
should go somewhere else?
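
For concreteness, a rough sketch of the behaviour being discussed in [1], as it
might look if it lived there (assuming oslo_messaging's RPCClient.prepare(timeout=...)
and MessagingTimeout; the class name and knobs are invented, not a proposed API):

# Rough sketch, not a proposed implementation: on MessagingTimeout,
# retry the call with a progressively larger timeout, up to a ceiling.
# Usage would be roughly: BackingOffClient(rpc_client).call(ctxt, 'foo', x=1)
import oslo_messaging


class BackingOffClient(object):
    """Wraps an oslo.messaging RPCClient and backs off its timeout."""

    def __init__(self, client, start_timeout=10, max_timeout=120):
        self._client = client
        self._start = start_timeout
        self._max = max_timeout

    def call(self, ctxt, method, **kwargs):
        timeout = self._start
        while True:
            try:
                cctxt = self._client.prepare(timeout=timeout)
                return cctxt.call(ctxt, method, **kwargs)
            except oslo_messaging.MessagingTimeout:
                if timeout >= self._max:
                    raise
                timeout = min(timeout * 2, self._max)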


> (c) Update nova + neutron to use the "common implementation(s)" rather
> than 1-offs.
>
> This sounds fun and I'm happy to take it on. However, I probably won't
> make much progress until after the summit for obvious reasons. I'll plan
> to lead with code; if an RFE/spec/other is needed please let me know.


> Additional comments welcomed.
>
>
> Thanks
>
> [1] https://review.openstack.org/#/c/280595
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Question about OpenStack Code of Conduct

2016-04-21 Thread Jeremy Stanley
On 2016-04-21 17:54:56 + (+), Adrian Otto wrote:
> Below is an excerpt from:
> https://www.openstack.org/legal/community-code-of-conduct/
> 
> "When we disagree, we consult others. Disagreements, both social
> and technical, happen all the time and the OpenStack community is
> no exception. It is important that we resolve disagreements and
> differing views constructively and with the help of the community
> and community processes. We have the Technical Board, the User
> Committee, and a series of other governance bodies which help to
> decide the right course for OpenStack. There are also Project Core
> Teams and Project Technical Leads, who may be able to help us
> figure out the best direction for OpenStack. When our goals differ
> dramatically, we encourage the creation of alternative
> implementations, so that the community can test new ideas and
> contribute to the discussion.”
> 
> Does the “Technical Board” mentioned above mean “Technical
> Committee” or “Foundation board of directors”? It is not clear to
> me when consulting our list of governance bodies[1]. It’s
> mentioned along with the “User Committee”, so I think the text
> actually meant “Technical Committee”. Who can clarify this
> ambiguity?

This question would probably be better asked on the
foundat...@lists.openstack.org mailing list since the openstack-dev
audience doesn't generally have direct control over content on
www.openstack.org/legal, but the wording seems to have been
partially copied from the Ubuntu community's[1] which refers to
their analogue[2] of our TC.

[1] https://launchpad.net/codeofconduct/1.0.1
[2] https://wiki.ubuntu.com/TechnicalBoard
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-04-21 Thread Michael Krotscheck
On Thu, Apr 21, 2016 at 10:21 AM Monty Taylor  wrote:

> Neat! Maybe let's find a time at the summit to sit down and look through
> things. I'm guessing that adding a second language consumer to the
> config will raise a ton of useful questions around documentation, data
> format, etc - but those will likely be super valuable to find, document
> or fix.


Fair warning- I'm not volunteering to fix that. I'm ok using an established
format, but [big honkin' list of JS work] is going to take precedence.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-04-21 Thread Ricardo Rocha
Hi.

The thread is a month old, but I sent a shorter version of this to
Daneyon before with some info on the things we dealt with to get
Magnum deployed successfully. We wrapped it up in a post (there's a
video linked there with some demos at the end):

http://openstack-in-production.blogspot.ch/2016/04/containers-and-cern-cloud.html

Hopefully the pointers to the relevant blueprints for some of the
issues we found will be useful for others.

Cheers,
  Ricardo

On Fri, Mar 18, 2016 at 3:42 PM, Ricardo Rocha  wrote:
> Hi.
>
> We're running a Magnum pilot service - which means it's being
> maintained just like all other OpenStack services and running on the
> production infrastructure, but only available to a subset of tenants
> for a start.
>
> We're learning a lot in the process and will happily report on this in
> the next couple weeks.
>
> The quick summary is that it's looking good and stable with a few
> hiccups in the setup, which are handled by patches already under review.
> The one we need the most is the trustee user (USER_TOKEN in the bay
> heat params is preventing scaling after the token expires), but with
> the review in good shape we look forward to trying it very soon.
>
> Regarding barbican we'll keep you posted, we're working on the missing
> puppet bits.
>
> Ricardo
>
> On Fri, Mar 18, 2016 at 2:30 AM, Daneyon Hansen (danehans)
>  wrote:
>> Adrian/Hongbin,
>>
>> Thanks for taking the time to provide your input on this matter. After 
>> reviewing your feedback, my takeaway is that Magnum is not ready for 
>> production without implementing Barbican or some other future feature such 
>> as the Keystone option Adrian provided.
>>
>> All,
>>
>> Is anyone using Magnum in production? If so, I would appreciate your input.
>>
>> -Daneyon Hansen
>>
>>> On Mar 17, 2016, at 6:16 PM, Adrian Otto  wrote:
>>>
>>> Hongbin,
>>>
>>> One alternative we could discuss as an option for operators that have a 
>>> good reason not to use Barbican, is to use Keystone.
>>>
>>> Keystone credentials store: 
>>> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials
>>>
>>> The contents are stored in plain text in the Keystone DB, so we would want 
>>> to generate an encryption key per bay, encrypt the certificate and store it 
>>> in keystone. We would then use the same key to decrypt it upon reading the 
>>> key back. This might be an acceptable middle ground for clouds that will 
>>> not or can not run Barbican. This should work for any OpenStack cloud since 
>>> Grizzly. The total amount of code in Magnum would be small, as the API 
>>> already exists. We would need a library function to encrypt and decrypt the 
>>> data, and ideally a way to select different encryption algorithms in case 
>>> one is judged weak at some point in the future, justifying the use of an 
>>> alternate.
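
For what it's worth, the encrypt/decrypt helper described above could be quite
small; here is a minimal sketch, assuming the 'cryptography' package's Fernet API
(the per-bay key handling is hand-waved and purely illustrative):

# Minimal sketch only: symmetric encryption of a bay's certificate
# before storing it in the Keystone credentials store, and decryption
# on read-back. Per-bay key storage/rotation is hand-waved here.
from cryptography.fernet import Fernet


def generate_bay_key():
    # one key per bay, kept wherever the deployer decides (illustrative)
    return Fernet.generate_key()


def encrypt_cert(bay_key, cert_pem):
    return Fernet(bay_key).encrypt(cert_pem.encode('utf-8'))


def decrypt_cert(bay_key, blob):
    return Fernet(bay_key).decrypt(blob).decode('utf-8')


key = generate_bay_key()
blob = encrypt_cert(key, "-----BEGIN CERTIFICATE-----\n...")
assert decrypt_cert(key, blob).startswith("-----BEGIN CERTIFICATE-----")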
>>>
>>> Adrian
>>>
 On Mar 17, 2016, at 4:55 PM, Adrian Otto  wrote:

 Hongbin,

> On Mar 17, 2016, at 2:25 PM, Hongbin Lu  wrote:
>
> Adrian,
>
> I think we need a broader set of inputs in this matter, so I moved the 
> discussion from whiteboard back to here. Please check my replies inline.
>
>> I would like to get a clear problem statement written for this.
>> As I see it, the problem is that there is no safe place to put 
>> certificates in clouds that do not run Barbican.
>> It seems the solution is to make it easy to add Barbican such that it's 
>> included in the setup for Magnum.
> No, the solution is to explore an non-Barbican solution to store 
> certificates securely.

 I am seeking more clarity about why a non-Barbican solution is desired. 
 Why is there resistance to adopting both Magnum and Barbican together? I 
 think the answer is that people think they can make Magnum work with 
 really old clouds that were set up before Barbican was introduced. That 
 expectation is simply not reasonable. If there were a way to easily add 
 Barbican to older clouds, perhaps this reluctance would melt away.

>> Magnum should not be in the business of credential storage when there is 
>> an existing service focused on that need.
>>
>> Is there an issue with running Barbican on older clouds?
>> Anyone can choose to use the builtin option with Magnum if they don't 
>> have Barbican.
>> A known limitation of that approach is that certificates are not 
>> replicated.
> I guess the *builtin* option you referred to is simply placing the 
> certificates on the local file system. A few of us had concerns about this 
> approach (in particular, Tom Cammann gave -2 on the review [1]) 
> because it cannot scale beyond a single conductor. Finally, we made a 
> compromise to land this option and use it for testing/debugging only. In 
> 

Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Jeremy Stanley
On 2016-04-21 13:40:15 -0400 (-0400), Doug Hellmann wrote:
[...]
> I didn't realize the tag was being used that way. I agree it's
> completely inappropriate, and I wish someone had asked.
[...]

It's likely seen by some as a big-tent proxy for the old integrated
vs. incubated distinction.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Leadership training dates - please confirm attendance

2016-04-21 Thread Doug Hellmann
Excerpts from Colette Alexander's message of 2016-04-21 08:07:52 -0700:
> >
> >
> > >> Colette Alexander wrote:
> > >>> Hi everyone!
> > >>>
> > >>> Quick summary of where we're at with leadership training: dates are
> > >>> confirmed as available with ZingTrain, and we're finalizing trainers
> > >>> with them right now. *June 28/29th in Ann Arbor, Michigan.*
> > >>>
> > >>> https://etherpad.openstack.org/p/Leadershiptraining
> 
> 
> Hi everyone,
> 
> Just checking in on this - if you're a current or past member of the TC and
> haven't yet signed up on the etherpad [0] and would like to attend
> training, please do so by tomorrow if you can! If you're waiting on travel
> approval or something else before you confirm, but want me to hold you a
> spot, just ping me on IRC and let me know.
> 
> If you'd like to go to leadership training and you're *not* a past or
> current TC member, stay tuned - I'll know about free spots and will send
> out information during the summit next week.
> 
> Thank you!
> 
> -colette/gothicmindfood
> 
> [0] https://etherpad.openstack.org/p/Leadershiptraining

I've been waiting to have a chance to confer with folks in Austin. Are
we under a deadline to get a head-count?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2016-04-21 18:22:53 +0200:
> Michael Krotscheck wrote:
> > So, HPE is seeking sponsors to continue the core party. The reasons are
> > varied - internal sponsors have moved to other projects, the Big Tent
> > has drastically increased the # of cores, and the upcoming summit format
> > change creates quite a bit of uncertainty on everything surrounding the
> > summit.
> >
> > Furthermore, the existence of the Core party has been... contentious.
> > Some believe it's exclusionary, others think it's inappropriate, yet
> > others think it's a good way to thank those of us who agree to be
> > constantly pestered for code reviews.
> >
> > I'm writing this message for two reasons - mostly, to kick off a
> > discussion on whether the party is worthwhile. Secondly, to signal to
> > other organizations that this promotional opportunity is available.
> >
> > Personally, I appreciate being thanked for my work. I do not necessarily
> > need to be thanked in this fashion, however as the past venues have been
> > far more subdued than the Tuesday night events (think cocktail party),
> > it's a welcome mid-week respite for this overwhelmed little introvert. I
> > don't want to see it go, but I will understand if it does.
> >
> > Some numbers, for those who like them (Thanks to Mark Atwood for
> > providing them):
> >
> > Total repos: 1010
> > Total approvers: 1085
> > Repos for official teams: 566
> > OpenStack repo approvers: 717
> > Repos under release management: 90
> > Managed release repo approvers: 281
> 
> I think it's inappropriate because it gives a wrong incentive to become 
> a core reviewer. Core reviewing should just be a duty you sign up to, 
> not necessarily a way to get into a cool party. It was also a bit 
> exclusive of other types of contributions.
> 
> Apparently in Austin the group was reduced to only release:managed 
> repositories. This tag is to describe which repositories the release 
> team is comfortable handling. I think it's inappropriate to reuse it to 
> single out a subgroup of cool folks, and if that became a tradition the 
> release team would face pressure from repositories to get the tag that 
> are totally unrelated to what the tag describes.

I didn't realize the tag was being used that way. I agree it's completely
inappropriate, and I wish someone had asked.

> 
> So.. while I understand the need for calmer parties during the week, I 
> think the general trend is to have fewer parties and more small group 
> dinners. I would be fine with HPE sponsoring more project team dinners 
> instead :)

That fits my vision of the new event, which is less focused on big
glitzy events and more on small socializing opportunities.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Newton Design Summit sessions planning

2016-04-21 Thread Vitaly Kramskikh
Folks,

I'd like to request a workroom session swap.

I planned to lead a discussion of Fuel UI modularization on Wed
11.00-11.40, but at the same time there will be a discussion of handling JS
dependencies in Horizon, which I'd really like to attend.

So I'd like to swap my discussion with the discussion of finalizing the HA
reference architecture with event-based control and fencing, led by V.
Kuklin on Thu 11.00-11.40.

Do you have any objections?

2016-04-14 17:55 GMT+03:00 Alexey Shtokolov :

> Hi, +1 from my side.
>
> ---
> WBR, Alexey Shtokolov
>
> 2016-04-14 16:47 GMT+03:00 Evgeniy L :
>
>> Hi, no problem from my side.
>>
>> On Thu, Apr 14, 2016 at 10:53 AM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> I'd like to request workrooms sessions swap.
>>>
>>> We have a session about Fuel/Ironic integration and I'd like
>>> this session not to overlap with Ironic sessions, so the Ironic
>>> team can attend Fuel sessions. At the same time, we have
>>> a session about the orchestration engine and it would be great to
>>> invite people from Mistral and Heat.
>>>
>>> My suggestion is as follows:
>>>
>>> Wed:
>>> 9:50 Astute -> Mistral/Heat/???
>>> Thu:
>>> 9.00 Fuel/Ironic/Ironic-inspector
>>>
>>> If there are any objections, please let me know asap.
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Fri, Apr 1, 2016 at 9:47 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Dear colleagues,

 Looks like we have the final version of the sessions layout [1]
 for the Austin design summit. We have 3 fishbowls,
 11 workrooms, and a full-day meetup.

 Here you can find some useful information about design
 summit [2]. All session leads must read this page,
 be prepared for their sessions (agenda, slides if needed,
 etherpads for collaborative work, etc.) and follow
 the recommendations given in "At the Design Summit" section.

 Here is Fuel session planning etherpad [3]. Almost all suggested
 topics have been put there. Please put links to slide decks
 and etherpads next to respective sessions. Here is the
 page [4] where other teams publish their planning pads.

 If session leads want for some reason to swap their slots it must
 be requested in this ML thread. If for some reason a session lead
 cannot lead his/her session, it must be announced in this ML thread.

 Fuel sessions are:
 ===
 Fishbowls:
 ===
 Wed:
 15:30-16:10
 16:30:17:10
 17:20-18:00

 ===
 Workrooms:
 ===
 Wed:
 9:00-9:40
 9:50-10:30
 11:00-11:40
 11:50-12:30
 13:50-14:30
 14:40-15:20
 Thu:
 9:00-9:40
 9:50-10:30
 11:00-11:40
 11:50-12:30
 13:30-14:10

 ===
 Meetup:
 ===
 Fri:
 9:00-12:30
 14:00-17:30

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/attachments/20160331/d59d38b7/attachment.pdf
 [2] https://wiki.openstack.org/wiki/Design_Summit
 [3] https://etherpad.openstack.org/p/fuel-newton-summit-planning
 [4] https://wiki.openstack.org/wiki/Design_Summit/Planning

 Thanks.

 Vladimir Kozhukalov

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Monty Taylor

On 04/21/2016 11:03 AM, Tim Bell wrote:



On 21/04/16 17:38, "Hongbin Lu"  wrote:





-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-21-16 10:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
abstraction for all COEs



On Apr 20, 2016, at 2:49 PM, Joshua Harlow wrote:


Thierry Carrez wrote:

Adrian Otto wrote:

This pursuit is a trap. Magnum should focus on making native
container APIs available. We should not wrap APIs with leaky
abstractions. The lowest common denominator of all COEs is a
remarkably low value API that adds considerable complexity to Magnum
that will not strategically advance OpenStack. If we instead focus
our effort on making the COEs work better on OpenStack, that would
be a winning strategy. Support and complement our various COE
ecosystems.


So I'm all for avoiding 'wrap APIs with leaky abstractions' and
'making COEs work better on OpenStack' but I do dislike the part
about COEs (plural) because it is once again the old non-opinionated
problem that we (as a community) suffer from.

Just my 2 cents, but I'd almost rather we pick one COE and integrate
that deeply/tightly with openstack, and yes if this causes some part
of the openstack community to be annoyed, meh, too bad. Sadly I have a
feeling we are hurting ourselves by continuing to try to be everything
and not picking anything (it's a general thing we, as a group, seem to
be good at, lol). I mean I get the reason to just support all the
things, but it feels like we as a community could just pick something,
work together on figuring out how to pick one, using all these bright
leaders we have to help make that possible (and yes this might piss
some people off, too bad). Then work toward making that something
great and move on…


The key issue preventing the selection of only one COE is that this
area is moving very quickly. If we would have decided what to pick at
the time the Magnum idea was created, we would have selected Docker. If
you look at it today, you might pick something else. A few months down
the road, there may be yet another choice that is more compelling. The
fact that a cloud operator can integrate services with OpenStack, and
have the freedom to offer support for a selection of COEs is a form of
insurance against the risk of picking the wrong one. Our compute
service offers a choice of hypervisors, our block storage service
offers a choice of storage hardware drivers, our networking service
allows a choice of network drivers. Magnum is following the same
pattern of choice that has made OpenStack compelling for a very diverse
community. That design consideration was intentional.

Over time, we can focus the majority of our effort on deep integration
with COEs that users select the most. I’m convinced it’s still too
early to bet the farm on just one choice.


If Magnum wants to avoid the risk of picking the wrong COE, that means the risk is 
propagated to all our users. They might pick a COE and explore its complexities. Then 
they find out another COE is more compelling and their integration work is wasted. I 
wonder if we can do better by taking the risk ourselves and providing insurance for our users? I am 
trying to understand the rationale that prevents us from improving the integration between 
COEs and OpenStack. Personally, I don't like to end up with a situation of "this 
is the pain for our users, but we cannot do anything".


We’re running Magnum and have requests from our user communities for 
Kubernetes, Docker Swarm and Mesos. The use cases are significantly different 
and can justify the selection of different technologies. We’re offering 
Kubernetes and Docker Swarm now and adding Mesos. If I was only to offer one, 
they’d build their own at considerable cost to them and the IT department.

Magnum allows me to make them all available under the single umbrella of quota, 
capacity planning, identity and resource lifecycle. As experience is gained, we 
may make a recommendation for those who do not have a strong need but I am 
pleased to be able to offer all of them under the single framework.

Since we’re building on the native APIs for the COEs, the effort on the 
operator side to add new engines is really very small (compared to trying to 
explain to the user that they’re wrong in choosing something different from the 
IT department).

BTW, our users also really appreciate using the native APIs.

Some more details at 
http://superuser.openstack.org/articles/openstack-magnum-on-the-cern-production-cloud
 and we’ll give more under the hood details in a further blog.



Yes!!!

This is 100% where the value of magnum comes from to me. It's about 
end-user choice, and about a sane way for operators to enable that 
end-user choice.


I do not believe anyone in the world wants us to build an abstraction 
layer on top of 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Monty Taylor

On 04/21/2016 11:01 AM, Flavio Percoco wrote:

On 21/04/16 12:26 +0200, Thierry Carrez wrote:

Joshua Harlow wrote:

Thierry Carrez wrote:

Adrian Otto wrote:

This pursuit is a trap. Magnum should focus on making native container
APIs available. We should not wrap APIs with leaky abstractions. The
lowest common denominator of all COEs is a remarkably low value API
that adds considerable complexity to Magnum that will not
strategically advance OpenStack. If we instead focus our effort on
making the COEs work better on OpenStack, that would be a winning
strategy. Support and complement our various COE ecosystems.


So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
COEs work better on OpenStack' but I do dislike the part about COEs
(plural) because it is once again the old non-opinionated problem that
we (as a community) suffer from.

Just my 2 cents, but I'd almost rather we pick one COE and integrate
that deeply/tightly with openstack, and yes if this causes some part of
the openstack community to be annoyed, meh, too bad. Sadly I have a
feeling we are hurting ourselves by continuing to try to be everything
and not picking anything (it's a general thing we, as a group, seem to
be good at, lol). I mean I get the reason to just support all the
things, but it feels like we as a community could just pick something,
work together on figuring out how to pick one, using all these bright
leaders we have to help make that possible (and yes this might piss some
people off, too bad). Then work toward making that something great and
move on...


I see where you come from, but I think this is a bit different from,
say, our choice to support multiple DLMs through Tooz instead of just
picking ZooKeeper.

I like to say that OpenStack solves the infrastructure provider
problem: what should I install over my datacenter to serve the needs
of all my end users. Some want VMs, some want bare metal, some want a
Docker host, some want a Kubernetes cluster, some want a Mesos
cluster. If we explicitly choose to, say, not support Mesos to only
support Kubernetes users, we are no longer a universal solution for
that infrastructure provider. He may deploy OpenStack but then will
have to tell his end users that they can do everything but Mesos,
and/or deploy a Mesos cluster manually on the side if his users end up
deciding they want one.

So while I agree we should get more opinionated on the
implementation/deployer-side options (weeding out less supported
options/drivers and driving more interoperability), I think we need to
support as many infrastructure use cases as we can.

Happy to talk about that with you next week :)



+1 to the above! Magnum's goal (as also mentioned by Kevin in another
email) is similar to what Trove and Sahara do. I do not believe it should
be opinionated. It solves a different set of issues and it sits in the
provisioning plane next to other similar services.


Totally agree.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-04-21 Thread Monty Taylor

On 04/21/2016 10:35 AM, Michael Krotscheck wrote:



On Thu, Apr 21, 2016 at 8:28 AM Monty Taylor > wrote:

On 04/21/2016 10:05 AM, Hayes, Graham wrote:
 > On 21/04/2016 15:39, Michael Krotscheck wrote:
 >> used piecemeal, however trying to maintain code consistency across
 >> multiple different projects is a hard lesson that others have
already
 >> learned for us. Let’s not do that again.

I'd love to chat about js-openstacklib supporting clouds.yaml files for
config.


I see absolutely ZERO reason why we shouldn't just use an existing
configuration format. Except, well, maybe this: https://xkcd.com/927/

The only existing user of a custom config file we have right now is
ironic-webclient, and its code is really more "how do I load it" rather
than "how do I parse it. Updating it should not be difficult, and as we
don't have a release yet, I don't anticipate any backwards compatibility
issues.

Code, for reference:
http://git.openstack.org/cgit/openstack/ironic-webclient/tree/app/js/modules/openstack/configuration.js
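
For reference, the existing clouds.yaml shape a second-language consumer would
need to understand is small; a tiny Python sketch of reading it (the file contents
are only an example, and os-client-config remains the canonical Python reader):

# Tiny illustration of the existing clouds.yaml shape that a second
# (JavaScript) consumer would also need to understand. The contents
# below are only an example; os-client-config is the Python reader.
import yaml

EXAMPLE = """
clouds:
  devstack:
    auth:
      auth_url: http://192.168.122.10:5000/v3
      username: demo
      password: secret
      project_name: demo
    region_name: RegionOne
"""

config = yaml.safe_load(EXAMPLE)
for name, cloud in config['clouds'].items():
    print(name, cloud['auth']['auth_url'])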


Neat! Maybe let's find a time at the summit to sit down and look through 
things. I'm guessing that adding a second language consumer to the 
config will raise a ton of useful questions around documentation, data 
format, etc - but those will likely be super valuable to find, document 
or fix.


Woohoo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-04-21 Thread Monty Taylor

On 04/21/2016 10:32 AM, Michael Krotscheck wrote:

On Thu, Apr 21, 2016 at 8:10 AM Hayes, Graham > wrote:

On 21/04/2016 15:39, Michael Krotscheck wrote:

python-openstackclient does require the creation of a new repo for each
project (unless you are one of the chosen few).

Does this mean you will accept all projects to the library, or just
selected projects?


In a perfect world, we'd accept everyone. I have some questions about
things like "Does devstack fall down if we try to gate on every service
ever", and how to package things so we can meet both the "gimme
everything" and the "I just want one service" users, however those
strike me as solvable problems.


FWIW, our policy in shade for adding new service support has been that 
adding the service to our existing devstack gate jobs does not break 
things, and that all new code must come with functional tests that run 
against a live devstack with the service in question enabled. So far it 
has worked well - we have not had a large land rush of people trying to 
get stuff in, but when they have shown up there has been a clear 
expectation on what it means that has nothing to do with whether or not 
I like the service.


A case in point worth mentioning ... last cycle Yolanda started 
work on adding magnum support to shade - but adding magnum to our 
devstack config at that time increased the failure rate too much because 
the magnum devstack config was downloading atomic images from fedora. So 
- we disabled it again ... and yolanda went and worked with 
diskimage-builder to add support for building atomic images. And then we 
added a job that builds atomic images and uploads them to tarballs.o.o 
and then made sure the magnum devstack could consume those.


Now that all of those things are true, we're about to re-enable magnum 
support because we're confident that having magnum in our gate is a 
solid thing.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Thierry Carrez

Thierry Carrez wrote:

[...]
I think it's inappropriate because it gives a wrong incentive to become
a core reviewer. Core reviewing should just be a duty you sign up to,
not necessarily a way to get into a cool party. It was also a bit
exclusive of other types of contributions.

Apparently in Austin the group was reduced to only release:managed
repositories. This tag is to describe which repositories the release
team is comfortable handling. I think it's inappropriate to reuse it to
single out a subgroup of cool folks, and if that became a tradition the
release team would face pressure from repositories to get the tag that
are totally unrelated to what the tag describes.


A small clarification, since I realize after posting that this might be taken the 
wrong way:


Don't get me wrong, HPE is of course free to invite whoever they want to 
their party :) But since you asked for opinions, my personal wish if it 
continues would be that it is renamed "the HPE VIP party" rather than 
partially tying it to specific rights or tags we happen to use upstream.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Glare and future of artifacts in OpenStack.

2016-04-21 Thread Mikhail Fedosin
Hello!

Today I'm happy to present a demo of a new service called Glare (short for
GLance Artifact REpository), which will be used as a unified catalog of
artifacts in OpenStack. This service appeared in Mitaka in February and it
succeeded the Glance v3 API, which has become the experimental version of the
Glare v0.1 API. Currently we're working on the stable v1 implementation and I
believe it will be available in Newton. Here I present a demo of stable Glare
v1 and the features that are already implemented.

The first video is a description of the Glare service, its purposes, current
status and future development.
https://www.youtube.com/watch?v=XgpEdycRp9Y
Slides are located here:
https://docs.google.com/presentation/d/1WQoBenlp-0vD1t7mpPgQuepDmlPUXq2LOfRYnurZx74/edit#slide=id.p

Then comes the demo. I have 3 videos that cover all the basic features we
have at the moment:
1. Interaction with Glance and existing images. It may be useful for
App-Catalog when you import a new image from it with Glare and use it through
Glance.
https://www.youtube.com/watch?v=flrlCpqwWzI

2. Sorting and filtering with Glare. Since Glare supports artifact
versioning in SemVer, I present how a user can sort and filter their images by
version with special range operators (see the short sketch after this list).
https://www.youtube.com/watch?v=ha3SLFZl_jw

3. Demonstration of the Heat template artifact type and setting custom
locations for artifacts.
https://www.youtube.com/watch?v=EzEOJvKMUzo
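
As a small illustration of what the range operators in item 2 let you express, the
same filtering can be written client-side like this (a sketch only, using the Python
'packaging' library for the version comparison, not Glare's actual API):

# Illustration only: the kind of SemVer range filtering shown in the
# demo, done client-side with the 'packaging' library. This is not
# the Glare API itself.
from packaging.version import Version

artifacts = [
    {'name': 'my_image', 'version': '0.9.0'},
    {'name': 'my_image', 'version': '1.2.0'},
    {'name': 'my_image', 'version': '2.0.1'},
]

low, high = Version('1.0.0'), Version('2.0.0')
selected = [a for a in artifacts if low <= Version(a['version']) < high]
print(selected)   # only the 1.2.0 artifact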

We have a dedicated Glare design session on Wednesday, 27th of April, at 2:40
PM. We will be glad if you can join us there.
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9162?goback=1

Best regards,
Mikhail Fedosin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Flavio Percoco

On 21/04/16 12:26 +0200, Thierry Carrez wrote:

Joshua Harlow wrote:

Thierry Carrez wrote:

Adrian Otto wrote:

This pursuit is a trap. Magnum should focus on making native container
APIs available. We should not wrap APIs with leaky abstractions. The
lowest common denominator of all COEs is a remarkably low value API
that adds considerable complexity to Magnum that will not
strategically advance OpenStack. If we instead focus our effort on
making the COEs work better on OpenStack, that would be a winning
strategy. Support and complement our various COE ecosystems.


So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
COEs work better on OpenStack' but I do dislike the part about COEs
(plural) because it is once again the old non-opinionated problem that
we (as a community) suffer from.

Just my 2 cents, but I'd almost rather we pick one COE and integrate
that deeply/tightly with openstack, and yes if this causes some part of
the openstack community to be annoyed, meh, too bad. Sadly I have a
feeling we are hurting ourselves by continuing to try to be everything
and not picking anything (it's a general thing we, as a group, seem to
be good at, lol). I mean I get the reason to just support all the
things, but it feels like we as a community could just pick something,
work together on figuring out how to pick one, using all these bright
leaders we have to help make that possible (and yes this might piss some
people off, too bad). Then work toward making that something great and
move on...


I see where you come from, but I think this is a bit different from, 
say, our choice to support multiple DLMs through Tooz instead of just 
picking ZooKeeper.


I like to say that OpenStack solves the infrastructure provider 
problem: what should I install over my datacenter to serve the needs 
of all my end users. Some want VMs, some want bare metal, some want a 
Docker host, some want a Kubernetes cluster, some want a Mesos 
cluster. If we explicitly choose to, say, not support Mesos to only 
support Kubernetes users, we are no longer a universal solution for 
that infrastructure provider. He may deploy OpenStack but then will 
have to tell his end users that they can do everything but Mesos, 
and/or deploy a Mesos cluster manually on the side if his users end up 
deciding they want one.


So while I agree we should get more opinionated on the 
implementation/deployer-side options (weeding out less supported 
options/drivers and driving more interoperability), I think we need to 
support as many infrastructure use cases as we can.


Happy to talk about that with you next week :)



+1 to the above! Magnum's goal (as also mentioned by Kevin in another email) is
similar to what Trove and Sahara do. I do not believe it should be opinionated.
It solves a different set of issues and it sits in the provisioning plane next
to other similar services.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Thierry Carrez

Michael Krotscheck wrote:

So, HPE is seeking sponsors to continue the core party. The reasons are
varied - internal sponsors have moved to other projects, the Big Tent
has drastically increased the # of cores, and the upcoming summit format
change creates quite a bit of uncertainty on everything surrounding the
summit.

Furthermore, the existence of the Core party has been... contentious.
Some believe it's exclusionary, others think it's inappropriate, yet
others think it's a good way to thank those of us who agree to be
constantly pestered for code reviews.

I'm writing this message for two reasons - mostly, to kick off a
discussion on whether the party is worthwhile. Secondly, to signal to
other organizations that this promotional opportunity is available.

Personally, I appreciate being thanked for my work. I do not necessarily
need to be thanked in this fashion, however as the past venues have been
far more subdued than the Tuesday night events (think cocktail party),
it's a welcome mid-week respite for this overwhelmed little introvert. I
don't want to see it go, but I will understand if it does.

Some numbers, for those who like them (Thanks to Mark Atwood for
providing them):

Total repos: 1010
Total approvers: 1085
Repos for official teams: 566
OpenStack repo approvers: 717
Repos under release management: 90
Managed release repo approvers: 281


I think it's inappropriate because it gives a wrong incentive to become 
a core reviewer. Core reviewing should just be a duty you sign up to, 
not necessarily a way to get into a cool party. It was also a bit 
exclusive of other types of contributions.


Apparently in Austin the group was reduced to only release:managed 
repositories. This tag is to describe which repositories the release 
team is comfortable handling. I think it's inappropriate to reuse it to 
single out a subgroup of cool folks, and if that became a tradition the 
release team would face pressure from repositories to get the tag that 
are totally unrelated to what the tag describes.


So.. while I understand the need for calmer parties during the week, I 
think the general trend is to have fewer parties and more small group 
dinners. I would be fine with HPE sponsoring more project team dinners 
instead :)


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Raj Patel
There's one more issue with the lowest common denominator API. Every time a
new version of a native client is released, Magnum will be responsible for
making sure the common denominator API works with that version of the
native client. Since the native client will always have more
functions/features than the common denominator API, it means the end users
will have to use the native client for some operations and the Magnum API for
others.

-Raj

Raj Patel
raj.pa...@rackspace.com 



On 4/21/16, 11:03 AM, "Tim Bell"  wrote:

>
>
>On 21/04/16 17:38, "Hongbin Lu"  wrote:
>
>>
>>
>>> -Original Message-
>>> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
>>> Sent: April-21-16 10:32 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>>> abstraction for all COEs
>>> 
>>> 
>>> > On Apr 20, 2016, at 2:49 PM, Joshua Harlow 
>>> wrote:
>>> >
>>> > Thierry Carrez wrote:
>>> >> Adrian Otto wrote:
>>> >>> This pursuit is a trap. Magnum should focus on making native
>>> >>> container APIs available. We should not wrap APIs with leaky
>>> >>> abstractions. The lowest common denominator of all COEs is a
>>> >>> remarkably low value API that adds considerable complexity to Magnum
>>> >>> that will not strategically advance OpenStack. If we instead focus
>>> >>> our effort on making the COEs work better on OpenStack, that would
>>> >>> be a winning strategy. Support and complement our various COE
>>> >>> ecosystems.
>>> >
>>> > So I'm all for avoiding 'wrap APIs with leaky abstractions' and
>>> > 'making COEs work better on OpenStack' but I do dislike the part
>>> about COEs (plural) because it is once again the old non-opinionated
>>> problem that we (as a community) suffer from.
>>> >
>>> > Just my 2 cents, but I'd almost rather we pick one COE and integrate
>>> > that deeply/tightly with openstack, and yes if this causes some part
>>> > of the openstack community to be annoyed, meh, too bad. Sadly I have a
>>> > feeling we are hurting ourselves by continuing to try to be
>>> everything
>>> > and not picking anything (it's a general thing we, as a group, seem
>>> to
>>> > be good at, lol). I mean I get the reason to just support all the
>>> > things, but it feels like we as a community could just pick
>>>something,
>>> > work together on figuring out how to pick one, using all these bright
>>> > leaders we have to help make that possible (and yes this might piss
>>> > some people off, too bad). Then work toward making that something great
>>> > and move on…
>>> 
>>> The key issue preventing the selection of only one COE is that this
>>> area is moving very quickly. If we would have decided what to pick at
>>> the time the Magnum idea was created, we would have selected Docker. If
>>> you look at it today, you might pick something else. A few months down
>>> the road, there may be yet another choice that is more compelling. The
>>> fact that a cloud operator can integrate services with OpenStack, and
>>> have the freedom to offer support for a selection of COEs is a form of
>>> insurance against the risk of picking the wrong one. Our compute
>>> service offers a choice of hypervisors, our block storage service
>>> offers a choice of storage hardware drivers, our networking service
>>> allows a choice of network drivers. Magnum is following the same
>>> pattern of choice that has made OpenStack compelling for a very diverse
>>> community. That design consideration was intentional.
>>> 
>>> Over time, we can focus the majority of our effort on deep integration
>>> with COEs that users select the most. I'm convinced it's still too
>>> early to bet the farm on just one choice.
>>
>>If Magnum wants to avoid the risk of picking the wrong COE, that means the
>>risk is propagated to all our users. They might pick a COE and explore
>>its complexities. Then they find out another COE is more compelling
>>and their integration work is wasted. I wonder if we can do better by
>>taking the risk ourselves and providing insurance for our users? I am trying to
>>understand the rationale that prevents us from improving the integration
>>between COEs and OpenStack. Personally, I don't like to end up with a
>>situation of "this is the pain for our users, but we cannot do
>>anything".
>
>We're running Magnum and have requests from our user communities for
>Kubernetes, Docker Swarm and Mesos. The use cases are significantly
>different and can justify the selection of different technologies. We're
>offering Kubernetes and Docker Swarm now and adding Mesos. If I was only
>to offer one, they'd build their own at considerable cost to them and the
>IT department.
>
>Magnum allows me to make them all available under the single umbrella of
>quota, capacity planning, identity and resource lifecycle. As experience
>is gained, we may make a recommendation for those who do 

Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Morgan Fainberg
On Thu, Apr 21, 2016 at 9:07 AM, Michael Krotscheck 
wrote:

> Hey everyone-
>
> So, HPE is seeking sponsors to continue the core party. The reasons are
> varied - internal sponsors have moved to other projects, the Big Tent has
> drastically increased the # of cores, and the upcoming summit format change
> creates quite a bit of uncertainty on everything surrounding the summit.
>
> Furthermore, the existence of the Core party has been... contentious. Some
> believe it's exclusionary, others think it's inappropriate, yet others
> think it's a good way to thank those of us who agree to be constantly
> pestered for code reviews.
>
> I'm writing this message for two reasons - mostly, to kick off a
> discussion on whether the party is worthwhile. Secondly, to signal to other
> organizations that this promotional opportunity is available.
>
> Personally, I appreciate being thanked for my work. I do not necessarily
> need to be thanked in this fashion, however as the past venues have been
> far more subdued than the Tuesday night events (think cocktail party), it's
> a welcome mid-week respite for this overwhelmed little introvert. I don't
> want to see it go, but I will understand if it does.
>
> Some numbers, for those who like them (Thanks to Mark Atwood for providing
> them):
>
> Total repos: 1010
> Total approvers: 1085
> Repos for official teams: 566
> OpenStack repo approvers: 717
> Repos under release management: 90
> Managed release repo approvers: 281
>
> Michael
>

Personally, I am in the camp that the core party is something that should
not be continued because of its somewhat exclusionary aspects. I don't
have an alternative in mind to fill the space of the party. I often find
that given an open/free night something a bit more subdued (even than the
core party) can occur, it just tends to be in a number of smaller groups.
Having a further respite from the speed/intensity of the summit (an open
night with even less plans!), would be welcome to many of us.

Also consider the split summit proposal and if the core party really has a
place in the new proposed summit/project gathering world.

Cheers,
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]informal meetup during summit

2016-04-21 Thread Zane Bitter

On 20/04/16 13:00, Rico Lin wrote:

Hi team
Let's plan for more informal meetup (relaxing) time! Let all Heaters and anyone
from other projects have fun and a chance for technical discussions together.

After discussing it in the meeting, we will have a pre-meetup-meetup on Friday
morning for a cup of coffee or some food. I'd like to ask if anyone
knows a nice place for this meetup? :)


According to 
https://www.openstack.org/summit/austin-2016/guide-to-austin/ if we line 
up at Franklin's at 7am then we can be eating barbeque by 11 and still 
make it back in time for the afternoon meetup :))



We are also open to another chance for everyone to go out for a nice dinner and
beer. Right now it seems Monday or Friday night could be the best
candidate for this wonderful task. What does everyone think about this? :)


+1. I'll be around on Friday, but I imagine a few people will be leaving 
so Monday is probably better.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Michael Krotscheck
Hey everyone-

So, HPE is seeking sponsors to continue the core party. The reasons are
varied - internal sponsors have moved to other projects, the Big Tent has
drastically increased the # of cores, and the upcoming summit format change
creates quite a bit of uncertainty on everything surrounding the summit.

Furthermore, the existence of the Core party has been... contentious. Some
believe it's exclusionary, others think it's inappropriate, yet others
think it's a good way to thank those of us who agree to be constantly
pestered for code reviews.

I'm writing this message for two reasons - mostly, to kick off a discussion
on whether the party is worthwhile. Secondly, to signal to other
organizations that this promotional opportunity is available.

Personally, I appreciate being thanked for my work. I do not necessarily
need to be thanked in this fashion, however as the past venues have been
far more subdued than the Tuesday night events (think cocktail party), it's
a welcome mid-week respite for this overwhelmed little introvert. I don't
want to see it go, but I will understand if it does.

Some numbers, for those who like them (Thanks to Mark Atwood for providing
them):

Total repos: 1010
Total approvers: 1085
Repos for official teams: 566
OpenStack repo approvers: 717
Repos under release management: 90
Managed release repo approvers: 281

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Tim Bell


On 21/04/16 17:38, "Hongbin Lu"  wrote:

>
>
>> -Original Message-
>> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
>> Sent: April-21-16 10:32 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>> 
>> 
>> > On Apr 20, 2016, at 2:49 PM, Joshua Harlow 
>> wrote:
>> >
>> > Thierry Carrez wrote:
>> >> Adrian Otto wrote:
>> >>> This pursuit is a trap. Magnum should focus on making native
>> >>> container APIs available. We should not wrap APIs with leaky
>> >>> abstractions. The lowest common denominator of all COEs is an
>> >>> remarkably low value API that adds considerable complexity to
>> Magnum
>> >>> that will not strategically advance OpenStack. If we instead focus
>> >>> our effort on making the COEs work better on OpenStack, that would
>> >>> be a winning strategy. Support and compliment our various COE
>> ecosystems.
>> >
>> > So I'm all for avoiding 'wrap APIs with leaky abstractions' and
>> > 'making COEs work better on OpenStack' but I do dislike the part
>> about COEs (plural) because it is once again the old non-opinionated
>> problem that we (as a community) suffer from.
>> >
>> > Just my 2 cents, but I'd almost rather we pick one COE and integrate
>> > that deeply/tightly with openstack, and yes if this causes some part
>> > of the openstack community to be annoyed, meh, to bad. Sadly I have a
>> > feeling we are hurting ourselves by continuing to try to be
>> everything
>> > and not picking anything (it's a general thing we, as a group, seem
>> to
>> > be good at, lol). I mean I get the reason to just support all the
>> > things, but it feels like we as a community could just pick something,
>> > work together on figuring out how to pick one, using all these bright
>> > leaders we have to help make that possible (and yes this might piss
>> > some people off, to bad). Then work toward making that something
>> great
>> > and move on…
>> 
>> The key issue preventing the selection of only one COE is that this
>> area is moving very quickly. If we would have decided what to pick at
>> the time the Magnum idea was created, we would have selected Docker. If
>> you look at it today, you might pick something else. A few months down
>> the road, there may be yet another choice that is more compelling. The
>> fact that a cloud operator can integrate services with OpenStack, and
>> have the freedom to offer support for a selection of COE’s is a form of
>> insurance against the risk of picking the wrong one. Our compute
>> service offers a choice of hypervisors, our block storage service
>> offers a choice of storage hardware drivers, our networking service
>> allows a choice of network drivers. Magnum is following the same
>> pattern of choice that has made OpenStack compelling for a very diverse
>> community. That design consideration was intentional.
>> 
>> Over time, we can focus the majority of our effort on deep integration
>> with COEs that users select the most. I’m convinced it’s still too
>> early to bet the farm on just one choice.
>
>If Magnum wants to avoid the risk of picking the wrong COE, that means the risk 
>is pushed onto all our users. They might pick a COE and explore its 
>complexities, then find out another COE is more compelling and their 
>integration work is wasted. I wonder if we can do better by taking on the risk 
>and providing insurance for our users? I am trying to understand the rationale 
>that prevents us from improving the integration between COEs and OpenStack. 
>Personally, I don't want to end up in a situation where "this is the pain 
>our users feel, but we cannot do anything about it".

We’re running Magnum and have requests from our user communities for 
Kubernetes, Docker Swarm and Mesos. The use cases are significantly different 
and can justify the selection of different technologies. We’re offering 
Kubernetes and Docker Swarm now and adding Mesos. If I were to offer only one, 
they’d build their own at considerable cost to themselves and to the IT department.

Magnum allows me to make them all available under the single umbrella of quota, 
capacity planning, identity and resource lifecycle. As experience is gained, we 
may make a recommendation for those who do not have a strong need but I am 
pleased to be able to offer all of them under the single framework.

Since we’re building on the native APIs for the COEs, the effort on the 
operator side to add new engines is really very small (compared to trying to 
explain to users that they’re wrong to choose something different from what 
the IT department offers).

BTW, our users also really appreciate using the native APIs.

Some more details are at 
http://superuser.openstack.org/articles/openstack-magnum-on-the-cern-production-cloud
 and we’ll give more under-the-hood details in a further blog post.

Tim

>
>> 
>> Adrian
>> 
>> >> I'm with Adrian on that one. 

Re: [openstack-dev] [heat]informal meetup during summit

2016-04-21 Thread David F Flanders
+1 re Mon, though Fri could work as well.

On Thu, Apr 21, 2016 at 3:55 AM, Jay Dobies  wrote:

>
>
> On 4/20/16 1:00 PM, Rico Lin wrote:
>
>> Hi team
>> Let's plan for more informal meetup (relaxation) time, so that all Heaters
>> and folks from any other projects can have fun and a chance for technical
>> discussions together.
>>
>> After discussing it in the meeting, we will have a pre-meetup meetup on
>> Friday morning for a cup of coffee or some food. I would like to ask if
>> anyone knows a nice place for this meetup? :)
>>
>> I am also open to another chance for all of us to go out for a nice dinner
>> and beer. Right now it seems Monday or Friday night could be the best
>> candidates for this wonderful task; what does everyone think? :)
>>
>
> I really like both of these ideas. I haven't met most of you and it'll be
> good to see everyone in a non-Heat light.
>
> I'm available both Monday and Friday nights. I haven't looked at the
> schedule for Monday night to see what else is planned, but that's my vote
> since I suspect people may be leaving on Friday night.
>
>
>>
>> --
>> May The Force of OpenStack Be With You,
>>
>> */Rico Lin
>> Chief OpenStack Technologist, inwinSTACK
>> /*irc: ricolin
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
=
Twitter: @DFFlanders 
Skype: david.flanders
Based in Melbourne, Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [RefStack] Weekly RefStack IRC meetings moved from Monday to Tuesday

2016-04-21 Thread Catherine Cuong Diep

Hello folks.

As we agreed at the last IRC meeting [1], RefStack weekly IRC meetings will
be moved to Tuesday at 19:00 UTC [2].  In addition, the next two meetings
are skipped.  We will resume weekly IRC meetings on Tuesday, May 10, 2016.

[1]
http://eavesdrop.openstack.org/meetings/refstack/2016/refstack.2016-04-18-19.03.log.txt
[2] http://eavesdrop.openstack.org/#RefStack_Development_Meeting


Catherine Diep
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-21 Thread Jeremy Stanley
On 2016-04-21 14:05:17 +1200 (+1200), Robert Collins wrote:
> On 20 April 2016 at 03:00, Jeremy Stanley  wrote:
[...]
> > When we were firming up the constraints idea in Vancouver, if my
> > memory is correct (which it quite often is not these days), part of
> > the long tail Robert suggested was that once constraints usage in
> > the CI is widespread we could consider resolving it from individual
> > requirements lists in participating projects, drop the version
> > specifiers from the global requirements list entirely and stop
> > trying to actively synchronize requirement version ranges in
> > individual projects.
[...]
> 
> I think I suggested that we could remove the *versions* from
> global-requirements. Constraints being in a single place is a
> necessary tool unless (we have atomic-multi-branch commits via zuul ||
> we never depend on two projects agreeing on compatible versions of
> libraries in the CI jobs that run for any given project).
[...]

Yep, that's what I was trying to convey above. We still need to
resolve upper-constraints.txt from something, and there was debate
as to whether it would be effective to generate it from the
unversioned requirements list in global/requirements or whether we
would need to resolve it from an aggregation of the still-versioned
requirements files in participating projects. Also briefly touched
on was the option of possibly dropping version specifiers from
individual project requirements files.
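
For anyone not steeped in the two files, the distinction under discussion is
roughly this (the version numbers below are purely illustrative):

    # global-requirements.txt today: a package name plus an allowed range
    oslo.config>=3.7.0

    # upper-constraints.txt: the single version the gate actually installs
    oslo.config===3.9.0

The "drop the version specifiers" idea would reduce the global list to bare
package names and leave the exact pins to the constraints file.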

> Atomic multi-branch commits in zuul would allow us to fix
> multi-project wedging issues if constraints are federated out to
> multiple trees.
[...]

This still runs counter to the desire to serialize changes proposed
on different branches for the purpose of confirming upgrades from
one branch to another aren't broken by one change and then quietly
fixed by another.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-04-21 Thread Michael Krotscheck
On Thu, Apr 21, 2016 at 8:28 AM Monty Taylor  wrote:

> On 04/21/2016 10:05 AM, Hayes, Graham wrote:
> > On 21/04/2016 15:39, Michael Krotscheck wrote:
> >> used piecemeal, however trying to maintain code consistency across
> >> multiple different projects is a hard lesson that others have already
> >> learned for us. Let’s not do that again.
>
> I'd love to chat about js-openstacklib supporting clouds.yaml files for
> config.
>

I see absolutely ZERO reason why we shouldn't just use an existing
configuration format. Except, well, maybe this: https://xkcd.com/927/

The only existing user of a custom config file we have right now is
ironic-webclient, and its code is really more "how do I load it" rather
than "how do I parse it". Updating it should not be difficult, and as we
don't have a release yet, I don't anticipate any backwards-compatibility
issues.

Code, for reference:
http://git.openstack.org/cgit/openstack/ironic-webclient/tree/app/js/modules/openstack/configuration.js

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Hongbin Lu


> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: April-21-16 10:32 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> 
> > On Apr 20, 2016, at 2:49 PM, Joshua Harlow 
> wrote:
> >
> > Thierry Carrez wrote:
> >> Adrian Otto wrote:
> >>> This pursuit is a trap. Magnum should focus on making native
> >>> container APIs available. We should not wrap APIs with leaky
> >>> abstractions. The lowest common denominator of all COEs is an
> >>> remarkably low value API that adds considerable complexity to
> Magnum
> >>> that will not strategically advance OpenStack. If we instead focus
> >>> our effort on making the COEs work better on OpenStack, that would
> >>> be a winning strategy. Support and compliment our various COE
> ecosystems.
> >
> > So I'm all for avoiding 'wrap APIs with leaky abstractions' and
> > 'making COEs work better on OpenStack' but I do dislike the part
> about COEs (plural) because it is once again the old non-opinionated
> problem that we (as a community) suffer from.
> >
> > Just my 2 cents, but I'd almost rather we pick one COE and integrate
> > that deeply/tightly with openstack, and yes if this causes some part
> > of the openstack community to be annoyed, meh, to bad. Sadly I have a
> > feeling we are hurting ourselves by continuing to try to be
> everything
> > and not picking anything (it's a general thing we, as a group, seem
> to
> > be good at, lol). I mean I get the reason to just support all the
> > things, but it feels like we as a community could just pick something,
> > work together on figuring out how to pick one, using all these bright
> > leaders we have to help make that possible (and yes this might piss
> > some people off, to bad). Then work toward making that something
> great
> > and move on…
> 
> The key issue preventing the selection of only one COE is that this
> area is moving very quickly. If we would have decided what to pick at
> the time the Magnum idea was created, we would have selected Docker. If
> you look at it today, you might pick something else. A few months down
> the road, there may be yet another choice that is more compelling. The
> fact that a cloud operator can integrate services with OpenStack, and
> have the freedom to offer support for a selection of COE’s is a form of
> insurance against the risk of picking the wrong one. Our compute
> service offers a choice of hypervisors, our block storage service
> offers a choice of storage hardware drivers, our networking service
> allows a choice of network drivers. Magnum is following the same
> pattern of choice that has made OpenStack compelling for a very diverse
> community. That design consideration was intentional.
> 
> Over time, we can focus the majority of our effort on deep integration
> with COEs that users select the most. I’m convinced it’s still too
> early to bet the farm on just one choice.

If Magnum wants to avoid the risk of picking the wrong COE, that means the risk 
is pushed onto all our users. They might pick a COE and explore its 
complexities, then find out another COE is more compelling and their 
integration work is wasted. I wonder if we can do better by taking on the risk 
and providing insurance for our users? I am trying to understand the rationale 
that prevents us from improving the integration between COEs and OpenStack. 
Personally, I don't want to end up in a situation where "this is the pain 
our users feel, but we cannot do anything about it".

> 
> Adrian
> 
> >> I'm with Adrian on that one. I've attended a lot of
> >> container-oriented conferences over the past year and my main
> >> takeaway is that this new crowd of potential users is not interested
> >> (at all) in an OpenStack-specific lowest common denominator API for
> >> COEs. They want to take advantage of the cool features in Kubernetes
> >> API or the versatility of Mesos. They want to avoid caring about the
> >> infrastructure provider bit (and not deploy Mesos or Kubernetes
> themselves).
> >>
> >> Let's focus on the infrastructure provider bit -- that is what we do
> >> and what the ecosystem wants us to provide.
> >>
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Keith Bray
The work on the plug-ins can still be done by Magnum core contributors (or
anyone). My point is that the work doesn’t have to be code-coupled to
Magnum except via the plug-in interface, which, like Heat resources,
should be relatively straightforward. Creating the plug-in framework in
this way allows the work of non-Magnum contributors to be leveraged, and
allows re-use of Chef/Ansible/Heat/pick-your-favorite-here tools for infra
configuration and orchestration.

-Keith

On 4/20/16, 6:03 PM, "Hongbin Lu"  wrote:

>
>
>> -Original Message-
>> From: Keith Bray [mailto:keith.b...@rackspace.com]
>> Sent: April-20-16 6:13 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>> 
>> Magnum doesn't have to preclude tight integration for single COEs you
>> speak of.  The heavy lifting of tight integration of the COE in to
>> OpenStack (so that it performs optimally with the infra) can be modular
>> (where the work is performed by plug-in models to Magnum, not performed
>> by Magnum itself. The tight integration can be done by leveraging
>> existing technologies (Heat and/or choose your DevOps tool of choice:
>> Chef/Ansible/etc). This allows interested community members to focus on
>> tight integration of whatever COE they want, focusing specifically on
>
>I agree that tight integration can be achieved by a plugin, but I think
>the key question is who will do the work. If tight integration needs to
>be done, I wonder why it is not part of the Magnum efforts. From my point
>of view, pushing the work out doesn't seem to address the original pain,
>which is that some users don't want to explore the complexities of
>individual COEs.
>
>> the COE integration part, contributing that integration focus to Magnum
>> via plug-ins, without having to actually know much about Magnum, but
>> instead
>> contribute to the COE plug-in using DevOps tools of choice.   Pegging
>> Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
>> etc. project for every COE of interest, all with different ways of
>> kicking off COE management.  Magnum could unify that experience for
>> users and operators, without picking a winner in the COE space -- this
>> is just like Nova not picking a winner between VM flavors or OS types.
>> It just facilitates instantiation and management of things.  Opinion
>> here:  The value of Magnum is in being a light-weight/thin API,
>> providing modular choice and plug-ability to COE provisioning and
>> management, thereby providing operators and users choice of COE
>> instantiation and management (via the bay concept), where each COE can
>> be as tightly or loosely integrated as desired by different plug-ins
>> contributed to perform the COE setup and configurations.  So, Magnum
>> could have two or more swarm plug-in options contributed to the
>> community.. One overlays generic swarm on VMs.
>> The other swarm plug-in could instantiate swarm tightly integrated to
>> neutron, keystone, etc on to bare metal.  Magnum just facilities a
>> plug-in model with thin API to offer choice of COE instantiation and
>> management.
>> The plug-in does the heavy lifting using whatever methods desired by
>> the curator.
>> 
>> That's my $0.02.
>> 
>> -Keith
>> 
>> On 4/20/16, 4:49 PM, "Joshua Harlow"  wrote:
>> 
>> >Thierry Carrez wrote:
>> >> Adrian Otto wrote:
>> >>> This pursuit is a trap. Magnum should focus on making native
>> >>> container APIs available. We should not wrap APIs with leaky
>> >>> abstractions. The lowest common denominator of all COEs is an
>> >>> remarkably low value API that adds considerable complexity to
>> Magnum
>> >>> that will not strategically advance OpenStack. If we instead focus
>> >>> our effort on making the COEs work better on OpenStack, that would
>> >>> be a winning strategy. Support and compliment our various COE
>> ecosystems.
>> >
>> >So I'm all for avoiding 'wrap APIs with leaky abstractions' and
>> 'making
>> >COEs work better on OpenStack' but I do dislike the part about COEs
>> >(plural) because it is once again the old non-opinionated problem that
>> >we (as a community) suffer from.
>> >
>> >Just my 2 cents, but I'd almost rather we pick one COE and integrate
>> >that deeply/tightly with openstack, and yes if this causes some part
>> of
>> >the openstack community to be annoyed, meh, to bad. Sadly I have a
>> >feeling we are hurting ourselves by continuing to try to be everything
>> >and not picking anything (it's a general thing we, as a group, seem to
>> >be good at, lol). I mean I get the reason to just support all the
>> >things, but it feels like we as a community could just pick something,
>> >work together on figuring out how to pick one, using all these bright
>> >leaders we have to help make that possible (and yes this might piss
>> >some people off, to bad). Then work toward making that something great

Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-04-21 Thread Michael Krotscheck
On Thu, Apr 21, 2016 at 8:10 AM Hayes, Graham  wrote:

> On 21/04/2016 15:39, Michael Krotscheck wrote:
>
> python-openstackclient does require the creation of a new repo for each
> project (unless you are one of the chosen few).
>
> Does this mean you will accept all projects to the library, or just
> selected projects?


In a perfect world, we'd accept everyone. I have some questions about
things like "Does devstack fall down if we try to gate on every service
ever", and how to package things so we can meet both the "gimme everything"
and the "I just want one service" users, however those strike me as
solvable problems.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-04-21 Thread Monty Taylor

On 04/21/2016 10:05 AM, Hayes, Graham wrote:

On 21/04/2016 15:39, Michael Krotscheck wrote:
[...]

New: js-openstacklib

This new project will be incubated as a single, gate-tested JavaScript
API client library for the OpenStack API’s. Its audience is software
engineers who wish to build their own user interface using modern
javascript tools. As we cannot predict downstream use cases, special
care will be taken to ensure the project’s release artifacts can
eventually support both browser and server based applications.

Philosophically, we will be taking a page from the
python-openstackclient book, and avoid creating a new project for each
of OpenStack’s services. We can make sure our release artifacts can be
used piecemeal, however trying to maintain code consistency across
multiple different projects is a hard lesson that others have already
learned for us. Let’s not do that again.


I'd love to chat about js-openstacklib supporting clouds.yaml files for 
config.



python-openstackclient does require the creation of a new repo for each
project (unless you are one of the chosen few).

Does this mean you will accept all projects to the library, or just
selected projects?


Not being a person who is doing the work on this, I would counsel 
accepting all projects to the library. We have done that in shade and 
os-client-config and have not had any problems with it.
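
For reference, consuming clouds.yaml on the Python side today looks roughly
like the following (a minimal sketch; 'my-cloud' is just a placeholder for
whatever entry exists in your clouds.yaml), and a js-openstacklib equivalent
would presumably mirror the same lookup-by-name pattern:

    # minimal sketch of the existing Python pattern around clouds.yaml;
    # 'my-cloud' is a placeholder cloud name, not a real deployment
    import shade

    # shade (via os-client-config) reads clouds.yaml and resolves auth,
    # region and endpoint settings for the named cloud
    cloud = shade.openstack_cloud(cloud='my-cloud')

    # after that, API calls need no further credential plumbing
    for server in cloud.list_servers():
        print(server['name'])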



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Keith Bray
+1

From: "Fox, Kevin M" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, April 20, 2016 at 6:14 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

I think Magnum is much closer to Sahara or Trove in its workings. Heat is 
orchestration; that's what the COE does.

Sahara has plugins to deploy various Hadoop-like clusters, get them assembled 
into something useful, and it has a few abstraction APIs like "submit a job to 
the deployed Hadoop cluster queue."

Trove has plugins to deploy various database things, both SQL and NoSQL. It 
has a few abstractions over all of them for cluster maintenance, backups, and 
database and user creation.

If all Magnum did was deploy a COE, you could potentially just use Heat to do 
that.

What I want to do is have Heat hooked in closely enough through Magnum that 
Heat templates can deploy COE templates through Magnum resources. Heat tried 
to do that with a Docker resource driver directly, and it's messy, racy, and 
doesn't work very well. Magnum is in a better position to establish a 
communication channel between Heat and the COE due to its back channel into 
the VMs, bypassing the Neutron network stuff.

Thanks,
Kevin

From: Georgy Okrokvertskhov 
[gokrokvertsk...@mirantis.com]
Sent: Wednesday, April 20, 2016 3:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

If Magnum is focused only on installation and management of COEs, it will be 
unclear how it differs from Heat and other generic orchestration tools.  
It looks like most of the current Magnum functionality is provided by Heat. 
A Magnum focus on deployment will potentially lead to another Heat-like API.
Unless Magnum is really focused on containers, its value will be minimal for 
OpenStack users who already use Heat/Orchestration.


On Wed, Apr 20, 2016 at 3:12 PM, Keith Bray wrote:
Magnum doesn't have to preclude tight integration for single COEs you
speak of.  The heavy lifting of tight integration of the COE in to
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in models to Magnum, not performed by
Magnum itself. The tight integration can be done by leveraging existing
technologies (Heat and/or choose your DevOps tool of choice:
Chef/Ansible/etc). This allows interested community members to focus on
tight integration of whatever COE they want, focusing specifically on the
COE integration part, contributing that integration focus to Magnum via
plug-ins, without having to actually know much about Magnum, but instead
contribute to the COE plug-in using DevOps tools of choice.   Pegging
Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
etc. project for every COE of interest, all with different ways of kicking
off COE management.  Magnum could unify that experience for users and
operators, without picking a winner in the COE space -- this is just like
Nova not picking a winner between VM flavors or OS types.  It just
facilitates instantiation and management of things.  Opinion here:  The
value of Magnum is in being a light-weight/thin API, providing modular
choice and plug-ability to COE provisioning and management, thereby
providing operators and users choice of COE instantiation and management
(via the bay concept), where each COE can be as tightly or loosely
integrated as desired by different plug-ins contributed to perform the COE
setup and configurations.  So, Magnum could have two or more swarm plug-in
options contributed to the community.. One overlays generic swarm on VMs.
The other swarm plug-in could instantiate swarm tightly integrated to
neutron, keystone, etc on to bare metal.  Magnum just facilities a plug-in
model with thin API to offer choice of COE instantiation and management.
The plug-in does the heavy lifting using whatever methods desired by the
curator.

That's my $0.02.

-Keith

On 4/20/16, 4:49 PM, "Joshua Harlow" wrote:

>Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
>>> lowest common denominator of all COEs is an remarkably low value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on 

Re: [openstack-dev] [tc] Leadership training dates - please confirm attendance

2016-04-21 Thread Colette Alexander
>
>
> >> Colette Alexander wrote:
> >>> Hi everyone!
> >>>
> >>> Quick summary of where we're at with leadership training: dates are
> >>> confirmed as available with ZingTrain, and we're finalizing trainers
> >>> with them right now. *June 28/29th in Ann Arbor, Michigan.*
> >>>
> >>> https://etherpad.openstack.org/p/Leadershiptraining


Hi everyone,

Just checking in on this - if you're a current or past member of the TC and
haven't yet signed up on the etherpad [0] and would like to attend
training, please do so by tomorrow if you can! If you're waiting on travel
approval or something else before you confirm, but want me to hold you a
spot, just ping me on IRC and let me know.

If you'd like to go to leadership training and you're *not* a past or
current TC member, stay tuned - I'll know about free spots and will send
out information during the summit next week.

Thank you!

-colette/gothicmindfood

[0] https://etherpad.openstack.org/p/Leadershiptraining
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-04-21 Thread Hayes, Graham
On 21/04/2016 15:39, Michael Krotscheck wrote:
[...]
> New: js-openstacklib
>
> This new project will be incubated as a single, gate-tested JavaScript
> API client library for the OpenStack API’s. Its audience is software
> engineers who wish to build their own user interface using modern
> javascript tools. As we cannot predict downstream use cases, special
> care will be taken to ensure the project’s release artifacts can
> eventually support both browser and server based applications.
>
> Philosophically, we will be taking a page from the
> python-openstackclient book, and avoid creating a new project for each
> of OpenStack’s services. We can make sure our release artifacts can be
> used piecemeal, however trying to maintain code consistency across
> multiple different projects is a hard lesson that others have already
> learned for us. Let’s not do that again.

python-openstackclient does require the creation of a new repo for each
project (unless you are one of the chosen few).

Does this mean you will accept all projects to the library, or just
selected projects?

- Graham


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] IRC meetings for 28 Apr and 5 May SKIPPED

2016-04-21 Thread Vitaly Gridnev
Hello folks.

As we agreed at the last meeting [0], the next two meetings are skipped (28
Apr and 5 May)

[0]
http://eavesdrop.openstack.org/meetings/sahara/2016/sahara.2016-04-21-14.00.log.html#l-126

-- 
Best Regards,
Vitaly Gridnev
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Hongbin Lu


> -Original Message-
> From: Steve Gordon [mailto:sgor...@redhat.com]
> Sent: April-21-16 9:39 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> - Original Message -
> > From: "Hongbin Lu" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > > -Original Message-
> > > From: Keith Bray [mailto:keith.b...@rackspace.com]
> > > Sent: April-20-16 6:13 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build
> > > unified abstraction for all COEs
> > >
> > > Magnum doesn't have to preclude tight integration for single COEs
> > > you speak of.  The heavy lifting of tight integration of the COE in
> > > to OpenStack (so that it performs optimally with the infra) can be
> > > modular (where the work is performed by plug-in models to Magnum,
> > > not performed by Magnum itself. The tight integration can be done
> by
> > > leveraging existing technologies (Heat and/or choose your DevOps
> tool of choice:
> > > Chef/Ansible/etc). This allows interested community members to
> focus
> > > on tight integration of whatever COE they want, focusing
> > > specifically on
> >
> > I agree that tight integration can be achieved by a plugin, but I
> > think the key question is who will do the work. If tight integration
> > needs to be done, I wonder why it is not part of the Magnum efforts.
> 
> Why does the integration belong in Magnum though? To me it belongs in
> the COEs themselves (e.g. their in-tree network/storage plugins) such
> that someone can leverage them regardless of their choices regarding
> COE deployment tooling (and yes that means Magnum should be able to
> leverage them too)? I guess the issue is that in the above conversation
> we are overloading the term "integration" which can be taken to mean
> different things...

I can clarify. I mean introducing abstractions that allow tight integration 
between COEs and OpenStack. For example,

$ magnum container-create --volume= --net= ...

I agree with you that such integration should be supported by the COEs 
themselves. If it is, Magnum will leverage it (anyone can leverage it as well, 
regardless of whether they are using Magnum or not). If it isn't (the reality 
today), Magnum could add support for that via its abstraction layer. For your 
question about why such integration belongs in Magnum, my answer is that the 
work needs to be done in one place so that everyone can leverage it instead of 
re-inventing their own solutions. Magnum is the OpenStack container service, 
so it is natural for Magnum to take it on, IMHO.
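
To make the shape of that abstraction layer concrete, here is a purely
illustrative sketch (none of these class or method names exist in Magnum
today; they are hypothetical):

    # hypothetical sketch of a COE abstraction layer; class and method
    # names are invented for illustration, not part of any Magnum API
    class COEDriver(object):
        """Translate a generic container request into native COE calls."""

        def create_container(self, image, volume=None, net=None):
            raise NotImplementedError()

    class SwarmDriver(COEDriver):
        def create_container(self, image, volume=None, net=None):
            # would call the native Docker Swarm API, wiring the requested
            # Cinder volume and Neutron network into the container spec
            pass

    class KubernetesDriver(COEDriver):
        def create_container(self, image, volume=None, net=None):
            # would build the equivalent Kubernetes pod spec instead
            pass

Whether maintaining such a layer is worth the complexity is exactly the
trade-off being debated in this thread.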

> 
> -Steve
> 
> > From my point of view,
> > pushing the work out doesn't seem to address the original pain, which
> > is some users don't want to explore the complexities of individual
> COEs.
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] One plugin - one Launchpad project

2016-04-21 Thread Neil Jerram
On 19/04/16 16:52, Irina Povolotskaya wrote:
> Hi to everyone,
>
> As you possibly know (at least those dev teams working on their Fuel
> plugins do), we have a fuel-plugins Launchpad project [1] which serves as
> an all-in-one entry point for filing bugs related to plugin-specific
> problems.
>
> Nevertheless, this single project is a bad idea in terms of providing
> granularity and visibility for each plugin:
> - it's not possible to create milestones unique to every plugin that
> would coincide with the plugin's version (which is specified in the
> metadata.yaml file)
> - it's not possible to provide every dev team with exclusive rights for
> managing importance, milestones, etc.
>
> Therefore, I would like to propose the following:
> - if you have your own Fuel plugin, create a separate LP project for it
> (e.g. [2] [3]) and set up all the corresponding groups for managing the
> release cycle of your plugin
> - if you have issues with the Fuel plugin framework itself, please
> consider filing bugs in the fuel project [4] as usual.
>
> I would appreciate getting feedback on this idea.
> If it seems fine, then I'll follow up by adding instructions to our
> SDK [5] and the list of already existing LP projects.

I agree that it is better to have a project for each plugin.  For the 
Calico plugin, we actually already have this [1].

Thanks,
Neil

[1] https://launchpad.net/fuel-plugin-calico


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Fuel 9.0 is released

2016-04-21 Thread Vladimir Kozhukalov
Dear all,

I am glad to announce the Mitaka release of Fuel (a.k.a. Fuel 9.0) - a
deployment and lifecycle management tool for OpenStack.

This release introduces support for OpenStack Mitaka and adds a number of
new features and enhancements.

Some highlights:
- Support of lifecycle management operations (a.k.a. 'day 2' operations).
The cluster settings tab in the UI is now unlocked after deployment
(cluster configuration can be changed). [1]
- Support of custom deployment graphs. The default deployment graph
can be overridden either by plugins or by a user. [2]
- Support of DPDK capabilities [3]
- Support of Huge Pages capabilities [4]
- Support of CPU pinning (NUMA) capabilities [5]
- Support of QoS capabilities [6]
- Support of SR-IOV capabilities [7]
- Support of multipath devices [8]
- Support of deployment using UCA packages [9]

Please be aware that it is not intended for production use and there are
still about 90 known High-priority bugs [10]. We are planning to address
them all in the Fuel 9.0.1 release, which is scheduled for late June [11].

We are looking forward to your feedback.
Great work, Fuel team. Thanks to everyone.

[1]
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/unlock-settings-tab.rst

[2]
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/execute-custom-graph.rst

[3]
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/support-dpdk.rst
[4]
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/support-hugepages.rst
[5]
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/support-numa-cpu-pinning.rst
[6]
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/support-qos.rst
[7]
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/support-sriov.rst
[8]
https://github.com/openstack/fuel-specs/blob/master/specs/9.0/fc-multipath-disks.rst
[9] https://blueprints.launchpad.net/fuel/+spec/deploy-with-uca-packages

[10] https://goo.gl/qXfrhQ

[11] https://wiki.openstack.org/wiki/Fuel/9.0_Release_Schedule


Learn more about Fuel:
https://wiki.openstack.org/wiki/Fuel

How we work:
https://wiki.openstack.org/wiki/Fuel/How_to_contribute

Specs for features in 9.0 and other Fuel releases:
http://specs.openstack.org/openstack/fuel-specs/

ISO image:
http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-9.0.iso.torrent

Test results of the release build:
https://ci.fuel-infra.org/job/9.0-community.test_all/61/

Documentation:
http://docs.openstack.org/developer/fuel-docs/


RPM packages:
http://mirror.fuel-infra.org/mos-repos/centos/mos9.0-centos7/

DEB packages:
http://mirror.fuel-infra.org/mos-repos/ubuntu/9.0/

Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-04-21 Thread Marco Fargetta
On Thu, Apr 21, 2016 at 10:22:46AM -0400, John Dennis wrote:
> On 04/18/2016 12:34 PM, Martin Millnert wrote:
> >(** ECP is a new feature, not supported by all IdP's, that at (second)
> >best requires reconfiguration of core authentication services at each
> >customer, and at worst requires customers to change IdP software
> >completely. This is a varying degree of showstopper for various
> >customers.)
> 
> The majority of work to support ECP is in the SP, not the IdP. In fact IdP's
> are mostly agnostic with respect to ECP, there is nothing ECP specific an
> IdP must implement other than supporting the SOAP binding for the
> SingleSignOnService which is trivial. I've yet to encounter an IdP that does
> not support the SOAP binding.
> 
> What IdP are you utilizing which is incapable of receiving an AuthnRequest
> via the SOAP binding?
> 

I would disagree on this. Last year in EduGAIN, the European
inter-federation comprising hundreds of IdPs, only a very small number
were supporting ECP. I checked the metadata.

Additionally, some IdP implementations do not support ECP out of the
box, and for those that do, it requires a different authentication
mechanism from the one used for the redirect or POST profiles, so many
IdPs do not enable it.

The work to support ECP is equally distributed between the IdP and the SP,
although ECP is getting more common in IdPs with recent releases of IdP
software such as Shibboleth IdP v3.

Marco



> 
> -- 
> John
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-21 Thread Boden Russell
On 4/20/16 3:29 PM, Doug Hellmann wrote:
> Yes, please, let's try to make that work and contribute upstream if we
> need minor modifications, before we create something new.

We can leverage the 'retrying' module (already in global requirements).
It lacks a few things we need, but those can be implemented using its
existing "hooks" today, or by working with the module owner(s) to push a
few changes that we need (the latter probably provides the "greatest good").
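
For reference, a minimal sketch of the kind of usage we would be
standardizing on with 'retrying' (the predicate and the numbers below are
illustrative placeholders, not a proposed API):

    # minimal sketch using the existing 'retrying' decorator; the exception
    # types and timing values are illustrative only
    from retrying import retry

    def _is_transient(exc):
        # hypothetical predicate: only retry errors we consider transient
        return isinstance(exc, (IOError, OSError))

    @retry(retry_on_exception=_is_transient,
           wait_exponential_multiplier=1000,   # exponential backoff, in ms
           wait_exponential_max=10000,         # cap each wait at 10 seconds
           stop_max_attempt_number=5)          # give up after 5 attempts
    def call_flaky_backend():
        # the wrapped operation would be the real nova/neutron call
        pass

Anything 'retrying' cannot express directly is the part worth pushing
upstream rather than re-implementing.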

Assuming we'll leverage 'retrying', I was thinking the initial goals
here are:
(a) Ensure 'retrying' supports the behaviors we need for our usages in
neutron + nova (see [1] - [5] on my initial note) today. Implementation
details TBD.
(b) Implement a "Backing off RPC client" in oslo, inspired by [1].
(c) Update nova + neutron to use the "common implementation(s)" rather
than 1-offs.

This sounds fun and I'm happy to take it on. However, I probably won't
make much progress until after the summit for obvious reasons. I'll plan
to lead with code; if an RFE/spec/other is needed, please let me know.

Additional comments welcomed.


Thanks

[1] https://review.openstack.org/#/c/280595

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Launchpad cleanups for Newton

2016-04-21 Thread Steven Hardy
Hi all,

So I've been attempting to beat our launchpad project into shape today, and
have made a few changes with a view to making the tool more useful for
tracking things during the Newton cycle:

1. New "TripleO Drivers" team

I created https://launchpad.net/~tripleo-drivers which is a restricted
team, and I added all the currently active tripleo-core members to it.

I also switched over the ownership/driver of https://launchpad.net/tripleo
to this team, so we can limit administering some things like series/milestone
assignments to those on the core team.

Let me know if I have missed anyone (any member of the team should also be
able to add folks if needed)

2. Series added for liberty, mitaka and newton

I created new series for liberty, mitaka and newton, see 
https://launchpad.net/tripleo/+series

I only created milestone targets for Newton (given that the other releases
already happened), and these are set to expected dates according to the
published Newton release schedule.  Around the time of each milestone,
we'll agree to close the milestone and publish a release for each
component, anything not landed will be bumped to the next milestone.

3. Trunk remains, with "ongoing" target

I left the existing "trunk" series in place, and added an "ongoing" target
- we can use this to track tasks/bugs unrelated to the release cycle (such
  as CI issues or enhancements).

4. Any pre-mitaka Fix Committed bugs marked Fix Released

I ran a script (process_bugs.py from release-tools, mildly hacked) over
the existing bugs, and anything that was marked Fix Committed before the
date of the Mitaka release has been marked Fix Released.

Note I didn't make any attempt to retrospectively fix up series
assignments, I just wanted to clear down the large number of Open Fix
Committed bugs (we had nearly 400 open bugs!)

5 - Any pre-liberty New bugs marked Incomplete with a comment

We had a bunch of really old bugs, which were still Triaged after years of
no activity.  So I posted a comment saying it refers to an old eol version
of TripleO, marked the bug incomplete and requested the reporter to re-open
if the bug is still valid.

Hopefully this last one won't be perceived as too draconian, but I viewed a
subset and they all appeared to be irrelevant to the current codebase.

6 - Purged old obsolete blueprints

https://blueprints.launchpad.net/tripleo had a lot of old stuff that was
either obsolete, superseded or actually implemented, so I tried to clear
down these so we can get a better view of what's actually in progress or on
the roadmap.

7 - New spec-lite tag

https://bugs.launchpad.net/tripleo/+bugs?field.tag=spec-lite

We agreed a while back that we'd adopt the "Spec Lite" process whereby
folks may raise a bug with a description of a feature instead of a
blueprint with a spec.  Please tag any bugs raised for features with
"spec-lite", and mark them as wishlist items assigned to the Newton
series.

Going forward, can I ask (please!) that you assign any bugs or blueprints
to the newton series, and that you try to tag all commits with either a bug
or blueprint reference where appropriate, so we get a better view of
progress as we go through the cycle.

Any questions or comments, please let me know, thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][oslo] Common backoff & timeout utils

2016-04-21 Thread Ben Nemec
FWIW, we were using retrying in oslo.concurrency at one point:
https://review.openstack.org/#/c/130872

It looks like that got removed somewhere in the move to fasteners though.

On 04/20/2016 04:29 PM, Doug Hellmann wrote:
> Excerpts from Chris Dent's message of 2016-04-20 22:16:10 +0100:
>>
>> Will the already existing retrying[1] do the job or is it missing
>> features (the namespacing thing seems like it could be an issue)
>> or perhaps too generic?
>>
>> [1] https://pypi.python.org/pypi/retrying
>>
> 
> Yes, please, let's try to make that work and contribute upstream if we
> need minor modifications, before we create something new.
> 
> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] JavaScript RoadMap for OpenStack Newton

2016-04-21 Thread Michael Krotscheck
This post contains the current working draft of the JavaScript roadmap
which Beth Elwell and I will be working on in Newton. It's a big list,
and we need help. Overall themes for this cycle are Consistency,
Interoperability, and engaging with the JavaScript community at large. Our
end goal is to build the foundations of a JavaScript ecosystem, which
permits the creation of entirely custom interfaces.

Note: We are not trying to replace Horizon, we are aiming to help those
downstream who need something more than “Vanilla OpenStack”. If you'd like
to have a discussion on this point, I'd be happy to have that under a
different subject.

Continue Development: ironic-webclient

The ironic-webclient will release its first version during the Newton
cycle. We’re awfully close to having the basic set of features supported,
and with some excellent feedback from the OpenStack UX team, will also have
a sexy new user interface that’s currently in the review queue. Once this
work is complete, we will begin extracting common components into a new
project, named…

New: js-openstacklib

This new project will be incubated as a single, gate-tested JavaScript API
client library for the OpenStack API’s. Its audience is software engineers
who wish to build their own user interface using modern javascript tools.
As we cannot predict downstream use cases, special care will be taken to
ensure the project’s release artifacts can eventually support both browser
and server based applications.

Philosophically, we will be taking a page from the python-openstackclient
book, and avoid creating a new project for each of OpenStack’s services. We
can make sure our release artifacts can be used piecemeal, however trying
to maintain code consistency across multiple different projects is a hard
lesson that others have already learned for us. Let’s not do that again.

New: js-generator-openstack

Yeoman is JavaScript’s equivalent of cookiecutter, providing a scaffolding
engine which can rapidly set up, and maintain, new projects. Creating and
maintaining a yeoman generator will be a critical part of engaging with the
JavaScript community, and can drive adoption and consistency across
OpenStack as well. Furthermore, it is sophisticated enough that it could
also support many things that exist in today’s Python toolchain, such as
dependency management, and common tooling maintenance.

Development of the yeoman generator will draw in lessons learned from
OpenStack’s current UI Projects, including Fuel, StoryBoard, Ironic,
Horizon, Refstack, and Health Dashboard, and attempt to converge on common
practices across projects.

New (exploration): js-npm-publish-xstatic

This project aims to bridge the gap between our JavaScript projects, and
Horizon’s measured migration to AngularJS. We don’t believe in duplicating
work, so if it is feasible to publish our libraries in a way that Horizon
may consume (via the existing xstatic toolchain), then we certainly should
pursue that. The notable difference is that our own projects, such as
js-openstacklib, don’t have to go through the repackaging step that our
current xstatic packages do; thus, if it is possible for us to publish to
npm and to xstatic/pypi at the same time, that would be best.

New: Xenial Build Nodes

As of two weeks ago, OpenStack’s Infrastructure is running a version of
Node.js and npm more recent than what is available on Trusty LTS.
Ultimately, we would like to converge this version on Node4 LTS, the
release version maintained by the Node foundation. The easiest way to do
this is to simply piggyback on Infra’s impending adoption of Xenial build
nodes, though some work is required to ensure this transition goes smoothly.

Maintain: eslint-config-openstack

eslint has updated to version 2.x, and no more rule bugfixes are being
landed in 1.x. eslint-config-openstack will follow in kind, updating itself
to use eslint 2.x. We will releases this version as eslint-config-openstack
v2.0.0, and continue to track the eslint version numbers from there.
Downstream projects are encouraged to adopt this, as it is unlikely that
automated dependency updates for JavaScript projects will land this cycle.

Maintain: NPM Mirrors

We are currently synchronizing all npm packages to our AFS master disks,
which should be the final step in getting functional npm mirrors. Some
minor tweaking will be required to make them functional, and they will need
to be maintained throughout the next cycle. Issues raised in the
#openstack-infra channel will be promptly addressed.

This includes work on both the js-openstack-registry-hooks project and the
js-afs-blob-store project, which are two custom components we use to drive
our mirrors.

Maintain: oslo_middleware.cors

CORS landed in mitaka, and we will continue to maintain it going forward.
In the Newton cycle, we have the following new features planned:

- Automatic allowed_origin detection from Keystone (zero-config).
- More consistent use of set_defaults.
- 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Adrian Otto

> On Apr 20, 2016, at 2:49 PM, Joshua Harlow  wrote:
> 
> Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
>>> lowest common denominator of all COEs is an remarkably low value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on OpenStack, that would be a winning
>>> strategy. Support and compliment our various COE ecosystems.
> 
> So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
> COEs work better on OpenStack' but I do dislike the part about COEs (plural) 
> because it is once again the old non-opinionated problem that we (as a 
> community) suffer from.
> 
> Just my 2 cents, but I'd almost rather we pick one COE and integrate that 
> deeply/tightly with openstack, and yes if this causes some part of the 
> openstack community to be annoyed, meh, to bad. Sadly I have a feeling we are 
> hurting ourselves by continuing to try to be everything and not picking 
> anything (it's a general thing we, as a group, seem to be good at, lol). I 
> mean I get the reason to just support all the things, but it feels like we as 
> a community could just pick something, work together on figuring out how to 
> pick one, using all these bright leaders we have to help make that possible 
> (and yes this might piss some people off, to bad). Then work toward making 
> that something great and move on…

The key issue preventing the selection of only one COE is that this area is 
moving very quickly. If we would have decided what to pick at the time the 
Magnum idea was created, we would have selected Docker. If you look at it 
today, you might pick something else. A few months down the road, there may be 
yet another choice that is more compelling. The fact that a cloud operator can 
integrate services with OpenStack, and have the freedom to offer support for a 
selection of COE’s is a form of insurance against the risk of picking the wrong 
one. Our compute service offers a choice of hypervisors, our block storage 
service offers a choice of storage hardware drivers, our networking service 
allows a choice of network drivers. Magnum is following the same pattern of 
choice that has made OpenStack compelling for a very diverse community. That 
design consideration was intentional.

Over time, we can focus the majority of our effort on deep integration with 
COEs that users select the most. I’m convinced it’s still too early to bet the 
farm on just one choice.

Adrian

>> I'm with Adrian on that one. I've attended a lot of container-oriented
>> conferences over the past year and my main takeaway is that this new
>> crowd of potential users is not interested (at all) in an
>> OpenStack-specific lowest common denominator API for COEs. They want to
>> take advantage of the cool features in Kubernetes API or the versatility
>> of Mesos. They want to avoid caring about the infrastructure provider
>> bit (and not deploy Mesos or Kubernetes themselves).
>> 
>> Let's focus on the infrastructure provider bit -- that is what we do and
>> what the ecosystem wants us to provide.
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-04-21 Thread John Dennis

On 04/18/2016 12:34 PM, Martin Millnert wrote:

(** ECP is a new feature, not supported by all IdPs, that at (second)
best requires reconfiguration of core authentication services at each
customer, and at worst requires customers to change IdP software
completely. This is a showstopper to varying degrees for different
customers.)


The majority of the work to support ECP is in the SP, not the IdP. In fact, 
IdPs are mostly agnostic with respect to ECP; there is nothing ECP-specific 
an IdP must implement other than supporting the SOAP binding for the 
SingleSignOnService, which is trivial. I've yet to encounter an IdP that 
does not support the SOAP binding.
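
For illustration, the IdP side usually amounts to advertising one extra
SingleSignOnService endpoint with the SOAP binding in its metadata. The
fragment below is a hand-written sketch; the Location URL is made up and
will differ per deployment:

  <md:IDPSSODescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
      protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <!-- Browser-facing bindings (Redirect/POST) would also be listed here. -->
    <md:SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP"
        Location="https://idp.example.com/idp/profile/SAML2/SOAP/ECP"/>
  </md:IDPSSODescriptor>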


What IdP are you utilizing which is incapable of receiving an 
AuthnRequest via the SOAP binding?



--
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [networking-sfc] A standards-compliant SFC API

2016-04-21 Thread Vikram Choudhary
Hi Igor,

Thanks for understanding. Let's continue the discussion over the submitted
spec.

Thanks
Vikram

On Thu, Apr 21, 2016 at 3:04 PM, Duarte Cardoso, Igor <
igor.duarte.card...@intel.com> wrote:

> Hi Vikram,
>
>
>
> Thanks for the response. I’m happy to provide enhancements instead of
> building the API from scratch; the semantics may change considerably,
> though, resulting in what’s essentially a new API, but let’s see. I invite
> you to read the spec [1] thoroughly. Let’s continue there so we can better
> scope the discussion.
>
>
>
> [1] https://review.openstack.org/#/c/308453/
>
>
>
> Best regards,
>
> Igor.
>
>
>
> *From:* Vikram Choudhary [mailto:viks...@gmail.com]
> *Sent:* Thursday, April 21, 2016 3:39 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [neutron] [networking-sfc] A
> standards-compliant SFC API
>
>
>
> From a quick glance over the proposal, it seems networking-sfc already
> does the same. In addition, networking-sfc is successfully integrated with
> ONOS [1] and planned for ODL [2], OVN [3] & Tacker [4] (without any issues
> with the existing APIs so far). If we feel the existing networking-sfc
> APIs have issues, then let's enhance them rather than start a fresh
> effort from scratch.
>
>
>
> Let's discuss the proposal further on the submitted spec.
>
>
>
> [1]
> https://github.com/openstack/networking-onos/blob/master/doc/source/devref/sfc_driver.rst
>
> [2] https://review.openstack.org/#/c/300898/
>
> [3]
> https://blueprints.launchpad.net/networking-sfc/+spec/networking-sfc-ovn-driver
>
> [4]
> https://blueprints.launchpad.net/networking-sfc/+spec/tacker-networking-sfc-driver
>
>
>
>
>
> On Thu, Apr 21, 2016 at 1:24 AM, Duarte Cardoso, Igor <
> igor.duarte.card...@intel.com> wrote:
>
> Thanks for the feedback Armando,
>
>
>
> Adding missing tag.
>
>
>
> Best regards,
>
> Igor.
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Wednesday, April 20, 2016 6:03 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [neutron][sfc] A standards-compliant SFC
> API
>
>
>
>
>
> On 20 April 2016 at 09:31, Duarte Cardoso, Igor <
> igor.duarte.card...@intel.com> wrote:
>
> Dear OpenStack Community,
>
>
>
> We've been investigating options in/around OpenStack for supporting
> Service Function Chaining. The networking-sfc project has made significant
> progress in this space, and we see lots of value in what has been
> completed. However, when we looked at the related IETF specs on SFC we
> concluded that there would be value in further developing an SFC API and
> related classification functionality to enhance the alignment between the
> work in the OpenStack community and the standards work. We would like to
> propose the SFC part as a potential networking-sfc v2 API, but are open to
> other options too based on your feedback.
>
>
>
> I have submitted a spec to the neutron-specs repo [1], where you can check
> what our initial thoughts for this new API are, and provide your feedback
> or questions regarding the same.
>
>
>
> Your thoughts on this are deeply appreciated. We are looking forward to
> having further discussions with everyone interested in giving feedback or
> establishing collaborations during the OpenStack Summit in Austin.
>
>
>
> [1] https://review.openstack.org/#/c/308453
>
>
>
> Thanks for reaching out.
>
>
>
> The networking-sfc initiative so far has been pretty autonomous. The
> project has its own launchpad project [1] and its own docs to document APIs
> and proposals [2]. During the long journey that Neutron has been through,
> we have been adjusting how to manage the project in order to strike a good
> balance between development agility, product stability and community needs.
> We're always looking to improve that balance, and this means that
> how we track certain initiatives may evolve in the future. For now, it's
> probably best to target the mailing list with tag [networking-sfc] (in
> addition to neutron), as well as the project noted below.
>
>
>
> [1] https://launchpad.net/networking-sfc
>
> [2] http://docs.openstack.org/developer/networking-sfc/
>
>
>
>
>
> Thank you,
>
> Igor & the Intel OpenStack networking team.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> 

Re: [openstack-dev] [nova] api-ref content verification phase doc push

2016-04-21 Thread Sean Dague
On 04/21/2016 09:54 AM, Matt Riedemann wrote:
> How about an etherpad where they are listed and people can assign
> themselves per file? I guess that gets messy when you have some changes
> doing step 1 on multiple files...

The giant etherpads become a mess for tracking the source of truth, and
they get out of date because nothing enforces that they stay in sync with
the code itself.

We rejected the giant tracking etherpad this time for that reason.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

