Re: [openstack-dev] [nova] Rocky spec review day

2018-03-21 Thread melanie witt

On Tue, 20 Mar 2018 16:47:58 -0700, Melanie Witt wrote:

The past several cycles, we've had a spec review day in the cycle where
reviewers focus on specs and iterating quickly with spec authors for the
day. Spec freeze is April 19 so I wanted to get some input from all of
you about what day would work best for a spec review day.

I was thinking that 2-3 weeks ahead of spec freeze would be appropriate,
so that would be March 27 (next week) or April 3 if we do it on a Tuesday.


Thanks to all who replied on the thread. There was consensus that 
earlier is better, so let's do the spec review day next week: Tuesday, 
March 27.


Best,
-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Does neutron-server support the main backup redundancy?

2018-03-21 Thread Frank Wang
Thanks for your response. Another question: do the compute nodes or agents 
know how many neutron-servers are running? I mean, if one server fails, will 
they automatically connect to the other servers?

Thanks,


At 2018-03-21 18:14:47, "Miguel Angel Ajo Pelayo"  wrote:

You can run as many as you want; generally an HAProxy instance is used in front 
of them to balance load across the neutron servers.


Also, keep in mind that the DB backend is a single MySQL instance; you can also 
distribute that with Galera.


That is the configuration you will get by default when you deploy in HA with 
RDO/TripleO or OSP/Director.



On Wed, Mar 21, 2018 at 3:34 AM Kevin Benton  wrote:

You can run as many neutron server processes as you want in an active/active 
setup. 


On Tue, Mar 20, 2018, 18:35 Frank Wang  wrote:

Hi All,
 As far as I know, neutron-server can only run as a single node. In order to 
improve the reliability of the system, does it support active/backup or 
active/active redundancy? Any comment would be appreciated.

Thanks,





 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding "not docs" banner to specs website?

2018-03-21 Thread Rochelle Grober
It could be *really* useful if you could include the date (month/year would be 
good enough) of the last significant patch (not including the reformat to 
openstackdocstheme). That could give folks a clear marker of how far in the 
past the spec was last touched. It might even prompt some to check whether 
there are newer, conflicting, or enhancing specs or docs to reference.

--Rocky

> 
Doug Hellmann wrote:
> 
> Excerpts from Jim Rollenhagen's message of 2018-03-19 19:06:38 +:
> > On Mon, Mar 19, 2018 at 3:46 PM, Jeremy Stanley 
> wrote:
> >
> > > On 2018-03-19 14:57:58 + (+), Jim Rollenhagen wrote:
> > > [...]
> > > > What do folks think about a banner at the top of the specs website
> > > > (or each individual spec) that points this out? I'm happy to do
> > > > the work if we agree it's a good thing to do.
> > > [...]
> > >
> > > Sounds good in principle, but the execution may take a bit of work.
> > > Specs sites are independently generated Sphinx documents stored in
> > > different repositories managed by different teams, and don't
> > > necessarily share a common theme or configuration.
> >
> >
> > Huh, I had totally thought there was a theme for the specs site that
> > most/all projects use. I may try to accomplish this anyway, but will
> > likely be more work than I thought. I'll poke around at options (small
> > sphinx plugin, etc).
> 
> We want them all to use the openstackdocstheme so you could look into
> creating a "subclass" of that one with the extra content in the header, then
> ensure all of the specs repos use it.  We would have to land a small patch to
> trigger a rebuild, but the patch switching them from oslosphinx to
> openstackdocstheme would serve for that and a small change to the readme
> or another file would do it for any that are already using the theme.
> 
> Doug
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Blazar] Nominating Bertrand Souville to Blazar core

2018-03-21 Thread Hiroaki Kobayashi

Hi Blazar folks,

I'd like to nominate Bertrand Souville to the Blazar core team. He has been 
involved in the project since the Ocata release. He has worked on the NFV use 
case, gap analysis, and feedback in OPNFV and ETSI NFV as well as in Blazar 
itself. Additionally, he has reviewed not only the Blazar repository but also 
Blazar-related repositories with a nice long-term perspective.

I believe he would make the project much better.

best regards,
Masahito


+1


+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack "S" Release Naming Preliminary Results

2018-03-21 Thread Paul Belanger
Hello all!

We decided to run a public poll this time around. We'll likely discuss the
process during a TC meeting, but we'd love to hear your feedback.

The raw results are below - however ...

**PLEASE REMEMBER** that these now have to go through legal vetting. So 
it is too soon to say 'OpenStack Solar' is our next release, given that previous
polls have had some issues with the top choice.

In any case, the names will be sent off to legal for vetting. As soon 
as we have a final winner, I'll let you all know.

https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_40b95cb2be3fcdf1&rkey=c04ca6bca83a1427

Result

1. Solar  (Condorcet winner: wins contests with all other choices)
2. Stein  loses to Solar by 159–138
3. Spree  loses to Solar by 175–122, loses to Stein by 148–141
4. Sonne  loses to Solar by 190–99, loses to Spree by 174–97
5. Springer  loses to Solar by 214–60, loses to Sonne by 147–103
6. Spandau  loses to Solar by 195–88, loses to Springer by 125–118
7. See  loses to Solar by 203–61, loses to Spandau by 121–111
8. Schiller  loses to Solar by 207–70, loses to See by 112–106
9. SBahn  loses to Solar by 212–74, loses to Schiller by 111–101
10. Staaken  loses to Solar by 219–59, loses to SBahn by 115–89
11. Shellhaus  loses to Solar by 213–61, loses to Staaken by 94–85
12. Steglitz  loses to Solar by 216–50, loses to Shellhaus by 90–83
13. Saatwinkel  loses to Solar by 219–55, loses to Steglitz by 96–57
14. Savigny  loses to Solar by 219–51, loses to Saatwinkel by 77–76
15. Schoenholz  loses to Solar by 221–46, loses to Savigny by 78–70
16. Suedkreuz  loses to Solar by 220–50, loses to Schoenholz by 68–67
17. Soorstreet  loses to Solar by 226–32, loses to Suedkreuz by 75–58

- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky spec review day

2018-03-21 Thread Ghanshyam Mann
On Wed, Mar 21, 2018 at 10:33 PM, Sylvain Bauza  wrote:
>
>
> On Wed, Mar 21, 2018 at 2:12 PM, Eric Fried  wrote:
>>
>> +1 for the-earlier-the-better, for the additional reason that, if we
>> don't finish, we can do another one in time for spec freeze.
>>
>
> +1 for Tue 27th March.
>
>
>>
>> And I, for one, wouldn't be offended if we could "officially start
>> development" (i.e. focus on patches, start runways, etc.) before the
>> mystical but arbitrary spec freeze date.
>>
>
> Sure, but given we have a lot of specs to review, TBH it'll be possible for
> me to look at implementation patches only close to the 1st milestone.
>
>
>>
>> On 03/20/2018 07:29 PM, Matt Riedemann wrote:
>> > On 3/20/2018 6:47 PM, melanie witt wrote:
>> >> I was thinking that 2-3 weeks ahead of spec freeze would be
>> >> appropriate, so that would be March 27 (next week) or April 3 if we do
>> >> it on a Tuesday.

+1 for either one. I think we have had enough time to update/push specs
after the PTG, and doing it 2-3 weeks ahead of spec freeze is always
helpful.


>> >
>> > It's spring break here on April 3 so I'll be listening to screaming
>> > kids, I mean on vacation. Not that my schedule matters, just FYI.
>> >
>> > But regardless of that, I think the earlier the better to flush out
>> > what's already there, since we've already approved quite a few
>> > blueprints this cycle (32 so far).
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [PTLS] Project Updates & Project Onboarding

2018-03-21 Thread Kendall Nelson
Hello!

Project Updates[1] & Project Onboarding[2] sessions are now live on the
schedule!

We did our best to keep project onboarding sessions adjacent to project
update slots. Given the differences in duration and the number of each we
have per day, that got increasingly difficult as the days went on, but
hopefully what is there will work for everyone.

If there are any speakers you need added to your slots, or any conflicts
you need addressed, feel free to email speakersupp...@openstack.org and
they should be able to help you out.

Thanks!

-Kendall Nelson (diablo_rojo)

[1]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Update
[2]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Onboarding
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][infra][dib] Gate "out of disk" errors and diskimage-builder 2.12.0

2018-03-21 Thread Ian Wienand

On 03/21/2018 03:39 PM, Ian Wienand wrote:

We will prepare dib 2.12.1 with the fix.  As usual there are
complications, since the dib gate is broken due to unrelated TripleO
issues [2].  In the meantime, probably avoid 2.12.0 if you can.



[2] https://review.openstack.org/554705


Since we are having issues getting this verified due to some
instability in the TripleO gate, I've proposed a temporary removal of
the jobs for dib in [1].

[1] https://review.openstack.org/555037

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-21 Thread Monty Taylor

On 03/16/2018 09:29 AM, Kwan, Louie wrote:

In the stable/queens branch, openstacksdk===0.11.3 and os-service-types===1.1.0 
are the versions listed in OpenStack's upper-constraints.txt:

https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297

If I do


git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens


And then stack.sh

We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0.

Having said that, we need the older versions. How do we configure devstack to 
use openstacksdk===0.11.3 and os-service-types===1.1.0?


Would you mind sharing why you need the older versions?

os-service-types is explicitly designed such that the latest version 
should always be correct.


If there is something in 1.2.0 that has broken you in some way that you 
need an older version, that's a problem and we should look in to it.


The story is intended to be similar for sdk moving forward ... but we're 
still pre-1.0, so that makes sense at the moment. I'm still interested 
in what specific issue you had, just to make sure we're aware of issues 
people are having.


Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sdk] git repo rename and storyboard migration

2018-03-21 Thread Monty Taylor

Hey everybody!

This upcoming Friday we're scheduled to complete the transition from 
python-openstacksdk to openstacksdk. This was started a while back (Tue 
Jun 16 12:05:38 2015 to be exact) by changing the name of what gets 
published to PyPI. Renaming the repo gets those two back in line 
(and removes a hack in devstack that deals with them not being the same).


Since this is a repo rename, it means that local git remotes will need 
to be updated. This can be done either via changing urls in .git/config 
- or by just re-cloning.


Once that's done, we'll be in a position to migrate to storyboard. shade 
is already over there, which means we're currently split between 
storyboard and launchpad for the openstacksdk team repos.


diablo_rojo has done a test migration and we're good to go there - so 
I'm thinking either Friday post-repo rename - or sometime early next 
week. Any thoughts or opinions?


This will migrate bugs from launchpad for python-openstacksdk and 
os-client-config.


Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-21 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-03-15 07:03:11 -0400:
> 
> TL;DR
> -
> 
> Let's stop copying exact dependency specifications into all our
> projects to allow them to reflect the actual versions of things
> they depend on. The constraints system in pip makes this change
> safe. We still need to maintain some level of compatibility, so the
> existing requirements-check job (run for changes to requirements.txt
> within each repo) will change a bit rather than going away completely.
> We can enable unit test jobs to verify the lower constraint settings
> at the same time that we're doing the other work.

The new job definition is in https://review.openstack.org/555034 and I
have updated the oslo.config patch I mentioned before to use the new job
instead of one defined in the oslo.config repo (see
https://review.openstack.org/550603).

I'll wait for that job patch to be reviewed and approved before I start
adding the job to a bunch of other repositories.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [libvirt] [virt-tools-list] Project for profiles and defaults for libvirt domains

2018-03-21 Thread Eduardo Habkost
On Wed, Mar 21, 2018 at 06:39:52PM +, Daniel P. Berrangé wrote:
> On Wed, Mar 21, 2018 at 03:00:41PM -0300, Eduardo Habkost wrote:
> > On Tue, Mar 20, 2018 at 03:10:12PM +, Daniel P. Berrangé wrote:
> > > On Tue, Mar 20, 2018 at 03:20:31PM +0100, Martin Kletzander wrote:
> > > > 1) Default devices/values
> > > > 
> > > > Libvirt itself must default to whatever values there were before any
> > > > particular element was introduced due to the fact that it strives to
> > > > keep the guest ABI stable.  That means, for example, that it can't just
> > > > add -vmcoreinfo option (for KASLR support) or magically add the pvpanic
> > > > device to all QEMU machines, even though it would be useful, as that
> > > > would change the guest ABI.
> > > > 
> > > > For default values this is even more obvious.  Let's say someone figures
> > > > out some "pretty good" default values for various HyperV enlightenment
> > > > feature tunables.  Libvirt can't magically change them, but each one of
> > > > the projects building on top of it doesn't want to keep that list
> > > > updated and take care of setting them in every new XML.  Some projects
> > > > don't even expose those to the end user as a knob, while others might.
> > > 
> > > This gets very tricky, very fast.
> > > 
> > > Lets say that you have an initial good set of hyperv config
> > > tunables. Now sometime passes and it is decided that there is a
> > > different, better set of config tunables. If the module that is
> > > providing this policy to apps like OpenStack just updates itself
> > > to provide this new policy, this can cause problems with the
> > > existing deployed applications in a number of ways.
> > > 
> > > First the new config probably depends on specific versions of
> > > libvirt and QEMU,  and you can't mandate to consuming apps which
> > > versions they must be using.  [...]
> > 
> > This is true.
> > 
> > >   [...]  So you need a matrix of libvirt +
> > > QEMU + config option settings.
> > 
> > But this is not.  If config options need support on the lower
> > levels of the stack (libvirt and/or QEMU and/or KVM and/or host
> > hardware), it already has to be represented by libvirt host
> > capabilities somehow, so management layers know it's available.
> > 
> > This means any new config generation system can (and must) use
> > host(s) capabilities as input before generating the
> > configuration.
> 
> I don't think it is that simple. The capabilities reflect what the
> current host is capable of only, not whether it is desirable to
> actually use them. Just because a host reports that it has q35-2.11.0
> machine type doesn't mean that it should be used. The mgmt app may
> only wish to use that if it is available on all hosts in a particular
> grouping. The config generation library can't query every host directly
> to determine this. The mgmt app may have a way to collate capabilities
> info from hosts, but it is probably then stored in a app specific
> format and data source, or it may just ends up being a global config
> parameter to the mgmt app per host.

In other words, you need host capabilities from all hosts as
input when generating a new config XML.  We already have a format
to represent host capabilities defined by libvirt; users of the
new system would just need to reproduce the data they got from
libvirt and give it to the config generator.

Not completely trivial, but maybe worth the effort if you want to
benefit from work done by other people to find good defaults?
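
As a rough illustration of that flow, here is a minimal sketch (it assumes the
libvirt-python bindings; generate_domain_config() is a hypothetical placeholder
for the proposed generator, not an existing API):

    # Gather the capabilities XML reported by each host and hand the set to a
    # (hypothetical) config generator that only enables features present on
    # every host.
    import libvirt

    def collect_capabilities(uris):
        caps = {}
        for uri in uris:
            conn = libvirt.openReadOnly(uri)
            try:
                caps[uri] = conn.getCapabilities()  # capabilities XML string
            finally:
                conn.close()
        return caps

    host_caps = collect_capabilities([
        'qemu+ssh://host1/system',
        'qemu+ssh://host2/system',
    ])
    # Hypothetical call into the proposed config generation library:
    # domain_xml = generate_domain_config(host_caps, profile='hyperv-defaults')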

> 
> There have been a number of times where a feature is available in
> libvirt and/or QEMU, and the mgmt app may still
> not wish to use it because it is known broken / incompatible with
> certain usage patterns. So the mgmt app would require an arbitrarily
> newer libvirt/qemu before considering using it, regardless of
> whether host capabilities report it is available.

If this happens sometimes, why is it better for the teams
maintaining management layers to duplicate the work of finding
what works, instead of solving the problem only once?


> 
> > > Even if you have the matching libvirt & QEMU versions, it is not
> > > safe to assume the application will want to use the new policy.
> > > An application may need live migration compatibility with older
> > > versions. Or it may need to retain guaranteed ABI compatibility
> > > with the way the VM was previously launched and be using transient
> > > guests, generating the XML fresh each time.
> > 
> > Why is that a problem?  If you want live migration or ABI
> > guarantees, you simply don't use this system to generate a new
> > configuration.  The same way you don't use the "pc" machine-type
> > if you want to ensure compatibility with existing VMs.
> 
> In many mgmt apps, every VM potentially needs live migration, so
> unless I'm misunderstanding, you're effectively saying don't ever
> use this config generator in these apps.

If you only need live migration, you can cho

Re: [openstack-dev] [libvirt] [virt-tools-list] Project for profiles and defaults for libvirt domains

2018-03-21 Thread Eduardo Habkost
On Tue, Mar 20, 2018 at 03:10:12PM +, Daniel P. Berrangé wrote:
> On Tue, Mar 20, 2018 at 03:20:31PM +0100, Martin Kletzander wrote:
> > 1) Default devices/values
> > 
> > Libvirt itself must default to whatever values there were before any
> > particular element was introduced due to the fact that it strives to
> > keep the guest ABI stable.  That means, for example, that it can't just
> > add -vmcoreinfo option (for KASLR support) or magically add the pvpanic
> > device to all QEMU machines, even though it would be useful, as that
> > would change the guest ABI.
> > 
> > For default values this is even more obvious.  Let's say someone figures
> > out some "pretty good" default values for various HyperV enlightenment
> > feature tunables.  Libvirt can't magically change them, but each one of
> > the projects building on top of it doesn't want to keep that list
> > updated and take care of setting them in every new XML.  Some projects
> > don't even expose those to the end user as a knob, while others might.
> 
> This gets very tricky, very fast.
> 
> Lets say that you have an initial good set of hyperv config
> tunables. Now sometime passes and it is decided that there is a
> different, better set of config tunables. If the module that is
> providing this policy to apps like OpenStack just updates itself
> to provide this new policy, this can cause problems with the
> existing deployed applications in a number of ways.
> 
> First the new config probably depends on specific versions of
> libvirt and QEMU,  and you can't mandate to consuming apps which
> versions they must be using.  [...]

This is true.

>   [...]  So you need a matrix of libvirt +
> QEMU + config option settings.

But this is not.  If config options need support on the lower
levels of the stack (libvirt and/or QEMU and/or KVM and/or host
hardware), it already has to be represented by libvirt host
capabilities somehow, so management layers know it's available.

This means any new config generation system can (and must) use
host(s) capabilities as input before generating the
configuration.


> 
> Even if you have the matching libvirt & QEMU versions, it is not
> safe to assume the application will want to use the new policy.
> An application may need live migration compatibility with older
> versions. Or it may need to retain guaranteed ABI compatibility
> with the way the VM was previously launched and be using transient
> guests, generating the XML fresh each time.

Why is that a problem?  If you want live migration or ABI
guarantees, you simply don't use this system to generate a new
configuration.  The same way you don't use the "pc" machine-type
if you want to ensure compatibility with existing VMs.

> 
> The application will have knowledge about when it wants to use new
> vs old hyperv tunable policy, but exposing that to your policy module
> is very tricky because it is inherantly application specific logic
> largely determined by the way the application code is written.

We have a huge set of features where this is simply not a
problem.  For most virtual hardware features, enabling them is
not even a policy decision: it's just about telling the guest
that the feature is now available.  QEMU has been enabling new
features in the "pc" machine-type for years.

Now, why can't higher layers in the stack do something similar?

The proposal is equivalent to what already happens when people
use the "pc" machine-type in their configurations, but:
1) the new defaults/features wouldn't be hidden behind a opaque
   machine-type name, and would appear in the domain XML
   explicitly;
2) the higher layers won't depend on QEMU introducing a new
   machine-type just to have new features enabled by default;
3) features that depend on host capabilities but are available on
   all hosts in a cluster can now be enabled automatically if
   desired (which is something QEMU can't do because it doesn't
   have enough information about the other hosts).

Choosing reasonable defaults might not be a trivial problem, but
the current approach of pushing the responsibility to management
layers doesn't improve the situation.


[...]
> > 2) Policies
[...]
> > 3) Abstracting the XML
[...]
> > 4) Identifying devices properly
[...]
> > 5) Generating the right XML snippet for device hot-(un)plug
[...]

These parts are trickier and I need to read the discussion more
carefully before replying.

-- 
Eduardo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cyborg] why cyborg can not support an accelerators info list for more than one host?

2018-03-21 Thread 少合冯
got it, thanks.


2018-03-22 0:50 GMT+08:00 Ed Leafe :

> On Mar 21, 2018, at 11:35 AM, 少合冯  wrote:
> >
> >> By default, hosts are weighed one by one. You can subclass the
> BaseWeigher (in nova/weights.py) to weigh all objects at once.
> >
> > Does that means it require call cyborg accelerator one by one?  the
> pseudo code as follow:
> > for host in hosts:
> >accelerator = cyborg.http_get_ accelerator(host)
> >do_weight_by_accelerator
> >
> > Instead of call cyborg accelerators once,  the pseudo code as follow :
> > accelerators = cyborg.http_get_ accelerator(hosts)
> > for acc in accelerators:
> >do_weight_by_accelerator
>
> What it means is that if you override the weigh_objects() method of the
> BaseWeigher class, you can make a single call to Cyborg with a list of all
> the hosts. That call could then create a list of weights for all the hosts
> and return that. So if you have 100 hosts, you don’t need to make 100 calls
> to Cyborg; only 1.
>
> -- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cyborg] why cyborg can not support an accelerators info list for more than one host?

2018-03-21 Thread Ed Leafe
On Mar 21, 2018, at 11:35 AM, 少合冯  wrote:
> 
>> By default, hosts are weighed one by one. You can subclass the BaseWeigher 
>> (in nova/weights.py) to weigh all objects at once.
> 
> Does that means it require call cyborg accelerator one by one?  the pseudo 
> code as follow:
> for host in hosts:
>accelerator = cyborg.http_get_ accelerator(host)
>do_weight_by_accelerator
>
> Instead of call cyborg accelerators once,  the pseudo code as follow :
> accelerators = cyborg.http_get_ accelerator(hosts)
> for acc in accelerators:
>do_weight_by_accelerator

What it means is that if you override the weigh_objects() method of the 
BaseWeigher class, you can make a single call to Cyborg with a list of all the 
hosts. That call could then create a list of weights for all the hosts and 
return that. So if you have 100 hosts, you don’t need to make 100 calls to 
Cyborg; only 1.
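
As a minimal sketch of that approach (the _fetch_accelerator_counts() helper is
a hypothetical stand-in for a single Cyborg REST call covering all hosts, not a
real Cyborg API):

    from nova import weights

    def _fetch_accelerator_counts(hostnames):
        # Placeholder: imagine one HTTP GET to Cyborg that returns
        # {hostname: number_of_free_accelerators} for every host at once.
        return {name: 0 for name in hostnames}

    class AcceleratorWeigher(weights.BaseWeigher):
        def weigh_objects(self, weighed_obj_list, weight_properties):
            hostnames = [w.obj.host for w in weighed_obj_list]
            counts = _fetch_accelerator_counts(hostnames)  # one call, all hosts
            return [counts.get(name, 0.0) for name in hostnames]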

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Do we want new meeting time?

2018-03-21 Thread Ivan Kolodyazhny
Hi team,

As was discussed at the PTG, we usually have very few participants at our
weekly meetings. I suspect this is mostly because the meeting time is not
convenient for many of us.

Let's try to reschedule the Horizon weekly meeting and get more attendees
there. I've created a Doodle poll for it [1]. Please vote for the time that
works best for you.


[1] https://doodle.com/poll/ei5gstt73d8v3a35

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cyborg] why cyborg can not support an accelerators info list for more than one host?

2018-03-21 Thread 少合冯
2018-03-22 0:11 GMT+08:00 Ed Leafe :

> On Mar 21, 2018, at 10:56 AM, 少合冯  wrote:
> >
> > Sorry, I did not attend the PTG.
> > Is it said there is a conclusion:
> > Scheduler weigher will call into Cyborg REST API for each host instead
> of one REST API for all hosts.
> > Is there some reason?
>
> By default, hosts are weighed one by one. You can subclass the BaseWeigher
> (in nova/weights.py) to weigh all objects at once.
>
>
Does that mean it requires calling Cyborg for each host one by one? The pseudo
code would be as follows:
for host in hosts:
    accelerator = cyborg.http_get_accelerator(host)
    do_weight_by_accelerator

Instead of calling Cyborg once for all hosts' accelerators, as in this pseudo code:
accelerators = cyborg.http_get_accelerator(hosts)
for acc in accelerators:
    do_weight_by_accelerator

-- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][all] Deprecation Notice: Pika driver for oslo.messaging

2018-03-21 Thread Ken Giusti
Folks,

Last year at the Boston summit the Oslo team decided to deprecate
support for the Pika transport in oslo.messaging with removal planned
for Rocky [0].

This was announced on the operators list last May [1].

No objections have been raised to date. We're not aware of any
deployments using this transport and its
removal is not anticipated to affect anyone.

This is notice that the removal is currently underway [2].

Thanks,

[0] 
https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations
[1] 
http://lists.openstack.org/pipermail/openstack-operators/2017-May/013579.html
[2] https://review.openstack.org/#/c/536960/

-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cyborg] why cyborg can not support an accelerators info list for more than one host?

2018-03-21 Thread Ed Leafe
On Mar 21, 2018, at 10:56 AM, 少合冯  wrote:
> 
> Sorry, I did not attend the PTG. 
> Is it said there is a conclusion:
> Scheduler weigher will call into Cyborg REST API for each host instead of one 
> REST API for all hosts.  
> Is there some reason?

By default, hosts are weighed one by one. You can subclass the BaseWeigher (in 
nova/weights.py) to weigh all objects at once.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [Cyborg] why cyborg can not support an accelerators info list for more than one host?

2018-03-21 Thread 少合冯
Following up on today's IRC discussion, there is a question about the weigher:
can Cyborg support a list API that returns accelerator info for more than one
host at a time, for use when weighing?

Sorry, I did not attend the PTG.
Was it decided there that the scheduler weigher will call into the Cyborg REST
API for each host instead of making one REST API call for all hosts?
If so, what is the reason?



INFO:
http://eavesdrop.openstack.org/meetings/openstack_cyborg/2018/openstack_cyborg.2018-03-21-14.00.log.html


BR
Shaohe Feng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-12

2018-03-21 Thread David Moreau Simard
In case people have missed it, Jim Blair sent an email recently to
shed some light on where Zuul is headed [1].

[1]: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128396.html

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]


On Tue, Mar 20, 2018 at 7:24 PM, Chris Dent  wrote:
>
> HTML: https://anticdent.org/tc-report-18-12.html
>
> This week's TC Report goes off in the weeds a bit with the editorial
> commentary from yours truly. I had trouble getting started, so had
> to push myself through some thinking by writing stuff that at least
> for the last few weeks I wouldn't normally be including in the
> summaries. After getting through it, I realized that the reason I
> was struggling is because I haven't been including these sorts of
> things. Including them results in a longer and more meandering report
> but it is more authentically my experience, which was my original
> intention.
>
> # Zuul Extraction and the Difficult Nature of Communication
>
> Last [Tuesday
> Morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-13.log.html#t2018-03-13T17:22:38)
> we had some initial discussion about Zuul being extracted from
> OpenStack governance as a precursor to becoming part of the CI/CD
> strategic area being born elsewhere in the OpenStack Foundation.
>
> Then on
> [Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:08:06)
> we revisited the topic, especially as it related to how we
> communicate change in the community and how we invite participation
> in making decisions about change. In this case by "community" we're
> talking about anything under the giant umbrella of "stuff associated
> with the OpenStack Foundation".
>
> Plenty of people expressed that though they were not surprised by
> the change, it was because they are insiders and could understand
> how some, who are not, might be surprised by what seemed like a big
> change. This led to addressing the immediate shortcomings and
> clarifying the history of the event.
>
> There was also
> [concern](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:27:22)
> that some of the reluctance to talk openly about the change appeared
> to stem from needing to preserve the potency of a Foundation marketing
> release.
>
> I [expressed some
> frustration](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:36:50):
> "...as usual, we're getting caught up in
> details of a particular event (one that in the end we're all happy
> to see happen), rather than the general problem we saw with it
> (early transparency etc). Solving the immediate problem is easy, but
> since we _keep doing it_, we've got a general issues to resolve."
>
> We went round and round about the various ways in which we have tried
> and failed to do good communication in the past, and while we make
> some progress, we fail to establish a pattern. As Doug [pointed
> out](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:41:33),
> no method can be 100% successful, but if we pick a method and stick to
> it, people can learn that method.
>
> We have a cycle where we not only sometimes communicate poorly but
> we also communicate poorly about that poor communication. So when I
> come round to another week of writing this report, and am reminded
> that these issues persist and I am once again communicating about
> them, it's frustrating. Communicating, a lot, is generally a good
> thing, but if things don't change as a result, that can be a strain.
> If I'm still writing these things in a year's time, and we haven't
> managed to achieve at least a bit more grace, consistency, and
> transparency in the ways that we share information within and
> between groups (including, and maybe especially, the Foundation
> executive wing) in the wider community, it will be a shame and I will
> have a sad.
>
> In a somewhat related and good sign, there is [great
> thread](http://lists.openstack.org/pipermail/openstack-operators/2018-March/014994.html)
> on the operators list that raises the potential of merging the Ops
> Meeting and the PTG into some kind of "OpenStack Community Working
> Gathering".
>
> # Encouraging Upstream Contribution
>
> On
> [Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-16.log.html#t2018-03-16T14:29:21),
> tbarron raised some interesting questions about how the summit talk
> selection process might relate to the [four
> opens](https://governance.openstack.org/tc/reference/opens.html).  The
> talk eventually led to a positive plan to try bring some potential
> contributors upstream in advance of summit as, well as to work to
> create more clear guidelines for track chairs.
>
> # Executive Power
>
> I had a question at [this morni

Re: [openstack-dev] [tc] [all] TC Report 18-12

2018-03-21 Thread Sean McGinnis
On Tue, Mar 20, 2018 at 11:24:19PM +, Chris Dent wrote:
> 
> HTML: https://anticdent.org/tc-report-18-12.html
> 
> This week's TC Report goes off in the weeds a bit with the editorial
> commentary from yours truly. I had trouble getting started, so had
> to push myself through some thinking by writing stuff that at least
> for the last few weeks I wouldn't normally be including in the
> summaries. After getting through it, I realized that the reason I
> was struggling is because I haven't been including these sorts of
> things. Including them results in a longer and more meandering report
> but it is more authentically my experience, which was my original
> intention.
> 

++

Thanks for doing this Chris!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] Documentation meeting canceled

2018-03-21 Thread Petr Kovar
Hi all,

Apologies, but I have to cancel today's docs meeting due to a meeting
conflict.

If you want to talk to the docs team, we're in #openstack-doc, as always!

Thanks,
pk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] Nominating Dong Wenjuan for Vitrage core

2018-03-21 Thread Eyal B
+2

On 21 March 2018 at 16:37, Afek, Ifat (Nokia - IL/Kfar Sava) <
ifat.a...@nokia.com> wrote:

> Hi,
>
>
>
> I would like to nominate Dong Wenjuan for Vitrage core.
>
> Wenjuan has been contributing to Vitrage for a long time, since Newton
> version. She implemented several important features and has a deep
> knowledge of Vitrage architecture. I’m sure she can be a great addition to
> our team.
>
>
>
> Thanks,
>
> Ifat.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-21 Thread Sean McGinnis
On Wed, Mar 21, 2018 at 10:49:02AM +, Stephen Finucane wrote:
> tl;dr: Make sure you stop using pbr's autodoc feature before converting
> them to the new PTI for docs.
> 
> [snip]
> 
> I've gone through and proposed a couple of reverts to fix projects
> we've already broken. However, going forward, there are two things
> people should do to prevent issues like this popping up.
> 

Unfortunately, just reverting the changes will not work. That may fix
things locally, but they will not pass in the gate by going back to the old way.

Any cases of this will actually have to be updated to not use the unsupported
pieces you point out. The doc builds will still need to be done the way
they are now, as that is what the PTI requires at this point.

>  * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections
>from 'setup.cfg' in any patches that aim to convert a project to use
>the new PTI. This will ensure the gate catches any potential
>issues. 
>  * In addition, if your project uses the pbr autodoc feature, you
>should either (a) remove these docs from your documentation tree or
>(b) migrate to something else like the 'sphinx.ext.autosummary'
>extension [5]. I aim to post instructions on the latter shortly.
> 
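
For projects making that migration, a minimal doc/source/conf.py fragment could
look roughly like this (a sketch only, under the assumption the project keeps
whatever theme it already uses):

    # Enable sphinx.ext.autosummary in place of pbr's autodoc feature.
    extensions = [
        'sphinx.ext.autodoc',
        'sphinx.ext.autosummary',
        # ... plus whatever theme/extensions the project already uses,
        # e.g. 'openstackdocstheme'
    ]
    # Have autosummary generate the per-module stub pages at build time.
    autosummary_generate = True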

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Poll: S Release Naming

2018-03-21 Thread Paul Belanger
On Tue, Mar 13, 2018 at 07:58:59PM -0400, Paul Belanger wrote:
> Greetings all,
> 
> It is time again to cast your vote for the naming of the S Release. This time
> is little different as we've decided to use a public polling option over per
> user private URLs for voting. This means, everybody should proceed to use the
> following URL to cast their vote:
> 
>   
> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1&akey=8cfdc1f5df5fe4d3
> 
> Because this is a public poll, results will currently be only viewable by 
> myself
> until the poll closes. Once closed, I'll post the URL making the results
> viewable to everybody. This was done to avoid everybody seeing the results 
> while
> the public poll is running.
> 
> The poll will officially end on 2018-03-21 23:59:59[1], and results will be
> posted shortly after.
> 
> [1] 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
> ---
> 
> According to the Release Naming Process, this poll is to determine the
> community preferences for the name of the S release of OpenStack. It is
> possible that the top choice is not viable for legal reasons, so the second or
> later community preference could wind up being the name.
> 
> Release Name Criteria
> 
> Each release name must start with the letter of the ISO basic Latin alphabet
> following the initial letter of the previous release, starting with the
> initial release of "Austin". After "Z", the next name should start with
> "A" again.
> 
> The name must be composed only of the 26 characters of the ISO basic Latin
> alphabet. Names which can be transliterated into this character set are also
> acceptable.
> 
> The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region under
> consideration must be declared before the opening of nominations, as part of
> the initiation of the selection process.
> 
> The name must be a single word with a maximum of 10 characters. Words that
> describe the feature should not be included, so "Foo City" or "Foo Peak"
> would both be eligible as "Foo".
> 
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may make
> an exception for one or more of them to be considered in the Condorcet poll.
> The naming official is responsible for presenting the list of exceptional
> names for consideration to the TC before the poll opens.
> 
> Exact Geographic Region
> 
> The Geographic Region from where names for the S release will come is Berlin
> 
> Proposed Names
> 
> Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
>Germany)
> 
> SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)
> 
> Spandau (One of the twelve boroughs of Berlin)
> 
> Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
>abbreviated as 🍺)
> 
> Steglitz (a locality in the South Western part of the city)
> 
> Springer (Berlin is headquarters of Axel Springer publishing house)
> 
> Staaken (a locality within the Spandau borough)
> 
> Schoenholz (A zone in the Niederschönhausen district of Berlin)
> 
> Shellhaus (A famous office building)
> 
> Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)
> 
> Schiller (A park in the Mitte borough)
> 
> Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
>(The adjective form, Saatwinkler is also a really cool bridge but
>that form is too long)
> 
> Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
>wall, also translates as "sun")
> 
> Savigny (Common place in City-West)
> 
> Soorstreet (Street in the Berlin district Charlottenburg)
> 
> Solar (Skybar in Berlin)
> 
> See (Seestraße or "See Street" in Berlin)
> 
A friendly reminder, the naming poll will be closing later today (2018-03-21
23:59:59 UTC). If you haven't done so, please take a moment to vote.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Nominating Dong Wenjuan for Vitrage core

2018-03-21 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

I would like to nominate Dong Wenjuan for Vitrage core.
Wenjuan has been contributing to Vitrage for a long time, since the Newton 
release. She implemented several important features and has a deep knowledge of 
the Vitrage architecture. I’m sure she can be a great addition to our team.

Thanks,
Ifat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky spec review day

2018-03-21 Thread Dan Smith
>>  And I, for one, wouldn't be offended if we could "officially start
>>  development" (i.e. focus on patches, start runways, etc.) before the
>>  mystical but arbitrary spec freeze date.

Yeah, I agree. I see runways as an attempt to add pressure to the
earlier part of the cycle, where we're ignoring things that have been
ready but aren't super high priority because "we have plenty of time."
The later part of the cycle is when we start having to make hard
decisions on things to de-focus, and where focus on the important core
changes goes up naturally anyway.

Personally, I think we're already kinda late in the cycle to be going on
this, as I would have hoped to exit PTG with a plan to start operating
in the new process immediately. Maybe I'm in the minority there, but I
think that if we start this process late in the middle of a cycle, we'll
probably need to adjust the prioritization of things in the queue more
strictly, and remember that when retrospecting on the process for next
cycle.

> Sure, but given we have a lot of specs to review, TBH it'll be
> possible for me to look at implementation patches only close to the
> 1st milestone.

I'm not sure I get this. We can't not review code while we review specs
for weeks on end. We've already approved 75% of the blueprints (in
number) that we completed in queens. One of the intended outcomes of
this effort was to complete a higher percentage of what we approved, so
we're not lying to contributors and so we have more focused review of
things so they actually get completed instead of half-landed. To that
end, I would kind of expect that we need to constantly be throttling (or
maybe re-starting) spec review/approval rates to keep the queue full
enough so we don't run dry, but without just ending up with a thousand
approved things that we'll never get to.

Anyway, just MHO. Obviously this will be an experiment and we won't get
it right the first time.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Support share backup to different projects?

2018-03-21 Thread TommyLike Hu
Hey Gorka,
Thanks for your input :) I think I need to clarify that our idea is to
share the backup resource with other tenants. That is different from a
transfer, as the original tenant can still fully control the backup
resource, while the tenants it has been shared with only have the ability
to see and read the content of that backup.

On Wed, 21 Mar 2018 at 19:15, Gorka Eguileor wrote:

> On 20/03, Jay S Bryant wrote:
> >
> >
> > On 3/19/2018 10:55 PM, TommyLike Hu wrote:
> > > Now Cinder can transfer volume (with or without snapshots) to different
> > > projects,  and this make it possbile to transfer data across tenant via
> > > volume or image. Recently we had a conversation with our customer from
> > > Germany, they mentioned they are more pleased if we can support
> transfer
> > > data accross tenant via backup not image or volume, and these below are
> > > some of their concerns:
> > >
> > > 1. There is a use case that they would like to deploy their
> > > develop/test/product systems in the same region but within different
> > > tenants, so they have the requirment to share/transfer data across
> > > tenants.
> > >
> > > 2. Users are more willing to use backups to secure/store their volume
> > > data since backup feature is more advanced in product openstack version
> > > (incremental backups/periodic backups/etc.).
> > >
> > > 3. Volume transfer is not a valid option as it's in AZ and it's a
> > > complicated process if we would like to share the data to multiple
> > > projects (keep copy in all the tenants).
> > >
> > > 4. Most of the users would like to use image for bootable volume only
> > > and share volume data via image means the users have to maintain lots
> of
> > > image copies when volume backup changed as well as the whole system
> > > needs to differentiate bootable images and none bootable images, most
> > > important, we can not restore volume data via image now.
> > >
> > > 5. The easiest way for this seems to support sharing backup to
> different
> > > projects, the owner project have the full authority while shared
> > > projects only can view/read the backups.
> > >
> > > 6. AWS has the similar concept, share snapshot. We can share it by
> > > modify the snapshot's create volume permissions [1].
> > >
> > > Looking forward to any like or dislike or suggestion on this idea
> > > accroding to my feature proposal experience:)
> > >
> > >
> > > Thanks
> > > TommyLike
> > >
> > >
> > > [1]:
> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
> > >
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > Tommy,
> >
> > As discussed at the PTG, this still sounds like improper usage of
> Backup.
> > Happy to hear input from others but I am having trouble getting my head
> > around it.
> >
> > The idea of sharing a snapshot, as you mention AWS supports sounds like
> it
> > could be a more sensible approach.  Why are you not proposing that?
> >
> > Jay
> >
>
> Hi,
>
> I agree with Jay that this sounds like an improper use of Backups, and I
> believe that this feature, just like trying to transfer snapshots, would
> incur in a lot of code changes as well as an ever greater number of
> bugs, because the ownership structure in Cinder is hierarchical and well
> defined.
>
> So if you transferred a snapshot then you would lose that snapshot
> information on the source volume, which means that we could not prevent
> a volume deletion with a snapshot, or we could prevent it but would
> either have to prevent the deletion from happening (creating a terrible
> user experience since the user can't delete the volume now because
> somebody else still has one of its snapshots) or we have to implement
> some kind of "trash" mechanism to postpone cleanup until all the
> snapshots have been deleted, which would make our quota code more
> complex as well as make our stats reporting and scheduling diverge from
> what the user think has actually happened (they deleted a bunch of
> volumes but the data has not been freed from the backend).
>
> As for backups, you have an even worse situation because of our
> incremental backups, since transferring ownership of an incremental
> backup will create similar deletion issues as the snapshots but we also
> require access to all all incremental snapshots to restore a volume.  So
> the only alternative would be to only allow transferring a full Backup
> and this would carry all the incremental backups with it.
>
> All in all I think this would be an abuse of the Backups, and as stated
> by TommyLike we already have mechanisms to do this via images and volume
> transfers.
>
> Although I have to admit that after giving this some thought there is a
> very good case where it wouldn't be a

Re: [openstack-dev] Adding "not docs" banner to specs website?

2018-03-21 Thread Jim Rollenhagen
>
> We want them all to use the openstackdocstheme so you could look
> into creating a "subclass" of that one with the extra content in
> the header, then ensure all of the specs repos use it.  We would
> have to land a small patch to trigger a rebuild, but the patch
> switching them from oslosphinx to openstackdocstheme would serve
> for that and a small change to the readme or another file would do it
> for any that are already using the theme.
>

Thanks Doug, I'll investigate this route more when I have some free time to
do so. :)

// jim
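
For reference, such a theme "subclass" could be wired up roughly like this (a
sketch only: the specstheme package name and banner template are hypothetical,
and it assumes the parent theme is named 'openstackdocs' and a Sphinx version
that provides add_html_theme()):

    # specstheme/__init__.py -- hypothetical package layering a "these are
    # specs, not docs" banner on top of openstackdocstheme.
    # A theme.conf next to this file would declare: inherit = openstackdocs
    # and the banner itself would live in an overridden layout.html template.
    import os

    def get_html_theme_path():
        return os.path.abspath(os.path.dirname(__file__))

    def setup(app):
        # Register the theme so spec repos can set html_theme = 'specstheme'.
        app.add_html_theme('specstheme', get_html_theme_path())
        return {'parallel_read_safe': True}
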
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky spec review day

2018-03-21 Thread Sylvain Bauza
On Wed, Mar 21, 2018 at 2:12 PM, Eric Fried  wrote:

> +1 for the-earlier-the-better, for the additional reason that, if we
> don't finish, we can do another one in time for spec freeze.
>
>
+1 for Tue 27th March.



> And I, for one, wouldn't be offended if we could "officially start
> development" (i.e. focus on patches, start runways, etc.) before the
> mystical but arbitrary spec freeze date.
>
>
Sure, but given we have a lot of specs to review, TBH it'll be possible for
me to look at implementation patches only close to the 1st milestone.



> On 03/20/2018 07:29 PM, Matt Riedemann wrote:
> > On 3/20/2018 6:47 PM, melanie witt wrote:
> >> I was thinking that 2-3 weeks ahead of spec freeze would be
> >> appropriate, so that would be March 27 (next week) or April 3 if we do
> >> it on a Tuesday.
> >
> > It's spring break here on April 3 so I'll be listening to screaming
> > kids, I mean on vacation. Not that my schedule matters, just FYI.
> >
> > But regardless of that, I think the earlier the better to flush out
> > what's already there, since we've already approved quite a few
> > blueprints this cycle (32 so far).
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-03-21 Thread Kaz Shinohara
Hi Ivan, Akihiro,


Thanks for your kind arrangement.
Looking forward to hearing your decision soon.

Regards,
Kaz

2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
> HI Team,
>
> From my perspective, I'm OK both with #2 and #3 options. I agree that #4
> could be too complicated for us. Anyway, we've got this topic on the meeting
> agenda [1] so we'll discuss it there too. I'll share our decision after the
> meeting.
>
> [1] https://wiki.openstack.org/wiki/Meetings/Horizon
>
>
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki  wrote:
>>
>> Hi Kaz and Ivan,
>>
>> Yeah, it is worth discussing officially in the horizon team meeting or the
>> mailing list thread to get a consensus.
>> Hopefully you can add this topic to the horizon meeting agenda.
>>
>> After sending the previous mail, I noticed another option. I see there are
>> several options now.
>> (1) Keep xstatic-core and horizon-core same.
>> (2) Add specific members to xstatic-core
>> (3) Add specific horizon-plugin core to xstatic-core
>> (4) Split core membership into per-repo basis (perhaps too complicated!!)
>>
>> My current vote is (2) as xstatic-core needs to understand what is xstatic
>> and how it is maintained.
>>
>> Thanks,
>> Akihiro
>>
>>
>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara :
>>>
>>> Hi Akihiro,
>>>
>>>
>>> Thanks for your comment.
>>> The background of my request to add us to xstatic-core comes from
>>> Ivan's comment in last PTG's etherpad for heat-dashboard discussion.
>>>
>>> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion
>>> Line135, "we can share ownership if needed - e0ne"
>>>
>>> Just in case, could you guys confirm unified opinion on this matter as
>>> Horizon team ?
>>>
>>> Frankly speaking, I see the benefit of making us xstatic-core
>>> because it's easier & smoother to manage what we are taking on for
>>> heat-dashboard.
>>> On the other hand, I can understand what you are saying, Akihiro: the
>>> newly added repos belong to the Horizon project, and having them managed
>>> by people who are not Horizon cores is not consistent.
>>> Also, having an exception might cause unexpected confusion in the near future.
>>>
>>> Eventually we will follow your opinion, let me hear Horizon team's
>>> conclusion.
>>>
>>> Regards,
>>> Kaz
>>>
>>>
>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki :
>>> > Hi Kaz,
>>> >
>>> > These repositories are under horizon project. It looks better to keep
>>> > the
>>> > current core team.
>>> > It potentially brings some confusion if we treat some horizon plugin
>>> > team
>>> > specially.
>>> > Reviewing xstatic repos would be a small burden, so I think it would
>>> > work
>>> > without problems even if only horizon-core can approve xstatic reviews.
>>> >
>>> >
>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara :
>>> >>
>>> >> Hi Ivan, Horizon folks,
>>> >>
>>> >>
>>> >> In total, 8 xstatic-** repos for heat-dashboard have now landed.
>>> >>
>>> >> In project-config, I've set the same ACL config for them as for the existing
>>> >> xstatic repos.
>>> >> It means only "xstatic-core" can manage the newly created repos on
>>> >> gerrit.
>>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core", just as
>>> >> horizon-core is included?
>>> >>
>>> >> xstatic-core
>>> >> https://review.openstack.org/#/admin/groups/385,members
>>> >>
>>> >> heat-dashboard-core
>>> >> https://review.openstack.org/#/admin/groups/1844,members
>>> >>
>>> >> Of course, we will only touch the repos we created; we just would like to
>>> >> manage them smoothly ourselves.
>>> >> In case we need to touch the other ones, we will ask the Horizon team for
>>> >> help.
>>> >>
>>> >> Thanks in advance.
>>> >>
>>> >> Regards,
>>> >> Kaz
>>> >>
>>> >>
>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge :
>>> >> > Hi Horizon Team,
>>> >> >
>>> >> > I reported a bug about the lack of an ``ADD_XSTATIC_MODULES`` plugin
>>> >> > option,
>>> >> > and submitted a patch for it.
>>> >> > Could you please help review the patch?
>>> >> >
>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339
>>> >> > https://review.openstack.org/#/c/552259/
>>> >> >
>>> >> > Thank you very much.
>>> >> >
>>> >> > Best Regards,
>>> >> > Xinni
>>> >> >
>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny 
>>> >> > wrote:
>>> >> >>
>>> >> >> Hi Kaz,
>>> >> >>
>>> >> >> Thanks for cleaning this up. I put +1 on both of these patches
>>> >> >>
>>> >> >> Regards,
>>> >> >> Ivan Kolodyazhny,
>>> >> >> http://blog.e0ne.info/
>>> >> >>
>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara
>>> >> >> 
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> Hi Ivan & Horizon folks,
>>> >> >>>
>>> >> >>>
>>> >> >>> Now we are submitting a couple of patches to have the new xstatic
>>> >> >>> modules.
>>> >> >>> Let me ask you to review the following patches.
>>> >> >>> We need Horizon PTL's +1 to move these forward.
>>> >> >>>
>>> >> >>> project-config
>>> >> >>> https://review.openstack.org/#/c/551978/
>>> >> >>>
>>> >> >>> governance
>>> >> >>> https://rev

Re: [openstack-dev] [nova] Rocky spec review day

2018-03-21 Thread Eric Fried
+1 for the-earlier-the-better, for the additional reason that, if we
don't finish, we can do another one in time for spec freeze.

And I, for one, wouldn't be offended if we could "officially start
development" (i.e. focus on patches, start runways, etc.) before the
mystical but arbitrary spec freeze date.

On 03/20/2018 07:29 PM, Matt Riedemann wrote:
> On 3/20/2018 6:47 PM, melanie witt wrote:
>> I was thinking that 2-3 weeks ahead of spec freeze would be
>> appropriate, so that would be March 27 (next week) or April 3 if we do
>> it on a Tuesday.
> 
> It's spring break here on April 3 so I'll be listening to screaming
> kids, I mean on vacation. Not that my schedule matters, just FYI.
> 
> But regardless of that, I think the earlier the better to flush out
> what's already there, since we've already approved quite a few
> blueprints this cycle (32 so far).
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-03-21 Thread Ivan Kolodyazhny
HI Team,

From my perspective, I'm OK both with #2 and #3 options. I agree that #4
could be too complicated for us. Anyway, we've got this topic on the
meeting agenda [1] so we'll discuss it there too. I'll share our decision
after the meeting.

[1] https://wiki.openstack.org/wiki/Meetings/Horizon



Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki  wrote:

> Hi Kaz and Ivan,
>
> Yeah, it is worth discussing officially in the horizon team meeting or the
> mailing list thread to get a consensus.
> Hopefully you can add this topic to the horizon meeting agenda.
>
> After sending the previous mail, I noticed another option. I see there are
> several options now.
> (1) Keep xstatic-core and horizon-core same.
> (2) Add specific members to xstatic-core
> (3) Add specific horizon-plugin core to xstatic-core
> (4) Split core membership into per-repo basis (perhaps too complicated!!)
>
> My current vote is (2) as xstatic-core needs to understand what is xstatic
> and how it is maintained.
>
> Thanks,
> Akihiro
>
>
> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara :
>
>> Hi Akihiro,
>>
>>
>> Thanks for your comment.
>> The background of my request to add us to xstatic-core comes from
>> Ivan's comment in last PTG's etherpad for heat-dashboard discussion.
>>
>> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion
>> Line 135, "we can share ownership if needed - e0ne"
>>
>> Just in case, could you guys confirm unified opinion on this matter as
>> Horizon team ?
>>
>> Frankly speaking, I see the benefit of making us xstatic-core
>> because it's easier & smoother to manage what we are taking on for
>> heat-dashboard.
>> On the other hand, I can understand what you are saying, Akihiro: the
>> newly added repos belong to the Horizon project, and having them managed
>> by people who are not Horizon cores is not consistent.
>> Also, having an exception might cause unexpected confusion in the near future.
>>
>> Eventually we will follow your opinion, let me hear Horizon team's
>> conclusion.
>>
>> Regards,
>> Kaz
>>
>>
>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki :
>> > Hi Kaz,
>> >
>> > These repositories are under horizon project. It looks better to keep
>> the
>> > current core team.
>> > It potentially brings some confusion if we treat some horizon plugin
>> team
>> > specially.
>> > Reviewing xstatic repos would be a small burden, so I think it would
>> > work
>> > without problems even if only horizon-core can approve xstatic reviews.
>> >
>> >
>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara :
>> >>
>> >> Hi Ivan, Horizon folks,
>> >>
>> >>
>> >> In total, 8 xstatic-** repos for heat-dashboard have now landed.
>> >>
>> >> In project-config, I've set the same ACL config for them as for the existing
>> >> xstatic repos.
>> >> It means only "xstatic-core" can manage the newly created repos on
>> gerrit.
>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core", just as
>> >> horizon-core is included?
>> >>
>> >> xstatic-core
>> >> https://review.openstack.org/#/admin/groups/385,members
>> >>
>> >> heat-dashboard-core
>> >> https://review.openstack.org/#/admin/groups/1844,members
>> >>
>> >> Of course, we will only touch the repos we created; we just would like to
>> >> manage them smoothly ourselves.
>> >> In case we need to touch the other ones, we will ask the Horizon team for
>> >> help.
>> >>
>> >> Thanks in advance.
>> >>
>> >> Regards,
>> >> Kaz
>> >>
>> >>
>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge :
>> >> > Hi Horizon Team,
>> >> >
>> >> > I reported a bug about the lack of an ``ADD_XSTATIC_MODULES`` plugin option,
>> >> > and submitted a patch for it.
>> >> > Could you please help review the patch?
>> >> >
>> >> > https://bugs.launchpad.net/horizon/+bug/1755339
>> >> > https://review.openstack.org/#/c/552259/
>> >> >
>> >> > Thank you very much.
>> >> >
>> >> > Best Regards,
>> >> > Xinni
>> >> >
>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny 
>> >> > wrote:
>> >> >>
>> >> >> Hi Kaz,
>> >> >>
>> >> >> Thanks for cleaning this up. I put +1 on both of these patches
>> >> >>
>> >> >> Regards,
>> >> >> Ivan Kolodyazhny,
>> >> >> http://blog.e0ne.info/
>> >> >>
>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara <
>> ksnhr.t...@gmail.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> Hi Ivan & Horizon folks,
>> >> >>>
>> >> >>>
>> >> >>> Now we are submitting a couple of patches to have the new xstatic
>> >> >>> modules.
>> >> >>> Let me ask you to review the following patches.
>> >> >>> We need Horizon PTL's +1 to move these forward.
>> >> >>>
>> >> >>> project-config
>> >> >>> https://review.openstack.org/#/c/551978/
>> >> >>>
>> >> >>> governance
>> >> >>> https://review.openstack.org/#/c/551980/
>> >> >>>
>> >> >>> Thanks in advance:)
>> >> >>>
>> >> >>> Regards,
>> >> >>> Kaz
>> >> >>>
>> >> >>>
>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski
>> >> >>> :
>> >> >>> > Yes, please do that. We can then d

Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-21 Thread 李杰
So what should we do then about rebuilding the volume-backed server? Do we wait
until Cinder can re-image a volume?
 
 
-- Original --
From: "Matt Riedemann"; 
Date: March 16, 2018 (Friday), 6:35 AM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume

 
On 3/15/2018 5:29 PM, Dan Smith wrote:
> Yep, for sure. I think if there are snapshots, we have to refuse to do
> the thing. My comment was about the "does nova have authority to destroy
> the root volume during a rebuild" and I think it does, if
> delete_on_termination=True, and if there are no snapshots.

Agree with this.

Things do get a bit weird with delete_on_termination and if nova 'owns' 
the volume. delete_on_termination is False by default, even if you're 
doing boot from volume with source_type of 'image' or 'snapshot' where 
nova creates the volume for you.

If a user really cared about preserving the volume, they'd probably 
pre-create it (with their favorite volume type since you can't tell nova 
the volume type to use) and pass it to nova with 
delete_on_termination=False explicitly.
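
For illustration, such a request would carry a block device mapping roughly
like the following (the volume ID is a placeholder; with a pre-created volume
delete_on_termination already defaults to false, but being explicit makes the
intent clear):

    "block_device_mapping_v2": [{
        "boot_index": 0,
        "source_type": "volume",
        "destination_type": "volume",
        "uuid": "<pre-created volume ID>",
        "delete_on_termination": false
    }]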

Given the defaults, I'm not sure how many people are going to specify 
delete_on_termination=True, thinking about the implications, which then 
means they can't rebuild their volume-backed instance later because nova 
can't / won't delete the volume.

If we can solve this without deleting the volume at all and just 
re-image it, then it's a non-issue.

-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Support share backup to different projects?

2018-03-21 Thread Gorka Eguileor
On 20/03, Jay S Bryant wrote:
>
>
> On 3/19/2018 10:55 PM, TommyLike Hu wrote:
> > Now Cinder can transfer volume (with or without snapshots) to different
> > projects,  and this make it possbile to transfer data across tenant via
> > volume or image. Recently we had a conversation with our customer from
> > Germany, they mentioned they are more pleased if we can support transfer
> > data accross tenant via backup not image or volume, and these below are
> > some of their concerns:
> >
> > 1. There is a use case that they would like to deploy their
> > develop/test/product systems in the same region but within different
> > tenants, so they have the requirment to share/transfer data across
> > tenants.
> >
> > 2. Users are more willing to use backups to secure/store their volume
> > data since backup feature is more advanced in product openstack version
> > (incremental backups/periodic backups/etc.).
> >
> > 3. Volume transfer is not a valid option as it's in AZ and it's a
> > complicated process if we would like to share the data to multiple
> > projects (keep copy in all the tenants).
> >
> > 4. Most of the users would like to use image for bootable volume only
> > and share volume data via image means the users have to maintain lots of
> > image copies when volume backup changed as well as the whole system
> > needs to differentiate bootable images and none bootable images, most
> > important, we can not restore volume data via image now.
> >
> > 5. The easiest way for this seems to support sharing backup to different
> > projects, the owner project have the full authority while shared
> > projects only can view/read the backups.
> >
> > 6. AWS has the similar concept, share snapshot. We can share it by
> > modify the snapshot's create volume permissions [1].
> >
> > Looking forward to any like or dislike or suggestion on this idea
> > accroding to my feature proposal experience:)
> >
> >
> > Thanks
> > TommyLike
> >
> >
> > [1]: 
> > https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Tommy,
>
> As discussed at the PTG, this still sounds like improper usage of Backup. 
> Happy to hear input from others but I am having trouble getting my head
> around it.
>
> The idea of sharing a snapshot, as you mention AWS supports sounds like it
> could be a more sensible approach.  Why are you not proposing that?
>
> Jay
>

Hi,

I agree with Jay that this sounds like an improper use of Backups, and I
believe that this feature, just like trying to transfer snapshots, would
incur a lot of code changes as well as an even greater number of
bugs, because the ownership structure in Cinder is hierarchical and well
defined.

So if you transferred a snapshot then you would lose that snapshot
information on the source volume, which means that we could not prevent
a volume deletion with a snapshot, or we could prevent it but would
either have to prevent the deletion from happening (creating a terrible
user experience since the user can't delete the volume now because
somebody else still has one of its snapshots) or we have to implement
some kind of "trash" mechanism to postpone cleanup until all the
snapshots have been deleted, which would make our quota code more
complex as well as make our stats reporting and scheduling diverge from
what the user thinks has actually happened (they deleted a bunch of
volumes but the data has not been freed from the backend).

As for backups, you have an even worse situation because of our
incremental backups, since transferring ownership of an incremental
backup will create similar deletion issues as the snapshots but we also
require access to all incremental backups to restore a volume.  So
the only alternative would be to only allow transferring a full Backup
and this would carry all the incremental backups with it.
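
For illustration, this is the kind of chain that builds up with the cinder CLI
(names and the volume reference are placeholders):

    cinder backup-create --name full-0 <volume>
    cinder backup-create --incremental --name incr-1 <volume>
    cinder backup-create --incremental --name incr-2 <volume>
    # restoring incr-2 still needs full-0 and incr-1 to exist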

All in all I think this would be an abuse of the Backups, and as stated
by TommyLike we already have mechanisms to do this via images and volume
transfers.

Although I have to admit that after giving this some thought there is a
very good case where it wouldn't be an abuse and where we should allow
transferring full backups together with all their incremental backups,
and that is when you transfer a volume.  If we transfer a volume with
all its snapshots, it makes sense that we should also allow transferring
its backups, after all the original source of the backups no longer
belongs to the owner of the backups.

To summarize, if we are talking about transferring only full backups
with all their dependent incremental backup then I probably won't oppose
the change.

Cheers,
Gorka.


[openstack-dev] Following the new PTI for document build, broken local builds

2018-03-21 Thread Stephen Finucane
tl;dr: Make sure you stop using pbr's autodoc feature before converting
your projects to the new PTI for docs.

There have been a lot of patches converting projects to use the new
Project Testing Interface for building docs [1]. Generally, these make
the following changes:

   1. Move any requirements for building docs or release notes from 'test-
  requirements.txt' to 'doc/requirements.txt'
   2. Modify 'tox.ini' to call 'sphinx-build' instead of 'python setup.py
  build_sphinx'
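
For example, after step 2 the docs environment in 'tox.ini' typically ends up
looking something like this (assuming the usual doc/source layout):

    [testenv:docs]
    deps = -r{toxinidir}/doc/requirements.txt
    commands = sphinx-build -W -b html doc/source doc/build/html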

Once done, the idea is that the gate will be able to start building
docs by calling the 'sphinx-build' executable directly instead of using
the 'build_sphinx' setuptools command. Unfortunately, this doesn't
always do what you think and has resulted in a few now-broken projects
(mostly oslo).

As noted by Monty in a prior openstack-dev post [2], some projects rely
on a pbr extension to the 'build_sphinx' setuptools command which can
automatically run the 'sphinx-apidoc' tool before building docs. This
is enabled by configuring some settings in the '[pbr]' section of the
'setup.cfg' file [3]. To ensure this continued working, the zuul jobs
definitions [4] check for the presence of these settings and build docs
using the legacy 'build_sphinx' command if found. **At no point do the
jobs call the tox job**. As a result, if you convert a project to use
'sphinx-build' in 'tox.ini' without resolving the autodoc issues, you
lose the ability to build docs locally.
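
For reference, the settings in question look roughly like this in 'setup.cfg'
(option names per the pbr docs [3]; the module name is a placeholder):

    [build_sphinx]
    source-dir = doc/source
    build-dir = doc/build
    all_files = 1
    warning-is-error = 1

    [pbr]
    autodoc_index_modules = True
    autodoc_exclude_modules =
        myproject.tests.*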

I've gone through and proposed a couple of reverts to fix projects
we've already broken. However, going forward, there are two things
people should do to prevent issues like this popping up.

 * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections
   from 'setup.cfg' in any patches that aim to convert a project to use
   the new PTI. This will ensure the gate catches any potential
   issues. 
 * In addition, if your project uses the pbr autodoc feature, you
   should either (a) remove these docs from your documentation tree or
   (b) migrate to something else like the 'sphinx.ext.autosummary'
   extension [5]. I aim to post instructions on the latter shortly.
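
For option (b), the switch is mostly a matter of enabling the extension in
'doc/source/conf.py' and letting it generate the stub pages, roughly:

    # doc/source/conf.py
    extensions = [
        'sphinx.ext.autosummary',
        # ... whatever the project already enables ...
    ]
    autosummary_generate = True  # build per-module stub pages at build time

plus an '.. autosummary::' directive (with ':toctree:') somewhere in the docs
tree listing the modules to document.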

If anyone has any questions on the above, feel free to reply here or
contact me on IRC (stephenfin).

Cheers,
Stephen

[1] 
https://review.openstack.org/#/q/topic:updated-pti+(status:open+OR+status:merged)
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-December/125710.html
[3] https://docs.openstack.org/pbr/latest/user/using.html#pbr-setup-cfg
[4] 
https://github.com/openstack-infra/zuul-jobs/blob/d75f5d2b/roles/sphinx/tasks/main.yaml
[5] http://www.sphinx-doc.org/en/stable/ext/autosummary.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Does neutron-server support the main backup redundancy?

2018-03-21 Thread Miguel Angel Ajo Pelayo
You can run as many as you want; generally, HAProxy is used in front of
them to balance the load across the neutron servers.

Also, keep in mind that the DB backend is a single MySQL instance; you can
also distribute that with Galera.
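
For example, a minimal HAProxy stanza for two neutron-server nodes could look
like this (addresses are illustrative):

    listen neutron_api
        bind 192.0.2.10:9696
        balance roundrobin
        server neutron-server-1 192.0.2.11:9696 check
        server neutron-server-2 192.0.2.12:9696 check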

That is the configuration you will get by default when you deploy in HA
with RDO/TripleO or OSP/Director.

On Wed, Mar 21, 2018 at 3:34 AM Kevin Benton  wrote:

> You can run as many neutron server processes as you want in an
> active/active setup.
>
> On Tue, Mar 20, 2018, 18:35 Frank Wang  wrote:
>
>> Hi All,
>>  As far as I know, neutron-server can only run as a single node. In order
>> to improve the reliability of the system, does it support main/backup
>> or active/active redundancy? Any comment would be appreciated.
>>
>> Thanks,
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] The Weekly Owl - 13th Edition

2018-03-21 Thread Raoul Scarazzini
On 20/03/2018 20:16, Emilien Macchi wrote:
> On Tue, Mar 20, 2018 at 9:01 AM, Emilien Macchi wrote:
> +--> Matt is John and ruck is John. Please let them know any new CI
> issue.
> so I double checked and Matt isn't John but in fact he's the rover ;-) 

But Rover is Rover or not?

-- 
Raoul Scarazzini
ra...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cyborg] Separate spec for compute node flows?

2018-03-21 Thread Zhipeng Huang
Hi Sundar,

Zhuli will work on the os-acc spec, and Li Liu will work on the Glance and
metadata one, as we assigned during the PTG. But you are very welcome to reach
out to them and work together if you have the bandwidth :)

On Wed, Mar 21, 2018 at 3:00 PM, Nadathur, Sundar  wrote:

> Hi all,
>
> The Cyborg Nova scheduling specification
> addresses the scheduling aspects alone. There needs to be a separate spec
> to address:
> * Cyborg/Nova interactions in the compute node, incl. the newly proposed
> os-acc library.
> * Programming, including fetching bitstreams from Glance.
> * Bitstream metadata.
>
> Shall I send such a spec while the first one is still in review?
> Regards,
> Sundar
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-Dev] [Neutron] [DragonFlow] Automatic Neighbour Discovery responder for IPv6

2018-03-21 Thread Lihi Wishnitzer
Hi Vivek,

Originally we planned to support only in-cloud VMs; therefore, we do not
use these bits in Dragonflow.
We believe we managed to follow the standard without using these bits.
If this is a bug, we will try to fix it.

Regards,
Lihi


On Tue, Mar 20, 2018 at 8:45 AM, N Vivekanandan  wrote:

> Hi DragonFlow Team,
>
>
>
> We noticed that you are adding support for an automatic responder for
> neighbor solicitation via OpenFlow rules here:
>
> https://review.openstack.org/#/c/412208/
>
>
>
> Can you please let us know which OVS release you are using to test
> this feature?
>
>
>
> We are pursuing Automatic NS Responder in OpenDaylight Controller
> implementation, and we noticed that there are no NXM extensions to manage
> the ‘R’ bit and ’S’ bit correctly.
>
>
>
> From the RFC: https://tools.ietf.org/html/rfc4861
>
>
>
>   R  Router flag.  When set, the R-bit indicates that
>
>  the sender is a router.  The R-bit is used by
>
>  Neighbor Unreachability Detection to detect a
>
>  router that changes to a host.
>
>
>
>   S  Solicited flag.  When set, the S-bit indicates that
>
>  the advertisement was sent in response to a
>
>  Neighbor Solicitation from the Destination address.
>
>  The S-bit is used as a reachability confirmation
>
>  for Neighbor Unreachability Detection.  It MUST NOT
>
>  be set in multicast advertisements or in
>
>  unsolicited unicast advertisements.
>
>
>
> We noticed that this dragonflow rule is being programmed for automatic
> response generation for NS:
>
> icmp6,ipv6_dst=1::1,icmp_type=135
>   actions=load:0x88->NXM_NX_ICMPV6_TYPE[],move:NXM_NX_IPV6_SRC[]->NXM_NX_IPV6_DST[],
>   mod_dl_src:00:11:22:33:44:55,load:0->NXM_NX_ND_SLL[],IN_PORT
>
> above line from spec https://docs.openstack.org/dragonflow/latest/specs/ipv6.html
>
>
>
> However, from the Dragonflow flow rule for the automatic response above, we
> could not see that the R and S bits of the NS response are being managed.
>
>
>
> Can you please clarify if you don’t intend to use ‘R’ and ‘S’ bits at all
> in dragonflow implementation?
>
> Or do you intend to use them, but you weren't able to get NXM extensions
> for them in OVS and so wanted to go ahead without managing those bits
> (as per the RFC)?
>
>
>
> Thanks in advance for your help.
>
>
>
> --
>
> Thanks,
>
>
>
> Vivek
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [Cyborg] Separate spec for compute node flows?

2018-03-21 Thread Nadathur, Sundar

Hi all,

    The Cyborg Nova scheduling specification
addresses the scheduling aspects alone. There needs to be a separate
spec to address:


* Cyborg/Nova interactions in the compute node, incl. the newly proposed 
os-acc library.

* Programming, including fetching bitstreams from Glance.
* Bitstream metadata.

Shall I send such a spec while the first one is still in review?

Regards,
Sundar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev