Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-09 Thread Tony Breeds
On Mon, Apr 09, 2018 at 09:58:28AM -0400, Doug Hellmann wrote:

> Now that projects don't have to match the global requirements list
> entries exactly we should be able to remove caps from within the
> projects and keep caps in the global list for cases like this where we
> know we frequently encounter breaking changes in new releases. The
> changes to support that were part of
> https://review.openstack.org/#/c/555402/

True.  I was trying to add context to why we don't always rely on
upper-constraints.txt to save us.  So yeah we can start working towards
removing the caps per project.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cyborg] Promote Li Liu as new core reviewer

2018-04-09 Thread Rushil Chugh
+1

On Mon, Apr 9, 2018 at 3:13 PM, Nadathur, Sundar 
wrote:

> Agreed! +1
>
> Regards,
> Sundar
>
> Hi Team,
>
> This is my nomination to add Li Liu to the core reviewer
> team. Li Liu has been instrumental in the resource provider data model
> implementation for Cyborg during the Queens release, as well as metadata
> standardization and programming design for Rocky.
>
> His overall stats [0] and current stats [1] for Rocky speak for themselves.
> His patches can be found here [2].
>
> Given the amount of work undergoing for Rocky, it would be great to add
> such an amazing force :)
>
> [0] http://stackalytics.com/?module=cyborg-group=
> person-day=all
> [1] http://stackalytics.com/?module=cyborg-group=
> person-day=rocky
> [2] https://review.openstack.org/#/q/owner:liliueecg%2540gmail.com
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
>


Re: [openstack-dev] [nova] Changes toComputeVirtAPI.wait_for_instance_event

2018-04-09 Thread Chen CH Ji
Could you please share whether this kind of event is sent by the
neutron server or the neutron agent? From the code I found at [1][2],
it looks like the agent itself needs to tell the neutron server that the
device (VIF) is up, and the neutron server then sends a notification
to nova through the REST API, which is in turn consumed by the compute node?

[1]
https://github.com/openstack/neutron/tree/master/neutron/notify_port_active_direct
[2]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L264


Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Matt Riedemann 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   04/10/2018 01:56 AM
Subject:[openstack-dev] [nova] Changes to
ComputeVirtAPI.wait_for_instance_event



As part of a bug fix [1], the internal
ComputeVirtAPI.wait_for_instance_event interface is changing to no
longer accept event names that are strings, and will now require the
(name, tag) tuple form which all of the in-tree virt drivers are already
using.

If you have an out of tree driver that uses this interface, heads up
that you'll need to be using the tuple form if you are not already doing
so.

[1]
https://review.openstack.org/#/c/558059/
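For out-of-tree driver authors, the change boils down to passing (name, tag) tuples instead of preformatted strings. A minimal sketch of what a migrated call site looks like — the helper name below is hypothetical and not part of Nova:

```python
# Illustrative only: mimics the new calling convention for
# ComputeVirtAPI.wait_for_instance_event; names are hypothetical.
def make_instance_event(name, tag=None):
    """Build the (name, tag) tuple form now required instead of a
    plain string event name."""
    if isinstance(name, tuple):
        return name  # already in the required form
    return (name, tag)

# An out-of-tree driver waiting for VIF plug events would build:
events = [make_instance_event('network-vif-plugged', port_id)
          for port_id in ('port-a', 'port-b')]
print(events)
# -> [('network-vif-plugged', 'port-a'), ('network-vif-plugged', 'port-b')]
```

The tuple form lets Nova match an event both by its name and by which resource (e.g. which port) it refers to, which string names could not express unambiguously.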


--

Thanks,

Matt



Re: [openstack-dev] [zun] zun-api error

2018-04-09 Thread Murali B
Hi Hongbin Lu,

After I brought the etcd service up and tried to create a container, I see
the error below and my container is in an error state.

Could you please share whether I need to change any configuration in neutron
for docker kuryr?

ckercfg'] find_config_file
/usr/local/lib/python2.7/dist-packages/docker/utils/config.py:21
2018-04-09 16:47:44.058 41736 DEBUG docker.utils.config
[req-0afc6b91-e50e-4a5a-a673-c2cecd6f2986 - - - - -] No config file found
find_config_file
/usr/local/lib/python2.7/dist-packages/docker/utils/config.py:28
2018-04-09 16:47:44.345 41736 ERROR zun.compute.manager
[req-0afc6b91-e50e-4a5a-a673-c2cecd6f2986 - - - - -] Error occurred while
calling Docker start API: Docker internal error: 500 Server Error: Internal
Server Error ("IpamDriver.RequestAddress: Requested ip address
{'subnet_id': u'fb768eca-8ad9-4afc-99f7-e13b9c36096e', 'ip_address':
u'3.3.3.12'} already belongs to a bound Neutron port:
401a5599-2309-482e-b100-e2317c4118cf").: DockerError: Docker internal
error: 500 Server Error: Internal Server Error ("IpamDriver.RequestAddress:
Requested ip address {'subnet_id': u'fb768eca-8ad9-4afc-99f7-e13b9c36096e',
'ip_address': u'3.3.3.12'} already belongs to a bound Neutron port:
401a5599-2309-482e-b100-e2317c4118cf").
2018-04-09 16:47:44.372 41736 DEBUG oslo_concurrency.lockutils
[req-0afc6b91-e50e-4a5a-a673-c2cecd6f2986 - - - - -] Lock
"b861d7cc-3e18-4037-8eaf-c6d0076b02a5" released by
"zun.compute.manager.do_container_create" :: held 5.163s inner
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285
2018-04-09 16:47:48.493 41610 DEBUG eventlet.wsgi.server [-] (41610)
accepted ('10.11.142.2', 60664) server /usr/lib/python2.7/dis

Thanks
-Murali
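Not part of the original report, but a quick way to pull the conflicting port UUID out of a traceback like the one above, so it can then be inspected with `openstack port show` (a sketch; the message text is taken from the log lines in this thread):

```python
import re

# The DockerError text as it appears in the log above (abridged).
LOG = ("Docker internal error: 500 Server Error: Internal Server Error "
       '("IpamDriver.RequestAddress: Requested ip address '
       "{'subnet_id': u'fb768eca-8ad9-4afc-99f7-e13b9c36096e', "
       "'ip_address': u'3.3.3.12'} already belongs to a bound Neutron port: "
       '401a5599-2309-482e-b100-e2317c4118cf").')

# A port UUID is five hex groups (8-4-4-4-12).
UUID = r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'
match = re.search(r'bound Neutron port:\s*(%s)' % UUID, LOG)
port_id = match.group(1) if match else None
print(port_id)  # -> 401a5599-2309-482e-b100-e2317c4118cf
```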

On Fri, Apr 6, 2018 at 11:00 AM, Murali B  wrote:

> Hi Hongbin Lu,
>
> Thank you. After changing the endpoint it worked. Actually I was also using
> the magnum service, and I had registered the magnum service as type
> "container"; that is why it was going to 9511 instead of 9517.
> After I corrected it, everything worked.
>
> Thanks
> -Murali
>
> On Fri, Apr 6, 2018 at 8:45 AM, Hongbin Lu  wrote:
>
>> Hi Murali,
>>
>> It looks like your zunclient was sending API requests to
>> http://10.11.142.2:9511/v1/services, which doesn't seem to be the right
>> API endpoint. According to the Keystone endpoint you configured, the API
>> endpoint of Zun should be http://10.11.142.2:9517/v1/services (it is on
>> port 9517 instead of 9511).
>>
>> What confused the zunclient is the endpoint's type you configured in
>> Keystone. Zun expects an endpoint of type "container" but it was configured
>> to be "zun-container" in your setup. I believe the error will be resolved
>> if you can update the Zun endpoint from type "zun-container" to type
>> "container". Please give it a try and let us know.
>>
>> Best regards,
>> Hongbin
>>
>> On Thu, Apr 5, 2018 at 7:27 PM, Murali B  wrote:
>>
>>> Hi Hongbin,
>>>
>>> Thank you for your help
>>>
>>> As per our discussion, here is the output for my current API on Pike.
>>> I am not sure which version of the zun client I should use for Pike.
>>>
>>> root@cluster3-2:~/python-zunclient# zun service-list
>>> ERROR: Not Acceptable (HTTP 406) (Request-ID:
>>> req-be69266e-b641-44b9-9739-0c2d050f18b3)
>>> root@cluster3-2:~/python-zunclient# zun --debug service-list
>>> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-keycloak
>>> = vitrageclient.auth:VitrageKeycloakLoader')
>>> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-noauth
>>> = vitrageclient.auth:VitrageNoAuthLoader')
>>> DEBUG (extension:180) found extension EntryPoint.parse('noauth =
>>> cinderclient.contrib.noauth:CinderNoAuthLoader')
>>> DEBUG (extension:180) found extension EntryPoint.parse('v2token =
>>> keystoneauth1.loading._plugins.identity.v2:Token')
>>> DEBUG (extension:180) found extension EntryPoint.parse('none =
>>> keystoneauth1.loading._plugins.noauth:NoAuth')
>>> DEBUG (extension:180) found extension EntryPoint.parse('v3oauth1 =
>>> keystoneauth1.extras.oauth1._loading:V3OAuth1')
>>> DEBUG (extension:180) found extension EntryPoint.parse('admin_token =
>>> keystoneauth1.loading._plugins.admin_token:AdminToken')
>>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcauthcode
>>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuth
>>> orizationCode')
>>> DEBUG (extension:180) found extension EntryPoint.parse('v2password =
>>> keystoneauth1.loading._plugins.identity.v2:Password')
>>> DEBUG (extension:180) found extension EntryPoint.parse('v3samlpassword
>>> = keystoneauth1.extras._saml2._loading:Saml2Password')
>>> DEBUG (extension:180) found extension EntryPoint.parse('v3password =
>>> keystoneauth1.loading._plugins.identity.v3:Password')
>>> DEBUG (extension:180) found extension EntryPoint.parse('v3adfspassword
>>> = keystoneauth1.extras._saml2._loading:ADFSPassword')
>>> DEBUG (extension:180) found extension 
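Stepping back to the endpoint fix Hongbin describes above: the type change can be scripted roughly as follows. The openstack CLI calls are shown as comments because they need admin credentials and a live cloud, and the host/port values are the ones from this thread:

```shell
# Recreate the Zun service under the type Zun expects ("container"),
# then point an endpoint at the Zun API port.
#   openstack service delete zun-container
#   openstack service create --name zun container
#   openstack endpoint create --region RegionOne zun public \
#       http://10.11.142.2:9517/v1
ZUN_HOST=10.11.142.2
ZUN_PORT=9517   # Zun listens on 9517; 9511 is Magnum's port
echo "expected Zun endpoint: http://${ZUN_HOST}:${ZUN_PORT}/v1"
```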

[openstack-dev] [ironic] this week's priorities and subteam reports

2018-04-09 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)


Weekly priorities
-
- Remaining Rescue patches
- https://review.openstack.org/#/c/499050/  - Fix ``agent`` deploy 
interface to call ``boot.prepare_instance``
- https://review.openstack.org/#/c/546919/ - Prior fix for unrescuing 
with whole disk image
- https://review.openstack.org/#/c/528699/ - Tempest tests with nova (This 
can land after nova work is done. But, it should be ready to get the nova patch 
reviewed.) Needs Rebase.
- Management interface boot_mode change
- https://review.openstack.org/#/c/526773/
- Bios interface support
- https://review.openstack.org/#/c/511162/
- https://review.openstack.org/#/c/528609/
- db api - https://review.openstack.org/#/c/511402/
- Bug fixes:
- https://review.openstack.org/#/c/556748
- Storyboard related changes
- https://review.openstack.org/556671
- https://review.openstack.org/556649
- https://review.openstack.org/556645
- https://review.openstack.org/556644
- https://review.openstack.org/#/c/556618/ Needs Revision

For next week (TheJulia):
https://review.openstack.org/#/c/558027/
https://review.openstack.org/#/c/557850/

Vendor priorities
-
cisco-ucs:
Patches in works for SDK update, but not posted yet, currently rebuilding 
third party CI infra after a disaster...
idrac:
RFE and first several patches for adding UEFI support will be posted by 
Tuesday, 1/9
ilo:
None
irmc:
None - a few items are work in progress

oneview:
None at this time - No subteam at present.

xclarity:
None at this time - No subteam at present.

Subproject priorities
-
bifrost:

ironic-inspector (or its client):

networking-baremetal:

networking-generic-switch:

sushy and the redfish driver:


Bugs (dtantsur, vdrok, TheJulia)

- (TheJulia) Ironic has moved to Storyboard. Dtantsur has indicated he will 
update the tool that generates these stats.
- Stats (diff between  12 Mar 2018 and 19 Mar 2018)
- Ironic: 225 bugs (+14) + 250 wishlist items (+2). 15 new (+10), 152 in 
progress, 1 critical, 36 high (+3) and 26 incomplete (+2)
- Inspector: 15 bugs (+1) + 26 wishlist items. 1 new (+1), 14 in progress, 0 
critical, 3 high and 4 incomplete
- Nova bugs with Ironic tag: 14 (-1). 1 new, 0 critical, 0 high
- critical:
- sushy: https://bugs.launchpad.net/sushy/+bug/1754514 (basic auth broken 
when SessionService is not present)
- Queens backport release: https://review.openstack.org/#/c/558799/  
Pending.
- the dashboard was abruptly deleted and needs a new home :(
- use it locally with `tox -erun` if you need to
- HIGH bugs with patches to review:
- Clean steps are not tested in gate 
https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic 
standalone test https://review.openstack.org/#/c/429770/15
- Needs to be reproposed to the ironic tempest plugin repository.
- prepare_instance() is not called for whole disk images with 'agent' deploy 
interface https://bugs.launchpad.net/ironic/+bug/1713916:
- Fix ``agent`` deploy interface to call ``boot.prepare_instance`` 
https://review.openstack.org/#/c/499050/
- (TheJulia) Currently WF-1, as revision is required for deprecation.

Priorities
==

Deploy Steps (rloo, mgoddard)
-
- status as of 9 April 2018:
- spec for deployment steps framework has merged: 
https://review.openstack.org/#/c/549493/
- waiting for code from rloo, no timeframe yet

BIOS config framework(zshi, yolanda, mgoddard, hshiina)
---
- status as of 9 April 2018:
- Spec has merged: https://review.openstack.org/#/c/496481/
- List of ordered patches:
- BIOS Settings: Add DB model: https://review.openstack.org/511162
need to fix unit tests and merge conflict
- Add bios_interface db field https://review.openstack.org/528609   
2x+3
- BIOS Settings: Add DB API: https://review.openstack.org/511402
- BIOS Settings: Add RPC object https://review.openstack.org/511714
- Add BIOSInterface to base driver class 
https://review.openstack.org/507793
- BIOS Settings: Add BIOS caching: https://review.openstack.org/512200
- Add Node BIOS support - REST API: https://review.openstack.org/512579

Conductor Location Awareness (jroll, dtantsur)
--
- (April 9) started spec, about halfway done 
https://review.openstack.org/#/c/559420/

Reference architecture guide (dtantsur, jroll)
--
- story: https://storyboard.openstack.org/#!/story/2001745
- status as of 9 

Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-09 Thread Ben Nemec



On 04/09/2018 01:12 PM, Ben Nemec wrote:



On 04/06/2018 04:02 AM, Jens Harbott wrote:

2018-04-05 19:26 GMT+00:00 Matthew Thode :

On 18-04-05 20:11:04, Graham Hayes wrote:

On 05/04/18 16:47, Matthew Thode wrote:
eventlet-0.22.1 has been out for a while now, we should try and use it.

Going to be fun times.

I have a review projects can depend upon if they wish to test.
https://review.openstack.org/533021


It looks like we may have an issue with oslo.service -
https://review.openstack.org/#/c/559144/ is failing gates.

Also - what is the dance for this to get merged? It doesn't look like we
can merge this while oslo.service has the old requirement restrictions.



The dance is as follows.

0. provide review for projects to test new eventlet version
    projects using eventlet should make backwards compat code changes at
    this time.


But this step is currently failing. Keystone doesn't even start when
eventlet-0.22.1 is installed, because loading oslo.service fails with
its pkg definition still requiring the capped eventlet:

http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482 



So it looks like we need to have an uncapped release of oslo.service
before we can proceed here.


I've proposed a patch[1] to uncap eventlet in oslo.service, but it's 
failing the unit tests[2].  I'll look into it, but I thought I'd provide 
an update in the meantime.


Oh, the unit test failures are unrelated.  Apparently the unit tests 
have been failing in oslo.service for a while.  dims has a patch up at 
https://review.openstack.org/#/c/559831/ that looks to be addressing the 
problem, although it's also failing the unit tests. :-/




1: https://review.openstack.org/559800
2: 
http://logs.openstack.org/00/559800/1/check/openstack-tox-py27/cef8fcb/job-output.txt.gz 





Re: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-09 Thread Matt Riedemann

On 4/9/2018 4:58 AM, Kashyap Chamarthy wrote:

Keep in mind that Matt has a tendency to sometimes unfairly
over-simplify others' views ;-).  More seriously, c'mon Matt; I went out
of my way to spend time learning about Debian's packaging structure and
trying to get the details right by talking to folks on
#debian-backports.  And as you may have seen, I marked the patch[*] as
"RFC", and repeatedly said that I'm working on an agreeable lowest
common denominator.


Sorry Kashyap, I didn't mean to offend. I was hoping "delicious bugs" 
would have made that obvious but I can see how it's not. You've done a 
great, thorough job on sorting this all out.


Since I didn't know what "RFC" meant until googling it today, how about 
dropping that from the patch so I can +2 it?


--

Thanks,

Matt



Re: [openstack-dev] PBR and Pipfile

2018-04-09 Thread Monty Taylor

On 04/08/2018 04:10 AM, Gaetan wrote:

Hello OpenStack dev community,

I am currently working on the support of Pipfile for PBR ([1]), and I 
also actively follow the work on pipenv, which is now officially 
supported by PyPA.


Awesome - welcome! This is a fun topic ...

There has recently been an intense discussion about the difficulties of 
Python library development, and about how to spread good practices [2] in 
the pipenv community and enhance its documentation.

As a user of PBR, and a big fan of it, I am trying to bridge the gap 
between pbr and pipenv (with [1]), but I am also interested in the feedback 
of Python developers in OpenStack who may have much more experience using 
PBR, and more generally packaging Python libraries, than me.


Great - I'll comment more on this a little later.

The main point is that packaging an application is quite easy, or at 
least understandable by newcomers, using `requirements.txt` or 
`Pipfile` + `Pipfile.lock` with pipenv. At least it is easily "teachable".
Packaging a library is harder, and requires explaining why 
`requirements.txt` (or `Pipfile`) does not work by default. Some "advanced" 
documentation exists, but it is still hard to understand why Python ended up 
with something this complex for libraries ([3]).
One needs to ensure `install_requires` declares the dependencies so that 
pip can find them during transitive dependency installation (that is, 
installing the dependencies of a given dependency). PBR helps on this 
point, but some people do not want its other features.


In general, as you might imagine, pbr has a difference of opinion with 
the pypa community about requirements.txt and install_requires. I'm 
going to respond from my POV about how things should work - and how I 
believe they MUST work for a project such as OpenStack to be able to 
operate.


There are actually three different relevant use cases here, with some 
patterns available to draw from. I'm going to spell them out to just 
make sure we're on the same page.


* Library
* Application
* Suite of Coordinated Applications

A Library needs to declare the requirements it has along with any 
relevant ranges. Such as "this library requires 'foo' at at least 
version 2 but less than version 4". Since it's a library it needs to be 
able to handle being included in more than one application that may have 
different sets of requirements, so as a library it should attempt to 
have as wide a set of acceptable requirements as possible - but it 
should declare if there are versions of requirements it does not work 
with. In Pipfile world, this means "commit Pipfile but not 
Pipfile.lock". In pbr+requirements.txt it means "commit the 
requirements.txt with ranges and not == declared."
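Concretely, the library-style declaration described above looks like this (package names are illustrative, not from any real project):

```
# pbr library: commit requirements.txt with ranges, never == pins
foo>=2.0,!=2.3.0,<4.0

# pipenv library: commit Pipfile (not Pipfile.lock)
[packages]
foo = ">=2.0,<4.0"
```

The `!=` exclusion is the usual way a library marks an individual known-bad release while keeping the rest of the range open.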


An Application isn't included in other things; it's the end point. So 
declaring a specific set of versions the application is known to work 
with, in addition to the logical requirement range, is considered a best 
practice. In Pipfile world, this is "commit both Pipfile and 
Pipfile.lock". There isn't a direct analog for pbr+requirements.txt, 
although you could simulate this by executing pip with a -c 
constraints.txt file.


A Suite of Coordinated Applications (like OpenStack) needs to 
communicate the specific versions the applications have been tested to 
work with, but they need to be the same so that all of the applications 
can be deployed side-by-side on the same machine without conflict. In 
OpenStack we do this by keeping a centrally managed constraints file [1] 
that our CI system adds to the pip install line when installing any of 
the OpenStack projects. A person who wants to install OpenStack from pip 
can also choose to do so using the upper-constraints.txt file and they 
can know they'll be getting the versions of dependencies we tested with. 
There is also no direct support for making this easier in pbr. For 
Pipfile, I believe we'd want to see is adding support for --constraints 
to pipenv install - so that we can update our Pipfile.lock file for each 
application in the context of the global constraints file. This can be 
simulated today without any support from pipenv directly like this:


  pipenv install
  $(pipenv --venv)/bin/pip install -U \
      -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt \
      -r requirements.txt
  pipenv lock

There is also works on PEP around pyproject.toml ([4]), which looks 
quite similar to PBR's setup.cfg. What do you think about it?


It's a bit different. There is also a philosophical disagreement about 
the use of TOML that's not worth going into here - but from a pbr 
perspective I'd like to minimize use of pyproject.toml to the bare 
minimum needed to bootstrap things into pbr's control. In the first phase 
I expect to replace our current setup.py boilerplate:


setuptools.setup(
setup_requires=['pbr'],
pbr=True)

with:

setuptools.setup(pbr=True)

and add pyproject.toml files with:

[build-system]
requires = ["setuptools", 

Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Matt Riedemann

On 4/9/2018 1:00 PM, Duncan Thomas wrote:

Hopefully this flow means we can do rebuild root filesystem from
snapshot/backup too? It seems rather artificially limiting to only do
restore-from-image. I'd expect restore-from-snap to be a more common
use case, personally.


Hmm, now you've got me thinking about image-defined block device 
mappings, which is something you'd have if you snapshot a volume-backed 
instance: the resulting image snapshot carries metadata about the volume 
snapshot, which can later be used to create (or rebuild?) a server.


Tempest has a scenario test for the boot from volume case here:

https://review.openstack.org/#/c/555495/

I should note that even if you did snapshot a volume-backed server and 
then used that image to rebuild another non-volume-backed server, nova 
won't even look at the block_device_mapping_v2 metadata in the snapshot 
image during rebuild, it doesn't treat it like boot from volume does 
where nova uses the image-defined BDM to create a new volume-backed 
instance.


And now that I've said that, I wonder if people would expect the same 
semantics for rebuild as boot from volume with those types of 
images...it makes my head hurt. Maybe mdbooth would like to weigh in on 
this given he's present in this thread.


--

Thanks,

Matt



Re: [openstack-dev] [horizon][xstatic]How to handle xstatic if upstream files are modified

2018-04-09 Thread Ivan Kolodyazhny
Hi, Xinni,

I absolutely agree with Radomir. We should keep xstatic files without
modifications. We don't know whether they are used outside of OpenStack,
so they should stay the same as the upstream NPM packages.


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Mon, Apr 9, 2018 at 12:32 PM, Radomir Dopieralski  wrote:

> The whole idea about xstatic files is that they are generic, not specific
> to Horizon or OpenStack, usable by other projects that need those static
> files. In fact, at the time we started using xstatic, it was being used by
> the MoinMoin wiki project (which is now dead, sadly). The modifications you
> made are very specific to your usecase and would make it impossible to
> reuse the packages by other applications (or even by other Horizon
> plugins). The whole idea of a library is that you are using it as it is
> provided, and not modifying it.
>
> We generally try to use all the libraries as they are, and if there are
> any modifications necessary, we push them upstream, to the original
> library. Otherwise there would be quite a bit of maintenance overhead
> necessary to keep all our downstream patches. When considerable
> modification is necessary that can't be pushed upstream, we fork the
> library either into its own repository, or include it in the repository of
> the application that is using it.
>
> On Mon, Apr 9, 2018 at 2:54 AM, Xinni Ge  wrote:
>
>> Hello, team.
>>
>> Sorry for talking about xstatic repo for so many times.
>>
>> I didn't realize xstatic repositories should be provided with exactly the
>> same files as upstream, and I should have talked about this at the very start.
>>
>> I modified several upstream files because some of them couldn't be
>> used directly as I expected.
>>
>> For example,  {{ }} are used in some original files as template tags, but
>> Horizon adopts {$ $} in angular module, so I modified them to be recognized
>> properly.
>>
>> Another major modification is that css files were converted into scss
>> files to solve a css import issue previously.
>> Besides, after collecting statics, some png file paths in the css could
>> not be referenced properly and showed up as 404 errors, so I also modified
>> the css itself to handle these issues.
>>
>> I will recheck all the un-matched xstatic repositories and try to replace
>> the files with upstream files as much as I can.
>> But if I really have to modify some original files, is it acceptable to
>> still use them as embedded files with the license info appearing at the top?
>>
>>
>> Best Regards,
>> Xinni Ge
>>


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Sean McGinnis
On Mon, Apr 09, 2018 at 07:00:56PM +0100, Duncan Thomas wrote:
> Hopefully this flow means we can do rebuild root filesystem from
> snapshot/backup too? It seems rather artificially limiting to only do
> restore-from-image. I'd expect restore-from-snap to be a more common
> use case, personally.
> 

That could get tricky. We only support reverting to the last snapshot if we
reuse the same volume. Otherwise, we can create a volume from a snapshot, but
I don't think it's often that the first thing a user does on initial creation
of a boot volume is create a snapshot. If the volume was created from the
image cache, and the backend creates those cached volumes by using a snapshot,
then that might be an option.

But these are a lot of ifs, so that seems like it would make the logic for this
much more complicated.

Maybe a phase II optimization we can look into?



Re: [openstack-dev] [cyborg] Promote Li Liu as new core reviewer

2018-04-09 Thread Nadathur, Sundar

Agreed! +1

Regards,
Sundar


Hi Team,

This is my nomination to add Li Liu to the core reviewer team. Li Liu 
has been instrumental in the resource provider data model implementation 
for Cyborg during the Queens release, as well as metadata standardization 
and programming design for Rocky.


His overall stats [0] and current stats [1] for Rocky speak for 
themselves. His patches can be found here [2].


Given the amount of work undergoing for Rocky, it would be great to 
add such an amazing force :)


[0] 
http://stackalytics.com/?module=cyborg-group=person-day=all
[1] 
http://stackalytics.com/?module=cyborg-group=person-day=rocky

[2] https://review.openstack.org/#/q/owner:liliueecg%2540gmail.com

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com 
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu 
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado




Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-09 Thread Ben Nemec



On 04/06/2018 04:02 AM, Jens Harbott wrote:

2018-04-05 19:26 GMT+00:00 Matthew Thode :

On 18-04-05 20:11:04, Graham Hayes wrote:

On 05/04/18 16:47, Matthew Thode wrote:

eventlet-0.22.1 has been out for a while now, we should try and use it.

Going to be fun times.

I have a review projects can depend upon if they wish to test.
https://review.openstack.org/533021


It looks like we may have an issue with oslo.service -
https://review.openstack.org/#/c/559144/ is failing gates.

Also - what is the dance for this to get merged? It doesn't look like we
can merge this while oslo.service has the old requirement restrictions.



The dance is as follows.

0. Provide a review for projects to test the new eventlet version.
Projects using eventlet should make backwards-compatible code changes
at this time.


But this step is currently failing. Keystone doesn't even start when
eventlet-0.22.1 is installed, because loading oslo.service fails with
its pkg definition still requiring the capped eventlet:

http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482

So it looks like we need to have an uncapped release of oslo.service
before we can proceed here.


I've proposed a patch[1] to uncap eventlet in oslo.service, but it's 
failing the unit tests[2].  I'll look into it, but I thought I'd provide 
an update in the meantime.


1: https://review.openstack.org/559800
2: 
http://logs.openstack.org/00/559800/1/check/openstack-tox-py27/cef8fcb/job-output.txt.gz


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Duncan Thomas
Hopefully this flow means we can rebuild the root filesystem from a
snapshot/backup too? It seems rather artificially limiting to only do
restore-from-image. I'd expect restore-from-snap to be a more common
use case, personally.

On 9 April 2018 at 09:51, Gorka Eguileor  wrote:
> On 06/04, Matt Riedemann wrote:
>> On 4/6/2018 5:09 AM, Matthew Booth wrote:
>> > I think you're talking at cross purposes here: this won't require a
>> > swap volume. Apart from anything else, swap volume only works on an
>> > attached volume, and as previously discussed Nova will detach and
>> > re-attach.
>> >
>> > Gorka, the Nova api Matt is referring to is called volume update
>> > externally. It's the operation required for live migrating an attached
>> > volume between backends. It's called swap volume internally in Nova.
>>
>> Yeah I was hoping we were just having a misunderstanding of what 'swap
>> volume' in nova is, which is the blockRebase for an already attached volume
>> to the guest, called from cinder during a volume retype or migration.
>>
>> As for the re-image thing, nova would be detaching the volume from the guest
>> prior to calling the new cinder re-image API, and then re-attach to the
>> guest afterward - similar to how shelve and unshelve work, and for that
>> matter how rebuild works today with non-root volumes.
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Hi,
>
> Thanks for the clarification.  When I was talking about "swapping" I was
> referring to the fact that Nova will have to not only detach the volume
> locally using OS-Brick, but it will also need to use new connection
> information to do the attach after the volume has been re-imaged.
>
> As I see it, the process would look something like this:
>
> - Nova detaches volume using OS-Brick
> - Nova calls Cinder re-image passing the node's info (like we do when
>   attaching a new volume)
> - Cinder would:
>   - Ensure only that node is connected to the volume
>   - Terminate connection to the original volume
>   - If we can do optimized volume creation:
> - If encrypted volume we create a copy of the encryption key on
>   Barbican or copy the ID field from the DB and ensure we don't
>   delete the Barbican key on the delete.
> - Create new volume from image
> - Swap DB fields to preserve the UUID
> - Delete original volume
>   - If it cannot do optimized volume creation:
> - Initialize+Attach volume to Cinder node
> - DD the new image into the volume
> - Detach+Terminate volume
>   - Initialize connection for the new volume to the Nova node
>   - Return connection information to the volume
> - Nova attaches volume with OS-Brick using returned connection
>   information.
>
> So I agree, it's not a blockRebase operation, just a change in the
> volume that is used.
>
> Regards,
> Gorka.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Changes to ComputeVirtAPI.wait_for_instance_event

2018-04-09 Thread Matt Riedemann
As part of a bug fix [1], the internal 
ComputeVirtAPI.wait_for_instance_event interface is changing to no 
longer accept event names that are strings, and will now require the 
(name, tag) tuple form which all of the in-tree virt drivers are already 
using.


If you have an out of tree driver that uses this interface, heads up 
that you'll need to be using the tuple form if you are not already doing so.


[1] https://review.openstack.org/#/c/558059/
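As a rough illustration of the calling convention (the helper name here is hypothetical; only the (name, tag) tuple form itself is from the message above):

```python
# Hypothetical sketch of the interface change: event identifiers move
# from bare strings to (name, tag) tuples.

def normalize_event(event):
    """Reject bare-string event names; require a (name, tag) tuple."""
    if isinstance(event, str):
        raise TypeError(
            "string event names are no longer accepted; "
            "use a (name, tag) tuple instead")
    name, tag = event  # must already be a 2-tuple
    return name, tag

# Old style (now rejected):
#   wait_for_instance_event(instance, ['network-changed'])
# New style:
#   wait_for_instance_event(instance, [('network-vif-plugged', port_id)])
```

Out-of-tree drivers would make the analogous change at each call site.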

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Forum - Post your selected topics now

2018-04-09 Thread Thierry Carrez
Csatari, Gergely (Nokia - HU/Budapest) wrote:

> There are two lists of etherpads for forum brainstorming in 
> https://wiki.openstack.org/wiki/Forum/Vancouver2018 and there is 
> http://forumtopics.openstack.org/ .
> 
> Is my understanding correct that ultimately all ideas should go to 
> http://forumtopics.openstack.org/ ?

Yes!

The process recommends that each workgroup uses etherpads to brainstorm
ideas and converge to a set of sessions they want to propose, and then
someone on that group can file the proposed set.

(The idea being to foster a discussion early and reduce duplicate /
overlapping proposals)

-- 
Thierry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] Re-Reminder on the state of WSME

2018-04-09 Thread Ben Nemec



On 04/09/2018 07:22 AM, Chris Dent wrote:


A little over two years ago I sent a reminder that WSME is not being
actively maintained:

 
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088658.html


Today I was reminded of this because a random (typo-related)
patchset demonstrated that the tests were no longer passing and
fixing them is enough of a chore that I (at least temporarily)
marked one test as an expected failure.

     https://review.openstack.org/#/c/559717/

The following projects appear to still use WSME:

     aodh
     blazar
     cloudkitty
     cloudpulse
     cyborg
     glance
     gluon
     iotronic
     ironic
     magnum
     mistral
     mogan
     octavia
     panko
     qinling
     radar
     ranger
     searchlight
     solum
     storyboard
     surveil
     terracotta
     watcher

Most of these are using the 'types' handling in WSME and sometimes
the pecan extension, and not the (potentially broken) Flask
extension, so things should be stable.

However: nobody is working on keeping WSME up to date. It is not a
good long term investment.


What would be the recommended alternative, either for new work or as a 
migration path for existing projects?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Rocky forum topics brainstorming

2018-04-09 Thread melanie witt

Hey everyone,

Let's collect forum topic brainstorming ideas for the Forum sessions in 
Vancouver in this etherpad [0]. Once we've brainstormed, we'll select 
and submit our topic proposals for consideration at the end of this 
week. The deadline for submissions is Sunday April 15.


Thanks,
-melanie

[0] https://etherpad.openstack.org/p/YVR-nova-brainstorming

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Changes to Zuul role checkouts

2018-04-09 Thread David Moreau Simard
If you're not familiar with the "override-checkout" configuration, you
can find the documentation about it here [1] and some example usage
here [2].

[1]: https://zuul-ci.org/docs/zuul/user/config.html#attr-job.override-checkout
[2]: http://codesearch.openstack.org/?q=override-checkout

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]


On Mon, Apr 9, 2018 at 12:55 PM, James E. Blair  wrote:
> Hi,
>
> We recently fixed a subtle but important bug related to how Zuul checks
> out repositories it uses to find Ansible roles for jobs.
>
> This may result in a behavior change, or even an error, for jobs which
> use roles defined in projects with multiple branches.
>
> Previously, Zuul would (with some exceptions) generally check out the
> 'master' branch of any repository which appeared in the 'roles:' stanza
> in the job definition.  Now Zuul will follow its usual procedure of
> trying to find the most appropriate branch to check out.  That means it
> tries the project override-checkout branch first, then the job
> override-checkout branch, then the branch of the change, and finally the
> default branch of the project.
>
> This should produce more predictable behavior which matches the
> checkouts of all other projects involved in a job.
>
> If you find that the wrong branch of a role is being checked out,
> depending on circumstances, you may need to set a job or project
> override-checkout value to force the correct one, or you may need to
> backport a role to an older branch.
>
> If you encounter any problems related to this, please chat with us in
> #openstack-infra.
>
> Thanks,
>
> Jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Changes to Zuul role checkouts

2018-04-09 Thread James E. Blair
Hi,

We recently fixed a subtle but important bug related to how Zuul checks
out repositories it uses to find Ansible roles for jobs.

This may result in a behavior change, or even an error, for jobs which
use roles defined in projects with multiple branches.

Previously, Zuul would (with some exceptions) generally check out the
'master' branch of any repository which appeared in the 'roles:' stanza
in the job definition.  Now Zuul will follow its usual procedure of
trying to find the most appropriate branch to check out.  That means it
tries the project override-checkout branch first, then the job
override-checkout branch, then the branch of the change, and finally the
default branch of the project.
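The fallback order described above can be sketched as a simple helper (hypothetical code, not Zuul's actual implementation):

```python
def pick_role_branch(available_branches, project_override=None,
                     job_override=None, change_branch=None,
                     default_branch='master'):
    """Return the first candidate branch that exists in the repo.

    Mirrors the priority described above: project override-checkout,
    then job override-checkout, then the branch of the change, and
    finally the project's default branch.
    """
    for candidate in (project_override, job_override,
                      change_branch, default_branch):
        if candidate and candidate in available_branches:
            return candidate
    raise ValueError("no suitable branch found")
```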

This should produce more predictable behavior which matches the
checkouts of all other projects involved in a job.

If you find that the wrong branch of a role is being checked out,
depending on circumstances, you may need to set a job or project
override-checkout value to force the correct one, or you may need to
backport a role to an older branch.

If you encounter any problems related to this, please chat with us in
#openstack-infra.

Thanks,

Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][nova-powervm][pyghmi][solum][trove] Switching to cryptography from pycrypto

2018-04-09 Thread Jim Rollenhagen
On Mon, Apr 2, 2018 at 8:26 AM, Jim Rollenhagen 
wrote:

> On Sat, Mar 31, 2018 at 7:24 PM, Matthew Thode 
> wrote:
>
>> Here's the current status.  I'd like to ask the projects what's keeping
>> them from removing pycrypto in favor of a maintained library.
>>
>> pyghmi:
>>   - (merge conflict) https://review.openstack.org/#/c/331828
>>   - (merge conflict) https://review.openstack.org/#/c/545465
>>   - (doesn't change the import) https://review.openstack.org/#/c/545182
>
>
> Looks like py26 support might be a blocker here. While we've brought
> pyghmi into the ironic project, it's still a project mostly built and
> maintained
> by Jarrod, and he has customers outside of OpenStack that depend on it.
> The ironic team will have to discuss this with Jarrod and find a good path
> forward.
>
> My initial thought is that we need to move forward on this, so
> perhaps we can release this change as a major version, and keep a py26
> branch that can be released on the previous minor version for the people
> that need this on 2.6. Thoughts?
>

I reached out to Jarrod off-list and sounds like this is roughly the plan:

> FWIW, I did at least merge a change to work with cryptodomex and moved
pyghmi to that when available (I could not discern a way to have
requirements allow one of multiple choices).
>
> I thought about cryptodome, but that breaks paramiko in that environment.
>
> I’ll probably do a 1.1.0 that uses cryptography, and continue 1.0 with
pycrypto/pycryptodomex.

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] placement update 18-14

2018-04-09 Thread Chris Dent

On Fri, 6 Apr 2018, Chris Dent wrote:


* Eric and I discussed earlier in the week that it might be a good
 time to start an #openstack-placement IRC channel, for two main
 reasons: break things up so as to limit the crosstalk in the often
 very busy #openstack-nova channel and to lend a bit of momentum
 for going in that direction. Is this okay with everyone? If not,
 please say so, otherwise I'll make it happen soon.


After confirmation in today's scheduler meeting this has been done.
#openstack-placement now exists, is registered, and various *bot
additions are in progress:

https://review.openstack.org/559768
https://review.openstack.org/559769
http://p.anticdent.org/logs/openstack-placement


--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Matt Riedemann

On 4/9/2018 3:51 AM, Gorka Eguileor wrote:

As I see it, the process would look something like this:

- Nova detaches volume using OS-Brick
- Nova calls Cinder re-image passing the node's info (like we do when
   attaching a new volume)
- Cinder would:
   - Ensure only that node is connected to the volume
   - Terminate connection to the original volume
   - If we can do optimized volume creation:
 - If encrypted volume we create a copy of the encryption key on
   Barbican or copy the ID field from the DB and ensure we don't
   delete the Barbican key on the delete.
 - Create new volume from image
 - Swap DB fields to preserve the UUID
 - Delete original volume
   - If it cannot do optimized volume creation:
 - Initialize+Attach volume to Cinder node
 - DD the new image into the volume
 - Detach+Terminate volume
   - Initialize connection for the new volume to the Nova node
   - Return connection information to the volume
- Nova attaches volume with OS-Brick using returned connection
   information.

So I agree, it's not a blockRebase operation, just a change in the
volume that is used.


Yeah we're on the same page with respect to the high level changes on 
the nova side.
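Purely as a sketch of that sequencing (every name below is a hypothetical stand-in for the real Nova/Cinder internals; the point is the ordering, not the API):

```python
def reimage_volume(nova, cinder, instance, volume_id, image_id):
    """Happy-path ordering of the re-image flow discussed above."""
    # 1. Nova detaches the volume locally (OS-Brick).
    nova.detach_volume(instance, volume_id)
    # 2. Nova asks Cinder to re-image, passing connector info so Cinder
    #    can terminate the old connection and initialize a new one.
    connection_info = cinder.reimage(
        volume_id, image_id, connector=nova.get_connector(instance))
    # 3. Nova re-attaches using the *new* connection information, since
    #    the backing volume may have changed under the same UUID.
    nova.attach_volume(instance, volume_id, connection_info)
```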


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-09 Thread Lucas Alvares Gomes
Hi,

> Another idea is to modify test that it will:
> 1. Check how many ports are in tenant,
> 2. Set quota to actual number of ports + 1 instead of hardcoded 1 as it is 
> now,
> 3. Try to add 2 ports - exactly as it is now,
>
> I think that this should be still backend agnostic and should fix this 
> problem.
>

Great idea! I've given it a go and proposed it at
https://review.openstack.org/559758
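A rough sketch of that test flow (names here are hypothetical; the real test goes through the Tempest/Neutron clients):

```python
class OverQuota(Exception):
    """Stand-in for the over-quota error the API would return."""

def exceed_port_quota(client, tenant_id):
    """Backend-agnostic version of the quota test described above.

    Instead of assuming the tenant starts with zero ports (which breaks
    when a backend such as OVN creates metadata ports), count what is
    already there and set the quota to current + 1, so creating two
    more ports must fail regardless of backend.
    """
    current = len(client.list_ports(tenant_id))
    client.set_port_quota(tenant_id, current + 1)
    client.create_port(tenant_id)          # fits within the quota
    try:
        client.create_port(tenant_id)      # must exceed the quota
    except OverQuota:
        return True
    return False
```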

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?

2018-04-09 Thread Michael Bayer
On Mon, Apr 9, 2018 at 5:53 AM, Gorka Eguileor  wrote:
> On 06/04, Michael Bayer wrote:
>> On Wed, Apr 4, 2018 at 5:00 AM, Gorka Eguileor  wrote:
>> > On 03/04, Jay Pipes wrote:
>> >> On 04/03/2018 11:07 AM, Michael Bayer wrote:
>> >> > The MySQL / MariaDB variants we use nowadays default to
>> >> > innodb_file_per_table=ON and we also set this flag to ON in installer
>> >> > tools like TripleO. The reason we like file per table is so that
>> >> > we don't grow an enormous ibdata file that can't be shrunk without
>> >> > rebuilding the database.  Instead, we have lots of little .ibd
>> >> > datafiles for each table throughout each openstack database.
>> >> >
>> >> > But now we have the issue that these files also can benefit from
>> >> > periodic optimization which can shrink them and also have a beneficial
>> >> > effect on performance.   The OPTIMIZE TABLE statement achieves this,
>> >> > but as would be expected it itself can lock tables for potentially a
>> >> > long time.   Googling around reveals a lot of controversy, as various
>> >> > users and publications suggest that OPTIMIZE is never needed and would
>> >> > have only a negligible effect on performance.   However here we seek
>> >> > to use OPTIMIZE so that we can reclaim disk space on tables that have
>> >> > lots of DELETE activity, such as keystone "token" and ceilometer
>> >> > "sample".
>> >> >
>> >> > Questions for the group:
>> >> >
>> >> > 1. is OPTIMIZE table worthwhile to be run for tables where the
>> >> > datafile has grown much larger than the number of rows we have in the
>> >> > table?
>> >>
>> >> Possibly, though it's questionable to use MySQL/InnoDB for storing 
>> >> transient
>> >> data that is deleted often like ceilometer samples and keystone tokens. A
>> >> much better solution is to use RDBMS partitioning so you can simply ALTER
>> >> TABLE .. DROP PARTITION those partitions that are no longer relevant (and
>> >> don't even bother DELETEing individual rows) or, in the case of Ceilometer
>> >> samples, don't use a traditional RDBMS for timeseries data at all...
>> >>
>> >> But since that is unfortunately already the case, yes it is probably a 
>> >> good
>> >> idea to OPTIMIZE TABLE on those tables.
>> >>
>> >> > 2. from people's production experience how safe is it to run OPTIMIZE,
>> >> > e.g. how long is it locking tables, etc.
>> >>
>> >> Is it safe? Yes.
>> >>
>> >> Does it lock the entire table for the duration of the operation? No. It 
>> >> uses
>> >> online DDL operations:
>> >>
>> >> https://dev.mysql.com/doc/refman/5.7/en/innodb-file-defragmenting.html
>> >>
>> >> Note that OPTIMIZE TABLE is mapped to ALTER TABLE tbl_name FORCE for 
>> >> InnoDB
>> >> tables.
>> >>
>> >> > 3. is there a heuristic we can use to measure when we might run this
>> >> > -.e.g my plan is we measure the size in bytes of each row in a table
>> >> > and then compare that in some ratio to the size of the corresponding
>> >> > .ibd file, if the .ibd file is N times larger than the logical data
>> >> > size we run OPTIMIZE ?
>> >>
>> >> I don't believe so, no. Most things I see recommended is to simply run
>> >> OPTIMIZE TABLE in a cron job on each table periodically.
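The ratio heuristic from question 3 might be sketched like this (a purely hypothetical helper; real inputs would come from information_schema.TABLES and the .ibd file size on disk):

```python
def should_optimize(ibd_file_bytes, row_count, avg_row_bytes,
                    ratio_threshold=2.0):
    """Decide whether a table's datafile is bloated enough to OPTIMIZE.

    Compares the on-disk .ibd size against the logical data size
    (rows * average row length) and flags the table when the physical
    file is more than ratio_threshold times larger.
    """
    logical_bytes = row_count * avg_row_bytes
    if logical_bytes == 0:
        # An empty table with a non-empty datafile is the extreme case.
        return ibd_file_bytes > 0
    return ibd_file_bytes / logical_bytes > ratio_threshold
```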
>> >>
>> >> > 4. I'd like to propose this job of scanning table datafile sizes in
>> >> > ratio to logical data sizes, then running OPTIMIZE, be a utility
>> >> > script that is delivered via oslo.db, and would run for all innodb
>> >> > tables within a target MySQL/ MariaDB server generically.  That is, I
>> >> > really *dont* want this to be a script that Keystone, Nova, Ceilometer
>> >> > etc. are all maintaining delivering themselves.   this should be done
>> >> > as a generic pass on a whole database (noting, again, we are only
>> >> > running it for very specific InnoDB tables that we observe have a poor
>> >> > logical/physical size ratio).
>> >>
>> >> I don't believe this should be in oslo.db. This is strictly the purview of
>> >> deployment tools and should stay there, IMHO.
>> >>
>> >
>> > Hi,
>> >
>> > As far as I know most projects do "soft deletes" where we just flag the
>> > rows as deleted and don't remove them from the DB, so it's only when we
>> > use a management tool and run the "purge" command that we actually
>> > remove these rows.
>> >
>> > Since running the optimize without purging would be meaningless, I'm
>> > wondering if we should trigger the OPTIMIZE also within the purging
>> > code.  This way we could avoid ineffective runs of the optimize command
>> > when no purge has happened and even when we do the optimization we could
>> > skip the ratio calculation altogether for tables where no rows have been
>> > deleted (the ratio hasn't changed).
>> >
>>
>> the issue is that this OPTIMIZE will block on Galera unless it is run
>> on a per-individual node basis along with the changing of the
>> wsrep_OSU_method parameter, this is way out of scope both to be
>> redundantly hardcoded in multiple openstack 

Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-09 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2018-04-09 13:39:30 +1000:
> On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote:
> 
> > My understanding of our use of upper constraints was that this should
> > (almost) always be the case for (almost) all dependencies.  We should
> > rely on constraints instead of requirements caps. Capping libs like
> > pbr or eventlet and any other that is in use globally is incredibly
> > difficult to work with when you want to uncap it because you have to
> > coordinate globally. Instead if using constraints you just bump the
> > constraint and are done.
> 
> Part of the reason that we have the caps is to prevent the tools that
> auto-generate the constraints syncs from considering these versions and
> then depending on the requirements team to strip that from the bot
> change before committing (assuming it passes CI).
> 
> Once the work Doug's doing is complete we could consider tweaking the
> tools to use a different mechanism, but that's only part of the reason
> for the caps in g-r.
> 
> Yours Tony.

Now that projects don't have to match the global requirements list
entries exactly we should be able to remove caps from within the
projects and keep caps in the global list for cases like this where we
know we frequently encounter breaking changes in new releases. The
changes to support that were part of
https://review.openstack.org/#/c/555402/
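Illustratively (the version numbers below are made up), the resulting split might look like:

```text
# global-requirements.txt -- the global list keeps the cap:
eventlet!=0.18.3,<0.21.0

# a project's requirements.txt -- cap removed, lower bound only:
eventlet!=0.18.3,>=0.18.2

# upper-constraints.txt -- the exact version actually installed in CI:
eventlet===0.20.0
```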

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] placement update 18-14

2018-04-09 Thread Sylvain Bauza
On Fri, Apr 6, 2018 at 2:54 PM, Chris Dent  wrote:

>
> This is "contract" style update. New stuff will not be added to the
> lists.
>
> # Most Important
>
> There doesn't appear to be anything new with regard to most
> important. That which was important remains important. At the
> scheduler team meeting at the start of the week there was talk of
> working out ways to trim the amount of work in progress by using the
> nova priorities tracking etherpad to help sort things out:
>
> https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
>
> Update provider tree and nested allocation candidates remain
> critical basic functionality on which much else is based. With most
> of provider tree done, it's really on nested allocation candidates.
>
> # What's Changed
>
> Quite a bit of provider tree related code has merged.
>
> Some negotiation happened with regard to when/if the fixes for
> shared providers is going to happen. I'm not sure how that resolved,
> if someone can follow up with that, that would be most excellent.
>
> Most of the placement-req-filter series merged.
>
> The spec for error codes in the placement API merged (code is in
> progress and ready for review, see below).
>
> # Questions
>
> * Eric and I discussed earlier in the week that it might be a good
>   time to start an #openstack-placement IRC channel, for two main
>   reasons: break things up so as to limit the crosstalk in the often
>   very busy #openstack-nova channel and to lend a bit of momentum
>   for going in that direction. Is this okay with everyone? If not,
>   please say so, otherwise I'll make it happen soon.
>
>
Fine by me. It's sometimes difficult to follow all the conversations, so
having a separate channel looks good to me, at least for discussing
specific Placement questions.
For Nova-related points (like how to use nested RPs with NUMA, for
example), maybe #openstack-nova is still the main IRC channel for that.


> * Shared providers status?
>   (I really think we need to make this go. It was one of the
>   original value propositions of placement: being able to accurate
>   manage shared disk.)
>
> # Bugs
>
> * Placement related bugs not yet in progress:  https://goo.gl/TgiPXb
>15, -1 on last week
> * In progress placement bugs: https://goo.gl/vzGGDQ
>13, +1 on last week
>
> # Specs
>
> These seem to be divided into three classes:
>
> * Normal stuff
> * Old stuff not getting attention or newer stuff that ought to be
>   abandoned because of lack of support
> * Anything related to the client side of using nested providers
>   effectively. This apparently needs a lot of thinking. If there are
>   some general sticking points we can extract and resolve, that
>   might help move the whole thing forward?
>
> * https://review.openstack.org/#/c/549067/
>   VMware: place instances on resource pool
>   (using update_provider_tree)
>
> * https://review.openstack.org/#/c/545057/
>   mirror nova host aggregates to placement API
>
> * https://review.openstack.org/#/c/552924/
>  Proposes NUMA topology with RPs
>
> * https://review.openstack.org/#/c/544683/
>  Account for host agg allocation ratio in placement
>
> * https://review.openstack.org/#/c/552927/
>  Spec for isolating configuration of placement database
>  (This has a strong +2 on it but needs one more.)
>
> * https://review.openstack.org/#/c/552105/
>  Support default allocation ratios
>
> * https://review.openstack.org/#/c/438640/
>  Spec on preemptible servers
>
> * https://review.openstack.org/#/c/556873/
>Handle nested providers for allocation candidates
>
> * https://review.openstack.org/#/c/556971/
>Add Generation to Consumers
>
> * https://review.openstack.org/#/c/557065/
>Proposes Multiple GPU types
>
> * https://review.openstack.org/#/c/555081/
>Standardize CPU resource tracking
>
> * https://review.openstack.org/#/c/502306/
>Network bandwidth resource provider
>
> * https://review.openstack.org/#/c/509042/
>Propose counting quota usage from placement
>
> # Main Themes
>
> ## Update Provider Tree
>
> Most of the main guts of this have merged (huzzah!). What's left are
> some loose end details, and clean handling of aggregates:
>
> https://review.openstack.org/#/q/topic:bp/update-provider-tree
>
> ## Nested providers in allocation candidates
>
> Representing nested provides in the response to GET
> /allocation_candidates is required to actually make use of all the
> topology that update provider tree will report. That work is in
> progress at:
>
> https://review.openstack.org/#/q/topic:bp/nested-resource-providers
> https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates
>
> Note that some of this includes the up-for-debate shared handling.
>
> ## Request Filters
>
> As far as I can tell this is mostly done (yay!) but there is a loose
> end: We merged an updated spec to support multiple member_of
> 

[openstack-dev] [keystone] Rocky forum topics

2018-04-09 Thread Lance Bragstad
Hey all,

I've created an etherpad [0] to collect ideas/proposals for forum
sessions in Vancouver. Please take a look and add anything that you
think we should propose as a forum session. The deadline for submissions
is this Sunday.

Thanks,

Lance

[0] https://etherpad.openstack.org/p/YVR-keystone-forum-sessions




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?

2018-04-09 Thread Julien Danjou
On Tue, Apr 03 2018, Jay Pipes wrote:

> Possibly, though it's questionable to use MySQL/InnoDB for storing transient
> data that is deleted often like ceilometer samples and keystone tokens. A much
> better solution is to use RDBMS partitioning so you can simply ALTER TABLE ..
> DROP PARTITION those partitions that are no longer relevant (and don't even
> bother DELETEing individual rows) or, in the case of Ceilometer samples, don't
> use a traditional RDBMS for timeseries data at all...

For the record, and because I imagine not everyone follows Ceilometer,
this code no longer exists in Queens. Ceilometer storage (and
API) has been deprecated for 2 cycles already and removed last release.

Feel free to continue discussing the problem, but you can ignore
Ceilometer. :)

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][horizon][l2gw] Unable to create a floating IP

2018-04-09 Thread Gary Kotton
Hi,
From Queens onwards we have an issue with Horizon and L2GW. We are unable to 
create a floating IP. This does not occur when using the CLI, only via Horizon. 
The error received is
‘Error: User does not have admin privileges: Cannot GET resource for non admin 
tenant. Neutron server returns request_ids: 
['req-f07a3aac-0994-4d3a-8409-1e55b374af9d']’
This is due to: 
https://github.com/openstack/networking-l2gw/blob/master/networking_l2gw/db/l2gateway/l2gateway_db.py#L316
This worked in Ocata and I'm not sure what has changed since then ☹. Maybe in 
the past the Ocata quotas were not checking L2GW.
Any ideas?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Forum - Post your selected topics now

2018-04-09 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi, 

There are two lists of etherpads for forum brainstorming in 
https://wiki.openstack.org/wiki/Forum/Vancouver2018 and there is 
http://forumtopics.openstack.org/ .

Is my understanding correct, that ultimately all ideas should go to 
http://forumtopics.openstack.org/ ?

Thanks, 
Gerg0

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Monday, April 9, 2018 2:02 PM
To: OpenStack Development Mailing List ; 
openstack-operat...@lists.openstack.org
Subject: [openstack-dev] Vancouver Forum - Post your selected topics now

Hi everyone,

You've been actively brainstorming ideas of topics for discussion at the 
"Forum" at the Vancouver OpenStack Summit. Now it's time to select which ones 
you want to propose, and file them at forumtopics.openstack.org !

The topic submission website will be open until EOD on Sunday, April 15, at 
which point the Forum selection committee will take the entries and make the 
final selection. So you have the whole week to enter your selection of ideas on 
the website.

Thanks !

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [api] Re-Reminder on the state of WSME

2018-04-09 Thread Chris Dent


A little over two years ago I sent a reminder that WSME is not being
actively maintained:

http://lists.openstack.org/pipermail/openstack-dev/2016-March/088658.html

Today I was reminded of this because a random (typo-related)
patchset demonstrated that the tests were no longer passing and
fixing them is enough of a chore that I (at least temporarily)
marked one test as an expected failure.

https://review.openstack.org/#/c/559717/

The following projects appear to still use WSME:

aodh
blazar
cloudkitty
cloudpulse
cyborg
glance
gluon
iotronic
ironic
magnum
mistral
mogan
octavia
panko
qinling
radar
ranger
searchlight
solum
storyboard
surveil
terracotta
watcher

Most of these are using the 'types' handling in WSME and sometimes
the pecan extension, and not the (potentially broken) Flask
extension, so things should be stable.

However: nobody is working on keeping WSME up to date. It is not a
good long term investment.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent                 tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Vancouver Forum - Post your selected topics now

2018-04-09 Thread Thierry Carrez
Hi everyone,

You've been actively brainstorming ideas of topics for discussion at the
"Forum" at the Vancouver OpenStack Summit. Now it's time to select which
ones you want to propose, and file them at forumtopics.openstack.org !

The topic submission website will be open until EOD on Sunday, April 15,
at which point the Forum selection committee will take the entries and
make the final selection. So you have the whole week to enter your
selection of ideas on the website.

Thanks !

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Todays Office Hour Time Change

2018-04-09 Thread Dougal Matthews
Hey all,

I have moved the office hour today from 16:00 UTC to 15:00 UTC. If there is
demand we could make it a 2 hour slot or move it back. I wasn't able to
make it for 16:00 UTC today and it will often be tricky for me.

I think one of the biggest advantages to doing office hours is that we can
be more flexible. So if there isn't a slot that suits you, please propose
one!

On Friday we had a good triage session, reducing the untriaged bugs by
about 25. I hope to do something similar today unless somebody comes along
with specific topics they want to discuss.

The hours now are:

   - Mon 15.00 UTC


   - Wed 3.00 UTC


   - Fri 8.00 UTC

The Office hour etherpad is:
https://etherpad.openstack.org/p/mistral-office-hours

(Side note: As far as I know there hasn't been any activity on the
Wednesday slot, so we may want to move that. It is at 2am for me, so I won't
ever make it personally.)

Cheers,

Dougal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-09 Thread Kashyap Chamarthy
On Fri, Apr 06, 2018 at 12:12:31PM -0500, Matt Riedemann wrote:
> On 4/6/2018 12:07 PM, Kashyap Chamarthy wrote:
> > FWIW, I'd suggest so, if it's not too much maintenance.  It'll just
> > spare you additional bug reports in that area, and the overall default
> > experience when dealing with CPU models would be relatively much better.
> > (Another way to look at it is, multiple other "conservative" long-term
> > stable distributions also provide libvirt 3.2.0 and QEMU 2.9.0, so that
> > should give you confidence.)
> > 
> > Again, I don't want to push too hard on this.  If that'll be messy from
> a package maintenance POV for you / Debian maintainers, then we could
> > settle with whatever is in 'Stretch'.
> 
> Keep in mind that Kashyap has a tendency to want the latest and greatest of
> libvirt and qemu at all times for all of those delicious bug fixes. 

Keep in mind that Matt has a tendency to sometimes unfairly
over-simplify others' views ;-).  More seriously, c'mon Matt; I went out
of my way to spend time learning about Debian's packaging structure and
trying to get the details right by talking to folks on
#debian-backports.  And as you may have seen, I marked the patch[*] as
"RFC", and repeatedly said that I'm working on an agreeable lowest
common denominator.

> But we also know that new code also brings new not-yet-fixed bugs.

Yep, of course.

> Keep in mind the big picture here, we're talking about bumping from
> minimum required (in Rocky) libvirt 1.3.1 to at least 3.0.0 (in Stein)
> and qemu 2.5.0 to at least 2.8.0, so I think that's already covering
> some good ground. Let's not get greedy. :)

Sure :-) Also if there's a way we can avoid bugs in the default
experience with minimal effort, we should.

Anyway, there we go: changed the patch[*] to what's in Stretch.

[*] https://review.openstack.org/#/c/558171/

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?

2018-04-09 Thread Gorka Eguileor
On 06/04, Michael Bayer wrote:
> On Wed, Apr 4, 2018 at 5:00 AM, Gorka Eguileor  wrote:
> > On 03/04, Jay Pipes wrote:
> >> On 04/03/2018 11:07 AM, Michael Bayer wrote:
> >> > The MySQL / MariaDB variants we use nowadays default to
> >> > innodb_file_per_table=ON and we also set this flag to ON in installer
> >> > tools like TripleO. The reason we like file per table is so that
> >> > we don't grow an enormous ibdata file that can't be shrunk without
> >> > rebuilding the database.  Instead, we have lots of little .ibd
> >> > datafiles for each table throughout each openstack database.
> >> >
> >> > But now we have the issue that these files also can benefit from
> >> > periodic optimization which can shrink them and also have a beneficial
> >> > effect on performance.   The OPTIMIZE TABLE statement achieves this,
> >> > but as would be expected it itself can lock tables for potentially a
> >> > long time.   Googling around reveals a lot of controversy, as various
> >> > users and publications suggest that OPTIMIZE is never needed and would
> >> > have only a negligible effect on performance.   However here we seek
> >> > to use OPTIMIZE so that we can reclaim disk space on tables that have
> >> > lots of DELETE activity, such as keystone "token" and ceilometer
> >> > "sample".
> >> >
> >> > Questions for the group:
> >> >
> >> > 1. is OPTIMIZE table worthwhile to be run for tables where the
> >> > datafile has grown much larger than the number of rows we have in the
> >> > table?
> >>
> >> Possibly, though it's questionable to use MySQL/InnoDB for storing 
> >> transient
> >> data that is deleted often like ceilometer samples and keystone tokens. A
> >> much better solution is to use RDBMS partitioning so you can simply ALTER
> >> TABLE .. DROP PARTITION those partitions that are no longer relevant (and
> >> don't even bother DELETEing individual rows) or, in the case of Ceilometer
> >> samples, don't use a traditional RDBMS for timeseries data at all...
> >>
> >> But since that is unfortunately already the case, yes it is probably a good
> >> idea to OPTIMIZE TABLE on those tables.
> >>
> >> > 2. from people's production experience how safe is it to run OPTIMIZE,
> >> > e.g. how long is it locking tables, etc.
> >>
> >> Is it safe? Yes.
> >>
> >> Does it lock the entire table for the duration of the operation? No. It 
> >> uses
> >> online DDL operations:
> >>
> >> https://dev.mysql.com/doc/refman/5.7/en/innodb-file-defragmenting.html
> >>
> >> Note that OPTIMIZE TABLE is mapped to ALTER TABLE tbl_name FORCE for InnoDB
> >> tables.
> >>
> >> > 3. is there a heuristic we can use to measure when we might run this
> >> > -.e.g my plan is we measure the size in bytes of each row in a table
> >> > and then compare that in some ratio to the size of the corresponding
> >> > .ibd file, if the .ibd file is N times larger than the logical data
> >> > size we run OPTIMIZE ?
> >>
> >> I don't believe so, no. Most things I see recommended is to simply run
> >> OPTIMIZE TABLE in a cron job on each table periodically.
> >>
> >> > 4. I'd like to propose this job of scanning table datafile sizes in
> >> > ratio to logical data sizes, then running OPTIMIZE, be a utility
> >> > script that is delivered via oslo.db, and would run for all innodb
> >> > tables within a target MySQL/ MariaDB server generically.  That is, I
> >> > really *dont* want this to be a script that Keystone, Nova, Ceilometer
> >> > etc. are all maintaining delivering themselves.   this should be done
> >> > as a generic pass on a whole database (noting, again, we are only
> >> > running it for very specific InnoDB tables that we observe have a poor
> >> > logical/physical size ratio).
> >>
> >> I don't believe this should be in oslo.db. This is strictly the purview of
> >> deployment tools and should stay there, IMHO.
> >>
> >
> > Hi,
> >
> > As far as I know most projects do "soft deletes" where we just flag the
> > rows as deleted and don't remove them from the DB, so it's only when we
> > use a management tool and run the "purge" command that we actually
> > remove these rows.
> >
> > Since running the optimize without purging would be meaningless, I'm
> > wondering if we should trigger the OPTIMIZE also within the purging
> code.  This way we could avoid ineffective runs of the optimize command
> > when no purge has happened and even when we do the optimization we could
> > skip the ratio calculation altogether for tables where no rows have been
> > deleted (the ratio hasn't changed).
> >
>
> the issue is that this OPTIMIZE will block on Galera unless it is run
> on a per-individual node basis along with the changing of the
> wsrep_OSU_method parameter, this is way out of scope both to be
> redundantly hardcoded in multiple openstack projects, as well as
> there's no portable way for Keystone and others to get at the
> individual Galera node addresses.Putting it in oslo.db would at
> least be a place that most of 
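The size-ratio heuristic from point 3 of this thread could be sketched roughly as follows. This is an illustration only, not an actual oslo.db API: the function name and the 2.0 threshold are assumptions, and the inputs would come from `information_schema.tables` (`data_length`, `index_length`) plus the size of the table's `.ibd` file on disk.

```python
# Sketch of the datafile-vs-logical-size heuristic for deciding when to
# run OPTIMIZE TABLE. Inputs are in bytes; the threshold is an example.

def should_optimize(data_length, index_length, ibd_file_size,
                    ratio_threshold=2.0):
    """Return True when the on-disk .ibd file is more than
    ratio_threshold times larger than the logical data size."""
    logical_size = data_length + index_length
    if logical_size == 0:
        # Empty table with a non-empty datafile: space can be reclaimed.
        return ibd_file_size > 0
    return ibd_file_size / logical_size > ratio_threshold

# A table whose datafile is ~4x its logical size is a candidate,
# while one at ~1.25x is left alone.
print(should_optimize(10_000_000, 2_000_000, 50_000_000))  # True
print(should_optimize(10_000_000, 2_000_000, 15_000_000))  # False
```

OPTIMIZE TABLE (mapped to ALTER TABLE ... FORCE on InnoDB) would then only be issued for tables where this returns True, with the Galera wsrep_OSU_method caveat mentioned above applying per individual node.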

Re: [openstack-dev] [horizon][xstatic]How to handle xstatic if upstream files are modified

2018-04-09 Thread Radomir Dopieralski
The whole idea about xstatic files is that they are generic, not specific
to Horizon or OpenStack, usable by other projects that need those static
files. In fact, at the time we started using xstatic, it was being used by
the MoinMoin wiki project (which is now dead, sadly). The modifications you
made are very specific to your usecase and would make it impossible to
reuse the packages by other applications (or even by other Horizon
plugins). The whole idea of a library is that you are using it as it is
provided, and not modifying it.

We generally try to use all the libraries as they are, and if there are any
modifications necessary, we push them upstream, to the original library.
Otherwise there would be quite a bit of maintenance overhead necessary to
keep all our downstream patches. When considerable modification is
necessary that can't be pushed upstream, we fork the library either into
its own repository, or include it in the repository of the application that
is using it.

On Mon, Apr 9, 2018 at 2:54 AM, Xinni Ge  wrote:

> Hello, team.
>
> Sorry for talking about xstatic repo for so many times.
>
> I didn't realize xstatic repositories should be provided with exactly the
> same files as upstream, and I should have talked about it at the very beginning.
>
> I modified several upstream files because some of the files couldn't be
> used directly as I expected.
>
> For example,  {{ }} are used in some original files as template tags, but
> Horizon adopts {$ $} in angular module, so I modified them to be recognized
> properly.
>
> Another major modification is that css files are converted into scss files
> to solve some css import issue previously.
> Besides, after collecting statics, some PNG file paths in CSS could not be
> referenced properly and showed as 404 errors, so I also modified the CSS
> itself to handle these issues.
>
> I will recheck all the un-matched xstatic repositories and try to replace
> them with upstream files as much as I can.
> But if I really have to modify some original files, is it acceptable to
> still use them as embedded files with the license info at the top?
>
>
> Best Regards,
> Xinni Ge
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-09 Thread Gorka Eguileor
On 06/04, Matt Riedemann wrote:
> On 4/6/2018 5:09 AM, Matthew Booth wrote:
> > I think you're talking at cross purposes here: this won't require a
> > swap volume. Apart from anything else, swap volume only works on an
> > attached volume, and as previously discussed Nova will detach and
> > re-attach.
> >
> > Gorka, the Nova api Matt is referring to is called volume update
> > externally. It's the operation required for live migrating an attached
> > volume between backends. It's called swap volume internally in Nova.
>
> Yeah I was hoping we were just having a misunderstanding of what 'swap
> volume' in nova is, which is the blockRebase for an already attached volume
> to the guest, called from cinder during a volume retype or migration.
>
> As for the re-image thing, nova would be detaching the volume from the guest
> prior to calling the new cinder re-image API, and then re-attach to the
> guest afterward - similar to how shelve and unshelve work, and for that
> matter how rebuild works today with non-root volumes.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi,

Thanks for the clarification.  When I was talking about "swapping" I was
referring to the fact that Nova will have to not only detach the volume
locally using OS-Brick, but it will also need to use new connection
information to do the attach after the volume has been re-imaged.

As I see it, the process would look something like this:

- Nova detaches volume using OS-Brick
- Nova calls Cinder re-image passing the node's info (like we do when
  attaching a new volume)
- Cinder would:
  - Ensure only that node is connected to the volume
  - Terminate connection to the original volume
  - If we can do optimized volume creation:
- If encrypted volume we create a copy of the encryption key on
  Barbican or copy the ID field from the DB and ensure we don't
  delete the Barbican key on the delete.
- Create new volume from image
- Swap DB fields to preserve the UUID
- Delete original volume
  - If it cannot do optimized volume creation:
- Initialize+Attach volume to Cinder node
- DD the new image into the volume
- Detach+Terminate volume
  - Initialize connection for the new volume to the Nova node
  - Return connection information to the volume
- Nova attaches volume with OS-Brick using returned connection
  information.
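As a hedged sketch only, the sequence above could be expressed as an ordered list of steps. The function and the step strings here are hypothetical stand-ins for the real Nova/Cinder/os-brick calls, and Barbican key handling for encrypted volumes is omitted:

```python
# Illustrative ordering of the proposed re-image flow. The optimized path
# preserves the volume UUID by swapping DB fields; the fallback path dd's
# the image into the volume through the Cinder node.

def reimage_boot_volume(optimized=True):
    steps = []
    steps.append("nova: detach volume via os-brick")
    steps.append("cinder: ensure only this node is connected")
    steps.append("cinder: terminate connection to original volume")
    if optimized:
        steps.append("cinder: create new volume from image")
        steps.append("cinder: swap DB fields to preserve UUID")
        steps.append("cinder: delete original volume")
    else:
        steps.append("cinder: initialize+attach volume to cinder node")
        steps.append("cinder: dd image into volume")
        steps.append("cinder: detach+terminate volume")
    steps.append("cinder: initialize connection for nova node")
    steps.append("nova: attach volume via os-brick with new connection info")
    return steps

for step in reimage_boot_volume(optimized=False):
    print(step)
```

Either way, the detach at the start and the attach with *new* connection information at the end are what distinguish this from a blockRebase-style swap.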

So I agree, it's not a blockRebase operation, just a change in the
volume that is used.

Regards,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images

2018-04-09 Thread Surya Singh
On Sat, Apr 7, 2018 at 11:11 AM, Jeffrey Zhang  wrote:
> +1 for kolla-api
>
> Migrate all scripts from kolla(image) to kolla-ansible, will make image hard
> to use by
> downstream. Martin explain this clearly. we need some API to make images
> more easy to use.
> For the operator, I don't think they need to read the whole set_config.py file.
> Just knowing
> what the config.json file looks like and what its effects are is enough. So a
> doc is enough.
>

Yes, agreed: moving the scripts out of kolla will make the images harder to
use downstream.
And it seems very reasonable to me that a kolla API could be a good way
to make the images easier to use.

>
> For images, we need to add some common functions before using them. Instead
> of
> using the upstream image directly. For example, if we support loci, mostly
> we
> will use upstream infra images, like mariadb, redis etc. But are they really
> enough
> for production use directly? There are some concerns here:
>
> - drop root. does it work when it runs without root?
> - init process. Does it contain a init process binary?
> - configuration. The different image may use different configuration method.
> Should we need
>   unify them?
> - lack of packages. what the image lack some packages we needed?
>
>
> One of a possible solution for this, I think, is use upstream image +
> kolla-api to generate a
> image with the features.
>
> On Sat, Apr 7, 2018 at 6:47 AM, Steven Dake (stdake) 
> wrote:
>>
>> Mark,
>>
>>
>>
>> TLDR good proposal
>>
>>
>>
>> I don’t think Paul was proposing what you proposed.  However:
>>
>>
>>
> >> You make a strong case for separately packaging the API (mostly which is
> >> setcfg.py and the JSON API + docs + samples).  I am super surprised nobody
> >> has ever proposed this in the past, but now is as good of a time as any to
> >> propose a good model for managing the JSON->setcfg.py API.  We could unit
>> test this with extreme clarity, document with extreme clarity, and provide
>> an easier path for people to submit changes to the API that they require to
>> run the OpenStack containers.  Finally, it would provide complete semver
>> semantics for managing change and provide perfect backwards compatibility.
>>
>>
>>
>> A separate repo for this proposed api split makes sense to me.  I think
>> initially we would want to seed with the kolla core team but be open to
>> anyone that reviews + contributes to join the kolla-api core team (just as
>> happens with other kolla deliverables).
>>
>>
>>
>> This should reduce cross-project developer friction which was an implied
>> but unstated problem in the various threads over the last week and produce
>> the many other beneficial effects APIs produce along with the benefits you
>> stated above.
>>
>>
>>
>> I’m not sure if this approach is technically sound –but I’d be in favor of
>> this approach if it were not too disruptive, provided full backwards
>> compatibility and was felt to be an improvement by the consumers of kolla
>> images.  I don’t think deprecation is something that is all that viable with
>> an API model like the one we have nor this new repo and think we need to set
>> clear boundaries around what would/would not be done.
>>
>>
>>
>> I do know that a change of this magnitude is a lot of work for the
>> community to take on – and just like adding or removing any deliverable in
>> kolla, would require a majority vote from the CR team.
>>
>>
>>
> >> Also, repeating myself, I don't think the current API is good or perfect;
> >> I don't think perfection is necessarily possible, but this may help drive
> >> towards that mythical perfection that interested parties seek to achieve.
>>
>>
>> Cheers
>>
>> -steve
>>
>>
>>
>> From: Mark Goddard 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Friday, April 6, 2018 at 12:30 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out
>> of Kolla images
>>
>>
>>
>>
>>
>> On Thu, 5 Apr 2018, 20:28 Martin André,  wrote:
>>
>> On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke 
>> wrote:
>> > Hi all,
>> >
>> > This mail is to serve as a follow on to the discussion during
>> > yesterday's
>> > team meeting[4], which was regarding the desire to move start scripts
>> > out of
>> > the kolla images [0]. There's a few factors at play, and it may well be
>> > best
>> > left to discuss in person at the summit in May, but hopefully we can get
>> > at
>> > least some of this hashed out before then.
>> >
>> > I'll start by summarising why I think this is a good idea, and then
>> > attempt
>> > to address some of the concerns that have come up since.
>> >
>> > First off, to be frank, this is effort is driven by wanting to add
>> > support
>> > for loci images[1] in kolla-ansible. I think it 

[openstack-dev] [neutron] Bug deputy report

2018-04-09 Thread Luo, Lujin
Hello everyone,

I was on bug deputy between 2018/04/02 and 2018/04/09. I am sending a short 
summary of the bugs reported during this period. We do not have many bugs 
reported this week. 

https://bugs.launchpad.net/neutron/+bug/1760047 - Confirmed but the importance 
is not yet decided. It is about some ports not becoming ACTIVE when spawning a 
large number of VMs at the same time. It seems we need more details from the 
bug reporter or we need to figure out a way to reproduce it at small scale. I 
will bring this to Miguel too.

https://bugs.launchpad.net/neutron/+bug/1760584 - Medium. This is about 
tempest tests warning about subnet CIDR. A possible fix is proposed by 
haleyb but no one is assigned to this bug yet. If anyone is interested, please 
take it over. 

https://bugs.launchpad.net/neutron/+bug/1760902 - Low. Hongbin proposes we 
align segment resource to contain standard attributes.

https://bugs.launchpad.net/neutron/+bug/1761070 - Medium. It is about bridge 
mappings, where neutron/agent/linux/iptables_firewall.py doesn't take into 
account mappings and just uses the default bridge name which is derived from 
the network ID. It is not assigned yet. Anyone interested, please take it over.

https://bugs.launchpad.net/neutron/+bug/1761555 and 
https://bugs.launchpad.net/neutron/+bug/1761591 - Triaging. Swami has been 
following up with the bug reporter to find out what the problems are. 

https://bugs.launchpad.net/neutron/+bug/1761748 - Medium. CI failures in 
networking-hyperv about not able to get port details for devices. It is not 
assigned yet. Anyone interested, please take it over. 

https://bugs.launchpad.net/neutron/+bug/1761823 - RFE. This derives from 
another RFE that we should add an /ip-address resource to the API. It needs 
discussion at the drivers meeting.  

Best regards,
Lujin

---------------------------------
Lujin Luo
Email: luo.lu...@jp.fujitsu.com
Tel: (81) 044-754-2027
Linux Development Division
Platform Software Business Unit
Fujitsu Ltd.
---------------------------------



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-09 Thread Miguel Angel Ajo Pelayo
I don't necessarily agree that rewriting tests is the solution here.

Maybe for some extreme cases that could be fine, but from the maintenance
point of view it doesn't sound very practical IMHO.

In some cases it can be just a parametrization of tests as they are, or
simply accounting for
a bit of extra headroom in quotas (when of course the purpose of such
specific tests is not
to verify the quota behaviour, for example).



On Sun, Apr 8, 2018 at 3:52 PM Gary Kotton  wrote:

> Hi,
>
> There are some tempest tests that check realization of resources on the
> networking platform and connectivity. Here things are challenging as each
> networking platform may be more restrictive than the upstream ML2 plugin.
> My thinking here is that we should leverage the tempest plugins for each
> networking platform and they can overwrite the problematic tests and
> address them as suitable for the specific plugin.
>
> Thanks
>
> Gary
>
>
>
> *From: *Miguel Angel Ajo Pelayo 
> *Reply-To: *OpenStack List 
> *Date: *Saturday, April 7, 2018 at 8:56 AM
> *To: *OpenStack List 
> *Subject: *Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario
> tests and OVN metadata
>
>
>
this issue isn't only for networking-ovn; please note that it happens with
a few other vendor plugins (like NSX), at least this is something we have
found in downstream certifications.
>
>
>
> Cheers,
>
> On Sat, Apr 7, 2018, 12:36 AM Daniel Alvarez  wrote:
>
>
>
> > On 6 Apr 2018, at 19:04, Sławek Kapłoński  wrote:
> >
> > Hi,
> >
> > Another idea is to modify test that it will:
> > 1. Check how many ports are in tenant,
> > 2. Set quota to actual number of ports + 1 instead of hardcoded 1 as it
> is now,
> > 3. Try to add 2 ports - exactly as it is now,
> >
> Cool, I like this one :-)
> Good idea.
>
> > I think that this should be still backend agnostic and should fix this
> problem.
> >
> >> Wiadomość napisana przez Sławek Kapłoński  w dniu
> 06.04.2018, o godz. 17:08:
> >>
> >> Hi,
> >>
> >> I don’t know how networking-ovn is working but I have one question.
> >>
> >>
> >>> Wiadomość napisana przez Daniel Alvarez Sanchez 
> w dniu 06.04.2018, o godz. 15:30:
> >>>
> >>> Hi,
> >>>
> >>> Thanks Lucas for writing this down.
> >>>
> >>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes <
> lucasago...@gmail.com> wrote:
> >>> Hi,
> >>>
> >>> The tests below are failing in the tempest API / Scenario job that
> >>> runs in the networking-ovn gate (non-voting):
> >>>
> >>>
> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
> >>>
> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
> >>>
> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
> >>>
> >>> Digging a bit into it I noticed that with the exception of the two
> >>> "test_router_interface_status" (ipv6 and ipv4) all other tests are
> >>> failing because the way metadata works in networking-ovn.
> >>>
> >>> Taking the "test_create_port_when_quotas_is_full" as an example. The
> >>> reason why it fails is because when the OVN metadata is enabled,
> >>> networking-ovn will create a metadata port at the moment a network is created
> >>> [0] and that will already fulfill the quota limit set by that test
> >>> [1].
> >>>
> >>> That port will also allocate an IP from the subnet which will cause
> >>> the rest of the tests to fail with a "No more IP addresses available
> >>> on network ..." error.
> >>>
> >>> With ML2/OVS we would run into the same quota problem if DHCP were
> >>> enabled for the created subnets. This means that if we modify the current
> >>> tests to enable DHCP on them and account for this extra port, it would be
> >>> valid for networking-ovn as well. Does it sound good or do we still want
> >>> to isolate quotas?
> >>
> >> If DHCP is enabled for networking-ovn, will it use one more port as well
> >> or not? If so then you will still have the same problem: with DHCP, in
> >> ML2/OVS you will have one port created, and for networking-ovn it will be
> >> 2 ports.
> >> If it's not like that then I think that this solution, with some comment
> >> in the test code about why DHCP is enabled, should be good IMO.
> >>
> >>>
> >>> This is not very trivial to fix because:
> >>>
> >>> 1. Tempest should be backend agnostic. So, adding a conditional in the
> >>> tempest test to check whether OVN is being used or not doesn't sound
> >>> correct.
> >>>
> >>> 2. Creating a