Re: [openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/pike image build by devstack

2018-05-19 Thread Michael Johnson
Yes, this just started occurring with the Thursday/Friday updates to the
Ubuntu cloud image upstream of us.

I have posted a patch for Queens here: https://review.openstack.org/#/c/569531

We will be back porting that as soon as we can to the other stable
releases. Please review the backports as they come out to help the
team merge them as soon as possible.
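
If you need the fix locally before the backports land, you can pull the
review down into a devstack-style checkout with git-review (a sketch
assuming the usual /opt/stack layout and that the change is against the
octavia repo):

    cd /opt/stack/octavia
    git review -d 569531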

Michael (johnsom)

On Fri, May 18, 2018 at 10:16 PM, rezroo  wrote:
> Hi - let's try this again - this time with pike :-)
> Any suggestions on how to get the image builder to create a larger loop
> device? I think that's what the problem is.
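>
> For what it's worth, the only knob I've found so far is diskimage-builder's
> DIB_IMAGE_SIZE (the image size in GB for disk-image-create), though I'm not
> sure whether the octavia image build passes it through:
>
>     export DIB_IMAGE_SIZE=5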
> Thanks in advance.
>
> 2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
> diskimage_builder.block_device.level1.mbr [-] Write partition entry blockno
> [0] entry [0] start [2048] length [4190208]
> 2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo sync]
> 2018-05-19 05:03:04.538 | 2018-05-19 05:03:04.537 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo kpartx -avs
> /dev/loop3]
> 2018-05-19 05:03:04.642 | 2018-05-19 05:03:04.642 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo mkfs -t ext4 -i 4096
> -J size=64 -L cloudimg-rootfs -U 376d4b4d-2597-4838-963a-3d9c5fcb5d9c -q
> /dev/mapper/loop3p1]
> 2018-05-19 05:03:04.824 | 2018-05-19 05:03:04.823 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p
> /tmp/dib_build.zv2VZo3W/mnt/]
> 2018-05-19 05:03:04.833 | 2018-05-19 05:03:04.833 INFO
> diskimage_builder.block_device.level3.mount [-] Mounting [mount_mkfs_root]
> to [/tmp/dib_build.zv2VZo3W/mnt/]
> 2018-05-19 05:03:04.834 | 2018-05-19 05:03:04.833 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo mount
> /dev/mapper/loop3p1 /tmp/dib_build.zv2VZo3W/mnt/]
> 2018-05-19 05:03:04.850 | 2018-05-19 05:03:04.850 INFO
> diskimage_builder.block_device.blockdevice [-] create() finished
> 2018-05-19 05:03:05.527 | 2018-05-19 05:03:05.527 INFO
> diskimage_builder.block_device.blockdevice [-] Getting value for
> [image-block-device]
> 2018-05-19 05:03:06.168 | 2018-05-19 05:03:06.168 INFO
> diskimage_builder.block_device.blockdevice [-] Getting value for
> [image-block-devices]
> 2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO
> diskimage_builder.block_device.blockdevice [-] Creating fstab
> 2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p
> /tmp/dib_build.zv2VZo3W/built/etc]
> 2018-05-19 05:03:06.855 | 2018-05-19 05:03:06.855 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo cp
> /tmp/dib_build.zv2VZo3W/states/block-device/fstab
> /tmp/dib_build.zv2VZo3W/built/etc/fstab]
> 2018-05-19 05:03:12.946 | dib-run-parts Sat May 19 05:03:12 UTC 2018
> Sourcing environment file
> /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline
> 2018-05-19 05:03:12.947 | + source
> /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline
> 2018-05-19 05:03:12.947 | ++ export 'DIB_BOOTLOADER_DEFAULT_CMDLINE=nofb
> nomodeset vga=normal'
> 2018-05-19 05:03:12.947 | ++ DIB_BOOTLOADER_DEFAULT_CMDLINE='nofb nomodeset
> vga=normal'
> 2018-05-19 05:03:12.948 | dib-run-parts Sat May 19 05:03:12 UTC 2018
> Sourcing environment file
> /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
> 2018-05-19 05:03:12.950 | + source
> /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
> 2018-05-19 05:03:12.950 |  dirname
> /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
> 2018-05-19 05:03:12.951 | +++
> PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/finalise.d/../environment.d/..'
> 2018-05-19 05:03:12.951 | +++ dib-init-system
> 2018-05-19 05:03:12.953 | ++ DIB_INIT_SYSTEM=systemd
> 2018-05-19 05:03:12.953 | ++ export DIB_INIT_SYSTEM
> 2018-05-19 05:03:12.954 | dib-run-parts Sat May 19 05:03:12 UTC 2018
> Sourcing environment file
> /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache
> 2018-05-19 05:03:12.955 | + source
> /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache
> 2018-05-19 05:03:12.955 | ++ export PIP_DOWNLOAD_CACHE=/tmp/pip
> 2018-05-19 05:03:12.955 | ++ PIP_DOWNLOAD_CACHE=/tmp/pip
> 2018-05-19 05:03:12.956 | dib-run-parts Sat May 19 05:03:12 UTC 2018
> Sourcing environment file
> /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash
> 2018-05-19 05:03:12.958 | + source
> /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash
> 2018-05-19 05:03:12.958 | ++ export DISTRO_NAME=ubuntu
> 2018-05-19 05:03:12.958 | ++ DISTRO_NAME=ubuntu
> 2018-05-19 05:03:12.958 | ++ export DIB_RELEASE=xenial
> 2018-05-19 05:03:12.958 | ++ DIB_RELEASE=xenial
> 2018-05-19 05:03:12.959 | dib-run-parts Sat May 19 05:03:12 UTC 2018
> Sourcing environment file
> /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash
> 2018-05-19 

Re: [openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/ocata image build by devstack

2018-05-18 Thread Michael Johnson
Hi rezroo,

Yes, the recent release of pip 10 broke the disk image building.
There is a patch posted here: https://review.openstack.org/#/c/562850/
pending review that works around this issue for the ocata branch by
pinning the pip used for the image building to a version that does not
have this issue.
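
The general shape of the workaround, illustrative only (the actual pin
lives in the linked review), is to keep the image-build environment on a
pre-10 pip:

    pip install -U 'pip<10'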

Michael


On Thu, May 17, 2018 at 7:38 PM, rezroo  wrote:
> Hello - I'm trying to install a working devstack ocata (driven by my
> local.conf) on a new server, and some python packages have changed, so I
> end up with this error during the build of the octavia image:
>
> 2018-05-18 01:00:26.276 |   Found existing installation: Jinja2 2.8
> 2018-05-18 01:00:26.280 | Uninstalling Jinja2-2.8:
> 2018-05-18 01:00:26.280 |   Successfully uninstalled Jinja2-2.8
> 2018-05-18 01:00:26.839 |   Found existing installation: PyYAML 3.11
> 2018-05-18 01:00:26.969 | Cannot uninstall 'PyYAML'. It is a distutils
> installed project and thus we cannot accurately determine which files belong
> to it which would lead to only a partial uninstall.
>
> 2018-05-18 02:05:44.768 | Unmount
> /tmp/dib_build.2fbBBePD/mnt/var/cache/apt/archives
> 2018-05-18 02:05:44.796 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/pip
> 2018-05-18 02:05:44.820 | Unmount
> /tmp/dib_build.2fbBBePD/mnt/tmp/in_target.d
> 2018-05-18 02:05:44.844 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/ccache
> 2018-05-18 02:05:44.868 | Unmount /tmp/dib_build.2fbBBePD/mnt/sys
> 2018-05-18 02:05:44.896 | Unmount /tmp/dib_build.2fbBBePD/mnt/proc
> 2018-05-18 02:05:44.920 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev/pts
> 2018-05-18 02:05:44.947 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev
> 2018-05-18 02:05:50.668 |
> +/opt/stack/octavia/devstack/plugin.sh:build_octavia_worker_image:1
> exit_trap
> 2018-05-18 02:05:50.679 | +./devstack/stack.sh:exit_trap:494 local
> r=1
> 2018-05-18 02:05:50.690 | ++./devstack/stack.sh:exit_trap:495 jobs
> -p
> 2018-05-18 02:05:50.700 | +./devstack/stack.sh:exit_trap:495 jobs=
> 2018-05-18 02:05:50.710 | +./devstack/stack.sh:exit_trap:498 [[ -n
> '' ]]
> 2018-05-18 02:05:50.720 | +./devstack/stack.sh:exit_trap:504
> kill_spinner
> 2018-05-18 02:05:50.731 | +./devstack/stack.sh:kill_spinner:390  '[' '!'
> -z '' ']'
> 2018-05-18 02:05:50.741 | +./devstack/stack.sh:exit_trap:506 [[ 1
> -ne 0 ]]
> 2018-05-18 02:05:50.751 | +./devstack/stack.sh:exit_trap:507 echo
> 'Error on exit'
> 2018-05-18 02:05:50.751 | Error on exit
> 2018-05-18 02:05:50.761 | +./devstack/stack.sh:exit_trap:508
> generate-subunit 1526608058 1092 fail
> 2018-05-18 02:05:51.148 | +./devstack/stack.sh:exit_trap:509 [[ -z
> /tmp ]]
> 2018-05-18 02:05:51.157 | +./devstack/stack.sh:exit_trap:512
> /home/stack/devstack/tools/worlddump.py -d /tmp
>
> I've tried pip uninstalling PyYAML and pip installing it before running
> stack.sh, but the error comes back.
>
> $ sudo pip uninstall PyYAML
> The directory '/home/stack/.cache/pip/http' or its parent directory is not
> owned by the current user and the cache has been disabled. Please check the
> permissions and owner of that directory. If executing pip with sudo, you may
> want sudo's -H flag.
> Uninstalling PyYAML-3.12:
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/INSTALLER
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/METADATA
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/RECORD
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/WHEEL
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/top_level.txt
>   /usr/local/lib/python2.7/dist-packages/_yaml.so
> Proceed (y/n)? y
>   Successfully uninstalled PyYAML-3.12
>
> I've posted my question to the pip folks and they think it's an openstack
> issue: https://github.com/pypa/pip/issues/4805
>
> Is there a workaround here?
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Jens Harbott
2018-04-16 7:46 GMT+00:00 Ian Wienand :
> On 04/15/2018 09:32 PM, Gary Kotton wrote:
>>
>> The gate is currently broken with
>>  https://launchpad.net/bugs/1763966.
>> https://review.openstack.org/#/c/561427/
>>  Can unblock us in the short term. Any other ideas?
>
>
> I'm thinking this is probably along the lines of the best idea.  I
> left a fairly long comment on this in [1], but the root issue here is
> that if a system package is created using distutils (rather than
> setuptools) we end up with this problem with pip10.
>
> That means the problem occurs when we a) try to overwrite a system
> package and b) that package has been created using distutils.  This
> means it is a small(er) subset of packages that cause this problem.
> Ergo, our best option might be to see if we can avoid such packages on
> a one-by-one basis, like here.
>
> In some cases, we could just delete the .egg-info file, which is
> approximately what was happening before anyway.
>
> In this particular case, the psutil package is used by glance & the
> peakmem tracker.  Under USE_PYTHON3, devstack's pip_install_gr only
> installs the python3 library; however the peakmem tracker always uses
> python2 -- leading to the missing-library failures in [2].  I have two
> thoughts; either install for both python2 & 3 always [3] or make
> peakmem tracker obey USE_PYTHON3 [4].  We can discuss the approach in
> the reviews.
>
> The other option is to move everything to virtualenvs, so we never
> conflict with a system package, as suggested by clarkb [5] or
> pabelanger [6].  These are more invasive changes, but also arguably
> more correct.
>
> Note diskimage-builder, and hence our image generation for some
> platforms, is also broken.  Working on that in [7].

The cap in devstack has been merged in master and stable/queens; other
merges are being held up by unstable volume checks, or so it seems.

There is also another issue caused by pip 10 treating some former
warnings as errors now. I've tried to list all "global" (Infra+QA)
related issues in [8], feel free to amend as needed.

[8] https://etherpad.openstack.org/p/pip10-mitigation

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Ian Wienand

On 04/15/2018 09:32 PM, Gary Kotton wrote:

The gate is currently broken with
 https://launchpad.net/bugs/1763966. https://review.openstack.org/#/c/561427/
 Can unblock us in the short term. Any other ideas?


I'm thinking this is probably along the lines of the best idea.  I
left a fairly long comment on this in [1], but the root issue here is
that if a system package is created using distutils (rather than
setuptools) we end up with this problem with pip10.

That means the problem occurs when we a) try to overwrite a system
package and b) that package has been created using distutils.  This
means it is a small(er) subset of packages that cause this problem.
Ergo, our best option might be to see if we can avoid such packages on
a one-by-one basis, like here.

In some cases, we could just delete the .egg-info file, which is
approximately what was happening before anyway.
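
Concretely, that deletion amounts to something like this (sketch only;
the exact path and version suffix vary by distro and package):

    # drop the distutils metadata so pip stops "seeing" the system copy
    sudo rm -rf /usr/lib/python2.7/dist-packages/psutil-*.egg-info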

In this particular case, the psutil package is used by glance & the
peakmem tracker.  Under USE_PYTHON3, devstack's pip_install_gr only
installs the python3 library; however the peakmem tracker always uses
python2 -- leading to the missing-library failures in [2].  I have two
thoughts; either install for both python2 & 3 always [3] or make
peakmem tracker obey USE_PYTHON3 [4].  We can discuss the approach in
the reviews.
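
The first option amounts to roughly the following (illustrative, not the
literal content of [3]; assumes devstack's usual /opt/stack/requirements
checkout):

    python2 -m pip install -c /opt/stack/requirements/upper-constraints.txt psutil
    python3 -m pip install -c /opt/stack/requirements/upper-constraints.txt psutil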

The other option is to move everything to virtualenvs, so we never
conflict with a system package, as suggested by clarkb [5] or
pabelanger [6].  These are more invasive changes, but also arguably
more correct.

Note diskimage-builder, and hence our image generation for some
platforms, is also broken.  Working on that in [7].

-i


[1] https://github.com/pypa/pip/issues/4805#issuecomment-340987536
[2] https://review.openstack.org/561427
[3] https://review.openstack.org/561524
[4] https://review.openstack.org/561525
[5] https://review.openstack.org/558930
[6] https://review.openstack.org/#/c/552939
[7] https://review.openstack.org/#/c/561479/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Slawomir Kaplonski
Right. Thx Gary :)

> Message written by Gary Kotton  on 16.04.2018 at 09:14:
> 
> Hi,
> I think that we need https://review.openstack.org/561471 until we have a 
> proper solution.
> Thanks
> Gary
> 
> On 4/16/18, 10:13 AM, "Slawomir Kaplonski"  wrote:
> 
>Hi,
> 
>I just wanted to ask if there is any ongoing work on 
> https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It 
> looks that e.g. all grenade jobs in neutron are broken currently :/
> 
>> Message written by Gary Kotton  on 15.04.2018 at 13:32:
>> 
>> Hi,
>> The gate is currently broken with https://launchpad.net/bugs/1763966. 
>> https://review.openstack.org/#/c/561427/ Can unblock us in the short term. 
>> Any other ideas?
>> Thanks
>> Gary
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>— 
>Best regards
>Slawek Kaplonski
>skapl...@redhat.com
> 
> 
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Best regards
Slawek Kaplonski
skapl...@redhat.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Gary Kotton
Hi,
I think that we need https://review.openstack.org/561471 until we have a proper 
solution.
Thanks
Gary

On 4/16/18, 10:13 AM, "Slawomir Kaplonski"  wrote:

Hi,

I just wanted to ask if there is any ongoing work on 
https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It 
looks that e.g. all grenade jobs in neutron are broken currently :/

> Message written by Gary Kotton  on 15.04.2018 at 13:32:
> 
> Hi,
> The gate is currently broken with https://launchpad.net/bugs/1763966. 
https://review.openstack.org/#/c/561427/ Can unblock us in the short term. Any 
other ideas?
> Thanks
> Gary
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Best regards
Slawek Kaplonski
skapl...@redhat.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Slawomir Kaplonski
Hi,

I just wanted to ask if there is any ongoing work on 
https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It 
looks that e.g. all grenade jobs in neutron are broken currently :/

> Message written by Gary Kotton  on 15.04.2018 at 13:32:
> 
> Hi,
> The gate is currently broken with https://launchpad.net/bugs/1763966. 
> https://review.openstack.org/#/c/561427/ Can unblock us in the short term. 
> Any other ideas?
> Thanks
> Gary
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Best regards
Slawek Kaplonski
skapl...@redhat.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-04-02 Thread Ghanshyam Mann
On Thu, Mar 29, 2018 at 5:21 AM, James E. Blair  wrote:
> Hi,
>
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
> those using legacy devstack jobs (but you may want to be aware of it).
> It is more significant for new-style devstack jobs.
>
> The change is at https://review.openstack.org/549252
>
> In summary, when this change lands, new-style devstack jobs should no
> longer need to set LIBS_FROM_GIT explicitly.  Existing legacy jobs
> should be unaffected (but there is a change to the verification process
> performed by devstack).
>
>
> Currently devstack expects the contents of LIBS_FROM_GIT to be
> exclusively a list of python packages which, obviously, should be
> installed from git and not pypi.  It is used for two purposes:
> determining whether an individual package should be installed from git,
> and verifying that a package was installed from git.
>
> In the old devstack-gate system, we prepared many of the common git
> repos, whether they were used or not.  So LIBS_FROM_GIT was created to
> indicate that in some cases devstack should ignore those repos and
> install from pypi instead.  In other words, its original purpose was
> purely as a method of selecting whether a devstack-gate prepared repo
> should be used or ignored.
>
> In Zuul v3, we have a good way to indicate whether a job is going to use
> a repo or not -- add it to "required-projects".  Considering that, the
> LIBS_FROM_GIT variable is redundant.  So my patch causes it to be
> automatically generated based on the contents of required-projects.
> This means that job authors don't need to list every required repository
> twice.
>
> However, a naïve implementation of that runs afoul of the second use of
> LIBS_FROM_GIT -- verifying that python packages are installed from git.
>
> This usage was added later, after a typographical error ("-" vs "_" in a
> python package name) in a constraints file caused us not to install a
> package from git.  Now devstack verifies that every package in
> LIBS_FROM_GIT is installed.  However, Zuul doesn't know that devstack,
> tempest, and other packages aren't installed.  So adding them
> automatically to LIBS_FROM_GIT will cause devstack to fail.
>
> My change modifies this verification to only check that packages
> mentioned in LIBS_FROM_GIT that devstack tried to install were actually
> installed.  I realize that stated as such this sounds tautological,
> however, this check is still valid -- it would have caught the original
> error that prompted the check in the first case.
>
> What the revised check will no longer handle is a typo in a legacy job.
> If someone enters a typo into LIBS_FROM_GIT, it will no longer fail.
> However, I think the risk is worthwhile -- particularly since it is in
> service of a system which eliminates the opportunity to introduce such
> an error in the first place.
>
> To see the result in action, take a look at this change which, in only a
> few lines, implements what was a significantly more complex undertaking
> in Zuul v2:
>
> https://review.openstack.org/548331
>
> Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for
> some reason, you require a new-style devstack job to manually set
> LIBS_FROM_GIT, that will still work.  Simply define the variable as
> normal, and the module which generates the devstack config will bypass
> automatic generation if the variable is already set.

+1, thanks Jim. The idea looks good to me as long as it still works for
non-zuulv3 users. I'll check the patch.

-gmann

>
> -Jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-29 Thread Sean McGinnis
> 
> Neither local nor third-party CI use should be affected.  There's no
> change in behavior based on current usage patterns.  Only the caveat
> that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or
> non-existent package name), it will not automatically be caught.
> 
> -Jim

Perfect, thanks Jim.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-29 Thread James E. Blair
Sean McGinnis  writes:

> On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
>> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
>> > Hi,
>> > 
>> > I've proposed a change to devstack which slightly alters the
>> > LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
>> > those using legacy devstack jobs (but you may want to be aware of it).
>> > It is more significant for new-style devstack jobs.
>> > 
>> > -snip-
>> > 
>> 
>> How does this apply to uses of devstack outside of zuul, such as in a
>> local development environment?
>> 
>> Doug
>> 
>
> This is my question too. I know in Cinder there are a lot of third party CI
> systems that do not use zuul. If they are impacted in any way by changes to
> devstack, we will need to make sure they are all aware of those changes (and
> have an alternative method for them to get the same functionality).

Neither local nor third-party CI use should be affected.  There's no
change in behavior based on current usage patterns.  Only the caveat
that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or
non-existent package name), it will not automatically be caught.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-29 Thread Sean McGinnis
On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> > Hi,
> > 
> > I've proposed a change to devstack which slightly alters the
> > LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
> > those using legacy devstack jobs (but you may want to be aware of it).
> > It is more significant for new-style devstack jobs.
> > 
> > -snip-
> > 
> 
> How does this apply to uses of devstack outside of zuul, such as in a
> local development environment?
> 
> Doug
> 

This is my question too. I know in Cinder there are a lot of third party CI
systems that do not use zuul. If they are impacted in any way by changes to
devstack, we will need to make sure they are all aware of those changes (and
have an alternative method for them to get the same functionality).

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-29 Thread Tony Breeds
On Fri, Mar 16, 2018 at 02:29:51PM +, Kwan, Louie wrote:
> In the stable/queens branch, since openstacksdk===0.11.3 and 
> os-service-types===1.1.0 are described in openstack's upper-constraints.txt, 
> 
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297
> 
> If I do 
> 
> > git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
> 
> And then stack.sh
> 
> We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Okay that's pretty strange.  I can't think of why you'd be getting the
master version of upper-constraints.txt from the queens branch.

[tony@thor requirements]$ tools/grep-all.sh openstacksdk | grep -E '(master|queens)'
origin/master : openstacksdk>=0.11.2  # Apache-2.0
origin/stable/queens  : openstacksdk>=0.9.19  # Apache-2.0
origin/master : openstacksdk===0.12.0
origin/stable/queens  : openstacksdk===0.11.3
[tony@thor requirements]$ tools/grep-all.sh os-service-types | grep -E '(master|queens)'
origin/master : os-service-types>=1.2.0  # Apache-2.0
origin/stable/queens  : os-service-types>=1.1.0  # Apache-2.0
origin/master : os-service-types===1.2.0
origin/stable/queens  : os-service-types===1.1.0


A quick eyeball of the code doesn't show anything obvious.

Can you provide the devstack log somewhere?
 
> Having said that, we need the older versions; how do we configure devstack
> to use openstacksdk===0.11.3 and os-service-types===1.1.0?

We can try to work out why you're getting the wrong versions, but what
error/problem do you see with the versions from master?

I'd expect some general "we need version X of FOO but Y is installed"
messages.
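
If you can't share the whole log, even the constraints-related lines
would help; something like this (hypothetical log path, assuming LOGFILE
is set in your local.conf):

    grep -i 'upper-constraints' /opt/stack/logs/stack.sh.log | head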

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-28 Thread Doug Hellmann
Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> Hi,
> 
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
> those using legacy devstack jobs (but you may want to be aware of it).
> It is more significant for new-style devstack jobs.
> 
> The change is at https://review.openstack.org/549252
> 
> In summary, when this change lands, new-style devstack jobs should no
> longer need to set LIBS_FROM_GIT explicitly.  Existing legacy jobs
> should be unaffected (but there is a change to the verification process
> performed by devstack).
> 
> 
> Currently devstack expects the contents of LIBS_FROM_GIT to be
> exclusively a list of python packages which, obviously, should be
> installed from git and not pypi.  It is used for two purposes:
> determining whether an individual package should be installed from git,
> and verifying that a package was installed from git.
> 
> In the old devstack-gate system, we prepared many of the common git
> repos, whether they were used or not.  So LIBS_FROM_GIT was created to
> indicate that in some cases devstack should ignore those repos and
> install from pypi instead.  In other words, its original purpose was
> purely as a method of selecting whether a devstack-gate prepared repo
> should be used or ignored.
> 
> In Zuul v3, we have a good way to indicate whether a job is going to use
> a repo or not -- add it to "required-projects".  Considering that, the
> LIBS_FROM_GIT variable is redundant.  So my patch causes it to be
> automatically generated based on the contents of required-projects.
> This means that job authors don't need to list every required repository
> twice.
> 
> However, a naïve implementation of that runs afoul of the second use of
> LIBS_FROM_GIT -- verifying that python packages are installed from git.
> 
> This usage was added later, after a typographical error ("-" vs "_" in a
> python package name) in a constraints file caused us not to install a
> package from git.  Now devstack verifies that every package in
> LIBS_FROM_GIT is installed.  However, Zuul doesn't know that devstack,
> tempest, and other packages aren't installed.  So adding them
> automatically to LIBS_FROM_GIT will cause devstack to fail.
> 
> My change modifies this verification to only check that packages
> mentioned in LIBS_FROM_GIT that devstack tried to install were actually
> installed.  I realize that stated as such this sounds tautological,
> however, this check is still valid -- it would have caught the original
> error that prompted the check in the first case.
> 
> What the revised check will no longer handle is a typo in a legacy job.
> If someone enters a typo into LIBS_FROM_GIT, it will no longer fail.
> However, I think the risk is worthwhile -- particularly since it is in
> service of a system which eliminates the opportunity to introduce such
> an error in the first place.
> 
> To see the result in action, take a look at this change which, in only a
> few lines, implements what was a significantly more complex undertaking
> in Zuul v2:
> 
> https://review.openstack.org/548331
> 
> Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for
> some reason, you require a new-style devstack job to manually set
> LIBS_FROM_GIT, that will still work.  Simply define the variable as
> normal, and the module which generates the devstack config will bypass
> automatic generation if the variable is already set.
> 
> -Jim
> 

How does this apply to uses of devstack outside of zuul, such as in a
local development environment?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-21 Thread Monty Taylor

On 03/16/2018 09:29 AM, Kwan, Louie wrote:

In the stable/queens branch, since openstacksdk===0.11.3 and os-service-types===1.1.0 
are described in openstack's upper-constraints.txt,

https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297

If I do


git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens


And then stack.sh

We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Having said that, we need the older versions; how do we configure devstack to use 
openstacksdk===0.11.3 and os-service-types===1.1.0?


Would you mind sharing why you need the older versions?

os-service-types is explicitly designed such that the latest version 
should always be correct.


If there is something in 1.2.0 that has broken you in some way that you 
need an older version, that's a problem and we should look in to it.


The story is intended to be similar for sdk moving forward ... but we're 
still pre-1.0, so that makes sense at the moment. I'm still interested 
in what specific issue you had, just to make sure we're aware of issues 
people are having.


Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-16 Thread Matt Riedemann

On 3/16/2018 9:29 AM, Kwan, Louie wrote:

In the stable/queens branch, since openstacksdk===0.11.3 and os-service-types===1.1.0 
are described in openstack's upper-constraints.txt,

https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297

If I do


git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens


And then stack.sh

We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Having said that, we need the older versions; how do we configure devstack to use 
openstacksdk===0.11.3 and os-service-types===1.1.0?



You could try setting this in your local.conf:

https://github.com/openstack-dev/devstack/blob/master/stackrc#L547

GITBRANCH["python-openstacksdk"]=0.11.3

But I don't see a similar entry for os-service-types.

I don't know if ^ will work, but it's what I'd try.
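
Put together, a minimal local.conf sketch of that idea (untested, as
noted above; adding the lib to LIBS_FROM_GIT is my assumption about what
devstack needs in order to install it from the checkout at all):

    [[local|localrc]]
    LIBS_FROM_GIT=python-openstacksdk
    GITBRANCH["python-openstacksdk"]=0.11.3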

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Jens Harbott added to core

2018-03-05 Thread Andrea Frittoli
On Mon, 5 Mar 2018, 1:02 am Ian Wienand,  wrote:

> Hello,
>
> Jens Harbott (frickler) has agreed to take on core responsibilities in
> devstack, so feel free to bug him about reviews :)
>

Yay +1

>
> We have also added the members of qa-release in directly to
> devstack-core, just for visibility (they already had permissions via
> qa-release -> devstack-release -> devstack-core).
>
> We have also added devstack-core as grenade core to hopefully expand
> coverage there.
>

Thanks, this helps indeed.
I started working on the zuulv3 native grenade jobs; hopefully this will
help us pick up a bit more speed on that.


> ---
>
> Always feel free to give a gentle ping on reviews that don't seem have
> received sufficient attention.
>
> But please also take a few minutes to compose a commit message!  I
> think sometimes devs have been deep in the weeds with their cool
> change and devstack requires just a few tweaks.  It's easy to forget
> not all reviewers may have this same context.  A couple of
> well-crafted sentences can avoid pulling projects and "git blame"
> archaeological digs, which gets everything going faster!
>


+1000

Andrea Frittoli (andreaf)

>
> Thanks,
>
> -i
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Broken repo on devstack-plugin-container for Fedora

2018-01-24 Thread Andreas Jaeger
On 2018-01-24 14:14, Daniel Mellado wrote:
> Hi everyone,
> 
> Since today, when I try to install the devstack-plugin-container plugin on
> Fedora, it complains here [1] about not being able to sync the cache
> for the repo, with the following error [2].
> 
> This is affecting me on Fedora 26+ from different network locations, so I
> was wondering if someone from SUSE could have a look (it did work for
> Andreas on openSUSE... thanks in advance!)

Just a heads up:

So, one problem: The signing key was expired. The key was extended but
not used - now the repo has been published again using the extended key.
So, download works.

AFAIU there's still some problem that dnf is not happy with - Daniel is
investigating.

Andreas

> 
> [1]
> https://github.com/openstack/devstack-plugin-container/blob/master/devstack/lib/docker#L164-L170
> 
> [2] http://paste.openstack.org/show/652041/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Broken repo on devstack-plugin-container for Fedora

2018-01-24 Thread Paul Belanger
On Wed, Jan 24, 2018 at 02:14:40PM +0100, Daniel Mellado wrote:
> Hi everyone,
> 
> Since today, when I try to install the devstack-plugin-container plugin on
> Fedora, it complains here [1] about not being able to sync the cache
> for the repo, with the following error [2].
> 
> This is affecting me on Fedora 26+ from different network locations, so I
> was wondering if someone from SUSE could have a look (it did work for
> Andreas on openSUSE... thanks in advance!)
> 
> [1]
> https://github.com/openstack/devstack-plugin-container/blob/master/devstack/lib/docker#L164-L170
> 
> [2] http://paste.openstack.org/show/652041/
> 
We should consider mirroring this into our AFS mirror infrastructure to help
remove the dependency on opensuse servers. Then each regional mirror has a copy
and we don't always need to hit upstream.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][zuul] About devstack plugin orders and the log to contain the running local.conf

2017-11-22 Thread James E. Blair
cor...@inaugust.com (James E. Blair) writes:

> "gong_ys2004"  writes:
>
>> Hi, everyone
>> I am trying to migrate tacker's functional CI job into new zuul v3
>> framework, but it seems:
>> 1. the devstack plugin order is not the one I specified in the .zuul.yaml
>> (https://review.openstack.org/#/c/516004/4/.zuul.yaml). I have:
>>   devstack_plugins:
>> heat: https://git.openstack.org/openstack/heat
>> networking-sfc:  https://git.openstack.org/openstack/networking-sfc
>> aodh: https://git.openstack.org/openstack/aodh
>> ceilometer: https://git.openstack.org/openstack/ceilometer
>> barbican: https://git.openstack.org/openstack/barbican
>> mistral: https://git.openstack.org/openstack/mistral
>> tacker: https://git.openstack.org/openstack/tacker
>> but the running order seems to be
>> (http://logs.openstack.org/04/516004/4/check/tacker-functional-devstack/f365f21/job-output.txt.gz):
>> local plugins=,ceilometer,aodh,mistral,networking-sfc,heat,tacker,barbican
>> I need barbican to start before tacker.
>
> [I changed the subject to replace the 'openstack' tag with 'devstack',
> which is what I assume was intended.]
>
>
> As Yatin Karel later notes, this is handled as a regular python
> dictionary which means we process the keys in an indeterminate order.
>
> I can think of a few ways we can address this:
>
...
> 3) Add dependency information to devstack plugins, but rather than
> having devstack resolve it, have the Ansible role which writes out the
> local.conf read that information and resolve the order.  This lets us
> keep the actual information in plugins so we don't have to continually
> update the role, but it lets us perform the processing in the role
> (which is in Python) when writing the config file.
...
> After considering all of those, I think I favor option 3, because we
> should be able to implement it without too much difficulty, it will
> improve things by providing a known and documented location for plugins
> to specify dependencies, and once it is in place, we can still implement
> option 1 later if we want, using the same declaration.

I discussed this with Dean and we agreed on something close to this
option, except that we would do it in such a way that devstack could
potentially make use of this in the future.  For starters, it will be
easy for devstack to error if someone adds plugins in the wrong order.
If someone feels like having a lot of fun, they could actually implement
a dependency resolver in devstack.

I have two patches which implement this idea:

https://review.openstack.org/521965
https://review.openstack.org/522054

Once those land, we'll need to add the appropriate lines to barbican and
tacker's devstack plugin settings files, then the job you're creating
should start those plugins in the right order automatically.
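
For the settings files, that should be a one-line dependency declaration
along these lines (hypothetical syntax; the real directive is whatever
the reviews above define):

    # in tacker's devstack/settings (hypothetical syntax)
    plugin_requires tacker barbican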

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][zuul] About devstack plugin orders and the log to contain the running local.conf

2017-10-30 Thread James E. Blair
"gong_ys2004"  writes:

> Hi, everyone
> I am trying to migrate tacker's functional CI job into new zuul v3 framework, 
> but it seems:
> 1. the devstack plugin order is not the one I specified in the .zuul.yaml
> (https://review.openstack.org/#/c/516004/4/.zuul.yaml). I have:
>   devstack_plugins:
> heat: https://git.openstack.org/openstack/heat
> networking-sfc:  https://git.openstack.org/openstack/networking-sfc
> aodh: https://git.openstack.org/openstack/aodh
> ceilometer: https://git.openstack.org/openstack/ceilometer
> barbican: https://git.openstack.org/openstack/barbican
> mistral: https://git.openstack.org/openstack/mistral
> tacker: https://git.openstack.org/openstack/tacker
> but the running order seems to be
> (http://logs.openstack.org/04/516004/4/check/tacker-functional-devstack/f365f21/job-output.txt.gz):
> local plugins=,ceilometer,aodh,mistral,networking-sfc,heat,tacker,barbican
> I need barbican to start before tacker.

[I changed the subject to replace the 'openstack' tag with 'devstack',
which is what I assume was intended.]


As Yatin Karel later notes, this is handled as a regular python
dictionary which means we process the keys in an indeterminate order.

I can think of a few ways we can address this:

1) Add dependency information to devstack plugins so that devstack
itself is able to work out the correct order.  This is perhaps the ideal
solution from a user experience perspective, but perhaps the most
difficult.

2) Add dependency information to the Ansible role so that it resolves
the order on its own.  This is attractive because it solves a problem
that is unique to this Ansible role entirely within the role.  However,
it means that new plugins would need to also update this role which is
in devstack itself, which partially defeats the purpose of plugins.

3) Add dependency information to devstack plugins, but rather than
having devstack resolve it, have the Ansible role which writes out the
local.conf read that information and resolve the order.  This lets us
keep the actual information in plugins so we don't have to continually
update the role, but it lets us perform the processing in the role
(which is in Python) when writing the config file.

4) Alter Zuul's handling of this to an ordered dictionary.  Then when
you specify a series of plugins, they would be processed in that order.
However, I'm not sure this works very well with Zuul job inheritance.
Imagine that a parent job enabled the barbican plugin, and a child job
enabled ceilometer but needed ceilometer to start before barbican.  There
would be no way to express that.

5) Change the definition of the dictionary to encode ordering
information.  Currently the dictionary schema is simply the name of the
plugin as the key, and either the contents of the "enable_plugin" line,
or "null" if the plugin should be disabled.  We could alter it to be:

  devstack_plugins:
barbican:
  enabled: true
  url: https://git.openstack.org/openstack/barbican
  branch: testing
tacker:
  enabled: true
  url: https://git.openstack.org/openstack/tacker
  requires:
barbican: true

This option is very flexible, but makes using the jobs somewhat more
difficult because of the complexity of the data structure.

After considering all of those, I think I favor option 3, because we
should be able to implement it without too much difficulty, it will
improve things by providing a known and documented location for plugins
to specify dependencies, and once it is in place, we can still implement
option 1 later if we want, using the same declaration.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Tong Liu
The workaround [1] has not landed yet. I saw it has a +1 workflow vote but it
has not been merged.

Thanks,
Tong
[1] https://review.openstack.org/#/c/508344/

On Mon, Oct 2, 2017 at 6:51 AM, Mehdi Abaakouk  wrote:

> Looks like the LIBS_FROM_GIT workarounds have landed, but I still have
> some issue
> on telemetry integration jobs:
>
>  http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz
>
>
> On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:
>
>> On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:
>>
>>> 2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
>>>
 We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

 http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt

>>>
>>> That looks similar to what Ian fixed in [1], seems like your job needs
>>> a corresponding patch.
>>>
>>
>> Thanks, I have proposed the same kind of patch for telemetry [1]
>>
>> [1] https://review.openstack.org/508448
>>
>> --
>> Mehdi Abaakouk
>> mail: sil...@sileht.net
>> irc: sileht
>>
>
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Mooney, Sean K
This also broke the legacy-tempest-dsvm-nova-os-vif gate job
http://logs.openstack.org/98/508498/1/check/legacy-tempest-dsvm-nova-os-vif/8fdf055/logs/devstacklog.txt.gz#_2017-09-29_14_15_41_961

> -Original Message-
> From: Mehdi Abaakouk [mailto:sil...@sileht.net]
> Sent: Monday, October 2, 2017 2:52 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [devstack] zuulv3 gate status;
> LIBS_FROM_GIT failures
> 
> Looks like the LIBS_FROM_GIT workarounds have landed, but I still have
> some issue on telemetry integration jobs:
> 
>   http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz
> 
> On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:
> >On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:
> >>2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk <sil...@sileht.net>:
> >>>We also have our legacy-telemetry-dsvm-integration-ceilometer
> broken:
> >>>
> >>>http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt
> >>
> >>That looks similar to what Ian fixed in [1], seems like your job
> needs
> >>a corresponding patch.
> >
> >Thanks, I have proposed the same kind of patch for telemetry [1]
> >
> >[1] https://review.openstack.org/508448
> >
> >--
> >Mehdi Abaakouk
> >mail: sil...@sileht.net
> >irc: sileht
> 
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Mehdi Abaakouk

Looks like the LIBS_FROM_GIT workarounds have landed, but I still have some 
issue
on telemetry integration jobs:

 
http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz

On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:

On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:

2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :

We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt


That looks similar to what Ian fixed in [1], seems like your job needs
a corresponding patch.


Thanks, I have proposed the same kind of patch for telemetry [1]

[1] https://review.openstack.org/508448

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-29 Thread Attila Fazekas
I have overlay2 and super fast disk I/O (memory cheat + SSD);
just the CPU freq is not high. The CPU is a Broadwell, and it actually
has a lot more cores (E5-2630V4). Even a 5-year-old gamer CPU can be 2
times faster on a single core, but cannot compete with all of the cores ;-)

This machine has seen faster setup times, but I'll return to that in
another topic.
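
For reference, checking which storage driver is active is just the
standard docker CLI (output abbreviated to the relevant field):

    $ sudo docker info | grep -i 'storage driver'
    Storage Driver: overlay2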

On Tue, Sep 26, 2017 at 6:16 PM, Michał Jastrzębski 
wrote:

> On 26 September 2017 at 07:34, Attila Fazekas  wrote:
> > decompressing those registry tar.gz takes ~0.5 min on a 2.2 GHz CPU.
> >
> > Fully pulling all containers takes something like ~4.5 min (from
> localhost,
> > one leaf request at a time),
> > but on the gate vm we usually have 4 cores,
> > so it is possible to go below 2 min with a better pulling strategy,
> > unless we hit some disk limit.
>
> Check your $docker info. If you kept defaults, storage driver will be
> devicemapper on loopback, which is awfully slow and not very reliable.
> Overlay2 is much better and should speed things up quite a bit. For me
> deployment of 5 node openstack on vms similar to gate took 6min (I had
> registry available in same network). Also if you pull single image it
> will download all base images as well, so next one will be
> significantly faster.
>
> >
> > On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski 
> > wrote:
> >>
> >> On 22 September 2017 at 17:21, Paul Belanger 
> >> wrote:
> >> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> >> >> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> >> >> > "if DevStack gets custom images prepped to make its jobs
> >> >> > run faster, won't Triple-O, Kolla, et cetera want the same and
> where
> >> >> > do we draw that line?). "
> >> >> >
> >> >> > IMHO we can try to have only one big image per distribution,
> >> >> > where the packages are the union of the packages requested by all
> >> >> > teams,
> >> >> > minus the packages blacklisted by any team.
> >> >> [...]
> >> >>
> >> >> Until you realize that some projects want packages from UCA, from
> >> >> RDO, from EPEL, from third-party package repositories. Version
> >> >> conflicts mean they'll still spend time uninstalling the versions
> >> >> they don't want and downloading/installing the ones they do so we
> >> >> have to optimize for one particular set and make the rest
> >> >> second-class citizens in that scenario.
> >> >>
> >> >> Also, preinstalling packages means we _don't_ test that projects
> >> >> actually properly declare their system-level dependencies any
> >> >> longer. I don't know if anyone's concerned about that currently, but
> >> >> it used to be the case that we'd regularly add/break the package
> >> >> dependency declarations in DevStack because of running on images
> >> >> where the things it expected were preinstalled.
> >> >> --
> >> >> Jeremy Stanley
> >> >
> >> > +1
> >> >
> >> > We spend a lot of effort trying to keep the 6 images we have in
> nodepool
> >> > working
> >> > today, I can't imagine how much work it would be to start adding more
> >> > images per
> >> > project.
> >> >
> >> > Personally, I'd like to audit things again once we roll out zuulv3, I
> am
> >> > sure
> >> > there are some tweaks we could make to help speed up things.
> >>
> >> I don't understand, why would you add images per project? We have all
> >> the images there.. What I'm talking about is to leverage what we'll
> >> have soon (registry) to lower time of gates/DIB infra requirements
> >> (DIB would hardly need to refresh images...)
> >>
> >> >
> >> > 
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Jens Harbott
2017-09-29 5:41 GMT+00:00 Ian Wienand :
> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>
>> I'm not aware of issues other than these at this time
>
>
> Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
> also failing for unknown reasons.  Any debugging would be helpful,
> thanks.

It seems there are multiple issues with the multinode jobs:

a) post_failures due to an error in log collection, sample fix at
https://review.openstack.org/508473
b) jobs are being run as two identical tasks on primary and subnodes,
triggering https://bugs.launchpad.net/zun/+bug/1720240

Other issues:
- openstack-tox-py27 is being run on trusty nodes instead of xenial
- unit tests are missing in at least neutron gate runs
- some patches are not getting any results from zuul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Mehdi Abaakouk

On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:

2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :

We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt


That looks similar to what Ian fixed in [1]; it seems like your job needs
a corresponding patch.


Thanks, I have proposed the same kind of patch for telemetry [1]

[1] https://review.openstack.org/508448

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Jens Harbott
2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
> On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:
>>
>> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>>
>>> I'm not aware of issues other than these at this time
>>
>>
>> Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
>> also failing for unknown reasons.  Any debugging would be helpful,
>> thanks.
>
>
> We also have our legacy-telemetry-dsvm-integration-ceilometer broken:
>
> http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt

That looks similar to what Ian fixed in [1]; it seems like your job needs
a corresponding patch.

[1] https://review.openstack.org/#/c/508396

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Mehdi Abaakouk

On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:

On 09/29/2017 03:37 PM, Ian Wienand wrote:

I'm not aware of issues other than these at this time


Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons.  Any debugging would be helpful,
thanks.


We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-28 Thread Ian Wienand

On 09/29/2017 03:37 PM, Ian Wienand wrote:

I'm not aware of issues other than these at this time


Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons.  Any debugging would be helpful,
thanks.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-26 Thread Michał Jastrzębski
On 26 September 2017 at 07:34, Attila Fazekas  wrote:
> Decompressing those registry tar.gz files takes ~0.5 min on a 2.2 GHz CPU.
>
> Fully pulling all containers takes something like ~4.5 min (from localhost,
> one leaf request at a time),
> but on the gate vm we usually have 4 cores,
> so it is possible to go below 2 min with a better pulling strategy,
> unless we hit some disk limit.

Check your "docker info" output. If you kept the defaults, the storage driver
will be devicemapper on loopback, which is awfully slow and not very reliable.
Overlay2 is much better and should speed things up quite a bit. For me,
deployment of a 5 node openstack on vms similar to the gate took 6 min (I had
the registry available in the same network). Also, if you pull a single image
it will download all the base images as well, so the next one will be
significantly faster.
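
For anyone wanting to switch, it boils down to something like this (a rough
sketch; it assumes a systemd-managed docker and no existing
/etc/docker/daemon.json):

---
# switch docker's storage driver to overlay2 via daemon.json
sudo tee /etc/docker/daemon.json <<'EOF'
{
    "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
docker info | grep 'Storage Driver'    # should now report overlay2
---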

>
> On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski 
> wrote:
>>
>> On 22 September 2017 at 17:21, Paul Belanger 
>> wrote:
>> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
>> >> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> >> > "if DevStack gets custom images prepped to make its jobs
>> >> > run faster, won't Triple-O, Kolla, et cetera want the same and where
>> >> > do we draw that line?). "
>> >> >
>> >> > IMHO we can try to have only one big image per distribution,
>> >> > where the packages are the union of the packages requested by all
> >> >> > teams,
>> >> > minus the packages blacklisted by any team.
>> >> [...]
>> >>
>> >> Until you realize that some projects want packages from UCA, from
>> >> RDO, from EPEL, from third-party package repositories. Version
>> >> conflicts mean they'll still spend time uninstalling the versions
>> >> they don't want and downloading/installing the ones they do so we
>> >> have to optimize for one particular set and make the rest
>> >> second-class citizens in that scenario.
>> >>
>> >> Also, preinstalling packages means we _don't_ test that projects
>> >> actually properly declare their system-level dependencies any
>> >> longer. I don't know if anyone's concerned about that currently, but
>> >> it used to be the case that we'd regularly add/break the package
>> >> dependency declarations in DevStack because of running on images
>> >> where the things it expected were preinstalled.
>> >> --
>> >> Jeremy Stanley
>> >
>> > +1
>> >
>> > We spend a lot of effort trying to keep the 6 images we have in nodepool
>> > working
>> > today, I can't imagine how much work it would be to start adding more
>> > images per
>> > project.
>> >
>> > Personally, I'd like to audit things again once we roll out zuulv3, I am
>> > sure
>> > there are some tweaks we could make to help speed up things.
>>
>> I don't understand: why would you add images per project? We have all
>> the images there... What I'm talking about is leveraging what we'll
>> have soon (the registry) to lower gate times and DIB infra requirements
>> (DIB would hardly need to refresh images...)
>>
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-26 Thread Attila Fazekas
Decompressing those registry tar.gz files takes ~0.5 min on a 2.2 GHz CPU.

Fully pulling all containers takes something like ~4.5 min (from localhost,
one leaf request at a time),
but on the gate vm we usually have 4 cores,
so it is possible to go below 2 min with a better pulling strategy,
unless we hit some disk limit.
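
A trivial sketch of such a strategy, assuming a file images.txt with one
image name per line:

---
# pull up to 4 images concurrently instead of one leaf request at a time
xargs -P 4 -n 1 docker pull < images.txt
---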


On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski 
wrote:

> On 22 September 2017 at 17:21, Paul Belanger 
> wrote:
> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> >> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> >> > "if DevStack gets custom images prepped to make its jobs
> >> > run faster, won't Triple-O, Kolla, et cetera want the same and where
> >> > do we draw that line?). "
> >> >
> >> > IMHO we can try to have only one big image per distribution,
> >> > where the packages are the union of the packages requested by all
> teams,
> >> > minus the packages blacklisted by any team.
> >> [...]
> >>
> >> Until you realize that some projects want packages from UCA, from
> >> RDO, from EPEL, from third-party package repositories. Version
> >> conflicts mean they'll still spend time uninstalling the versions
> >> they don't want and downloading/installing the ones they do so we
> >> have to optimize for one particular set and make the rest
> >> second-class citizens in that scenario.
> >>
> >> Also, preinstalling packages means we _don't_ test that projects
> >> actually properly declare their system-level dependencies any
> >> longer. I don't know if anyone's concerned about that currently, but
> >> it used to be the case that we'd regularly add/break the package
> >> dependency declarations in DevStack because of running on images
> >> where the things it expected were preinstalled.
> >> --
> >> Jeremy Stanley
> >
> > +1
> >
> > We spend a lot of effort trying to keep the 6 images we have in nodepool
> working
> > today, I can't imagine how much work it would be to start adding more
> images per
> > project.
> >
> > Personally, I'd like to audit things again once we roll out zuulv3, I am
> sure
> > there are some tweaks we could make to help speed up things.
>
> I don't understand: why would you add images per project? We have all
> the images there... What I'm talking about is leveraging what we'll
> have soon (the registry) to lower gate times and DIB infra requirements
> (DIB would hardly need to refresh images...)
>
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-09-25 Thread Tony Breeds
On Fri, Jun 16, 2017 at 12:06:47PM +1000, Tony Breeds wrote:
> Hi All,
>   I just pushed a review [1] to bump the minimum etcd version to
> 3.2.0, which works on intel and ppc64le.  I know we're pretty late in the
> cycle to be making changes like this, but releasing pike with a dependency
> on 3.1.x makes it harder for users on ppc64le (not many, but a few :D)
> 
> Yours Tony.
> 
> [1] https://review.openstack.org/474825

So this came up at the PTG and the current plan is:

Interim solution:
 1. Get the mirroring tool to the point it can be consumed by infra.
 2. Set up a new zuulv3 job to run this and do the mirroring.

Middle term solution:
 1. Get etcd3 packages updated in debian/ubuntu and create a PPA (or
similar) for infra to consume.

Both of these are intended to be done during Queens.

Long term plan:
 1. Ensure the packages above are just there for 18.04 so we can put this
behind us.

With hindsight I think we need to add something like "Current packages
can be consumed in our CI" as a requirement for anything added as a base
service.
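
To illustrate the interim step, a job could then consume the mirror along
these lines (the mirror path and tarball name here are assumptions, not the
final layout):

---
# fetch etcd from the infra mirror instead of github
ETCD_VERSION=v3.2.0
ETCD_MIRROR=http://tarballs.openstack.org/etcd3
wget ${ETCD_MIRROR}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz
tar xzf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
---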



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Michał Jastrzębski
On 22 September 2017 at 17:21, Paul Belanger  wrote:
> On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
>> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> > "if DevStack gets custom images prepped to make its jobs
>> > run faster, won't Triple-O, Kolla, et cetera want the same and where
>> > do we draw that line?). "
>> >
>> > IMHO we can try to have only one big image per distribution,
> >> > where the packages are the union of the packages requested by all teams,
>> > minus the packages blacklisted by any team.
>> [...]
>>
>> Until you realize that some projects want packages from UCA, from
>> RDO, from EPEL, from third-party package repositories. Version
>> conflicts mean they'll still spend time uninstalling the versions
>> they don't want and downloading/installing the ones they do so we
>> have to optimize for one particular set and make the rest
>> second-class citizens in that scenario.
>>
>> Also, preinstalling packages means we _don't_ test that projects
>> actually properly declare their system-level dependencies any
>> longer. I don't know if anyone's concerned about that currently, but
>> it used to be the case that we'd regularly add/break the package
>> dependency declarations in DevStack because of running on images
>> where the things it expected were preinstalled.
>> --
>> Jeremy Stanley
>
> +1
>
> We spend a lot of effort trying to keep the 6 images we have in nodepool 
> working
> today, I can't imagine how much work it would be to start adding more images 
> per
> project.
>
> Personally, I'd like to audit things again once we roll out zuulv3, I am sure
> there are some tweaks we could make to help speed up things.

I don't understand: why would you add images per project? We have all
the images there... What I'm talking about is leveraging what we'll
have soon (the registry) to lower gate times and DIB infra requirements
(DIB would hardly need to refresh images...)

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Paul Belanger
On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> > "if DevStack gets custom images prepped to make its jobs
> > run faster, won't Triple-O, Kolla, et cetera want the same and where
> > do we draw that line?). "
> > 
> > IMHO we can try to have only one big image per distribution,
> > where the packages are the union of the packages requested by all teams,
> > minus the packages blacklisted by any team.
> [...]
> 
> Until you realize that some projects want packages from UCA, from
> RDO, from EPEL, from third-party package repositories. Version
> conflicts mean they'll still spend time uninstalling the versions
> they don't want and downloading/installing the ones they do so we
> have to optimize for one particular set and make the rest
> second-class citizens in that scenario.
> 
> Also, preinstalling packages means we _don't_ test that projects
> actually properly declare their system-level dependencies any
> longer. I don't know if anyone's concerned about that currently, but
> it used to be the case that we'd regularly add/break the package
> dependency declarations in DevStack because of running on images
> where the things it expected were preinstalled.
> -- 
> Jeremy Stanley

+1

We spend a lot of effort trying to keep the 6 images we have in nodepool working
today, I can't imagine how much work it would be to start adding more images per
project.

Personally, I'd like to audit things again once we roll out zuulv3, I am sure
there are some tweaks we could make to help speed up things.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Michał Jastrzębski
On 22 September 2017 at 11:45, Clark Boylan  wrote:
> On Fri, Sep 22, 2017, at 08:58 AM, Michał Jastrzębski wrote:
>> Another, more revolutionary (for good or ill) alternative would be to
>> move gates to run Kolla instead of DevStack. We're working towards
>> registry of images, and we support most of openstack services now. If
>> we enable mixed installation (your service in devstack-ish way, others
>> via Kolla), that should lower the amount of downloads quite
>> dramatically (lots of it will be downloads from registry which will be
>> mirrored/cached in every nodepool). Then all we really need is to
>> support barebone image with docker and ansible installed and that's
>> it.
>
> Except that it very likely isn't going to use less bandwidth. We already
> mirror most of these package repos so all transfers are local to the
> nodepool cloud region. In total we seem to grab about 139MB of packages
> for a neutron dvr multinode scenario job (146676348 bytes) on Ubuntu
> Xenial. This is based off the package list compiled at
> http://paste.openstack.org/raw/621753/ then asking apt-cache for the
> package size for the latest version.
>
> Kolla images on the other hand are in the multigigabyte range
> http://tarballs.openstack.org/kolla/images/.
>
> Clark

Right, all 200+ of them; with proper registry management it's going to
be more streamlined. That will lower the amount of effort needed to handle
DIB images, though. We are going to build them anyway, so the net bandwidth
will actually be lower... Also, I don't think it's bandwidth that's the
issue here as much as general package management and installation of
packages even from a locally available mirror; docker would help with
that.

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] pike time growth in August

2017-09-22 Thread Clark Boylan
On Fri, Sep 22, 2017, at 01:18 PM, Attila Fazekas wrote:
> The main offenders reported by devstack do not seem to explain the
> growth visible on OpenStack Health [1].
> The logs have also started to disappear, which does not make it easy to
> figure out.
> 
> 
> Which code/infra changes could be related?
> 
> 
> http://status.openstack.org/openstack-health/#/test/devstack?resolutionKey=day=P6M

A big factor is likely the loss of OSIC. That cloud performed really
well and now we don't have it anymore so averages will increase.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Clark Boylan
On Fri, Sep 22, 2017, at 08:58 AM, Michał Jastrzębski wrote:
> Another, more revolutionary (for good or ill) alternative would be to
> move gates to run Kolla instead of DevStack. We're working towards
> registry of images, and we support most of openstack services now. If
> we enable mixed installation (your service in devstack-ish way, others
> via Kolla), that should lower the amount of downloads quite
> dramatically (lots of it will be downloads from registry which will be
> mirrored/cached in every nodepool). Then all we really need is to
> support barebone image with docker and ansible installed and that's
> it.

Except that it very likely isn't going to use less bandwidth. We already
mirror most of these package repos so all transfers are local to the
nodepool cloud region. In total we seem to grab about 139MB of packages
for a neutron dvr multinode scenario job (146676348 bytes) on Ubuntu
Xenial. This is based off the package list compiled at
http://paste.openstack.org/raw/621753/ then asking apt-cache for the
package size for the latest version.
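
For reference, a minimal sketch of that calculation, assuming the list is
saved one package name per line in packages.txt:

---
# sum the download size of every package in the list
total=0
while read -r pkg; do
    size=$(apt-cache show "$pkg" | awk '/^Size:/ {print $2; exit}')
    total=$((total + ${size:-0}))
done < packages.txt
echo "total: $total bytes"
---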

Kolla images on the other hand are in the multigigabyte range
http://tarballs.openstack.org/kolla/images/.

Clark


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Michał Jastrzębski
On 22 September 2017 at 07:31, Jeremy Stanley  wrote:
> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> "if DevStack gets custom images prepped to make its jobs
>> run faster, won't Triple-O, Kolla, et cetera want the same and where
>> do we draw that line?). "
>>
>> IMHO we can try to have only one big image per distribution,
>> where the packages are the union of the packages requested by all teams,
>> minus the packages blacklisted by any team.
> [...]
>
> Until you realize that some projects want packages from UCA, from
> RDO, from EPEL, from third-party package repositories. Version
> conflicts mean they'll still spend time uninstalling the versions
> they don't want and downloading/installing the ones they do so we
> have to optimize for one particular set and make the rest
> second-class citizens in that scenario.
>
> Also, preinstalling packages means we _don't_ test that projects
> actually properly declare their system-level dependencies any
> longer. I don't know if anyone's concerned about that currently, but
> it used to be the case that we'd regularly add/break the package
> dependency declarations in DevStack because of running on images
> where the things it expected were preinstalled.
> --
> Jeremy Stanley

Another, more revolutionary (for good or ill) alternative would be to
move gates to run Kolla instead of DevStack. We're working towards
registry of images, and we support most of openstack services now. If
we enable mixed installation (your service in devstack-ish way, others
via Kolla), that should lower the amount of downloads quite
dramatically (lots of it will be downloads from registry which will be
mirrored/cached in every nodepool). Then all we really need is to
support barebone image with docker and ansible installed and that's
it.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Jeremy Stanley
On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> "if DevStack gets custom images prepped to make its jobs
> run faster, won't Triple-O, Kolla, et cetera want the same and where
> do we draw that line?). "
> 
> IMHO we can try to have only one big image per distribution,
> where the packages are the union of the packages requested by all teams,
> minus the packages blacklisted by any team.
[...]

Until you realize that some projects want packages from UCA, from
RDO, from EPEL, from third-party package repositories. Version
conflicts mean they'll still spend time uninstalling the versions
they don't want and downloading/installing the ones they do so we
have to optimize for one particular set and make the rest
second-class citizens in that scenario.

Also, preinstalling packages means we _don't_ test that projects
actually properly declare their system-level dependencies any
longer. I don't know if anyone's concerned about that currently, but
it used to be the case that we'd regularly add/break the package
dependency declarations in DevStack because of running on images
where the things it expected were preinstalled.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Attila Fazekas
"if DevStack gets custom images prepped to make its jobs
run faster, won't Triple-O, Kolla, et cetera want the same and where
do we draw that line?). "

IMHO we can try to have only one big image per distribution,
where the packages are the union of the packages requested by all teams,
minus the packages blacklisted by any team.

You need to provide bug link(s) (a distribution/upstream bug) for
blacklisting a package.

It is very unlikely we will run out of disk space just because of too many
packages; usually, if a package causes harm to anything, it is a
distro/upstream bug which is expected to be solved within 1..2 cycles in
the worst case scenario.

If the above proves not to work, we need to draw the line based on the
expected usage frequency.
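
Mechanically, the "union minus blacklist" part is trivial; a sketch,
assuming each team publishes a plain package list (file names invented):

---
# union of all team package lists, minus the blacklist
sort -u team-*/packages.txt | comm -23 - <(sort -u blacklist.txt) > image-packages.txt
---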




On Wed, Sep 20, 2017 at 3:46 PM, Jeremy Stanley  wrote:

> On 2017-09-20 15:17:28 +0200 (+0200), Attila Fazekas wrote:
> [...]
> > The image building was the good old working solution and unless
> > the image build become a super expensive thing, this is still the
> > best option.
> [...]
>
> It became a super expensive thing, and that's the main reason we
> stopped doing it. Now that Nodepool has grown support for
> distributed/parallel image building and uploading, the cost model
> may have changed a bit in that regard so I agree it doesn't hurt to
> revisit that decision. Nevertheless it will take a fair amount of
> convincing that the savings balances out the costs (not just in
> resource consumption but also administrative overhead and community
> impact... if DevStack gets custom images prepped to make its jobs
> run faster, won't Triple-O, Kolla, et cetera want the same and where
> do we draw that line?).
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-20 Thread Jeremy Stanley
On 2017-09-20 15:17:28 +0200 (+0200), Attila Fazekas wrote:
[...]
> The image building was the good old working solution and unless
> the image build become a super expensive thing, this is still the
> best option.
[...]

It became a super expensive thing, and that's the main reason we
stopped doing it. Now that Nodepool has grown support for
distributed/parallel image building and uploading, the cost model
may have changed a bit in that regard so I agree it doesn't hurt to
revisit that decision. Nevertheless it will take a fair amount of
convincing that the savings balances out the costs (not just in
resource consumption but also administrative overhead and community
impact... if DevStack gets custom images prepped to make its jobs
run faster, won't Triple-O, Kolla, et cetera want the same and where
do we draw that line?).
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-20 Thread Attila Fazekas
On Wed, Sep 20, 2017 at 3:11 AM, Ian Wienand  wrote:

> On 09/20/2017 09:30 AM, David Moreau Simard wrote:
>
>> At what point does it become beneficial to build more than one image per
>> OS
>> that is more aggressively tuned/optimized for a particular purpose?
>>
>
> ... and we can put -dsvm- in the jobs names to indicate it should run
> on these nodes :)
>
> Older hands than myself will remember even more issues, but the
> "thicker" the base-image has been has traditionally just lead to a lot
> more corners for corner-cases can hide in.  We saw this all the time
> with "snapshot" images where we'd be based on upstream images that
> would change ever so slightly and break things, leading to
> diskimage-builder and the -minimal build approach.
>
> That said, in a zuulv3 world where we are not caching all git and have
> considerably smaller images, a nodepool that has a scheduler that
> accounts for flavor sizes and could conceivably understand similar for
> images, and where we're building with discrete elements that could
> "bolt-on" things like a list-of-packages install sanely to daily
> builds ... it's not impossible to imagine.
>
> -i


The problem is that these package install steps are not really I/O
bottlenecked in most cases; even at regular DSL speeds you can frequently
see the decompress and post-config steps take more time.

The site-local cache/mirror has a visible benefit, but does not eliminate
the issues.

The main enemy is the single-threaded, CPU-intensive operation in most
install/config related scripts; the second most common issue is serially
issuing high-latency steps, which ends up saturating neither the CPU nor
the I/O.

Fat images are generally cheaper even if your cloud has only 1Gb Ethernet
for image transfer.
You gain more by baking the packages into the image than the 1GbE can steal
from you, because
you also save the time that would otherwise be lost on CPU-intensive
operations or on random disk access.

It is safe to add all distro packages used by devstack to the cloud image.

Historically we had issues with some base image packages whose presence
changed the behavior of some component, for example firewalld vs. libvirt
(likely an already solved issue); these packages got explicitly removed by
devstack when necessary. Those packages were not requested by devstack!

Fedora/CentOS also has/had issues with pypi packages overlapping on the
main filesystem (too long a story, pointing fingers ..); generally it is
not a good idea to add packages from pypi to an image whose content might
be overridden by the distro's package manager.

The distribution package install time delays the gate response;
when the slowest running job is delayed by this, the whole response is
delayed.

It is a user-facing latency issue, which should be solved even if the cost
were higher.

Image building was the good old working solution, and unless the image
build becomes a super expensive thing, it is still the best option.

A site-local mirror is also expected to help make the image build step(s)
faster and safer.

The other option is the ready scripts.


>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread Ian Wienand

On 09/20/2017 09:30 AM, David Moreau Simard wrote:

At what point does it become beneficial to build more than one image per OS
that is more aggressively tuned/optimized for a particular purpose?


... and we can put -dsvm- in the jobs names to indicate it should run
on these nodes :)

Older hands than myself will remember even more issues, but the
"thicker" the base-image has been has traditionally just lead to a lot
more corners for corner-cases can hide in.  We saw this all the time
with "snapshot" images where we'd be based on upstream images that
would change ever so slightly and break things, leading to
diskimage-builder and the -minimal build approach.

That said, in a zuulv3 world where we are not caching all git and have
considerably smaller images, a nodepool that has a scheduler that
accounts for flavor sizes and could conceivably understand similar for
images, and where we're building with discrete elements that could
"bolt-on" things like a list-of-packages install sanely to daily
builds ... it's not impossible to imagine.
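
To make the "bolt-on" idea concrete, a hypothetical diskimage-builder
element (the element name and package list are invented for illustration):

---
# elements/devstack-packages/element-deps pulls in the package-installs machinery
mkdir -p elements/devstack-packages
echo "package-installs" > elements/devstack-packages/element-deps
# one package per key; baked into the image at build time
cat > elements/devstack-packages/package-installs.yaml <<'EOF'
rabbitmq-server:
libvirt-daemon:
EOF
ELEMENTS_PATH=elements disk-image-create ubuntu-minimal devstack-packages -o ubuntu-devstack
---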

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread David Moreau Simard
On Tue, Sep 19, 2017 at 9:03 AM, Jeremy Stanley  wrote:
>
> In order to reduce image sizes and the time it takes to build
> images, once we had local package caches in each provider we stopped
> pre-retrieving packages onto the images. Is the time spent at this
> stage mostly while downloading package files (which is what that
> used to alleviate) or is it more while retrieving indices or
> installing the downloaded packages (things having them pre-retrieved
> on the images never solved anyway)?
>

At what point does it become beneficial to build more than one image per OS
that is more aggressively tuned/optimized for a particular purpose?

We could take more freedom in a devstack-specific image, like pre-installing
packages that are provided outside the base OS, etc.
Different projects could take this kind of freedom to optimize build times
according to their needs as well.

Here's an example of something we once did in RDO:
1) Aggregate the list of every package installed (rpm -qa) at the end
of several jobs
2) From that sorted and uniq'd list, work out which repositories each
package came from
3) Blacklist every package that was not installed from a base
operating system repository
(i.e, blacklist every package and dependencies from RDO, since
we'll be testing these)
4) Pre-install every package that were not blacklisted in our images

The end result was a list of >700 packages [1] completely unrelated to
OpenStack that ended up
being installed anyway throughout different jobs.
To give an idea of numbers, a fairly vanilla CentOS image has ~400
packages installed.
You can find the (rudimentary) script to achieve this filtering here [2].

[1]: 
https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/nodepool/scripts/weirdo-packages.txt
[2]: 
https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/nodepool/scripts/filter_packages.sh
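
A minimal sketch of steps 1-3, assuming the per-job "rpm -qa" dumps were
saved as rpm-qa-*.txt; the real (rudimentary) script is in [2]:

---
sort -u rpm-qa-*.txt > all-packages.txt
while read -r pkg; do
    # "From repo" is only reported for packages installed from a repository
    repo=$(yum info installed "$pkg" 2>/dev/null | awk -F': *' '/^From repo/ {print $2}')
    case "$repo" in
        base|updates|extras) ;;              # keep base OS packages
        *) echo "$pkg" >> blacklist.txt ;;   # blacklist RDO/third-party packages
    esac
done < all-packages.txt
---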

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread Ian Wienand

On 09/19/2017 11:03 PM, Jeremy Stanley wrote:

On 2017-09-19 14:15:53 +0200 (+0200), Attila Fazekas wrote:
[...]

The jobs do 120..220 sec of apt-get install, and packages defined in
/files/debs/general are missing from the images before starting the job.



Is the time spent at this stage mostly while downloading package
files (which is what that used to alleviate) or is it more while
retrieving indices or installing the downloaded packages (things
having them pre-retrieved on the images never solved anyway)?


As you're both aware, but others may not be, at the end of the logs
devstack does keep a timing overview that looks something like

=
DevStack Component Timing
=
Total runtime       1352

run_process           15
test_with_retry        4
apt-get-update         2
pip_install          270
osc                  365
wait_for_service      29
dbsync                23
apt-get              137
=

That doesn't break things down into download vs. install, but apt does
have a download summary that can be grepped for:

---
$ cat devstacklog.txt.gz | grep Fetched
2017-09-19 17:52:45.808 | Fetched 39.3 MB in 1s (26.3 MB/s)
2017-09-19 17:53:41.115 | Fetched 185 kB in 0s (3,222 kB/s)
2017-09-19 17:54:16.365 | Fetched 23.5 MB in 1s (21.1 MB/s)
2017-09-19 17:54:25.779 | Fetched 18.3 MB in 0s (35.6 MB/s)
2017-09-19 17:54:39.439 | Fetched 59.1 kB in 0s (0 B/s)
2017-09-19 17:54:40.986 | Fetched 2,128 kB in 0s (40.0 MB/s)
2017-09-19 17:57:37.190 | Fetched 333 kB in 0s (1,679 kB/s)
2017-09-19 17:58:17.592 | Fetched 50.5 MB in 2s (18.1 MB/s)
2017-09-19 17:58:26.947 | Fetched 5,829 kB in 0s (15.5 MB/s)
2017-09-19 17:58:49.571 | Fetched 5,065 kB in 1s (3,719 kB/s)
2017-09-19 17:59:25.438 | Fetched 9,758 kB in 0s (44.5 MB/s)
2017-09-19 18:00:14.373 | Fetched 77.5 kB in 0s (286 kB/s)
---

As mentioned, we set up the package manager to point to a region-local
mirror during node bringup.  Depending on the i/o situation, it is
probably just as fast as coming off disk :) Note (also as mentioned)
these were never pre-installed, just pre-downloaded to an on-disk
cache area (as an aside, I don't think dnf was ever really happy with
that situation and kept being too smart and clearing its caches).

If you're feeling regexy you could maybe do something similar with the
pip "Collecting" bits in the logs ... one idea for investigation down
that path is whether we could save time by somehow collecting larger
batches of requirements and doing fewer pip invocations?
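
On the pip idea, batching mostly means passing several requirements files
to a single invocation instead of one pip run per project; a sketch with
invented paths:

---
# one pip run instead of N separate invocations
pip install -r nova/requirements.txt -r neutron/requirements.txt -r glance/requirements.txt
---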

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread Jeremy Stanley
On 2017-09-19 14:15:53 +0200 (+0200), Attila Fazekas wrote:
[...]
> Let's start with the first obvious difference compared to the old-time
> jobs:
> The jobs do 120..220 sec of apt-get install, and packages defined in
> /files/debs/general are missing from the images before starting the job.
> 
> We used to bake multiple packages into the images based on the package list
> provided by devstack in order to save time.
> 
> Why does this not happen anymore?
> Is anybody working on solving this issue?
> Does any blocking technical issue / challenge exist?
> Was it a design decision?
[...]

In order to reduce image sizes and the time it takes to build
images, once we had local package caches in each provider we stopped
pre-retrieving packages onto the images. Is the time spent at this
stage mostly while downloading package files (which is what that
used to alleviate) or is it more while retrieving indices or
installing the downloaded packages (things having them pre-retrieved
on the images never solved anyway)?

Our earlier analysis of the impact of dropping package files from
images indicated it was negligible for most jobs because of the
caching package mirrors we maintain nearby.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] SUSE jobs started failing on peakmem_tracker

2017-09-08 Thread Dirk Müller
Hi David,

Thanks for looking into this. I do watch devstack changes every once in a
while but couldn't catch this one in time. The missing pmap -XX flag
problem has been there forever, but it used to be non-fatal. Now it is
fatal, which is in principle a good change.

I will make sure that it passes again on SUSE shortly.

Greetings,
Dirk

I was trying to make sure the existing openSUSE jobs passed on Zuul v3
but even the regular v2 jobs are hitting a bug I filed here [1].
As far as I know, these jobs were passing until recently.



This is preventing us from sanity checking that everything works out
of the box for the suse devstack job for the v3 migration.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [ironic] [nova] Trying again on wait_for_compute in devstack

2017-08-02 Thread Brian Haley

On 08/02/2017 07:17 AM, Sean Dague wrote:

The 3 node scenarios in Neutron (which are still experimental nv) are
typically failing to bring online the 3rd compute. In cells v2 you have
to explicitly add nodes to the cells. There is a nova-manage command
"discover-hosts" that takes all the compute nodes which have checked in,
but aren't yet assigned to a cell, and puts them into a cell of your
choosing. We do this in devstack-gate in the gate.

However... subnodes don't take very long to set up (so few services). And
the nova-compute process takes about 30s before it's done all its
initialization and actually checks in to the cluster. It's a real
possibility that discover_hosts will run before subnode 3 checks in. We
see it in logs. This also really could come and bite us on any multinode
job, and I'm a bit concerned some of the multinode jobs aren't running
multinode sometimes because of it.

One way to fix this, without putting more logic in devstack-gate, is to
ensure that by the time stack.sh finishes, the compute node is up. This
was tried previously, but it turned out that we totally missed that it
broke Ironic (the check happened too early, ironic was not yet running,
so we always failed), Cells v1 (munges hostnames :(  ), and PowerVM
(their nova-compute was never starting correctly, and they were working
around it with a restart later).

This patch https://review.openstack.org/#/c/488381/ tries again. The
check is moved very late, Ironic seems to be running fine with it. Cells
v1 is just skipped, that's deprecated in Nova now, and we're not going
to use it in multinode scenarios. The PowerVM team fixed their
nova-compute start issues, so they should be good to go as well.


I had also filed https://bugs.launchpad.net/neutron/+bug/1707003 for 
this since it was mainly just affecting that one 3-node neutron job. 
Glad I hadn't started working on a patch, I'll just take a look at yours.


Thanks for working on it!

-Brian


This is an FYI that we're going to land this again soon. If you think
this impacts your CI / jobs, please speak up. The CI runs on both the
main and experimental queue on devstack for this change look pretty
good, so I think we're safe to move forward this time. But we also
thought that the last time, and were wrong.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [ironic] [nova] Trying again on wait_for_compute in devstack

2017-08-02 Thread Sean Dague
An issue with the xenserver CI was identified. Once we get this patch 
in, and backported to ocata, it should also address a frequent grenade 
multinode fail scenario which is plaguing the gate.


-Sean

On 08/02/2017 07:17 AM, Sean Dague wrote:

The 3 node scenarios in Neutron (which are still experimental nv) are
typically failing to bring online the 3rd compute. In cells v2 you have
to explicitly add nodes to the cells. There is a nova-manage command
"discover-hosts" that takes all the compute nodes which have checked in,
but aren't yet assigned to a cell, and puts them into a cell of your
choosing. We do this in devstack-gate in the gate.

However... subnodes don't take very long to set up (so few services). And
the nova-compute process takes about 30s before it's done all its
initialization and actually checks in to the cluster. It's a real
possibility that discover_hosts will run before subnode 3 checks in. We
see it in logs. This also really could come and bite us on any multinode
job, and I'm a bit concerned some of the multinode jobs aren't running
multinode sometimes because of it.

One way to fix this, without putting more logic in devstack-gate, is to
ensure that by the time stack.sh finishes, the compute node is up. This
was tried previously, but it turned out that we totally missed that it
broke Ironic (the check happened too early, ironic was not yet running,
so we always failed), Cells v1 (munges hostnames :(  ), and PowerVM
(their nova-compute was never starting correctly, and they were working
around it with a restart later).

This patch https://review.openstack.org/#/c/488381/ tries again. The
check is moved very late, Ironic seems to be running fine with it. Cells
v1 is just skipped, that's deprecated in Nova now, and we're not going
to use it in multinode scenarios. The PowerVM team fixed their
nova-compute start issues, so they should be good to go as well.
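
Conceptually the late check is just a wait loop like the one below (a
hypothetical sketch, assuming admin credentials are sourced; see the review
above for the actual implementation):

---
# wait for the local nova-compute to check in before discover_hosts runs
for i in $(seq 1 30); do
    if openstack compute service list --service nova-compute \
            -f value -c Host | grep -q "$(hostname)"; then
        break
    fi
    sleep 5
done
---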

This is an FYI that we're going to land this again soon. If you think
this impacts your CI / jobs, please speak up. The CI runs on both the
main and experimental queue on devstack for this change look pretty
good, so I think we're safe to move forward this time. But we also
thought that the last time, and were wrong.

-Sean




--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-19 Thread Tony Breeds
On Mon, Jun 19, 2017 at 08:17:53AM -0400, Davanum Srinivas wrote:
> Tony,
> 
> 
> On Sun, Jun 18, 2017 at 11:34 PM, Tony Breeds  wrote:
> > On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
> >
> >> Awesome! thanks Tony, some kolla jobs do that for example, but i think
> >> this job is a better one to key off of:
> >> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381
> >>
> >> Outline of the work is - check if there are any new releases in github
> >> downloads, if so download them using wget and then delegate to the scp
> >> publisher (with keep-hierarchy) to create the new directories and
> >> upload the file(s).
> >
> > So perhaps I'm dense but I can't see an easy way to get a list of
> > release artefacts from github in a form that wget can consume.  The best
> > I can see is via the API.  I've knocked up a quick'n'dirty mirror
> > script[1] but I really feel like I've gone off into the weeds.
> >
> > You basically need to do:
> >
> > git clone  && cd
> > virtualenv .venv
> > .venv/bin/pip install -U pip setuptools wheel
> > .venv/bin/pip install -r ./requirements.txt   # [2]
> > .venv/bin/python ./mirror-github-releases.py \
> > 'coreos/etcd::.*linux.*gz:etcd' \
> > 'coreos/etcd:6225411:.*linux.*gz:etcd'
> 
> Works for me!

Okay, I'll put something more complete together for infra review.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-19 Thread Davanum Srinivas
Tony,


On Sun, Jun 18, 2017 at 11:34 PM, Tony Breeds  wrote:
> On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
>
>> Awesome! Thanks Tony, some kolla jobs do that for example, but I think
>> this job is a better one to key off of:
>> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381
>>
>> Outline of the work is - check if there are any new releases in github
>> downloads, if so download them using wget and then delegate to the scp
>> publisher (with keep-hierarchy) to create the new directories and
>> upload the file(s).
>
> So perhaps I'm dense but I can't see an easy way to get a list of
> release artefacts from github in a form that wget can consume.  The best
> I can see is via the API.  I've knocked up a quick'n'dirty mirror
> script[1] but I really feel like I've gone off into the weeds.
>
> You basically need to do:
>
> git clone  && cd
> virtualenv .venv
> .venv/bin/pip install -U pip setuptools wheel
> .venv/bin/pip install -r ./requirements.txt   # [2]
> .venv/bin/python ./mirror-github-releases.py \
> 'coreos/etcd::.*linux.*gz:etcd' \
> 'coreos/etcd:6225411:.*linux.*gz:etcd'

Works for me!

> This will, in theory, mirror the 3.2.0 (latest) release, then look at the
> 3.1.7 release, see that it's already publicly mirrored, and move on.
>
> It wouldn't be too hard to incorporate into a job.  Thoughts?
>
> Yours Tony.
>
> [1]  https://github.com/tbreeds/mirror-github-releases
> [2] Yes of course I could publish it on pypi if we want to go down this
> path
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-18 Thread Tony Breeds
On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:

> Awesome! Thanks Tony, some kolla jobs do that for example, but I think
> this job is a better one to key off of:
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381
> 
> Outline of the work is - check if there are any new releases in github
> downloads, if so download them using wget and then delegate to the scp
> publisher (with keep-hierarchy) to create the new directories and
> upload the file(s).

So perhaps I'm dense but I can't see an easy way to get a list of
release artefacts from github in a form that wget can consume.  The best
I can see is via the API.  I've knocked up a quick'n'dirty mirror
script[1] but I really feel like I've gone off into the weeds.

You basically need to do:

git clone  && cd
virtualenv .venv
.venv/bin/pip install -U pip setuptools wheel
.venv/bin/pip install -r ./requirements.txt   # [2]
.venv/bin/python ./mirror-github-releases.py \
'coreos/etcd::.*linux.*gz:etcd' \
'coreos/etcd:6225411:.*linux.*gz:etcd'

This will, in theory, mirror the 3.2.0 (latest) release, then look at the
3.1.7 release, see that it's already publicly mirrored, and move on.

It wouldn't be too hard to incorporate into a job.  Thoughts?

Yours Tony.

[1]  https://github.com/tbreeds/mirror-github-releases
[2] Yes of course I could publish it on pypi if we want to go down this
path


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-18 Thread Davanum Srinivas
On Sun, Jun 18, 2017 at 7:36 PM, Tony Breeds  wrote:
> On Fri, Jun 16, 2017 at 03:59:22PM -0400, Davanum Srinivas wrote:
>> Mikhail,
>>
>> I have a TODO on my list - " adding a job that looks for new releases
>> and uploads them to tarballs periodically "
>
> If you point me to how things are added to that mirror I can work
> towards that.

Awesome! Thanks Tony, some kolla jobs do that for example, but I think
this job is a better one to key off of:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381

Outline of the work is - check if there are any new releases in github
downloads, if so download them using wget and then delegate to the scp
publisher (with keep-hierarchy) to create the new directories and
upload the file(s).
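
A rough sketch of that outline against the github releases API (the
endpoint is real; the filtering is illustrative):

---
# grab the linux tarballs of the latest etcd release, skipping ones already present
curl -s https://api.github.com/repos/coreos/etcd/releases/latest \
    | grep browser_download_url | grep linux \
    | cut -d '"' -f 4 \
    | wget -nc -i -
---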

Thanks,
Dims

>
> Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-18 Thread Tony Breeds
On Fri, Jun 16, 2017 at 03:59:22PM -0400, Davanum Srinivas wrote:
> Mikhail,
> 
> I have a TODO on my list - " adding a job that looks for new releases
> and uploads them to tarballs periodically "

If you point me to how things are added to that mirror I can work
towards that.

Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-16 Thread Davanum Srinivas
Mikhail,

I have a TODO on my list - " adding a job that looks for new releases
and uploads them to tarballs periodically "

Thanks,
-- Dims

On Fri, Jun 16, 2017 at 3:32 PM, Mikhail Medvedev  wrote:
> On Fri, Jun 16, 2017 at 6:01 AM, Sean Dague  wrote:
>> On 06/15/2017 10:06 PM, Tony Breeds wrote:
>>> Hi All,
>>>   I just pushed a review [1] to bump the minimum etcd version to
>>> 3.2.0, which works on intel and ppc64le.  I know we're pretty late in the
>>> cycle to be making changes like this, but releasing pike with a dependency
>>> on 3.1.x makes it harder for users on ppc64le (not many, but a few :D)
>>>
>>> Yours Tony.
>>>
>>> [1] https://review.openstack.org/474825
>>
>> It should be fine, no one is really using these much at this point.
>> However it looks like mirroring is not happening automatically? The
>> patch fails because the tarball does not exist in the infra mirror.
>>
>> -Sean
>>
>
> It appears so. Also, IIRC, infra mirror would only host x86 binaries.
> Right now PowerKVM CI works by patching devstack-gate to override
> infra etcd download url. The fix [2] still needs to get merged to make
> it a bit easier to use d-g with your own etcd mirror.
>
> [2] https://review.openstack.org/#/c/467437/
>
> ---
> Mikhail Medvedev
> IBM OpenStack CI for KVM on Power
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-16 Thread Mikhail Medvedev
On Fri, Jun 16, 2017 at 6:01 AM, Sean Dague  wrote:
> On 06/15/2017 10:06 PM, Tony Breeds wrote:
>> Hi All,
>>   I just pushed a review [1] to bump the minimum etcd version to
>> 3.2.0, which works on intel and ppc64le.  I know we're pretty late in the
>> cycle to be making changes like this, but releasing pike with a dependency
>> on 3.1.x makes it harder for users on ppc64le (not many, but a few :D)
>>
>> Yours Tony.
>>
>> [1] https://review.openstack.org/474825
>
> It should be fine, no one is really using these much at this point.
> However, it looks like mirroring is not happening automatically? The
> patch fails because 3.2.0 does not exist in the infra mirror.
>
> -Sean
>

It appears so. Also, IIRC, the infra mirror would only host x86 binaries.
Right now PowerKVM CI works by patching devstack-gate to override the
infra etcd download URL. The fix [2] still needs to get merged to make
it a bit easier to use d-g with your own etcd mirror.

[2] https://review.openstack.org/#/c/467437/

---
Mikhail Medvedev
IBM OpenStack CI for KVM on Power



Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-16 Thread Sean Dague
On 06/15/2017 10:06 PM, Tony Breeds wrote:
> Hi All,
>   I just pushed a review [1] to bump the minimum etcd version to
> 3.2.0, which works on Intel and ppc64le.  I know we're pretty late in the
> cycle to be making changes like this but releasing pike with a dependency
> on 3.1.x makes it harder for users on ppc64le (not many but a few :D)
> 
> Yours Tony.
> 
> [1] https://review.openstack.org/474825

It should be fine, no one is really using these much at this point.
However, it looks like mirroring is not happening automatically? The
patch fails because 3.2.0 does not exist in the infra mirror.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] systemd + ENABLED_SERVICES + user_init_file

2017-05-31 Thread Markus Zoeller
On 11.05.2017 15:56, Markus Zoeller wrote:
> I'm working on a nova live-migration hook which configures and starts
> the nova-serialproxy service, runs a subset of tempest tests, and tears
> down the previously started service.
> 
>https://review.openstack.org/#/c/347471/47
> 
> After the change to "systemd", I thought all I had to do was to start
> the service with:
> 
>systemctl enable devstack@n-sproxy
>systemctl restart devstack@n-sproxy
> 
> But this results in the error "Failed to execute operation: No such file
> or directory". The reason is that there is no systemd "user unit file".
> This file gets written in Devstack at:
> 
> https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1512-L1529
> 
> For that to happen, a service must be in the list "ENABLED_SERVICES":
> 
> https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1572-L1574
> 
> Which is *not* the case for the "n-sproxy" service:
> 
> https://github.com/openstack-dev/devstack/blob/8b8441f3becbae2e704932569bff384dcc5c6713/stackrc#L55-L56
> 
> I'm not sure how to approach this problem. I could:
> 1) add "n-sproxy" to the default ENABLED_SERVICES list for Nova in
>    Devstack
> 2) always write the systemd user unit file in Devstack
>    (regardless of whether the service is enabled)
> 3) write the "user unit file" on the fly in the hook (in Nova).
> 4) ?
> 
> Right now I lean towards option 2, as I think it brings more flexibility (for
> other services too) with less change in the set of default enabled
> services in the gate.
> 
> Is this the right direction? Any other thoughts?
> 
> 

FWIW, here's my attempt to implement 2):
https://review.openstack.org/#/c/469390/
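
For reference, option 3), writing the "user unit file" on the fly, would look
roughly like the sketch below. The unit name follows devstack's devstack@
scheme, but the ExecStart path and config file are assumptions, not
devstack's exact output:

    # /etc/systemd/system/devstack@n-sproxy.service (sketch)
    [Unit]
    Description = Devstack devstack@n-sproxy.service

    [Service]
    User = stack
    ExecStart = /usr/local/bin/nova-serialproxy --config-file /etc/nova/nova.conf

    [Install]
    WantedBy = multi-user.target

followed by:

    sudo systemctl daemon-reload
    sudo systemctl enable devstack@n-sproxy
    sudo systemctl restart devstack@n-sproxy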

-- 
Regards, Markus Zoeller (markus_z)




Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Davanum Srinivas
Thanks for the help, Mikhail.

So just FYI for others: etcd 3.2.0 is at RC1; we will get a full
set of arches covered once it goes GA.

Thanks,
Dims

On Wed, May 24, 2017 at 8:45 AM, Mikhail S Medvedev  wrote:
>
> On 05/24/2017 06:59 AM, Sean Dague wrote:
>>
>> On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
>> > Hi together,
>> >
>> > recently etcd3 was enabled as a service in devstack [1]. This breaks
>> > devstack on s390x Linux, as there are no s390x binaries available and
>> > there's no way to disable the etcd3 service.
>> >
>> > I pushed a patch to allow disabling the etcd3 service in local.conf [2].
>> > It would be great if we could get that merged soon to get devstack going
>> > again. It seems like etcd3 is not used by any of the default services
>> > (nova, neutron, cinder,...) right now.
>> >
>> > In the long run I would like to understand the plans of etcd3 in
>> > devstack. Are the plans to make the default services dependent on etcd3
>> > in the future?
>> >
>> > Thanks a lot!
>> >
>> > Andreas
>> >
>> >
>> > [1]
>> >
>> > https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
>> > [2] https://review.openstack.org/467597
>>
>> Yes, it is designed to be required by base services. See -
>> http://lists.openstack.org/pipermail/openstack-dev/2017-May/117370.html
>>
>> -Sean
>>
> It is designed to be required, but please be aware of other arches. E.g. the
> original change to DevStack [3] did not allow much flexibility, and only
> worked on x86 and aarch. The d-g change [4] broke d-g for non-x86 arches. I
> have submitted [5] to add some flexibility, to be able to specify a different
> mirror from which to pull non-x86 etcd3.
>
> For the last couple of days I have been playing whack-a-mole with all of that
> and more. At some point I did request permission to add PowerKVM CI (ppc64) to
> devstack-gate patches, which might have helped to identify the problem
> earlier. Maybe it should be revisited?
>
> [3] https://review.openstack.org/#/c/445432/
> [4] https://review.openstack.org/#/c/466817/
> [5] https://review.openstack.org/#/c/467437/
>
> ---
> Mikhail Medvedev (mmedvede)
> OpenStack CI for KVM on Power
> IBM
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Andreas Scheuring
In the meantime I found some more information, like [1].

I understand that devstack downloads the binaries from GitHub, as distros
don't have the latest version available. But the binaries for s390x are
not yet provided there. I opened an issue to figure out what would need
to be done to get the s390x binary posted as well [2].

If that does not work out, we might need to start thinking in a different
direction, e.g.

- enhance devstack to build etcd3 from source (for certain architectures)
- check whether etcd3 is already installed (we could install it upfront on
our systems)

I opened a bug against devstack to track the discussion [3].



[1] https://review.openstack.org/#/c/467436/
[2] https://github.com/coreos/etcd/issues/7978
[3] https://bugs.launchpad.net/devstack/+bug/1693192
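
Until one of those directions lands, the local.conf workaround being discussed
looks roughly like this (a sketch: it assumes the disable patch [2] has merged,
and the mirror variable name follows devstack's stackrc of that era, so
double-check your branch):

    [[local|localrc]]
    disable_service etcd3
    # or, where an architecture-specific build exists, point devstack
    # at a different mirror instead of github
    ETCD_DOWNLOAD_URL=http://my-mirror.example.com/etcd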


-- 
-
Andreas 
IRC: andreas_s



On Wed, 2017-05-24 at 13:48 +0200, Andreas Scheuring wrote:
> Hi together, 
> 
> recently etcd3 was enabled as a service in devstack [1]. This breaks
> devstack on s390x Linux, as there are no s390x binaries available and
> there's no way to disable the etcd3 service.
> 
> I pushed a patch to allow disabling the etcd3 service in local.conf [2].
> It would be great if we could get that merged soon to get devstack going
> again. It seems like etcd3 is not used by any of the default services
> (nova, neutron, cinder,...) right now.
> 
> In the long run I would like to understand the plans of etcd3 in
> devstack. Are the plans to make the default services dependent on etcd3
> in the future?
> 
> Thanks a lot!
> 
> Andreas
> 
> 
> [1]
> https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
> [2] https://review.openstack.org/467597
> 
> 
> 




Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Mikhail S Medvedev


On 05/24/2017 06:59 AM, Sean Dague wrote:

> On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
> > Hi together,
> >
> > recently etcd3 was enabled as a service in devstack [1]. This breaks
> > devstack on s390x Linux, as there are no s390x binaries available and
> > there's no way to disable the etcd3 service.
> >
> > I pushed a patch to allow disabling the etcd3 service in local.conf [2].
> > It would be great if we could get that merged soon to get devstack going
> > again. It seems like etcd3 is not used by any of the default services
> > (nova, neutron, cinder,...) right now.
> >
> > In the long run I would like to understand the plans of etcd3 in
> > devstack. Are the plans to make the default services dependent on etcd3
> > in the future?
> >
> > Thanks a lot!
> >
> > Andreas
> >
> >
> > [1]
> > https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
> > [2] https://review.openstack.org/467597
>
> Yes, it is designed to be required by base services. See -
> http://lists.openstack.org/pipermail/openstack-dev/2017-May/117370.html
>
> -Sean


It is designed to be required, but please be aware of other arches. E.g. the
original change to DevStack [3] did not allow much flexibility, and only worked
on x86 and aarch. The d-g change [4] broke d-g for non-x86 arches. I have
submitted [5] to add some flexibility, to be able to specify a different mirror
from which to pull non-x86 etcd3.

For the last couple of days I have been playing whack-a-mole with all of that
and more. At some point I did request permission to add PowerKVM CI (ppc64) to
devstack-gate patches, which might have helped to identify the problem earlier.
Maybe it should be revisited?

[3] https://review.openstack.org/#/c/445432/
[4] https://review.openstack.org/#/c/466817/
[5] https://review.openstack.org/#/c/467437/

---
Mikhail Medvedev (mmedvede)
OpenStack CI for KVM on Power
IBM



Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Sean Dague
On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
> Hi together, 
> 
> recently etcd3 was enabled as a service in devstack [1]. This breaks
> devstack on s390x Linux, as there are no s390x binaries available and
> there's no way to disable the etcd3 service.
> 
> I pushed a patch to allow disabling the etcd3 service in local.conf [2].
> It would be great if we could get that merged soon to get devstack going
> again. It seems like etcd3 is not used by any of the default services
> (nova, neutron, cinder,...) right now.
> 
> In the long run I would like to understand the plans of etcd3 in
> devstack. Are the plans to make the default services dependent on etcd3
> in the future?
> 
> Thanks a lot!
> 
> Andreas
> 
> 
> [1]
> https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
> [2] https://review.openstack.org/467597

Yes, it is designed to be required by base services. See -
http://lists.openstack.org/pipermail/openstack-dev/2017-May/117370.html

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-04 Thread Anne Gentle
On Wed, May 3, 2017 at 6:14 PM, Sean Dague  wrote:

> On 05/03/2017 07:08 PM, Doug Hellmann wrote:
>
>> Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
>>
>>> Screen is going away in Queens.
>>>
>>> Making the dev / test runtimes as similar as possible is really
>>> important. And there is so much weird debt around trying to make screen
>>> launch things reliably (like random sleeps) because screen has funny
>>> races in it.
>>>
>>> It does mean some tricks people figured out in screen are going away.
>>>
>>
>> It sounds like maybe we should start building a shared repository of new
>> tips & tricks for systemd/journald.
>>
>
> Agreed, the devstack docs have the following beginnings of that:
>
> https://docs.openstack.org/developer/devstack/development.html - for
> basic flow
>
> which also links to a systemd primer -
> https://docs.openstack.org/developer/devstack/systemd.html
>
> But more contributions are welcomed for sure.
>
> (These docs exist in the devstack tree under doc/source)


Another set of docs that helped me figure out screen in DevStack is in the
Ops Guide [1][2]. Low-hanging fruit, the way I see it, so I've also logged
a doc bug [3].

Anne

1.
https://github.com/openstack/openstack-manuals/blob/master/doc/ops-guide/source/ops-customize-objectstorage.rst

2.
https://github.com/openstack/openstack-manuals/blob/master/doc/ops-guide/source/ops-customize-compute.rst

3. https://bugs.launchpad.net/openstack-manuals/+bug/1688245


>
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>



-- 

Read my blog: justwrite.click 
Subscribe to Docs|Code: docslikecode.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-04 Thread David Shrewsbury
These docs are great. As someone who has avoided learning systemd, I really
appreciate
the time folks put into making these docs. Well done.

-Dave

On Wed, May 3, 2017 at 7:14 PM, Sean Dague  wrote:

> On 05/03/2017 07:08 PM, Doug Hellmann wrote:
>
>> Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
>>
>>> Screen is going away in Queens.
>>>
>>> Making the dev / test runtimes as similar as possible is really
>>> important. And there is so much weird debt around trying to make screen
>>> launch things reliably (like random sleeps) because screen has funny
>>> races in it.
>>>
>>> It does mean some tricks people figured out in screen are going away.
>>>
>>
>> It sounds like maybe we should start building a shared repository of new
>> tips & tricks for systemd/journald.
>>
>
> Agreed, the devstack docs have the following beginnings of that:
>
> https://docs.openstack.org/developer/devstack/development.html - for
> basic flow
>
> which also links to a systemd primer -
> https://docs.openstack.org/developer/devstack/systemd.html
>
> But more contributions are welcomed for sure.
>
> (These docs exist in the devstack tree under doc/source)
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>



-- 
David Shrewsbury (Shrews)


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-04 Thread Sean Dague
This is the cantrip in devstack-gate that's collecting the logs into the
compat format:

https://github.com/openstack-infra/devstack-gate/blob/3a21366743d6624fb5c51588fcdb26f818fbd8b5/functions.sh#L794-L797

It's also probably worth dumping the whole journal in native format for
people to download and query later if they want (I expect that will
become more of a thing):

https://github.com/openstack-infra/devstack-gate/blob/3a21366743d6624fb5c51588fcdb26f818fbd8b5/functions.sh#L802-L803


If you are using devstack-gate already, this should be happening for
you. If things are running differently, those are probably the missing
bits you need.
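
For third-party CIs that don't consume devstack-gate, the equivalent done by
hand is roughly the following sketch (standard journalctl flags; the file
names are illustrative):

    # text rendering of all devstack units, browsable on a log server
    sudo journalctl -o short-precise --unit 'devstack@*' > $LOGDIR/screen-compat.log
    # the whole journal in native export format, for later import and query
    sudo journalctl -o export > $LOGDIR/devstack.journal.export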

-Sean



On 05/04/2017 03:09 AM, Guy Rozendorn wrote:
> In regards to 3rd party CIs:
> Before this change, the screen logs were saved under $LOGDIR and copied
> to the log servers, and it was pretty much under the same location for
> all the jobs/projects.
> 
> What’s the convention now with the switch to systemd?
> * should the logs be collected in journal export format? Or dumped to
> simple text files so they can be viewed in the browser? Or in journal
> JSON format?
> * is there a utility function in devstack/devstack-gate that takes care
> of the log collection so it’ll be the same for all jobs/projects?
> 
> 
> 
> On 3 May 2017 at 13:17:14, Sean Dague (s...@dague.net) wrote:
> 
>> As a follow up, there are definitely a few edge conditions we've hit
>> with some jobs, so the following is provided as information in case you
>> have a job that seems to fail in one of these ways.


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Monty Taylor

On 05/03/2017 06:45 PM, James Slagle wrote:

> On Tue, May 2, 2017 at 9:19 AM, Monty Taylor  wrote:
> > I absolutely cannot believe I'm saying this given what the change implements
> > and my general steaming hatred associated with it ... but this is awesome
> > work and a definite improvement over what existed before it. If we're going
> > to be stuck sharing the Bad Trip that is Lennart's projected consciousness,
> > this is a pleasant surprise of a positive outcome.
>
> In my opinion, these comments about Lennart are quite out of line.
> Regardless of whether or not that individual is a member of the
> OpenStack community, there are constructive ways to voice your
> opinions about systemd without resorting to these types of personal
> comments.

Totally fair, and I apologize.

> systemd is an open source driven community project. I'd suggest
> directing your energy at those technology choices and working towards
> what you see as improvements in those choices instead of making
> comments such as what you've done here.
>
> While minor (with some thinly veiled praise sprinkled in), I'm a bit
> shocked no one else has called attention to your response. It is not
> friendly, considerate, and above all else -- it is not respectful.


You are totally right. It is an unacceptable way for me to have 
expressed myself. I will endeavor to do better in the future - and 
although I doubt he's reading this list at the moment, I do earnestly 
apologize to Lennart as well. Personally directed statements such as 
that are, in fact, totally inappropriate.


Monty




Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Sean Dague

On 05/03/2017 07:08 PM, Doug Hellmann wrote:

> Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
> > Screen is going away in Queens.
> >
> > Making the dev / test runtimes as similar as possible is really
> > important. And there is so much weird debt around trying to make screen
> > launch things reliably (like random sleeps) because screen has funny
> > races in it.
> >
> > It does mean some tricks people figured out in screen are going away.
>
> It sounds like maybe we should start building a shared repository of new
> tips & tricks for systemd/journald.


Agreed, the devstack docs have the following beginnings of that:

https://docs.openstack.org/developer/devstack/development.html - for 
basic flow


which also links to a systemd primer - 
https://docs.openstack.org/developer/devstack/systemd.html


But more contributions are welcomed for sure.

(These docs exist in the devstack tree under doc/source)

-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
> Screen is going away in Queens.
> 
> Making the dev / test runtimes as similar as possible is really
> important. And there is so much weird debt around trying to make screen
> launch things reliably (like random sleeps) because screen has funny
> races in it.
> 
> It does mean some tricks people figured out in screen are going away.

It sounds like maybe we should start building a shared repository of new
tips & tricks for systemd/journald.

Doug

> 
> Journalctl provides some pretty serious improvements in querying logs
> https://www.freedesktop.org/software/systemd/man/journalctl.html - you
> can search in time ranges, filter by units (one or more of them), and if
> we get to the bottom of the eventlet interaction, we'll be able to
> search by things like REQUEST_ID as well.
> 
> Plus every modern Linux system uses systemd now, so skills learned
> around systemd and journalctl are transferable both from OpenStack to
> other systems, as well as for new people coming in that understand how
> this works outside of OpenStack. So it helps remove a difference from
> the way we do things from the rest of the world.
> 
> -Sean
> 
> On 05/03/2017 04:02 PM, Hongbin Lu wrote:
> > Hi Sean,
> > 
> > I tried the new systemd devstack and frankly I don't like it. There are
> > several handy operations in screen that seem to be impossible after
> > switching to systemd. For example, freezing a process with "Ctrl + a + [". In
> > addition, navigating through the logs seems difficult (perhaps I am not
> > familiar with journalctl).
> >
> > From my understanding, the plan is to drop screen entirely in devstack? I
> > would argue that it is better to keep both screen and systemd, and let
> > users choose one of them based on their preference.
> > 
> > Best regards,
> > Hongbin
> > 
> >> -----Original Message-
> >> From: Sean Dague [mailto:s...@dague.net]
> >> Sent: May-03-17 6:10 AM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
> >> default
> >>
> >> On 05/02/2017 08:30 AM, Sean Dague wrote:
> >>> We started running systemd for devstack in the gate yesterday, so far
> >>> so good.
> >>>
> >>> The following patch (which will hopefully land soon), will convert
> >> the
> >>> default local use of devstack to systemd as well -
> >>> https://review.openstack.org/#/c/461716/. It also includes
> >>> substantially updated documentation.
> >>>
> >>> Once you take this patch, a "./clean.sh" is recommended. Flipping
> >>> modes can cause some cruft to build up, and ./clean.sh should be
> >>> pretty good at eliminating them.
> >>>
> >>> https://review.openstack.org/#/c/461716/2/doc/source/development.rst
> >>> is probably specifically interesting / useful for people to read, as
> >>> it shows how the standard development workflows will change (for the
> >>> better) with systemd.
> >>>
> >>> -Sean
> >>
> >> As a follow up, there are definitely a few edge conditions we've hit
> >> with some jobs, so the following is provided as information in case you
> >> have a job that seems to fail in one of these ways.
> >>
> >> Doing process stop / start
> >> ==
> >>
> >> The nova live migration job is special: it was restarting services
> >> manually, but it was doing so with some copy/pasted devstack code,
> >> which means it didn't evolve with the rest of devstack. So the stop
> >> code stopped working (and wasn't robust enough to make it clear that
> >> was the issue).
> >>
> >> https://review.openstack.org/#/c/461803/ is the fix (merged)
> >>
> >> run_process limitations
> >> ===
> >>
> >> When doing the systemd conversion I looked for a path forward which was
> >> going to make 90% of everything just work. The key trick here was that
> >> services start as the "stack" user, and aren't daemonizing away from
> >> the console. We can take the run_process command and make that the
> >> ExecStart in a unit file.
> >>
> >> *Except* that only works if the command is specified by an *absolute
> >> path*.
> >>
> >> So things like this in kuryr-libnetwork become an issue
> >> https://github.com/openst

Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread James Slagle
On Tue, May 2, 2017 at 9:19 AM, Monty Taylor  wrote:
> I absolutely cannot believe I'm saying this given what the change implements
> and my general steaming hatred associated with it ... but this is awesome
> work and a definite improvement over what existed before it. If we're going
> to be stuck sharing the Bad Trip that is Lennart's projected consciousness,
> this is a pleasant surprise of a positive outcome.

In my opinion, these comments about Lennart are quite out of line.
Regardless of whether or not that individual is a member of the
OpenStack community, there are constructive ways to voice your
opinions about systemd without resorting to these types of personal
comments.

systemd is an open source driven community project. I'd suggest
directing your energy at those technology choices and working towards
what you see as improvements in those choices instead of making
comments such as what you've done here.

While minor (with some thinly veiled praise sprinkled in), I'm a bit
shocked no one else has called attention to your response. It is not
friendly, considerate, and above all else -- it is not respectful.

-- 
-- James Slagle
--



Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Sean Dague
Screen is going away in Queens.

Making the dev / test runtimes as similar as possible is really
important. And there is so much weird debt around trying to make screen
launch things reliably (like random sleeps) because screen has funny
races in it.

It does mean some tricks people figured out in screen are going away.

Journalctl provides some pretty serious improvements in querying logs
https://www.freedesktop.org/software/systemd/man/journalctl.html - you
can search in time ranges, filter by units (one or more of them), and if
we get to the bottom of the eventlet interaction, we'll be able to
search by things like REQUEST_ID as well.
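
For example (unit names follow devstack's devstack@ scheme; pick your own
services):

    # follow one service live
    sudo journalctl -f --unit devstack@n-api.service
    # combine several units over a time window
    sudo journalctl -u devstack@n-api -u devstack@n-cpu --since "10 minutes ago"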

Plus every modern Linux system uses systemd now, so skills learned
around systemd and journalctl are transferable both from OpenStack to
other systems, as well as for new people coming in that understand how
this works outside of OpenStack. So it helps remove a difference from
the way we do things from the rest of the world.

-Sean

On 05/03/2017 04:02 PM, Hongbin Lu wrote:
> Hi Sean,
> 
> I tried the new systemd devstack and frankly I don't like it. There are
> several handy operations in screen that seem to be impossible after
> switching to systemd. For example, freezing a process with "Ctrl + a + [". In
> addition, navigating through the logs seems difficult (perhaps I am not
> familiar with journalctl).
> 
> From my understanding, the plan is to drop screen entirely in devstack? I
> would argue that it is better to keep both screen and systemd, and let users
> choose one of them based on their preference.
> 
> Best regards,
> Hongbin
> 
>> -Original Message-
>> From: Sean Dague [mailto:s...@dague.net]
>> Sent: May-03-17 6:10 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
>> default
>>
>> On 05/02/2017 08:30 AM, Sean Dague wrote:
>>> We started running systemd for devstack in the gate yesterday, so far
>>> so good.
>>>
>>> The following patch (which will hopefully land soon), will convert
>> the
>>> default local use of devstack to systemd as well -
>>> https://review.openstack.org/#/c/461716/. It also includes
>>> substantially updated documentation.
>>>
>>> Once you take this patch, a "./clean.sh" is recommended. Flipping
>>> modes can cause some cruft to build up, and ./clean.sh should be
>>> pretty good at eliminating them.
>>>
>>> https://review.openstack.org/#/c/461716/2/doc/source/development.rst
>>> is probably specifically interesting / useful for people to read, as
>>> it shows how the standard development workflows will change (for the
>>> better) with systemd.
>>>
>>> -Sean
>>
>> As a follow up, there are definitely a few edge conditions we've hit
>> with some jobs, so the following is provided as information in case you
>> have a job that seems to fail in one of these ways.
>>
>> Doing process stop / start
>> ==
>>
>> The nova live migration job is special: it was restarting services
>> manually, but it was doing so with some copy/pasted devstack code,
>> which means it didn't evolve with the rest of devstack. So the stop
>> code stopped working (and wasn't robust enough to make it clear that
>> was the issue).
>>
>> https://review.openstack.org/#/c/461803/ is the fix (merged)
>>
>> run_process limitations
>> ===
>>
>> When doing the systemd conversion I looked for a path forward which was
>> going to make 90% of everything just work. The key trick here was that
>> services start as the "stack" user, and aren't daemonizing away from
>> the console. We can take the run_process command and make that the
>> ExecStart in a unit file.
>>
>> *Except* that only works if the command is specified by an *absolute
>> path*.
>>
>> So things like this in kuryr-libnetwork become an issue
>> https://github.com/openstack/kuryr-
>> libnetwork/blob/3e2891d6fc5d55b3712258c932a5a8b9b323f6c2/devstack/plugi
>> n.sh#L148
>>
>> There is also a second issue there, which is calling sudo in the
>> run_process line. If you need to run as a user/group different than the
>> default, you need to specify that directly.
>>
>> The run_process command now supports that -
>> https://github.com/openstack-
>> dev/devstack/blob/803acffcf9254e328426ad67380a99f4f5b164ec/functions-
>> common#L1531-L1535
>>
>> And lastly, run_process really always did expect that the thing you
>> started remained attac

Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Hongbin Lu
Hi Sean,

I tried the new systemd devstack and frankly I don't like it. There are several
handy operations in screen that seem to be impossible after switching to
systemd. For example, freezing a process with "Ctrl + a + [". In addition,
navigating through the logs seems difficult (perhaps I am not familiar with
journalctl).

From my understanding, the plan is to drop screen entirely in devstack? I
would argue that it is better to keep both screen and systemd, and let users
choose one of them based on their preference.

Best regards,
Hongbin

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: May-03-17 6:10 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
> default
> 
> On 05/02/2017 08:30 AM, Sean Dague wrote:
> > We started running systemd for devstack in the gate yesterday, so far
> > so good.
> >
> > The following patch (which will hopefully land soon), will convert
> the
> > default local use of devstack to systemd as well -
> > https://review.openstack.org/#/c/461716/. It also includes
> > substantially updated documentation.
> >
> > Once you take this patch, a "./clean.sh" is recommended. Flipping
> > modes can cause some cruft to build up, and ./clean.sh should be
> > pretty good at eliminating them.
> >
> > https://review.openstack.org/#/c/461716/2/doc/source/development.rst
> > is probably specifically interesting / useful for people to read, as
> > it shows how the standard development workflows will change (for the
> > better) with systemd.
> >
> > -Sean
> 
> As a follow up, there are definitely a few edge conditions we've hit
> with some jobs, so the following is provided as information in case you
> have a job that seems to fail in one of these ways.
> 
> Doing process stop / start
> ==
> 
> The nova live migration job is special: it was restarting services
> manually, but it was doing so with some copy/pasted devstack code,
> which means it didn't evolve with the rest of devstack. So the stop
> code stopped working (and wasn't robust enough to make it clear that
> was the issue).
> 
> https://review.openstack.org/#/c/461803/ is the fix (merged)
> 
> run_process limitations
> ===
> 
> When doing the systemd conversion I looked for a path forward which was
> going to make 90% of everything just work. The key trick here was that
> services start as the "stack" user, and aren't daemonizing away from
> the console. We can take the run_process command and make that the
> ExecStart in a unit file.
> 
> *Except* that only works if the command is specified by an *absolute
> path*.
> 
> So things like this in kuryr-libnetwork become an issue
> https://github.com/openstack/kuryr-
> libnetwork/blob/3e2891d6fc5d55b3712258c932a5a8b9b323f6c2/devstack/plugi
> n.sh#L148
> 
> There is also a second issue there, which is calling sudo in the
> run_process line. If you need to run as a user/group different than the
> default, you need to specify that directly.
> 
> The run_process command now supports that -
> https://github.com/openstack-
> dev/devstack/blob/803acffcf9254e328426ad67380a99f4f5b164ec/functions-
> common#L1531-L1535
> 
> And lastly, run_process really always did expect that the thing you
> started remained attached to the console. These are run as "simple"
> services in systemd. If you are running a thing which already
> daemonizes, systemd is going to assume (correctly in this simple mode)
> that the process detaching from it means it died, and kill and
> clean it up.
> 
> This is the issue the OpenDaylight plugin ran into.
> https://review.openstack.org/#/c/461889/ is the proposed fix.
> 
> 
> 
> If you run into any other issues please pop into #openstack-qa (or
> respond to this email) and we'll try to work through them.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Matt Riedemann

On 5/3/2017 5:09 AM, Sean Dague wrote:

> If you run into any other issues please pop into #openstack-qa (or
> respond to this email) and we'll try to work through them.


Something has definitely gone haywire in the cells v1 job since 5/1 and 
the journal log handler:


http://status.openstack.org/elastic-recheck/#1580728

We're seeing UnicodeDecodeErrors. I don't know why it's just that job
that's failing, though, since the same code and the test tickling it exist in
all of the other jobs too. It could just be something to do with how
cells v1 handles vm state changes at the top, which turns it into a hard
failure.


--

Thanks,

Matt



Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Sean Dague
On 05/02/2017 08:30 AM, Sean Dague wrote:
> We started running systemd for devstack in the gate yesterday, so far so
> good.
> 
> The following patch (which will hopefully land soon), will convert the
> default local use of devstack to systemd as well -
> https://review.openstack.org/#/c/461716/. It also includes substantially
> updated documentation.
> 
> Once you take this patch, a "./clean.sh" is recommended. Flipping modes
> can cause some cruft to build up, and ./clean.sh should be pretty good
> at eliminating them.
> 
> https://review.openstack.org/#/c/461716/2/doc/source/development.rst is
> probably specifically interesting / useful for people to read, as it
> shows how the standard development workflows will change (for the
> better) with systemd.
> 
>   -Sean

As a follow up, there are definitely a few edge conditions we've hit
with some jobs, so the following is provided as information in case you
have a job that seems to fail in one of these ways.

Doing process stop / start
==

The nova live migration job is special: it was restarting services
manually, but it was doing so with some copy/pasted devstack
code, which means it didn't evolve with the rest of devstack. So the
stop code stopped working (and wasn't robust enough to make it clear
that was the issue).

https://review.openstack.org/#/c/461803/ is the fix (merged)

run_process limitations
===

When doing the systemd conversion I looked for a path forward which
was going to make 90% of everything just work. The key trick here was
that services start as the "stack" user, and aren't daemonizing away
from the console. We can take the run_process command and make that
the ExecStart in a unit file.

*Except* that only works if the command is specified by an *absolute
path*.

So things like this in kuryr-libnetwork become an issue
https://github.com/openstack/kuryr-libnetwork/blob/3e2891d6fc5d55b3712258c932a5a8b9b323f6c2/devstack/plugin.sh#L148

There is also a second issue there, which is calling sudo in the
run_process line. If you need to run as a user/group different than
the default, you need to specify that directly.

The run_process command now supports that -
https://github.com/openstack-dev/devstack/blob/803acffcf9254e328426ad67380a99f4f5b164ec/functions-common#L1531-L1535
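
Concretely, a plugin call that satisfies both points looks something like this
sketch (the service name, binary, and config file are made up; the argument
order follows the functions-common lines above, so check your branch):

    # absolute path, resolved at call time; optional group/user arguments
    run_process my-svc "$(which my-svc-server) --config-file /etc/my-svc/my-svc.conf" root root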

And lastly, run_process really always did expect that the thing you
started remained attached to the console. These are run as "simple"
services in systemd. If you are running a thing which already
daemonizes, systemd is going to assume (correctly in this simple mode)
that the process detaching from it means it died, and kill
and clean it up.

This is the issue the OpenDaylight plugin ran
into. https://review.openstack.org/#/c/461889/ is the proposed fix.



If you run into any other issues please pop into #openstack-qa (or
respond to this email) and we'll try to work through them.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-02 Thread Monty Taylor

On 05/02/2017 08:30 AM, Sean Dague wrote:

> We started running systemd for devstack in the gate yesterday, so far so
> good.
>
> The following patch (which will hopefully land soon), will convert the
> default local use of devstack to systemd as well -
> https://review.openstack.org/#/c/461716/. It also includes substantially
> updated documentation.
>
> Once you take this patch, a "./clean.sh" is recommended. Flipping modes
> can cause some cruft to build up, and ./clean.sh should be pretty good
> at eliminating them.
>
> https://review.openstack.org/#/c/461716/2/doc/source/development.rst is
> probably specifically interesting / useful for people to read, as it
> shows how the standard development workflows will change (for the
> better) with systemd.


I absolutely cannot believe I'm saying this given what the change 
implements and my general steaming hatred associated with it ... but 
this is awesome work and a definite improvement over what existed before 
it. If we're going to be stuck sharing the Bad Trip that is Lennart's 
projected consciousness, this is a pleasant surprise of a positive outcome.


Thank you for learning about the topic and for teaching me something in 
the process.


Monty



Re: [openstack-dev] [devstack] uwsgi for API services

2017-04-20 Thread Takashi Yamamoto
On Thu, Apr 13, 2017 at 9:01 PM, Sean Dague  wrote:
> One of the many reasons for getting all our API services running wsgi
> under a real webserver is to get out of the custom ports for all
> services game. However, because of some of the limits of apache
> mod_wsgi, we really haven't been able to do that in our development
> environment. Plus, the moment we go to mod_wsgi for some services, the
> entire development workflow for "change this library, refresh the
> following services" changes dramatically.
>
> It would be better to have a consistent story here.
>
> So there is some early work up to use apache mod_proxy_uwsgi as the
> listener, and uwsgi processes running under systemd for all the
> services. These bind only to a unix local socket, not to a port.
> https://review.openstack.org/#/c/456344/
>
> Early testing locally has been showing progress. We still need to prove
> out a few things, but this should simplify a bunch of the setup. And
> coming with systemd will converge us back to a more consistent
> development workflow when updating common code in a project that has
> both API services and workers.
>
> For projects that did the mod_wsgi thing in a devstack plugin, this is
> going to require some adjustment. Exactly what is not yet clear, but
> it's going to be worth following that patch.

networking-midonet needed this change.
https://review.openstack.org/#/c/458305

I guess some other projects need similar changes.
http://codesearch.openstack.org/?q=KEYSTONE_AUTH_PORT&i=nope&files=&repos=

>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [devstack] uwsgi for API services - RSN

2017-04-18 Thread Sean Dague
This is all merged now. If you run into any issues with real WSGI
running, please pop into #openstack-qa and we'll see what we can do to
get things ironed out.

-Sean

On 04/18/2017 07:19 AM, Sean Dague wrote:
> Ok, the patch series has come together now, and
> https://review.openstack.org/#/c/456344/ remains the critical patch.
> 
> This introduces a new global config option: "WSGI_MODE", which will be
> either "uwsgi" or "mod_wsgi" (for the transition).
> 
> https://review.openstack.org/#/c/456717/6/lib/placement shows what it
> takes to make something that's currently running under mod_wsgi run
> under uwsgi in this model.
> 
> The intent is that uwsgi mode becomes primary RSN, as that provides a
> more consistent experience for development, and still exercises the API
> services as real wsgi applications.
> 
>   -Sean
> 
> On 04/13/2017 08:01 AM, Sean Dague wrote:
>> One of the many reasons for getting all our API services running wsgi
>> under a real webserver is to get out of the custom ports for all
>> services game. However, because of some of the limits of apache
>> mod_wsgi, we really haven't been able to do that in our development
>> enviroment. Plus, the moment we go to mod_wsgi for some services, the
>> entire development workflow for "change this library, refresh the
>> following services" changes dramatically.
>>
>> It would be better to have a consistent story here.
>>
>> So there is some early work up to use apache mod_proxy_uwsgi as the
>> listener, and uwsgi processes running under systemd for all the
>> services. These bind only to a unix local socket, not to a port.
>> https://review.openstack.org/#/c/456344/
>>
>> Early testing locally has been showing progress. We still need to prove
>> out a few things, but this should simplify a bunch of the setup. And
>> coming with systemd will converge us back to a more consistent
>> development workflow when updating common code in a project that has
>> both API services and workers.
>>
>> For projects that did the mod_wsgi thing in a devstack plugin, this is
>> going to require some adjustment. Exactly what is not yet clear, but
>> it's going to be worth following that patch.
>>
>>  -Sean
>>
> 
> 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] uwsgi for API services - RSN

2017-04-18 Thread Sean Dague
Ok, the patch series has come together now, and
https://review.openstack.org/#/c/456344/ remains the critical patch.

This introduces a new global config option: "WSGI_MODE", which will be
either "uwsgi" or "mod_wsgi" (for the transition).

https://review.openstack.org/#/c/456717/6/lib/placement shows what it
takes to make something that's currently running under mod_wsgi run
under uwsgi in this model.

The intent is that uwsgi mode becomes primary RSN, as that provides a
more consistent experience for development, and still exercises the API
services as real wsgi applications.
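
Schematically the wiring looks like this (a hand sketch, not devstack's
literal output; the socket path, URL route, and file names are illustrative,
and mod_proxy_uwsgi must be loaded):

    # uwsgi serves the WSGI app on a local unix socket, no TCP port
    # (placement-uwsgi.ini contains: socket = /var/run/uwsgi/placement.socket)
    uwsgi --ini /etc/placement/placement-uwsgi.ini
    # apache, e.g. in /etc/apache2/conf-enabled/placement.conf, forwards a
    # URL path to that socket:
    #   ProxyPass "/placement" "unix:/var/run/uwsgi/placement.socket|uwsgi://uwsgi-uds-placement"
    sudo systemctl reload apache2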

-Sean

On 04/13/2017 08:01 AM, Sean Dague wrote:
> One of the many reasons for getting all our API services running wsgi
> under a real webserver is to get out of the custom ports for all
> services game. However, because of some of the limits of apache
> mod_wsgi, we really haven't been able to do that in our development
> environment. Plus, the moment we go to mod_wsgi for some services, the
> entire development workflow for "change this library, refresh the
> following services" changes dramatically.
> 
> It would be better to have a consistent story here.
> 
> So there is some early work up to use apache mod_proxy_uwsgi as the
> listener, and uwsgi processes running under systemd for all the
> services. These bind only to a unix local socket, not to a port.
> https://review.openstack.org/#/c/456344/
> 
> Early testing locally has been showing progress. We still need to prove
> out a few things, but this should simplify a bunch of the setup. And
> coming with systemd will converge us back to a more consistent
> development workflow when updating common code in a project that has
> both API services and workers.
> 
> For projects that did the mod_wsgi thing in a devstack plugin, this is
> going to require some adjustment. Exactly what is not yet clear, but
> it's going to be worth following that patch.
> 
>   -Sean
> 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] experimenting with systemd for services

2017-04-05 Thread Matt Riedemann

On 4/5/2017 3:09 PM, Sean Dague wrote:

> At the PTG clayg brought up an excellent question about what the
> expected flow was to restart a bunch of services in devstack after a
> code change that impacts many of them (be it common code, or a
> library). People had created a bunch of various screen hacks over the
> years, but screen is flakey, and this is definitely not an ideal workflow.
>
> Over lunch clayg, clarkb, and I chatted about some options. Clark
> brought up the idea of doing systemd units for all of the services. A
> couple of weeks ago I decided it was time for me to understand systemd
> better anyway, and started playing around with what this might look like.
>
> The results landed here https://review.openstack.org/#/c/448323/.
> Documentation is here
> http://git.openstack.org/cgit/openstack-dev/devstack/tree/SYSTEMD.rst
>
> This is currently an opt in. All the services in base devstack however
> do work in this new model, and I and a few others have been using this
> mode the last week or so. It's honestly really great. Working on
> oslo.log changes it's now:
>
> pip install -U .
> sudo systemctl restart devstack@*
>
> And the change is now in all your services.
>
> There is also an oslo.log change for native systemd journal support
> (https://review.openstack.org/#/c/451525/), which once that has landed
> and been released, will let us do some neat query of the journal during
> development to see slices across services like this
> https://dague.net/2017/03/30/in-praise-of-systemd/.
>
> ACTION REQUIRED:
>
> If you maintain a devstack plugin that starts any services, now would be
> a great time to test to see if this works for them. The biggest issue is
> that the commands sent to run_process need to be absolute pathed.
>
>
> My hope is that by end of cycle this is going to be the default mode in
> devstack for *both* the gate and development, which eliminates one major
> difference between the two. I'm also hoping that we'll be able to keep
> and archive the journals from the runs, so you can download and query
> those directly. Especially once the oslo.log enhancements are there to
> add the additional structured data.
>
> -Sean



Cool. I've always wanted this. When I started on OpenStack I was used to 
using sysv-init files and service commands with RHEL and wasn't used to 
screen, so pretty much hated that in devstack but felt like I couldn't 
ever express that because all of the cool kids loved screen so much.


Well well well, how the tables have turned.

--

Thanks,

Matt



Re: [openstack-dev] [devstack] experimenting with systemd for services

2017-04-05 Thread Clay Gerrard
On Wed, Apr 5, 2017 at 1:30 PM, Andrea Frittoli  wrote:
>
>
> I just want to say thank you! to you clarkb clayg and everyone involved :)
> This is so much better!
>
> andreaf
>
>
Sean is throwing credit at me where none is due.  IIRC I was both in the
room and in a very-normal-for-me state of confusion while he and clark
talked about this - but I did not know they were working on it.
Nevertheless, I am dusting off my devstack vm with USE_SYSTEMD=True and
found his blog post interesting ;)

Kudos!

-Clay


Re: [openstack-dev] [devstack] experimenting with systemd for services

2017-04-05 Thread Andrea Frittoli
On Wed, Apr 5, 2017 at 9:14 PM Sean Dague  wrote:

> At the PTG clayg brought up an excellent question about what the
> expected flow was to restart a bunch of services in devstack after a
> code change that impacts many of them (be it common code, or a
> library). People had created a bunch of various screen hacks over the
> years, but screen is flakey, and this is definitely not an ideal workflow.
>
> Over lunch clayg, clarkb, and I chatted about some options. Clark
> brought up the idea of doing systemd units for all of the services. A
> couple of weeks ago I decided it was time for me to understand systemd
> better anyway, and started playing around with what this might look like.
>
> The results landed here https://review.openstack.org/#/c/448323/.
> Documentation is here
> http://git.openstack.org/cgit/openstack-dev/devstack/tree/SYSTEMD.rst
>
> This is currently an opt in. All the services in base devstack however
> do work in this new model, and I and a few others have been using this
> mode the last week or so. It's honestly really great. Working on
> oslo.log changes it's now:
>
> pip install -U .
> sudo systemctl restart devstack@*
>
> And the change is now in all your services.
>
> There is also an oslo.log change for native systemd journal support
> (https://review.openstack.org/#/c/451525/), which once that has landed
> and been released, will let us do some neat query of the journal during
> development to see slices across services like this
> https://dague.net/2017/03/30/in-praise-of-systemd/.
>

I just want to say thank you! to you clarkb clayg and everyone involved :)
This is so much better!

andreaf


> ACTION REQUIRED:
>
> If you maintain a devstack plugin that starts any services, now would be
> a great time to test to see if this works for them. The biggest issue is
> that the commands sent to run_process need to be absolute pathed.
>
>
> My hope is that by end of cycle this is going to be the default mode in
> devstack for *both* the gate and development, which eliminates one major
> difference between the two. I'm also hoping that we'll be able to keep
> and archive the journals from the runs, so you can download and query
> those directly. Especially once the oslo.log enhancements are there to
> add the additional structured data.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [devstack] A few questions on configuring DevStack for Neutron

2017-03-10 Thread Christopher Aedo
On Thu, Oct 8, 2015 at 5:41 PM, Monty Taylor  wrote:
> On 10/08/2015 07:13 PM, Christopher Aedo wrote:
>>
>> On Thu, Oct 8, 2015 at 9:38 AM, Sean M. Collins 
>> wrote:
>>>
>>> Please see my response here:
>>>
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-October/076251.html
>>>
>>> In the future, do not create multiple threads since responses will get
>>> lost
>>
>>
>> Yep, thank you Sean - saw your response yesterday and was going to
>> follow-up this thread with a "please ignore" and a link to the other
>> thread.  I opted not to in hopes of reducing the noise but I think
>> your note here is correct and will close the loop for anyone who
>> happens across only this thread.
>>
>> (Secretly though I hope this thread somehow becomes never-ending like
>> the "don't -1 for a long commit message" thread!)
>
>
> (I think punctuation should go outside the parenthetical)?

Apologies for the late reply to this, Monty, but a response is required
considering the importance of proper punctuation as regards
parentheses.

According to the Internet [1] (which is widely regarded as the
authority in such matters), the original usage of punctuation in this
case is the appropriate usage when the entire sentence is inside the
parentheses.

Hopefully this diversion from the topic does not warrant a new thread.

[1]: http://www.grammarbook.com/punctuation/parens.asp

-Christopher



Re: [openstack-dev] [devstack] broken installation at RHEL-based distros

2017-03-02 Thread Sean Dague
On 03/02/2017 08:18 AM, Evgeny Antyshev wrote:
> Hello, devstack!
> 
> I want to draw some attention to the fact that the install_libvirt function
> now (since https://review.openstack.org/#/c/438325 landed)
> only works for CentOS 7, but not for other RHEL-based distributions:
> Virtuozzo and, probably, RHEV.
> 
> Both of the above have their own versions of the qemu-kvm package:
> qemu-kvm-vz and qemu-kvm-rhev, respectively. These packages provide
> "qemu-kvm", like qemu-kvm-ev, and when you call "yum install qemu-kvm"
> they replace the default OS package.
> 
> To solve this, I propose installing by the "qemu-kvm" name, as in the patch:
> https://review.openstack.org/440353
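The gist of the proposal, sketched with devstack's install_package helper
(this is not the literal patch under review):

    # Install by the virtual "qemu-kvm" name rather than a distro-specific
    # package, so whichever qemu-kvm-* variant the platform carries
    # (qemu-kvm-ev, qemu-kvm-vz, qemu-kvm-rhev) gets selected:
    install_package qemu-kvm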

I think that seems fine, but would like Ian to confirm it won't hurt the
CentOS 7 work.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] [tricircle] Problem in placement-api after installing Tricircle with DevStack

2017-02-15 Thread Chris Dent

On Wed, 15 Feb 2017, Vega Cai wrote:


After digging into the log files, we found the reason: the placement
API Apache configuration file generated by DevStack doesn't grant the
necessary access rights to the placement API bin folder. On the first node,
where Keystone is running, the Apache configuration file for Keystone already
grants access rights to the "/usr/local/bin" folder, which happens to be the
folder where the placement API binary is located, so the placement API works
fine. But on the second node Keystone is not enabled, so requests sent to the
placement API are rejected and thus we fail to boot an instance.

One temporary workaround is to manually edit the placement API configuration
file on the second node to add the following section:

<Directory /usr/local/bin>
    Require all granted
</Directory>

and restart Apache. We tested it and it works. Later we are going to submit a
patch to DevStack for this problem.
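To apply the workaround by hand, something like the following; the vhost
path is an assumption based on where devstack typically writes Apache
configs on Ubuntu:

    # Add the <Directory> stanza above to the placement API vhost, e.g.
    # /etc/apache2/sites-enabled/placement-api.conf (path assumed), then:
    sudo systemctl restart apache2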


Good catch and nice analysis. If you haven't already done so I'd
encourage you to create a bug (against devstack [1]) so that when you
submit your patch it's tracked against something.

While your fix is the right one as a stopgap, I suspect that over
the long run we're going to want to centralize some of the mod-wsgi
related configuration so that we are not duplicating the generic
bits between multiple services.

[1] https://bugs.launchpad.net/devstack/+bugs
--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [devstack] issues with requiring python3 only tool?

2017-01-24 Thread Sean M. Collins
Sean Dague wrote:
> I'll probably still default this to python3, it is the future direction
> we are headed.

Works for me  :)

-- 
Sean M. Collins



Re: [openstack-dev] [devstack][keystone] DRaaS for Keystone

2017-01-17 Thread Lance Bragstad
Hi Wasiq!

On Tue, Jan 17, 2017 at 1:34 PM, Wasiq Noor 
wrote:

> Hello,
>
> I am Wasiq from Namal College Mianwali, Pakistan. Following the link:
> https://wiki.openstack.org/wiki/DisasterRecovery, I have developed a
> disaster recovery solution for Keystone covering various recovery
> mechanisms. I have the code with me.
>

Do you happen to have bits published anywhere publicly, so that others can
take a look?

> Can anybody help with how I can get it into the devstack repository?
>

Are you looking to use devstack to test DR scenarios?


> I have followed some links but found them very confusing.
>

Do you have the links handy? Specific feedback can be useful to improve
project documentation.




Re: [openstack-dev] [devstack] issues with requiring python3 only tool?

2017-01-17 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-01-17 11:50:39 -0500:
> On 01/17/2017 11:46 AM, Victor Stinner wrote:
> > On 17/01/2017 at 17:36, Sean Dague wrote:
> >> When putting the cli interface on it, I discovered python3's argparse
> >> has subparsers built in. This makes building up the cli much easier, and
> >> removes pulling in a dependency for that. (Currently the only item in
> >> requirements.txt is pbr). This is useful both from an ease to install,
> >> as well as overall runtime.
> > 
> > Do you mean the argparse module of the Python standard library? It is
> > available on Python 2.7. Subparsers are also supported on Python 2.7, no?
> > https://docs.python.org/2/library/argparse.html#sub-commands
> > 
> > If you need a more recent version of argparse on Python 2.7, you might try:
> > https://pypi.python.org/pypi/argparse
> > 
> > But I'm not sure that this third-party module is used on Python 2.7,
> > since import checks the stdlib before checking site-packages.
> 
> Hmm... I don't know how I missed that in the docs. I guess I was going
> code blind last night. I guess it should be easy to make it all work. I
> did specifically want to avoid installing pypi argparse.
> 
> I'll probably still default this to python3, it is the future direction
> we are headed.

+1

Doug



Re: [openstack-dev] [devstack] issues with requiring python3 only tool?

2017-01-17 Thread Sean Dague
On 01/17/2017 11:46 AM, Victor Stinner wrote:
> On 17/01/2017 at 17:36, Sean Dague wrote:
>> When putting the cli interface on it, I discovered python3's argparse
>> has subparsers built in. This makes building up the cli much easier, and
>> removes pulling in a dependency for that. (Currently the only item in
>> requirements.txt is pbr). This is useful both from an ease to install,
>> as well as overall runtime.
> 
> Do you mean the argparse module of the Python standard library? It is
> available on Python 2.7. Subparsers are also supported on Python 2.7, no?
> https://docs.python.org/2/library/argparse.html#sub-commands
> 
> If you need a more recent version of argparse on Python 2.7, you might try:
> https://pypi.python.org/pypi/argparse
> 
> But I'm not sure that this third-party module is used on Python 2.7,
> since import checks the stdlib before checking site-packages.

Hmm... I don't know how I missed that in the docs. I guess I was going
code blind last night. I guess it should be easy to make it all work. I
did specifically want to avoid installing pypi argparse.

I'll probably still default this to python3, it is the future direction
we are headed.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] issues with requiring python3 only tool?

2017-01-17 Thread Victor Stinner

On 17/01/2017 at 17:36, Sean Dague wrote:

When putting the cli interface on it, I discovered python3's argparse
has subparsers built in. This makes building up the cli much easier, and
removes pulling in a dependency for that. (Currently the only item in
requirements.txt is pbr). This is useful both from an ease to install,
as well as overall runtime.


Do you mean the argparse module of the Python standard library? It is 
available on Python 2.7. Subparsers are also supported on Python 2.7, no?

https://docs.python.org/2/library/argparse.html#sub-commands

If you need a more recent version of argparse on Python 2.7, you might try:
https://pypi.python.org/pypi/argparse

But I'm not sure that this third-party module is used on Python 2.7, 
since import checks the stdlib before checking site-packages.


Victor



Re: [openstack-dev] [devstack]VersionConflict exception during stack.sh - resend with explanation.

2016-12-06 Thread Lenny Verkhovsky
Hi Wanjing,
We had similar issues in our CI, especially when running one of the stable
branches. The current workaround we found is removing all pip packages after
running devstack's ./clean.sh.
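A rough sketch of that workaround (destructive: it removes every
pip-installed package on the box, so adjust before using):

    # Clean up the devstack run, then uninstall all remaining pip packages
    # (skipping editable installs) so the next stack.sh starts fresh:
    ./clean.sh
    pip freeze | grep -v '^-e' | xargs -r sudo pip uninstall -y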

Thanks
Lenny.


Re: [openstack-dev] [devstack]VersionConflict exception during stack.sh - resend with explanation.

2016-12-06 Thread Wanjing Xu (waxu)
Thanks, Tony, for the reply.
This is a third-party Jenkins CI, so it is doing an unstack, removing the
repos, and stacking again from the master branch.

It had been running OK, and then it was disabled for about a month. Now that
we have re-enabled it, it hits these VersionConflict errors one by one. After
I manually installed the required versions of a lot of modules it is OK now,
but I was just wondering why it won't install the required packages
automatically instead of throwing an exception.

You can look at our CI at http://192.133.158.2:8080/job/cisco_zm_cinder/ and
at all the failed cases from yesterday. When I manually invoked pip install
-c -e …, it threw the exception; is this really expected? If it does not
replace the modules with the required versions, this kind of failure will
happen again whenever somebody changes the requirements or constraints.

localadmin@ubuntu-dmz:/opt/stack/horizon$ sudo -H http_proxy= https_proxy= 
no_proxy= PIP_FIND_LINKS= SETUPTOOLS_SYS_PATH_TECHNIQUE=rewrite 
/usr/local/bin/pip2.7 install -c /opt/stack/requirements/upper-constraints.txt 
-v -e /opt/stack/horizon
Ignoring dnspython3: markers 'python_version == "3.4"' don't match your 
environment
Ignoring dnspython3: markers 'python_version == "3.5"' don't match your 
environment
Obtaining file:///opt/stack/horizon
  Running setup.py (path:/opt/stack/horizon/setup.py) egg_info for package from 
file:///opt/stack/horizon
Running command python setup.py egg_info
running egg_info
writing requirements to horizon.egg-info/requires.txt
writing horizon.egg-info/PKG-INFO
writing top-level names to horizon.egg-info/top_level.txt
writing dependency_links to horizon.egg-info/dependency_links.txt
writing pbr to horizon.egg-info/pbr.json
[pbr] Processing SOURCES.txt
[pbr] In git context, generating filelist from git
warning: no files found matching 'AUTHORS'
warning: no files found matching 'ChangeLog'
warning: no previously-included files matching '*.pyc' found anywhere in 
distribution
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.scss' under directory 'doc'
warning: no files found matching '*.js' under directory 'doc'
warning: no files found matching '*.html' under directory 'doc'
warning: no files found matching '*.conf' under directory 'doc'
warning: no files found matching '*.jpg' under directory 'doc'
warning: no files found matching '*.gif' under directory 'doc'
warning: no files found matching '*.csv' under directory 'horizon'
warning: no files found matching '*.template' under directory 'horizon'
warning: no files found matching '*.mo' under directory 'horizon'
warning: no files found matching '*.mo' under directory 
'openstack_dashboard'
warning: no files found matching '*.eot' under directory 
'openstack_dashboard'
warning: no files found matching '*.ttf' under directory 
'openstack_dashboard'
warning: no files found matching '*.woff' under directory 
'openstack_dashboard'
warning: no files found matching 'AUTHORS'
warning: no files found matching 'ChangeLog'
warning: no files found matching 'doc/source/_templates/.placeholder'
warning: no previously-included files found matching 
'openstack_dashboard/local/local_settings.py'
writing manifest file 'horizon.egg-info/SOURCES.txt'
  Source in /opt/stack/horizon has version 11.0.0.0b2.dev67, which satisfies 
requirement horizon==11.0.0.0b2.dev67 from file:///opt/stack/horizon
Cleaning up...
Exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, 
in main
status = self.run(options, args)
  File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 
335, in run
wb.build(autobuilding=True)
  File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 380, 
in prepare_files
ignore_dependencies=self.ignore_dependencies))
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 521, 
in _prepare_file
req_to_install.check_if_exists()
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 
1036, in check_if_exists
self.req.name
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 558, in get_distribution
dist = get_provider(dist)
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 432, in get_provider
return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 968, in require
needed = self.resolve(parse_requirements(requirements))
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 859, in resolve
raise VersionConflict(dist, 

Re: [openstack-dev] [devstack]VersionConflict exception during stack.sh - resend with explanation.

2016-12-06 Thread Tony Breeds
On Tue, Dec 06, 2016 at 08:02:23PM +, Wanjing Xu (waxu) wrote:
> 
> Hi, 
> My devstack had been OK a month ago, but recently it keeps hitting this
> VersionConflict error. If I manually install the required module version,
> it will move on, but then it errors out at some other module. I have
> manually installed and fixed about 8 such modules and still it is not
> ending. I guess I have missed something fundamental. Could somebody please
> help me? I was reading the pip code; there is some replace_conflicting
> option, but I don't know how to set it. Maybe it is set in the vendored
> package?

You'll need to provide a little more information. I'm assuming that you have
a devstack setup that you're upgrading via ./unstack.sh ; ./stack.sh, and
that over time the repos are getting out of sync.

Can you explain what you're doing?

Also, can you please provide the output of:
pip freeze | sort
for repo in /opt/stack/* ; do (cd $repo ; [ -d .git ] && git describe ) ; done

That will help us understand what you have and why the constraints file
appears to be ineffective.

Tony.




Re: [openstack-dev] [devstack]VersionConflict exception during stack.sh

2016-12-06 Thread Wanjing Xu (waxu)
Thanks for replying. Sorry, I did not include an explanation in this email,
so I sent another one. Basically, there are too many of these conflicts; I
have already manually fixed about 8, and there are still more. Even if I fix
them all this time, it may happen again down the road if somebody updates the
constraints or requirements. I am just wondering why this was not happening
before. I noticed that pip was updated to version 9; my old good setup was
using pip version 8.

On 12/6/16, 11:58 AM, "Beliveau, Ludovic"  wrote:

Try to manually uninstall it first: sudo pip uninstall python-heatclient

Then launch devstack again.  It will re-install the right version.

/ludovic


Re: [openstack-dev] [devstack]VersionConflict exception during stack.sh

2016-12-06 Thread Beliveau, Ludovic
Try to manually uninstall it first: sudo pip uninstall python-heatclient

Then launch devstack again.  It will re-install the right version.
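If you would rather fix just the offending package without a full restack, a
hedged one-liner using the constraints file path from earlier in this thread:

    # Upgrade the conflicting client to the constrained version:
    sudo pip install -c /opt/stack/requirements/upper-constraints.txt \
        --upgrade python-heatclient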

/ludovic

-Original Message-
From: Wanjing Xu (waxu) [mailto:w...@cisco.com] 
Sent: December-06-16 2:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [devstack]VersionConflict exception during stack.sh

 Hi

2016-12-06 18:50:28.095 | +inc/python:setup_package:354  
pip_install -e /opt/stack/horizon
2016-12-06 18:50:28.881 | +inc/python:pip_install:155    sudo 
-H http_proxy= https_proxy= no_proxy= PIP_FIND_LINKS= 
SETUPTOOLS_SYS_PATH_TECHNIQUE=rewrite /usr/local/bin/pip2.7 install -c 
/opt/stack/requirements/upper-constraints.txt --upgrade -e /opt/stack/horizon
2016-12-06 18:50:29.930 | Ignoring dnspython3: markers 'python_version == 
"3.4"' don't match your environment
2016-12-06 18:50:29.931 | Ignoring dnspython3: markers 'python_version == 
"3.5"' don't match your environment
2016-12-06 18:50:30.276 | Obtaining file:///opt/stack/horizon
2016-12-06 18:50:33.097 | Exception:
2016-12-06 18:50:33.098 | Traceback (most recent call last):
2016-12-06 18:50:33.098 |   File 
"/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, in main
2016-12-06 18:50:33.098 | status = self.run(options, args)
2016-12-06 18:50:33.098 |   File 
"/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 335, in 
run
2016-12-06 18:50:33.098 | wb.build(autobuilding=True)
2016-12-06 18:50:33.098 |   File 
"/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 749, in build
2016-12-06 18:50:33.098 | self.requirement_set.prepare_files(self.finder)
2016-12-06 18:50:33.098 |   File 
"/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 380, in 
prepare_files
2016-12-06 18:50:33.098 | ignore_dependencies=self.ignore_dependencies))
2016-12-06 18:50:33.098 |   File 
"/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 521, in 
_prepare_file
2016-12-06 18:50:33.098 | req_to_install.check_if_exists()
2016-12-06 18:50:33.098 |   File 
"/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 1036, in 
check_if_exists
2016-12-06 18:50:33.098 | self.req.name
2016-12-06 18:50:33.098 |   File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 558, in get_distribution
2016-12-06 18:50:33.098 | dist = get_provider(dist)
2016-12-06 18:50:33.098 |   File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 432, in get_provider
2016-12-06 18:50:33.098 | return working_set.find(moduleOrReq) or 
require(str(moduleOrReq))[0]
2016-12-06 18:50:33.098 |   File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 968, in require
2016-12-06 18:50:33.098 | needed = 
self.resolve(parse_requirements(requirements))
2016-12-06 18:50:33.098 |   File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 859, in resolve
2016-12-06 18:50:33.098 | raise VersionConflict(dist, 
req).with_context(dependent_req)
2016-12-06 18:50:33.098 | ContextualVersionConflict: (python-heatclient 1.5.0 
(/usr/local/lib/python2.7/dist-packages), 
Requirement.parse('python-heatclient>=1.6.1'), set(['horizon']))
2016-12-06 18:50:33.233 | +inc/python:pip_install:1  
exit_trap
2016-12-06 18:50:33.240 | +./stack.sh:exit_trap:487  local 
r=2
2016-12-06 18:50:33.250 | ++./stack.sh:exit_trap:488  jobs 
-p
2016-12-06 18:50:33.260 | +./stack.sh:exit_trap:488  jobs=
2016-12-06 18:50:33.269 | +./stack.sh:exit_trap:491  [[ -n 
'' ]]
2016-12-06 18:50:33.278 | +./stack.sh:exit_trap:497  
kill_spinner



Re: [openstack-dev] [devstack][neutron-lbaas][octavia] About installing Devstack on Ubuntu

2016-12-06 Thread Bernard Cafarelli
On 6 December 2016 at 13:12, Jens Rosenboom  wrote:
> 2016-12-06 7:16 GMT+01:00 Yipei Niu :
>> Hi, All,
>>
>> I failed to install devstack on Ubuntu. The detailed info of my local.conf
>> and the error is pasted at http://paste.openstack.org/show/591493/.
>>
>> BTW, python2.7 is installed in Ubuntu, and Python.h can be found under
>> /usr/include/python2.7.
>>
>> stack@stack-VirtualBox:~$ locate Python.h
>>
>> /usr/include/python2.7/Python.h
>>
>>
>> However, after I comment out the configuration related to Neutron-LBaaS
>> and Octavia in local.conf, it succeeds.
>>
>> Is it a bug? How can I fix it? I look forward to your comments. Thanks.
>
> The failure does not happen on your local machine, but while building
> a disk image for Octavia. It seems to be a regression caused by
> https://review.openstack.org/402250 and there is a bug report with a
> proposed fix at https://bugs.launchpad.net/tripleo/+bug/1646977

Indeed, this is the reason it fails (it breaks the image-building
part); Michael Johnson actually sent a few patches to fix it in
Octavia and diskimage-builder:

https://review.openstack.org/#/c/356590/ (octavia diff to remove
dependency on tripleo-image-elements)
https://review.openstack.org/#/c/406420/ (diskimage-builder fix to
install python in pip-and-virtualenv element)
https://review.openstack.org/#/c/406413/ (diskimage-builder fix to run
sysctl on image, not host)


-- 
Bernard Cafarelli



Re: [openstack-dev] [devstack][neutron-lbaas][octavia] About installing Devstack on Ubuntu

2016-12-06 Thread Jens Rosenboom
2016-12-06 7:16 GMT+01:00 Yipei Niu :
> Hi, All,
>
> I failed to install devstack on Ubuntu. The detailed info of my local.conf
> and the error is pasted at http://paste.openstack.org/show/591493/.
>
> BTW, python2.7 is installed in Ubuntu, and Python.h can be found under
> /usr/include/python2.7.
>
> stack@stack-VirtualBox:~$ locate Python.h
>
> /usr/include/python2.7/Python.h
>
>
> However, after I comment out the configuration related to Neutron-LBaaS
> and Octavia in local.conf, it succeeds.
>
> Is it a bug? How can I fix it? I look forward to your comments. Thanks.

The failure does not happen on your local machine, but while building
a disk image for Octavia. It seems to be a regression caused by
https://review.openstack.org/402250 and there is a bug report with a
proposed fix at https://bugs.launchpad.net/tripleo/+bug/1646977



Re: [openstack-dev] [devstack] About installing devstack on Ubuntu

2016-12-06 Thread Qiming Teng
Try:

 apt-get install python-dev 

BTW, this list is for OpenStack developers. For questions about
installation and usage, please post to openst...@lists.openstack.org
or try ask.openstack.org

Regards,
  Qiming

On Tue, Dec 06, 2016 at 02:07:58PM +0800, Yipei Niu wrote:
> Hi, All,
> 
> I failed to install devstack on Ubuntu. The detailed info of my local.conf
> and the error is pasted at http://paste.openstack.org/show/591493/.
> 
> BTW, python2.7 is installed in Ubuntu, and Python.h can be found under
> /usr/include/python2.7.
> 
> stack@stack-VirtualBox:~$ locate Python.h
> 
> /usr/include/python2.7/Python.h
> 
> 
> Look forward to your comments. Thanks.
> 
> Best regards,
> Yipei



Re: [openstack-dev] [devstack] Specify OpenvSwitch version in local.conf

2016-11-22 Thread Sean M. Collins
Sean M. Collins wrote:
> zhi wrote:
> > hi, all.
> > 
> > I have a quick question about devstack.
> > 
> > Can I specify the Open vSwitch version in local.conf during the
> > installation of devstack? I want OVS 2.6.0 in my devstack. Can I
> > specify it?
> 
> 
> The DevStack plugin for Neutron has a way to build a specific OVS
> version from source
> 
> https://github.com/openstack/neutron/blob/master/devstack/lib/ovs
> 
> However there is not a lot of documentation for how it can be used
> (which really should be fixed).
> 
> I believe it would be something like this in your local.conf:
> 
> 
> enable_plugin neutron https://git.openstack.org/openstack/neutron
> OVS_BRANCH="v2.6.0"
> 
> I haven't tried it locally, but I think that's the idea.
> 


Sorry, you will also need to set:

Q_BUILD_OVS_FROM_GIT=True
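Putting the two settings together, an untested local.conf sketch (per the
caveat above, this is the idea rather than a verified config):

    [[local|localrc]]
    enable_plugin neutron https://git.openstack.org/openstack/neutron
    Q_BUILD_OVS_FROM_GIT=True
    OVS_BRANCH="v2.6.0"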

-- 
Sean M. Collins



Re: [openstack-dev] [devstack] Specify OpenvSwitch version in local.conf

2016-11-22 Thread Sean M. Collins
zhi wrote:
> hi, all.
> 
> I have a quick question about devstack.
> 
> Can I specify the Open vSwitch version in local.conf during the
> installation of devstack? I want OVS 2.6.0 in my devstack. Can I
> specify it?


The DevStack plugin for Neutron has a way to build a specific OVS
version from source

https://github.com/openstack/neutron/blob/master/devstack/lib/ovs

However there is not a lot of documentation for how it can be used
(which really should be fixed).

I believe it would be something like this in your local.conf:


enable_plugin neutron https://git.openstack.org/openstack/neutron
OVS_BRANCH="v2.6.0"

I haven't tried it locally, but I think that's the idea.


-- 
Sean M. Collins



Re: [openstack-dev] [devstack] Specify OpenvSwitch version in local.conf

2016-11-18 Thread Geza Gemes

On 11/18/2016 07:33 AM, zhi wrote:

hi, all.

I have a quick question about devstack.

Can I specify the Open vSwitch version in local.conf during the
installation of devstack? I want OVS 2.6.0 in my devstack. Can I
specify it?



Thanks
Zhi Chang




Hi,

On Ubuntu and Fedora (I didn't try elsewhere) stack.sh installs the
latest version available according to apt/yum (unless there is any
pinning). So if the version you want is available in the configured repos,
you'll get it. Conversely, if you already have a newer package installed
(e.g. one built locally), it will not downgrade it.
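To see what version the package manager would give you, for example on
Ubuntu (package name assumed to be the stock one):

    # Show installed and candidate Open vSwitch versions:
    apt-cache policy openvswitch-switch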


Cheers,

Geza



Re: [openstack-dev] [devstack][neutron] - dropping direct route to VMs (FIXED_RANGE)

2016-11-15 Thread Armando M.
On 15 November 2016 at 15:04, Kevin Benton  wrote:

> Hi all,
>
>
> Right now, we do something in devstack that does not reflect how
> deployments are normally done. We set up a route on the parent host to the
> private tenant network that routes through the tenant's router[1]. This
> behavior originates from a very long time ago[2] and I'm not sure if it
> even works correctly right now (because the tenant router has port address
> translation enabled).
>
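Concretely, this is the kind of route in question; a sketch with example
addresses, not devstack's exact code:

    # On the devstack host: reach the private tenant network via the
    # tenant router's external gateway address (addresses are examples):
    sudo ip route replace 10.0.0.0/26 via 172.24.4.5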
> I would like to stop this behavior in devstack for a couple of reasons:
>
> 1. If this works, it works by accident. Neutron doesn't have any
> guarantees of behavior when you are pointing routes to a private network
> via a router that has SNAT enabled.
> 2. This method of accessing the VMs is not how access is gained to VMs in
> normal deployments. If you want a VM to be reachable, either attach to the
> same network with a port, setup a provider network, or assign the VM a
> floating IP.
>
>
> I would like to drop the installation of this route, but I'd like to hear
> if there is anyone relying on this behavior. Reply to this email or comment
> on the patch.[3]
>

Thanks for looking into this. Let me add that this is in relation to bug
[1].

Cheers,
Armando

[1] https://bugs.launchpad.net/devstack/+bug/1629133


>
> 1. https://github.com/openstack-dev/devstack/blob/29d13df1a284f8f1a5973ccc826a475156820d23/lib/neutron_plugins/services/l3#L378
> 2. https://review.openstack.org/#/c/13693/
> 3. https://review.openstack.org/397987

