Re: [openstack-dev] [ironic] Booting IPA from cinder: Was: Summary of ironic sessions from Sydney

2017-11-24 Thread Chris Friesen

On 11/24/2017 10:23 AM, Julia Kreger wrote:

Greetings Michael,

I believe it would need to involve multiple machines at the same time.

I guess there are two different approaches that I think _could_ be
taken to facilitate this:

1) Provide a facility to use a specific volume as the "golden volume"
to boot up for IPA, and then initiate copies of that volume. The
downside that I see is the act of either copying the volume, or
presenting a snapshot of it that will be deleted a little later. I
think that is really going to depend on the backend, and whether the
backend can handle it or not. :\


Don't most reasonable backends support copy-on-write for volumes?  If they do, 
then creating a mostly-read copy of the volume should be low-overhead.
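
For example, with the cinder CLI a clone on a CoW-capable backend (e.g.
ceph) is nearly free; the name and size here are illustrative:

# clone the golden IPA volume for a single node
cinder create --source-volid $GOLDEN_VOLUME_ID --name ipa-boot-node-01 8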


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] Where can we find an image with Murano Agent

2017-11-24 Thread Sun, Yicheng (Jerry)
We are trying to test our Murano deployment.
We are working with the Pike version of Murano.

Do you know where we can get an image with Murano Agent?
The community application catalog was taken down.
We tried following the instructions here:
https://murano.readthedocs.io/en/latest/image_builders/linux.html
https://pypi.python.org/pypi/murano-agent/3.3.0

We were not able to build an Ubuntu image with the Murano Agent.
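
For reference, what we tried, following the linked instructions, was
roughly this (the element path is from memory and may differ per
release, so treat it as an assumption):

git clone https://git.openstack.org/openstack/murano-agent
export ELEMENTS_PATH=murano-agent/contrib/elements
disk-image-create vm ubuntu murano-agent -o ubuntu-murano-agent.qcow2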

Thanks in advance,
Jerry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hyper-v] Hyper-V Support Will be Removed Forever ?

2017-11-24 Thread Alessandro Pilotti
Adam,

If you used an old version of Hyper-V and especially old versions of LIS,
there’s a night and day difference compared to recent versions. Don’t forget
that Hyper-V technology is used in Azure, so MSFT has every reason to invest
in optimizations.

Getting to the benchmarks, all the Rally test scenarios that we used are open
source, precisely because we wanted people to run the same tests in their own
environments and validate them, instead of blindly believing our results.

If this isn’t objective enough for you, feel free to propose any change or come
up with other scenarios, but please respond with data instead of FUD ;)

As for VMware, they have a clause in their EULA that forbids publishing
benchmarks unless they approve them (!!), so I can’t comment on that
unfortunately.

Alessandro

On 24 Nov 2017, at 18:20, Adam Heczko wrote:

Regarding these benchmarks, I honestly don't think that the measurement
results are objective enough.
In my past experience with Hyper-V, Microsoft's hypervisor heavily
uses block-layer caching, also for guest read/write operations.
AFAIK this is very different from Linux+KVM or VMware, where there is no
caching for VM guests.

On Fri, Nov 24, 2017 at 4:46 PM, Alessandro Pilotti wrote:
Hyper-V support in OpenStack is alive and well; see for example this blog
series comparing KVM and Hyper-V: [1].

The fact that SUSE / HPE might or might not support it is just a matter of
commercial choices unrelated to the upstream projects (which are what matter in
this ML). Other vendors (e.g. Red Hat, Mirantis, Canonical) have partnerships
with us (Cloudbase) for Hyper-V commercial support.

Cheers,

Alessandro

[1] https://cloudbase.it/openstack-newton-benchmarking-part-5/

On 24 Nov 2017, at 17:19, Vahric MUHTARYAN wrote:

Hello,

We are using HPE Helion OpenStack. For a long time now they have been removing
Hyper-V support from their distro. We started to discuss this with SUSE, and I
believe everybody knows HPE HOS and SUSE OpenStack are being merged and will
become a single product; we learned from SUSE that they are also stopping
support for Hyper-V.

I know cloudbase.it is working hard to port many things,
but I would like to know: will Hyper-V hypervisor support really be removed
from OpenStack forever?

Regards
VM
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] resource providers update 42

2017-11-24 Thread Chris Dent


This is update 42. Thanks to Eric for doing a few of these while I was
otherwise engaged. Thanks also for preserving the magical number 42.
It turns out the question for life, the universe and everything is the
rather mundane "what's up with placement?". Who knew?

I am not fully caught up on the state of things but will endeavor to
use my usual techniques to dig up pending changes relevant to
placement.

# Most Important

The three main themes (see below) remain the main focus. A fair few
bugs have been revealed during the creation of the nested work and the
refactoring of the handling behind allocation candidates. Solutions
are in progress, but there are likely others lurking, so plenty of
review and experimentation is warranted.

# What's Changed

Not entirely sure. There's a lot of code to review but I'm not yet
aware of any fundamental changes of plan.

There's a topic gathering changes that are needed across the board to
make things proper and right but that don't otherwise fit in a theme:

https://review.openstack.org/#/q/topic:accumulated_nits

# Summit Actions

There was a placement update forum session, here is the etherpad:

https://etherpad.openstack.org/p/SYD-forum-nova-placement-update

One thing that surprised me from that session was that there was an
assumption that the idea of "consumer types" was well known. Maybe it
was, but I didn't know it. The idea is that a set of allocations made
by one consumer could use a 'type' to indicate whether it is an
'instance' or 'volume' or 'migration'. The driving force is to be able
to distinguish between a 'migration' and an 'instance' allocation when
in the middle of a migration. I thought we could do this with a special
migration project or user id, but apparently we want to be able to
account for the migration allocations when doing quota handling. It
seems a bit complicated but may be the right solution. One of the
todos from the etherpad is for the idea to get more socialization.
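
To make the idea concrete, a typed allocation might look something like
this when PUT to /allocations/{consumer_uuid}. Note that "consumer_type"
is NOT an existing field in the allocations payload; this is purely
illustrative, with $-placeholders for real values:

{
    "allocations": [
        {"resource_provider": {"uuid": "$SOURCE_RP_UUID"},
         "resources": {"VCPU": 4, "MEMORY_MB": 8192}}
    ],
    "project_id": "$PROJECT_ID",
    "user_id": "$USER_ID",
    "consumer_type": "migration"
}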

# Help Wanted

Another takeaway from summit is that we need, where possible,
benchmarking info from people who are making the transition from old
methods of scheduling to the newer allocation_candidate driven modes.
While detailed numbers will be most useful, even anecdotal summaries
of "woot it's way better" or "hmmm, no it seems worse" are useful.

# Docs

There's an effort in progress to enhance the placement docs:

https://review.openstack.org/#/q/topic:bp/placement-doc-enhancement-queens

This is great to see. Docs need continuous refactoring; they are
pretty much impossible to get perfect in one go.

# Main Themes

## Nested Providers

There's a lot of code on this topic

https://review.openstack.org/#/q/topic:bp/nested-resource-providers

on both sides of the HTTP divide.

Related are granular resource requests, which allow, in part, traits on
specific nested providers (pairing a resource class with a trait, for
example). Some code for this is in place but the actual implementation
is waiting:

https://review.openstack.org/#/q/topic:bp/granular-resource-requests

Those two topics, plus

https://review.openstack.org/#/q/topic:accumulated_nits

tie lots of things together.

## Alternate Hosts

Having the scheduler request and use alternate hosts is getting close:

https://review.openstack.org/#/q/topic:bp/return-alternate-hosts

## Migration allocations

Do allocation "doubling" using the migration uuid as the consumer for
one half. This is also very close:

https://review.openstack.org/#/c/507638/

The related work to allow changing multiple allocations in one POST
needs review. It is at the top of a stack which cleans up the allocations
representation:

   https://review.openstack.org/#/c/500073/

# Other

* https://review.openstack.org/#/c/522002/
  skip authentication on root URI
  (good to get this one in soon, as it is aligned with all the version
  discovery work)

* https://review.openstack.org/#/q/topic:bp/placement-osc-plugin
  Build the placement osc plugin

* https://review.openstack.org/#/q/topic:bug/1702420
  These are fixes for the combination of resource providers being
  wrong when shared providers are involved. This series may have been
  superseded by the grand refactoring; if so, it should be abandoned.

* https://review.openstack.org/#/c/508555/
  Re-use existing ComputeNode on ironic rebalance

* https://review.openstack.org/#/c/511936/
  Neutron's placement client

* https://review.openstack.org/#/c/506175/
  get_inventory for vmware driver

* https://review.openstack.org/#/c/518223/
  set accept to application/json if accept not set

* https://review.openstack.org/#/c/521639/
  cache-related headers for placement

* https://review.openstack.org/#/q/topic:bp/request-traits-in-nova
  request traits in nova

* https://review.openstack.org/#/c/513041/
  Extract instance allocation removal code

* https://review.openstack.org/#/c/493865/
  cover migration cases with functional tests

* https://review.openstack.org/#/c/501252/
  doc: note that custom 

Re: [openstack-dev] [tripleo] configuring qemu.conf using puppet or ansible

2017-11-24 Thread Alex Schultz
On Fri, Nov 24, 2017 at 5:03 AM, Saravanan KR wrote:
> Hello,
>
> For dpdk in ovs2.8, the default ownership of vhost user ports was
> changed from root:root to openvswitch:hugetlbfs. The vhost user
> ports are shared between ovs and libvirt (qemu). More details on BZ
> [1].
>
> The "group" option in /etc/libvirt/qemu.conf [2] needs to be set to
> "hugetlbfs" for vhost ports to be shared between ovs and libvirt. In
> order to configure qemu.conf, I could think of multiple options:
>
> * By using the puppet-libvirt[3] module, but this module alters a lot
> of configuration in qemu.conf, as it tries to rewrite the
> complete qemu.conf file. It may produce a different version of the conf
> file altogether, as we might override the package defaults, depending
> on the package version used.
>

We currently do not use puppet-libvirt and qemu settings are managed
via puppet-nova with augeas[0][1].

> * The other possibility is to configure the qemu.conf file directly using
> the "ini_setting" module, as in [4].
>
> * Considering the move towards ansible, I would prefer it if we could add
> ansible-based configuration alongside docker-puppet for any new
> modules going forward. But I am not sure of the direction.
>

So you could use ansible, provided that the existing settings are not
managed via another puppet module. The problem with mixing both puppet
and ansible is ensuring that only one of them owns the thing being touched.
Since we use augeas in puppet-nova, this should not conflict with the
usage of ini_setting with ansible. Unfortunately libvirt is not
currently managed as a standalone service, so perhaps it's time to
evaluate how we configure it, since multiple services (nova/ovs) need
to factor into its configuration.
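
For reference, the augeas approach from puppet-nova [0][1] amounts to
roughly the following from a shell (a sketch using augtool; it assumes
the stock augeas lens covers /etc/libvirt/qemu.conf and quotes the
value on write):

# -s saves the change back to /etc/libvirt/qemu.conf
augtool -s set /files/etc/libvirt/qemu.conf/group hugetlbfs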

Thanks,
-Alex

[0] 
https://github.com/openstack/puppet-nova/blob/30f9d47ec43519599f63f8a6f8da43b7dcb86242/manifests/compute/libvirt/qemu.pp
[1] 
https://github.com/openstack/puppet-nova/blob/9b98e3b0dee5f103c9fa32b37ff1a29df4296957/manifests/migration/qemu.pp

> I would prefer some feedback before proceeding with an approach.
>
> Regards,
> Saravanan KR
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1515269
> [2]  https://github.com/libvirt/libvirt/blob/master/src/qemu/qemu.conf#L412
> [3] https://github.com/thias/puppet-libvirt
> [4] https://review.openstack.org/#/c/522796/1/manifests/profile/base/dpdk.pp
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Booting IPA from cinder: Was: Summary of ironic sessions from Sydney

2017-11-24 Thread Julia Kreger
Greetings Michael,

I believe it would need to involve multiple machines at the same time.

I guess there are two different approaches that I think _could_ be
taken to facilitate this:

1) Provide a facility to use a specific volume as the "golden volume"
to boot up for IPA, and then initiate copies of that volume. The
downside that I see is the act of either copying the volume, or
presenting a snapshot of it that will be deleted a little later. I
think that is really going to depend on the backend, and whether the
backend can handle it or not. :\

2) The other possibility I'm wondering about is booting machines from
a read-only "golden volume", with a bootloader and a ramdisk that ends
up switching root over to a file stored on that filesystem, mounted
via loopback. It seems evil, but I think it should work
just fine, and it does not involve as much network traffic as copying all
of the contents of a huge ramdisk over the wire into RAM. Of course,
then the problem is keeping kernels/drivers in sync...
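
Roughly, the ramdisk side of #2 would do something like the following;
the paths, the label and the tmpfs detail are all hypothetical:

# hypothetical initramfs steps
mount -o ro LABEL=ipa-golden /golden             # shared read-only volume
mount -o ro,loop /golden/ipa-rootfs.img /sysroot # per-boot loopback root
mount -t tmpfs tmpfs /sysroot/tmp                # some writable scratch space
exec switch_root /sysroot /sbin/init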

I guess the question becomes: what is more appealing from an operator
standpoint, in terms of care and feeding? I suspect #2 would involve
some very specific ramdisk modifications, which may not lower the
hurdle currently encountered when trying to iterate on drivers faster.

I can't help but wonder if the headache is building an image with dib,
uploading it, then updating the node driver_info, and then deploying
to that node again.

Maybe if we could have dib write a fully formed image out to an iSCSI
target... that might make the world moderately happy?

The more I think about it, the more I like #1, from a simplicity
standpoint, as something we could iterate upon. We would just likely
have to classify the feature as "very experimental", stress the
operational behavior issues that would/could arise, and maybe at some
point provide some tunable behavior settings as we better understand
needs.

-Julia

On Wed, Nov 22, 2017 at 6:45 PM, Michael Still  wrote:
> Thanks for this summary. I'd say the cinder-booted IPA is definitely of
> interest to the operators I've met. Building new IPAs, especially when
> trying to iterate on which drivers are needed, is a pain, so being able to
> iterate faster would be very useful. That said, I guess this implies booting
> more than one machine off a volume at once?
>
> Michael
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hyper-v] Hyper-V Support Will be Removed Forever ?

2017-11-24 Thread Adam Heczko
Regarding these benchmarks, I honestly don't think that the measurement
results are objective enough.
In my past experience with Hyper-V, Microsoft's hypervisor heavily
uses block-layer caching, also for guest read/write operations.
AFAIK this is very different from Linux+KVM or VMware, where there is no
caching for VM guests.

On Fri, Nov 24, 2017 at 4:46 PM, Alessandro Pilotti <
apilo...@cloudbasesolutions.com> wrote:

> Hyper-V support in OpenStack is alive and well; see for example this blog
> series comparing KVM and Hyper-V: [1].
>
> The fact that SUSE / HPE might or might not support it is just a matter
> of commercial choices unrelated to the upstream projects (which are what
> matter in this ML). Other vendors (e.g. Red Hat, Mirantis, Canonical) have
> partnerships with us (Cloudbase) for Hyper-V commercial support.
>
> Cheers,
>
> Alessandro
>
> [1] https://cloudbase.it/openstack-newton-benchmarking-part-5/
>
> On 24 Nov 2017, at 17:19, Vahric MUHTARYAN  wrote:
>
> Hello,
>
>
>
> We are using HPE Helion OpenStack. For a long time now they have been
> removing Hyper-V support from their distro. We started to discuss this with
> SUSE, and I believe everybody knows HPE HOS and SUSE OpenStack are being
> merged and will become a single product; we learned from SUSE that they are
> also stopping support for Hyper-V.
>
> I know cloudbase.it is working hard to port many things, but I would like
> to know: will Hyper-V hypervisor support really be removed from OpenStack
> forever?
>
>
>
> Regards
>
> VM
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-13, November 25 - December 1

2017-11-24 Thread Sean McGinnis
Development Focus
-

We are coming up on the Queens-2 milestone the week of December 4. Please be
aware of the many project-specific deadlines.

General Information
---

Membership freeze coincides with milestone 2 [0]. This means projects that have
not done a release yet must do so for the next two milestones to be included in
the Queens release.

[0] https://releases.openstack.org/queens/schedule.html#q-mf

We still have a few projects following cycle-with-milestones that have not done
a Queens-1 release:

congress-dashboard
freezer[-web-ui]
searchlight[-ui]
senlin

There are quite a few projects that have not responded to the policy-in-code
[1] and split-tempest [2] Queens series goals. Just a reminder that teams
should respond to these goals, even if they do not trigger any work for their
specific project.

[1] https://governance.openstack.org/tc/goals/queens/policy-in-code.html
[2] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html

Upcoming Deadlines & Dates
--

Queens-2 Milestone: December 7
Rocky PTG in Dublin: Week of February 26, 2018

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hyper-v] Hyper-V Support Will be Removed Forever ?

2017-11-24 Thread Alessandro Pilotti
Hyper-V support in OpenStack is alive and well; see for example this blog
series comparing KVM and Hyper-V: [1].

The fact that SUSE / HPE might or might not support it is just a matter of
commercial choices unrelated to the upstream projects (which are what matter in
this ML). Other vendors (e.g. Red Hat, Mirantis, Canonical) have partnerships
with us (Cloudbase) for Hyper-V commercial support.

Cheers,

Alessandro

[1] https://cloudbase.it/openstack-newton-benchmarking-part-5/

On 24 Nov 2017, at 17:19, Vahric MUHTARYAN wrote:

Hello,

We are using HPE Helion OpenStack. For a long time now they have been removing
Hyper-V support from their distro. We started to discuss this with SUSE, and I
believe everybody knows HPE HOS and SUSE OpenStack are being merged and will
become a single product; we learned from SUSE that they are also stopping
support for Hyper-V.

I know cloudbase.it is working hard to port many things, but I would like to
know: will Hyper-V hypervisor support really be removed from OpenStack forever?

Regards
VM
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [hyper-v] Hyper-V Support Will be Removed Forever ?

2017-11-24 Thread Vahric MUHTARYAN
Hello,

 

We are using HPE Helion OpenStack. For a long time now they have been removing
Hyper-V support from their distro. We started to discuss this with SUSE, and I
believe everybody knows HPE HOS and SUSE OpenStack are being merged and will
become a single product; we learned from SUSE that they are also stopping
support for Hyper-V.

I know cloudbase.it is working hard to port many things, but I would like to
know: will Hyper-V hypervisor support really be removed from OpenStack forever?

 

Regards

VM

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Removing internet access from unit test gates

2017-11-24 Thread Jens Harbott
2017-11-21 15:04 GMT+00:00 Jeremy Stanley :
> On 2017-11-21 09:28:20 +0100 (+0100), Thomas Goirand wrote:
> [...]
>> The only way that I see going forward, is having internet access
>> removed from unit tests in the gate, or probably just the above
>> variables set.
> [...]
...
> Removing network access from the machines running these jobs won't
> work, of course, because our job scheduling and execution service
> needs to reach them over the Internet to start jobs, monitor
> progress and collect results.

I have tested a variant that would accommodate this: run the tests in a
new network namespace that has no network configuration at all. There
are still some issues with this:

- One needs sudo access in order to run something similar to "ip netns
exec ns1 tox ...". This could still be set up in a way such that the
tox user/environment itself does not need sudo.
- I found some unit tests that do need to talk to localhost, so one
still has to set up lo with 127.0.0.1/32.
- The most important issue that prevents me from successfully running
tox currently, though, is that even if I prepare the venv beforehand
with "tox -epy27 --notest", the next tox run will still want to
reinstall the project itself, and most projects have something like

install_command =
pip install -U
-c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
{opts} {packages}

in their tox.ini, which will obviously fail without network
connectivity. Running something like

sudo ip netns exec ns1 su -c ".tox/py27/bin/stestr run" $USER

does work rather well though. Does anyone have an idea how to force
tox to just run the tests without doing any installation steps? Then I
guess one could come up with a small wrapper to handle the other
steps.
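
For completeness, the full sequence I am experimenting with looks
roughly like this (the namespace name is illustrative):

sudo ip netns add testns
# bringing lo up provides 127.0.0.1 for the tests that need localhost
sudo ip netns exec testns ip link set lo up
# build the venv while the network is still reachable
tox -epy27 --notest
sudo ip netns exec testns su -c ".tox/py27/bin/stestr run" $USER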

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Need opinion: request zero root disk for boot-from-volume instances

2017-11-24 Thread Shewale, Bhagyashri
Hi Matt

Thank you :)

Regards,
Bhagyashri Shewale

-Original Message-
From: Matt Riedemann [mailto:mriede...@gmail.com] 
Sent: Thursday, November 23, 2017 9:05 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Need opinion: request zero root disk for 
boot-from-volume instances

On 11/22/2017 12:51 AM, Shewale, Bhagyashri wrote:
> Hi nova devs,
> 
> Just wanted to ask about the "request zero root disk for
> boot-from-volume instances" patches [2] [3].
>
> When a user boots an instance using a bootable volume and a flavor
> with non-zero disk_gb, the flavor's disk_gb is still counted, which
> makes the calculation of host disk space incorrect.
> 
> Request:
>
> What are the recommendations from the community to resolve the LP bug [1]?
>
> Through multiple discussions on the patches, two options emerged to
> address this issue:
>
> Option 1:
>
> Merge patches [2] and [3].
>
> Option 2:
>
> Operators will need to create a new flavor with root_disk=0 for BFV
> and ask users to use this new flavor if they want to boot the instance
> from volume.
> (The release notes may need to be updated in this case.)
>
> Could you please give your opinion on the above two options?
> 
> Reference:
> 
> [1]: https://bugs.launchpad.net/nova/+bug/1469179
> 
> [2]: https://review.openstack.org/#/c/428481
> 
> [3]: https://review.openstack.org/#/c/428505
> 
> Regards,
> 
> Bhagyashri Shewale
> 

You might be interested in this spec I started in Queens:

https://review.openstack.org/#/c/511965/

It's based on some discussions we've had related to this issue since the Boston 
summit for Pike. It needs work but might be helpful.
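
In the meantime, option 2 from the original mail amounts to something
like this (the flavor name and sizes are illustrative):

openstack flavor create --vcpus 2 --ram 4096 --disk 0 m1.bfv
openstack server create --flavor m1.bfv --volume my-bootable-volume \
    --network private bfv-instance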

-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [I18n] New Language Team for Esperanto

2017-11-24 Thread Frank Kloeker

Hello,

I kindly want to announce that we have set up a new language on our
translation platform: Esperanto [1].

Please welcome our newest team member in this context: Georg Hennemann.
He will take care of the concerns of this particular language at
OpenStack. Please support him in his future work and in building a
language team.


many thanks

Frank
PTL I18n

[1] https://en.wikipedia.org/wiki/Esperanto


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] configuring qemu.conf using puppet or ansible

2017-11-24 Thread Saravanan KR
Hello,

For dpdk in ovs2.8, the default ownership of vhost user ports was
changed from root:root to openvswitch:hugetlbfs. The vhost user
ports are shared between ovs and libvirt (qemu). More details on BZ
[1].

The "group" option in /etc/libvirt/qemu.conf [2] needs to be set to
"hugetlbfs" for vhost ports to be shared between ovs and libvirt. In
order to configure qemu.conf, I could think of multiple options:

* By using the puppet-libvirt[3] module, but this module alters a lot
of configuration in qemu.conf, as it tries to rewrite the
complete qemu.conf file. It may produce a different version of the conf
file altogether, as we might override the package defaults, depending
on the package version used.

* The other possibility is to configure the qemu.conf file directly
using the "ini_setting" module, as in [4].

* Considering the move towards ansible, I would prefer it if we could
add ansible-based configuration alongside docker-puppet for any new
modules going forward. But I am not sure of the direction. (A sketch of
the target end state follows below.)
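
Whichever tool ends up owning qemu.conf, the end state to verify is
roughly this (a sketch; the vhost socket path is only an example):

grep '^group' /etc/libvirt/qemu.conf   # expect: group = "hugetlbfs"
ls -l /var/run/openvswitch/vhu*        # expect openvswitch:hugetlbfs ownership
systemctl restart libvirtd             # pick up the qemu.conf change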

I would prefer some feedback before proceeding with an approach.

Regards,
Saravanan KR

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1515269
[2]  https://github.com/libvirt/libvirt/blob/master/src/qemu/qemu.conf#L412
[3] https://github.com/thias/puppet-libvirt
[4] https://review.openstack.org/#/c/522796/1/manifests/profile/base/dpdk.pp

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee Status update, November 24th

2017-11-24 Thread Thierry Carrez
Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of all open topics (updated twice a week) at:

https://wiki.openstack.org/wiki/Technical_Committee_Tracker

If you are working on something (or plan to work on something) that is
not on the tracker, feel free to add to it !


== Recently-approved changes ==

* Approved LOCI as new official OpenStack project [1][2]
* Add supports-accessible-upgrade tag to ironic [3]
* Goal updates: freezer, searchlight
* New repos: puppet-openstack-guide, heat-tempest-plugin

[1] https://review.openstack.org/#/c/513851/
[2] https://review.openstack.org/#/c/516005/
[3] https://review.openstack.org/#/c/516671/

Another limited-activity week, with the US shutting down for
Thanksgiving. The most significant change is of course the addition of a
new official OpenStack project, LOCI. LOCI is a packaging project,
providing tooling that can output lightweight, OCI-compliant container
images of OpenStack components:
https://governance.openstack.org/tc/reference/projects/loci.html

== Voting in progress ==

Ian Wienand's update to the PTI to remove mentions of
releasenotes/requirements.txt has reached majority and will be approved
by Tuesday next week unless there are late objections posted:

https://review.openstack.org/521398

Graham Hayes proposed the addition of the tc:approved-release tag to
Designate, to match its recent addition to an add-on trademark program.
This change is needed to comply with the language in the Foundation
bylaws, and is missing a couple of votes:

https://review.openstack.org/521587

My proposal to recenter the Stable policy on OpenStack cloud components
is also missing a couple of votes:

https://review.openstack.org/521049


== Under review ==

It's time to propose and review community-wide goals for the Rocky
cycle. Kendall Nelson posted a proposal around Storyboard Migration.
Please review it at:

https://review.openstack.org/513875

Matt Treinish proposed an update to the Python PTI for tests to be
specific and explicit. Please review at:

https://review.openstack.org/519751

It should not be necessary to wait until we have 5 items in our "help
wanted" list, nor require the presence of 5 elements at all times.
Please review the list rename at:

https://review.openstack.org/520619

The Mogan team application is still up for review. General feedback from
the Summit forum session was that the overlap and complementarity
between Nova, Ironic and Mogan make for a complex landscape, and the
strategy going forward needs to be clarified before we can approve this
application. It is therefore likely that it will be delayed until Rocky.
Please comment at:

https://review.openstack.org/#/c/508400/


== TC member actions for the coming week(s) ==

Doug should update or abandon the "champions and stewards" top help
wanted addition (https://review.openstack.org/510656)


== Office hours ==

To be more inclusive of all timezones and more mindful of people for
whom English is not the primary language, the Technical Committee
dropped its dependency on weekly meetings. So that you can still get
hold of TC members on IRC, we instituted a series of office hours on
#openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

For the coming week, I expect we'll discuss stale governance reviews and
start brainstorming about potential Rocky goals.

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-24 Thread Spyros Trigazis
Hi Sergio,

On 22 November 2017 at 20:37, Sergio Morales Acuña  wrote:
> Dear Spyros:
>
> Thanks for your answer. I'm moving my cloud to Pike!
>
> The problems I encountered were with the TCP listeners for etcd's
> LoadBalancer and the "curl -sf" from the nodes to the etcd LB (I had to
> add a -k).

[1] [2] The certs are passed to curl. Is there another issue that makes you need -k?

[1] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/kubernetes/fragments/network-config-service.sh?h=stable%2Focata#n50
[2] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/network-config-service.sh?h=stable/ocata#n56
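
Roughly, what those fragments do is the following; the cert paths here
are illustrative, see the linked scripts for the real ones:

curl -sf --cacert /etc/kubernetes/certs/ca.crt \
     --cert /etc/kubernetes/certs/client.crt \
     --key /etc/kubernetes/certs/client.key \
     "https://$ETCD_LB:2379/v2/keys/coreos.com/network/config"

With the certs passed explicitly like this, -k should not be needed.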

>
> I'm using Kolla Binary with CentOS 7, so I also had problems with the kubernetes
> python libraries (they needed updates to be able to handle IP addresses in
> certificates).

I think this problem is fixed in Ocata [3]; what did you have to change?

[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/kubernetes/fragments/make-cert.sh?h=stable%2Focata

>
> Cheers and thanks again.

If you discover any bugs please report them, and if you need anything feel
free to ask here or in #openstack-containers.

Cheers,
Spyros

>
>
> On Wed, 22 Nov 2017 at 5:30, Spyros Trigazis wrote:
>>
>> Hi Sergio,
>>
>> On 22 November 2017 at 03:31, Sergio Morales Acuña wrote:
>> > I'm using OpenStack Ocata and trying Magnum.
>> >
>> > I encountered a lot of problems, but I have been able to solve many of them.
>>
>> Which problems did you encounter? Can you be more specific? Can we solve
>> them
>> for everyone else?
>>
>> >
>> > Now I'm curious about some aspects of Magnum:
>> >
>> > Do I need a newer version of Magnum to run K8S 1.7? Or do I just need to
>> > create a custom fedora-atomic-27? What about RBAC?
>>
>> Since Pike, magnum is running kubernetes in containers on fedora 26.
>> In fedora atomic 27, kubernetes, etcd and flannel are removed from the
>> base image, so running them in containers is the only way.
>>
>> For RBAC you need 1.8, and with Pike you can get it just by changing
>> one parameter.
>>
>> >
>> > Is anyone here using Magnum on a daily basis? If yes, what version are you
>> > using?
>>
>> In our private cloud at CERN we have ~120 clusters with ~450 VMs; we are
>> running Pike and we use only the fedora atomic drivers.
>>
>> http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
>> Vexxhost is running magnum:
>> https://vexxhost.com/public-cloud/container-services/kubernetes/
>> Stackhpc:
>> https://www.stackhpc.com/baremetal-cloud-capacity.html
>>
>> >
>> > Which driver is, in your opinion, better: Atomic or CoreOS? Do I need to
>> > upgrade Magnum to follow K8S's crazy changes?
>>
>> Atomic is maintained and supported much more than CoreOS in magnum.
>> There wasn't much interest from developers for CoreOS.
>>
>> >
>> > Any tips on the CaaS problem? Is Magnum Ocata too old for this world?
>>
>> Magnum Ocata is not too old, but it eventually will be, since it lacks the
>> capability of running kubernetes in containers. Pike allows this option
>> and can keep up with kubernetes easily.
>>
>> >
>> > Where can I find updated articles about the state of Magnum and its
>> > future?
>>
>> I did the project update presentation for magnum at the Sydney summit.
>> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>>
>> Cheers,
>> Spyros
>>
>> >
>> > Cheers
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev