Re: [openstack-dev] [tripleo] Newton End-Of-Life (EOL) next month (reminder #1)

2017-09-26 Thread Tony Breeds
On Tue, Sep 26, 2017 at 10:31:59PM -0700, Emilien Macchi wrote:
> On Tue, Sep 26, 2017 at 10:17 PM, Tony Breeds  wrote:
> > With that in mind I'd suggest that your review isn't appropriate for backport.
> 
> If we have to give up backports that help customers get
> production-ready environments, I would consider giving up the stable
> policy tag, which probably doesn't fit projects like installers. In
> the real world, users don't deploy master or Pike (often not even
> Ocata); they are still on Liberty and, most of the time, Newton.

I agree the stable policy doesn't map very well to deployment projects
and that's something I'd like to address.  I admit I'm not certain *how*
to address it but it almost certainly starts with a discussion like this
;P

I've proposed a forum session to further this discussion; even if that
doesn't happen, there's always the hallway track :)
 
> What Giulio is proposing probably comes from the real world, the field,
> people who actually manage OpenStack at scale in real environments (not
> in devstack from master). If we can't have this code in-tree, we'll
> probably carry this patch downstream (which is IMHO bad because of the
> maintenance burden and lack of CI). In that case, I'll vote to give up
> stable:follows-policy so we can do what we need.

Rather than give up on the stable:follows-policy tag, it is possibly
worth looking at which portions of tripleo make that assertion.

In this specific case, there isn't anything in the bug that indicates
it comes from a user report, which is all the stable team has to go on
when making these types of decisions.

Yours Tony.




Re: [openstack-dev] [tripleo] Newton End-Of-Life (EOL) next month (reminder #1)

2017-09-26 Thread Emilien Macchi
On Tue, Sep 26, 2017 at 10:17 PM, Tony Breeds  wrote:
> With that in mind I'd suggest that your review isn't appropriate for backport.

If we have to give up backports that help customers get
production-ready environments, I would consider giving up the stable
policy tag, which probably doesn't fit projects like installers. In
the real world, users don't deploy master or Pike (often not even
Ocata); they are still on Liberty and, most of the time, Newton.

What Giulio is proposing probably comes from the real world, the field,
people who actually manage OpenStack at scale in real environments (not
in devstack from master). If we can't have this code in-tree, we'll
probably carry this patch downstream (which is IMHO bad because of the
maintenance burden and lack of CI). In that case, I'll vote to give up
stable:follows-policy so we can do what we need.
-- 
Emilien Macchi



Re: [openstack-dev] [tripleo] Newton End-Of-Life (EOL) next month (reminder #1)

2017-09-26 Thread Tony Breeds
On Wed, Sep 27, 2017 at 06:55:13AM +0200, Giulio Fidente wrote:
> On 09/26/2017 06:58 PM, Emilien Macchi wrote:
> > Newton is officially EOL next month:
> > https://releases.openstack.org/index.html#release-series
> > 
> > As an action from our weekly meeting, we decided to accelerate the
> > reviews for stable/newton before it's too late.
> > This email is a reminder and a last reminder will be sent out before
> > we EOL for real.
> > 
> > If you need any help to get backport merged, please raise it here or
> > ask on IRC as usual.
> 
> I was thinking of backporting this [1] into both ocata and newton.
> 
> It should be relatively safe as it is basically only a change to a
> default value which we'd like to make more production-friendly.

According to https://releases.openstack.org/ both Ocata and Newton are
in Phase II which means "Only critical bugfixes and security patches are
acceptable"
(https://docs.openstack.org/project-team-guide/stable-branches.html#support-phases).
The bug associated with that review indicates it's a medium priority.

With that in mind I'd suggest that your review isn't appropriate for
backport.

Yours Tony.




Re: [openstack-dev] [tripleo] Newton End-Of-Life (EOL) next month (reminder #1)

2017-09-26 Thread Giulio Fidente
On 09/26/2017 06:58 PM, Emilien Macchi wrote:
> Newton is officially EOL next month:
> https://releases.openstack.org/index.html#release-series
> 
> As an action from our weekly meeting, we decided to accelerate the
> reviews for stable/newton before it's too late.
> This email is a reminder and a last reminder will be sent out before
> we EOL for real.
> 
> If you need any help to get backport merged, please raise it here or
> ask on IRC as usual.

I was thinking of backporting this [1] into both ocata and newton.

It should be relatively safe as it is basically only a change to a
default value which we'd like to make more production-friendly.

1. https://review.openstack.org/#/c/506330/
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] vGPUs support for Nova

2017-09-26 Thread Mooney, Sean K


> -Original Message-
> From: Sahid Orentino Ferdjaoui [mailto:sferd...@redhat.com]
> Sent: Tuesday, September 26, 2017 1:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] vGPUs support for Nova
> 
> On Mon, Sep 25, 2017 at 04:59:04PM +, Jianghua Wang wrote:
> > Sahid,
> >
> > Just to share some background: XenServer doesn't expose vGPUs as mdev
> > or pci devices.
> 
> That does not make any sense. There is a physical device (PCI) which
> provides functions (vGPUs). These functions are exposed through the mdev
> framework. What you need is the mdev UUID related to a specific vGPU,
> and I'm sure that XenServer is going to expose it. Something which
> XenServer may not expose is the NUMA node where the physical device is
> plugged in, but in such a situation you could still use sysfs.
[Mooney, Sean K] This is implementation specific. AMD supports virtualizing
their GPUs using SR-IOV
(http://www.amd.com/Documents/Multiuser-GPU-White-Paper.pdf).
In that case you can use the existing PCI pass-through support without any
modification. For Intel and NVIDIA GPUs we need specific hypervisor
support, as the device partitioning is done in the host GPU driver rather
than via SR-IOV. There are two levels of abstraction that we must keep
separate:
1. how the hardware supports configuration and enumeration of the
virtualized resources (AMD in hardware via SR-IOV, Intel/NVIDIA via a
driver/software manager);
2. how the hypervisor reports the vGPUs to OpenStack and other clients.

In the AMD case I would not expect any hypervisor to have mdevs associated
with the SR-IOV VF, as that is not the virtualization model they have
implemented. In the Intel GVT case you will have mdevs, but the virtual
GPUs are not represented on the PCI bus, so we should not model them as
PCI devices.

Some more comments below.
> 
> > I proposed a spec about one year ago to make fake pci devices so that
> > we can use the existing PCI mechanism to cover vGPUs. But that's not a
> > good design and got strong objections. After that, we switched to using
> > the resource providers by following the advice from the core team.
> >
> > Regards,
> > Jianghua
> >
> > -Original Message-
> > From: Sahid Orentino Ferdjaoui [mailto:sferd...@redhat.com]
> > Sent: Monday, September 25, 2017 11:01 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] vGPUs support for Nova
> >
> > On Mon, Sep 25, 2017 at 09:29:25AM -0500, Matt Riedemann wrote:
> > > On 9/25/2017 5:40 AM, Jay Pipes wrote:
> > > > On 09/25/2017 05:39 AM, Sahid Orentino Ferdjaoui wrote:
> > > > > There is a desire to expose the vGPUs resources on top of
> > > > > Resource Provider which is probably the path we should be going
> > > > > in the long term. I was not there for the last PTG and you
> > > > > probably already made a decision about moving in that direction
> > > > > anyway. My personal feeling is that it is premature.
> > > > >
> > > > > The nested Resource Provider work is not yet feature-complete
> > > > > and requires more reviewer attention. If we continue in the
> > > > > direction of Resource Provider, it will need at least 2 more
> > > > > releases to expose the vGPUs feature, and that without the
> > > > > support of NUMA, and with the feeling of pushing something
> > > > > which is not stable/production-ready.
[Mooney, Sean K] Not all GPUs have NUMA affinity. Intel integrated GPUs do
not: they have dedicated eDRAM on the processor die, so their memory
accesses never leave the processor package and they do not have NUMA
affinity. I would assume the same is true for AMD integrated GPUs, so only
discrete GPUs will have NUMA affinity.
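
Where a discrete GPU's NUMA placement does matter, the node can be read
from the host's sysfs as Sahid suggests above. A minimal sketch, with the
PCI address as a placeholder:

$ cat /sys/bus/pci/devices/0000:3b:00.0/numa_node
1

A value of -1 means the platform reports no NUMA affinity for the device,
matching the integrated-GPU case described here.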
> > > > >
> > > > > It seems safer to first have the Resource Provider work well
> > > > > finalized/stabilized to be production-ready. Then on top of
> > > > > something stable we could start to migrate our current virt
> > > > > specific features like NUMA, CPU Pinning, Huge Pages and
> finally PCI devices.
> > > > >
> > > > > I'm talking about PCI devices in general because I think we
> > > > > should implement the vGPU on top of our /pci framework which is
> > > > > production ready and provides the support of NUMA.
> > > > >
> > > > > The hardware vendors building their drivers using mdev and the
This is vendor specific: Intel uses mdevs for Intel GVT (KVMGT/XenGT).
AMD does not use mdevs; it uses SR-IOV
(http://www.amd.com/Documents/Multiuser-GPU-White-Paper.pdf).

AMD is simple because you just do a PCI passthrough of the SR-IOV VF and
you are done. No explicit support is needed in the hypervisor.
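
As a rough sketch of that model, the VF could be exposed through Nova's
existing PCI pass-through options; the vendor/product IDs below are
placeholders, not real AMD VF values:

# nova.conf on the compute node
[pci]
passthrough_whitelist = {"vendor_id": "1002", "product_id": "6929"}
alias = {"vendor_id": "1002", "product_id": "6929", "name": "gpu-vf", "device_type": "type-VF"}

The flavor then requests the device via an extra spec:

$ openstack flavor set gpu.small --property "pci_passthrough:alias"="gpu-vf:1"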

Looking at
https://images.nvidia.com/content/grid/pdf/GRID-vGPU-User-Guide.pdf
section 5.3.3.2, when we query a specific physical GPU for the supported
parameters via Xen, it reports the PCI address of that physical GPU as
part of the response


Re: [openstack-dev] [nova] reset key pair during rebuilding

2017-09-26 Thread Matt Riedemann

On 9/23/2017 8:58 AM, LIU Yulong wrote:

Hi nova developers,

This mail proposes reconsidering key pair resetting for instances.
The nova Queens PTG discussion is here:
https://etherpad.openstack.org/p/nova-ptg-queens (L498). There are
now two proposals.


1. SPEC 1: https://review.openstack.org/#/c/375221/ started by me
(liuyulong) since Sep 2016.


    This spec will allow setting a new key_name for the instance via the
rebuild API. That's a very simple and well-understood approach:


  * It is consistent with other rebuild API properties, such as name,
imageRef, metadata, adminPass, etc.
  * The rebuild API is essentially `recreating` the instance, so it is
the right place to update the key pair. For a keypair-login-only VM,
this is the key point.
  * It does not involve other APIs like reboot/unshelve, etc.


This was one of the issues I brought up in IRC: if we just implemented
this for the rebuild API, then someone could also ask that we do it for
things like reboot, cold migrate/resize, unshelve, etc. Anything that
involves re-creating the guest.



  * Easy to use, only one API.


Until someone says we should also do it for the other APIs, as noted above.



By the way, here is the patch (https://review.openstack.org/#/c/379128/)
which implements this spec. It has been open for more than a year too.


It's been open because the spec was never approved. Just a procedural issue.



2. SPEC 2: https://review.openstack.org/#/c/506552/ proposed by
Kevin_zheng.


This spec proposes adding a new API for updating an instance's key pair.
Its one foreseeable advantage is that it enables key injection into a
running instance.


But it may cause some issues:

  * This approach needs to update the instance key pair first (one step,
one API call) and then do a reboot/rebuild or any action causing the
VM to restart (second step, another API call). Firstly, this is
wasteful: it uses two API calls. Secondly, if the key pair update is
done but the reboot is not, the result may be an inconsistency between
the instance's key pair in the DB and the key inside the guest VM.
The cloud user may be confused about which key should be used to log in.


1. I don't think multiple API calls is a problem. Any GUI or
orchestration tool can stitch these APIs together into what appears to be
a single operation for the end user. Furthermore, with multiple options
for what to do after the instance.key_name is updated, something like
a GUI could present the user with the option of picking whether they want
to reboot or rebuild after the key is updated.


2. An orchestrator or GUI would make sure that both APIs are called. A
user that is updating the key_name should realize they need to make
another API call to enable it. This would all be in the API reference
documentation, CLI help, etc., which anyone doing this should read and
understand.
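
For illustration only: the key-update call is the subject of SPEC 2 and
did not exist at the time of writing, so the first command below is
hypothetical; the second is the existing reboot API.

$ openstack server set --key-name new-key my-server   # hypothetical SPEC 2 call
$ openstack server reboot my-server                   # guest picks up the key on boot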



  * For the second step (reboot), there is a strong constraint: the
cloud-init config needs to be set to run on every boot (see the sketch
below). But if all of a cloud platform's images set cloud-init to run
per deployment, then to achieve this new API's goal the entire
platform's images need updating. Changing every image's cloud-init
config from run-per-deployment to run-every-boot is a huge upgrade
effort. And even that still cannot solve the inconsistency between the
DB keypair and the guest key. For instance, if the running VM is based
on a run-once cloud-init image: 1. create an image from this VM; 2.
change the keypair of the new VM; 3. a reboot still cannot work
because of the old per-deployment config.
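
A sketch of what run-every-boot means here: cloud-init modules can be
given an explicit frequency in cloud.cfg. Assuming the key-handling
module in question is cc_ssh (and an illustrative drop-in path),
something like:

# /etc/cloud/cloud.cfg.d/99-keys.cfg
cloud_init_modules:
 - [ssh, always]

would re-run the module on every boot instead of once per instance.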


This is per-cloud configuration, and as such it should be documented
with the cloud documentation. I can't say what is more "normal" for
OpenStack clouds to be doing with cloud-init here; that would be a good
question for the operators community (I've cross-posted to the ops ML).



  * For the other second step (rebuild): if users have to rebuild, or if
rebuild is the only way to deploy the new key, we are back to SPEC 1.
Two steps for keypair updating are not good; why not directly use
SPEC 1?


Because of the points made above: if I can simply reboot my instance to
use the new keypair rather than rebuild it, that is much better. Plus it
doesn't just limit us to rebuild; the new key could also be used after
unshelving or cold migrating the instance.



  * Another perspective is that SPEC 2 expands the functionality of
reboot. What if one day a user wants to change password/name/personality
at a reboot?


Kevin's spec does not propose any change to the reboot (or rebuild) APIs.


  * Cloud user may still 

Re: [openstack-dev] [octavia] haproxy fails to receive datagram

2017-09-26 Thread Yipei Niu
Hi, Michael,

The instructions are listed as follows.

First, create a net1.
$ neutron net-create net1
$ neutron subnet-create net1 10.0.1.0/24 --name subnet1

Second, boot two vms in net1
$ nova boot --flavor 1 --image $image_id --nic net-id=$net1_id vm1
$ nova boot --flavor 1 --image $image_id --nic net-id=$net1_id vm2

Third, log on to the two vms, respectively. Here, take vm1 as an example.
$ MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print
$1}')
$ while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo
nc -l -p 80 ; done&

Fourth, exit the vms and update the default security group shared by the
vms by adding a rule allowing traffic to port 80.
$ neutron security-group-rule-create --direction ingress --protocol tcp
--port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0
$default_security_group
Note: make sure "sudo ip netns exec $qdhcp-net1_id curl -v $vm_ip" works.
In other words, make sure the vms can accept HTTP requests and return
their IPs, respectively.

Fifth, create a lb, a listener, and a pool. Then add the two vms to the
pool as members.
$ neutron lbaas-loadbalancer-create --name lb1 subnet1
$ neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP
--protocol-port 80 --name listener1
$ neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1
--protocol HTTP --name pool1
$ neutron lbaas-member-create --subnet subnet1 --address $vm1_ip
--protocol-port 80 pool1
$ neutron lbaas-member-create --subnet subnet1 --address $vm2_ip
--protocol-port 80 pool1

Finally, try "sudo ip netns qdhcp-net1_id curl -v $VIP" to see whether
lbaas works.

Best regards,
Yipei

On Wed, Sep 27, 2017 at 1:30 AM, Yipei Niu  wrote:

> Hi, Michael,
>
> I think the octavia code is the latest, since I pulled the up-to-date
> octavia repo manually to my server before installation.
>
> Anyway, I ran "sudo ip netns exec amphora-haproxy ip route show table 1"
> in the amphora, and found that the route table exists. The info is listed
> as follows.
>
> default via 10.0.1.1 dev eth1 onlink
>
> I think it may not be the source of the problem.
>
> Best regards,
> Yipei
>


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Rochelle Grober
Clint Byrum wrote:
> Excerpts from Jonathan Proulx's message of 2017-09-26 16:01:26 -0400:
> > On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
> >
> > :OpenStack is big. Big enough that a user will likely be fine with learning
> > :a new set of tools to manage it.
> >
> > New users in the startup sense of new, probably.
> >
> > People with entrenched environments, I doubt it.
> >
> 
> Sorry no, I mean everyone who doesn't have an OpenStack already.
> 
> It's nice and all, if you're a Puppet shop, to get to use the puppet modules.
> But it doesn't bring you any closer to the developers as a group. Maybe a few
> use Puppet, but most don't. And that means you are going to feel like
> OpenStack gets thrown over the wall at you once every
> 6 months.
> 
> > But OpenStack is big. Big enough I think all the major config systems
> > are fairly well represented, so whether I'm right or wrong this
> > doesn't seem like an issue to me :)
> >
> 
> They are. We've worked through it. But that doesn't mean potential users
> are getting our best solution or feeling well integrated into the community.
> 
> > Having common targets (constellations, reference architectures,
> > whatever) so all the config systems build the same things (or a subset
> > or superset of the same things) seems like it would have benefits all
> > around.
> >
> 
> It will. It's a good first step. But I'd like to see a world where
> developers are all well versed in how operators actually use OpenStack.

Hear, hear!  +1000  Take a developer to work during peak operations.

For Walmart, that would be Black Friday/Cyber Monday.
For schools, usually a few days into the new session.
For others, each has a time when things break more.  Having a developer
experience what operators do to predict/avoid/recover/work around the
normal state of operations would help each side understand the macro
workflows.  Those are important, too.  Full stack includes Ops.

< Snark off />

--Rocky



Re: [openstack-dev] [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project

2017-09-26 Thread Jeremy Stanley
On 2017-09-27 09:15:21 +0800 (+0800), Zhenguo Niu wrote:
[...]
> I don't mean there are deficiencies in Ironic. Ironic itself is cool; it
> works well with TripleO, Nova, Kolla, etc. Mogan just wants to be another
> client that schedules workloads on Ironic and provides bare-metal-specific
> APIs for users who seek a way to offer virtual machines and bare metal
> separately, or just a bare metal cloud, without interoperating with other
> compute resources under Nova.
[...]

The short explanation which clicked for me (granted it's probably an
oversimplification, but still) was this: Ironic provides an admin
API for managing bare metal resources, while Mogan gives you a user
API (suitable for public cloud use cases) to your Ironic backend. I
suppose it could have been implemented in Ironic, but implementing
it separately allows Ironic to be agnostic to multiple user
frontends and also frees the Ironic team up from having to take on
yet more work directly.
-- 
Jeremy Stanley




Re: [openstack-dev] [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project

2017-09-26 Thread Zhenguo Niu
Thanks Erik for the response!

I don't mean there are deficiencies in Ironic. Ironic itself is cool; it
works well with TripleO, Nova, Kolla, etc. Mogan just wants to be another
client that schedules workloads on Ironic and provides bare-metal-specific
APIs for users who seek a way to offer virtual machines and bare metal
separately, or just a bare metal cloud, without interoperating with other
compute resources under Nova.

On Wed, Sep 27, 2017 at 8:53 AM, Erik McCormick 
wrote:

> My main question here would be this: If you feel there are deficiencies in
> Ironic, why not contribute to improving Ironic rather than spawning a whole
> new project?
>
> I am happy to take a look at it, and I'm by no means trying to contradict
> your assumptions here. I just get concerned with the overhead and confusion
> that comes with competing projects.
>
> Also, if you'd like to discuss this in detail with a room full of bodies,
> I suggest proposing a session for the Forum in Sydney. If some of the
> contributors will be there, it would be a good opportunity for you to get
> feedback.
>
> Cheers,
> Erik
>
>
> On Sep 26, 2017 8:41 PM, "Matt Riedemann"  wrote:
>
>> On 9/25/2017 6:27 AM, Zhenguo Niu wrote:
>>
>>> Hi folks,
>>>
>>> First of all, thanks to the audience for the Mogan project update in
>>> the TC room during the Denver PTG. Here we would like to get more
>>> suggestions before we apply for inclusion.
>>>
>>> Speaking only for myself, I find the current direction of one
>>> API+scheduler for vm/baremetal/container unfortunate. After container
>>> management moved out to the separate project Zun, bare metal with Nova
>>> and Ironic continues to be a pain point.
>>>
>>> #. API
>>> Only part of the Nova APIs and parameters can apply to baremetal
>>> instances; meanwhile, to stay interoperable with the virt drivers,
>>> bare-metal-specific APIs such as deploy-time RAID or advanced
>>> partitions cannot be included. It's true that we can support various
>>> compute drivers, but the reality is that the support for each
>>> hypervisor is not equal, especially for bare metal in a virtualization
>>> world. But I understand the problems with that, as Nova was designed to
>>> provide compute resources (virtual machines) instead of bare metal.
>>>
>>> #. Scheduler
>>> Bare metal doesn't fit into the model of 1:1 nova-compute to resource,
>>> as nova-compute processes can't be run on the inventory nodes
>>> themselves. That is to say, host aggregates, availability zones and
>>> similar things based on the compute service (host) can't be applied to
>>> bare metal resources. And for grouping like anti-affinity, the
>>> granularity is also not the same as for virtual machines: bare metal
>>> users may want their HA instances kept off the same failure domain, not
>>> just off the same node. In short, we can only get rigid,
>>> resource-class-only scheduling for bare metal.
>>>
>>> And most of the cloud providers in the market offer virtual machines
>>> and bare metal as separate resources, but unfortunately it's hard to
>>> achieve this with one compute service. I hear people are deploying
>>> separate Novas for virtual machines and bare metal, with many
>>> downstream hacks to the single-driver bare metal Nova, but as the
>>> changes to Nova would be massive and possibly invasive to virtual
>>> machines, it seems not practical to upstream them.
>>>
>>> So we created Mogan [1] about one year ago, which aims to offer bare
>>> metal as a first-class resource to users, with a set of
>>> bare-metal-specific APIs and a baremetal-centric scheduler (using the
>>> Placement service). It was like an experimental project at the
>>> beginning, but the outcome makes us believe it's the right way. Mogan
>>> will fully embrace Ironic for bare metal provisioning, and with the RSD
>>> server [2] introduced to OpenStack it will be a new world for bare
>>> metal, as with that we can compose hardware resources on the fly.
>>>
>>> Also, I would like to clarify the overlap between Mogan and Nova. I bet
>>> there must be some users who want one API for compute resource
>>> management because they don't care whether it's a virtual machine or a
>>> bare metal server. The baremetal driver with Nova is still the right
>>> choice for such users to get raw-performance compute resources. On the
>>> contrary, Mogan is for real bare metal users and cloud providers who
>>> want to offer bare metal as a separate resource.
>>>
>>> Thank you for your time!
>>>
>>>
>>> [1] https://wiki.openstack.org/wiki/Mogan
>>> [2] https://www.intel.com/content/www/us/en/architecture-and-technology/rack-scale-design-overview.html
>>>
>>> --
>>> Best Regards,
>>> Zhenguo Niu
>>>
>>>
>>> 

Re: [openstack-dev] [infra][mogan] Need help for replacing the current master

2017-09-26 Thread Zhenguo Niu
Thanks Clark Boylan,

We have frozen the Mogan repo since this mail was sent out, and there's no
need to update the replacement master. So please help out when you get time.

On Wed, Sep 27, 2017 at 8:10 AM, Clark Boylan  wrote:

> On Tue, Sep 26, 2017, at 02:18 AM, Zhenguo Niu wrote:
> > It's very appreciated if you shed some light on what the next steps would
> > be to move this along.
>
> We should schedule a period of time to freeze the Mogan repo and update
> the replacement master (if necessary); then we can either force-push
> that over the existing branch, or push it into a new branch and have you
> propose and then merge a merge commit. Considering that the purpose of
> this is to improve the history of the master branch, the force push is
> likely the most appropriate option. Using a merge commit will result in
> potentially complicated history which won't help with the objective
> here.
>
> What is a good time to freeze, update and push? In total you probably
> want to allocate a day to this; we can likely get by with less, but it
> is easy to block off a day and then we don't have to rush.
>
> Clark
>



-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project

2017-09-26 Thread Erik McCormick
My main question here would be this: If you feel there are deficiencies in
Ironic, why not contribute to improving Ironic rather than spawning a whole
new project?

I am happy to take a look at it, and I'm by no means trying to contradict
your assumptions here. I just get concerned with the overhead and confusion
that comes with competing projects.

Also, if you'd like to discuss this in detail with a room full of bodies, I
suggest proposing a session for the Forum in Sydney. If some of the
contributors will be there, it would be a good opportunity for you to get
feedback.

Cheers,
Erik


On Sep 26, 2017 8:41 PM, "Matt Riedemann"  wrote:

> On 9/25/2017 6:27 AM, Zhenguo Niu wrote:
>
>> Hi folks,
>>
>> First of all, thanks to the audience for the Mogan project update in
>> the TC room during the Denver PTG. Here we would like to get more
>> suggestions before we apply for inclusion.
>>
>> Speaking only for myself, I find the current direction of one
>> API+scheduler for vm/baremetal/container unfortunate. After container
>> management moved out to the separate project Zun, bare metal with Nova
>> and Ironic continues to be a pain point.
>>
>> #. API
>> Only part of the Nova APIs and parameters can apply to baremetal
>> instances; meanwhile, to stay interoperable with the virt drivers,
>> bare-metal-specific APIs such as deploy-time RAID or advanced
>> partitions cannot be included. It's true that we can support various
>> compute drivers, but the reality is that the support for each
>> hypervisor is not equal, especially for bare metal in a virtualization
>> world. But I understand the problems with that, as Nova was designed to
>> provide compute resources (virtual machines) instead of bare metal.
>>
>> #. Scheduler
>> Bare metal doesn't fit into the model of 1:1 nova-compute to resource,
>> as nova-compute processes can't be run on the inventory nodes
>> themselves. That is to say, host aggregates, availability zones and
>> similar things based on the compute service (host) can't be applied to
>> bare metal resources. And for grouping like anti-affinity, the
>> granularity is also not the same as for virtual machines: bare metal
>> users may want their HA instances kept off the same failure domain, not
>> just off the same node. In short, we can only get rigid,
>> resource-class-only scheduling for bare metal.
>>
>> And most of the cloud providers in the market offer virtual machines
>> and bare metal as separate resources, but unfortunately it's hard to
>> achieve this with one compute service. I hear people are deploying
>> separate Novas for virtual machines and bare metal, with many
>> downstream hacks to the single-driver bare metal Nova, but as the
>> changes to Nova would be massive and possibly invasive to virtual
>> machines, it seems not practical to upstream them.
>>
>> So we created Mogan [1] about one year ago, which aims to offer bare
>> metal as a first-class resource to users, with a set of
>> bare-metal-specific APIs and a baremetal-centric scheduler (using the
>> Placement service). It was like an experimental project at the
>> beginning, but the outcome makes us believe it's the right way. Mogan
>> will fully embrace Ironic for bare metal provisioning, and with the RSD
>> server [2] introduced to OpenStack it will be a new world for bare
>> metal, as with that we can compose hardware resources on the fly.
>>
>> Also, I would like to clarify the overlap between Mogan and Nova. I bet
>> there must be some users who want one API for compute resource
>> management because they don't care whether it's a virtual machine or a
>> bare metal server. The baremetal driver with Nova is still the right
>> choice for such users to get raw-performance compute resources. On the
>> contrary, Mogan is for real bare metal users and cloud providers who
>> want to offer bare metal as a separate resource.
>>
>> Thank you for your time!
>>
>>
>> [1] https://wiki.openstack.org/wiki/Mogan
>> [2] https://www.intel.com/content/www/us/en/architecture-and-technology/rack-scale-design-overview.html
>>
>> --
>> Best Regards,
>> Zhenguo Niu
>>
>>
> Cross-posting to the operators list since they are the community that
> you'll likely need to convince the most about Mogan and whether or not they
> want to start experimenting with it.
>
> --
>
> Thanks,
>
> Matt
>

Re: [openstack-dev] [tc][nova][ironic][mogan] Evaluate Mogan project

2017-09-26 Thread Matt Riedemann

On 9/25/2017 6:27 AM, Zhenguo Niu wrote:

Hi folks,

First of all, thanks to the audience for the Mogan project update in the
TC room during the Denver PTG. Here we would like to get more suggestions
before we apply for inclusion.


Speaking only for myself, I find the current direction of one
API+scheduler for vm/baremetal/container unfortunate. After container
management moved out to the separate project Zun, bare metal with Nova
and Ironic continues to be a pain point.


#. API
Only part of the Nova APIs and parameters can apply to baremetal
instances; meanwhile, to stay interoperable with the virt drivers,
bare-metal-specific APIs such as deploy-time RAID or advanced partitions
cannot be included. It's true that we can support various compute
drivers, but the reality is that the support for each hypervisor is not
equal, especially for bare metal in a virtualization world. But I
understand the problems with that, as Nova was designed to provide
compute resources (virtual machines) instead of bare metal.


#. Scheduler
Bare metal doesn't fit into the model of 1:1 nova-compute to resource,
as nova-compute processes can't be run on the inventory nodes
themselves. That is to say, host aggregates, availability zones and
similar things based on the compute service (host) can't be applied to
bare metal resources. And for grouping like anti-affinity, the
granularity is also not the same as for virtual machines: bare metal
users may want their HA instances kept off the same failure domain, not
just off the same node. In short, we can only get rigid,
resource-class-only scheduling for bare metal.


And most of the cloud providers in the market offer virtual machines
and bare metal as separate resources, but unfortunately it's hard to
achieve this with one compute service. I hear people are deploying
separate Novas for virtual machines and bare metal, with many downstream
hacks to the single-driver bare metal Nova, but as the changes to Nova
would be massive and possibly invasive to virtual machines, it seems not
practical to upstream them.


So we created Mogan [1] about one year ago, which aims to offer bare
metal as a first-class resource to users, with a set of
bare-metal-specific APIs and a baremetal-centric scheduler (using the
Placement service). It was like an experimental project at the
beginning, but the outcome makes us believe it's the right way. Mogan
will fully embrace Ironic for bare metal provisioning, and with the RSD
server [2] introduced to OpenStack it will be a new world for bare
metal, as with that we can compose hardware resources on the fly.


Also, I would like to clarify the overlap between Mogan and Nova. I bet
there must be some users who want one API for compute resource
management because they don't care whether it's a virtual machine or a
bare metal server. The baremetal driver with Nova is still the right
choice for such users to get raw-performance compute resources. On the
contrary, Mogan is for real bare metal users and cloud providers who
want to offer bare metal as a separate resource.


Thank you for your time!


[1] https://wiki.openstack.org/wiki/Mogan
[2] 
https://www.intel.com/content/www/us/en/architecture-and-technology/rack-scale-design-overview.html


--
Best Regards,
Zhenguo Niu





Cross-posting to the operators list since they are the community that 
you'll likely need to convince the most about Mogan and whether or not 
they want to start experimenting with it.


--

Thanks,

Matt



Re: [openstack-dev] [infra][mogan] Need help for replacing the current master

2017-09-26 Thread Clark Boylan
On Tue, Sep 26, 2017, at 02:18 AM, Zhenguo Niu wrote:
> It's very appreciated if you shed some light on what the next steps would
> be to move this along.

We should schedule a period of time to freeze the Mogan repo and update
the replacement master (if necessary); then we can either force-push that
over the existing branch, or push it into a new branch and have you
propose and then merge a merge commit. Considering that the purpose of
this is to improve the history of the master branch, the force push is
likely the most appropriate option. Using a merge commit will result in
potentially complicated history which won't help with the objective
here.
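
As a rough sketch of the two options, with remote and branch names
assumed purely for illustration:

# Option 1: force-push the prepared history over the existing master
$ git push --force gerrit prepared-master:master

# Option 2: push the prepared history to a new branch, then propose a
# merge commit back into master
$ git push gerrit prepared-master:refs/heads/new-master
$ git merge --allow-unrelated-histories new-master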

What is a good time to freeze, update and push? In total you probably
want to allocate a day to this; we can likely get by with less, but it is
easy to block off a day and then we don't have to rush.

Clark



Re: [openstack-dev] [release][ptl] Improving the process for release marketing

2017-09-26 Thread Jay S Bryant



On 9/26/2017 7:00 PM, Doug Hellmann wrote:

Excerpts from Jay S Bryant's message of 2017-09-26 18:39:34 -0500:

On 9/26/2017 4:33 PM, Anne Bertucio wrote:


Release marketing is a critical part of sharing what’s new in each release, and 
we want to rework how the marketing community and projects work together to 
make the release communications happen.

Having multiple, repetitive demands to summarize "top features" during release time can 
be pestering and having to recollect the information each time isn't an effective use of time. 
Being asked to make polished, "press-friendly" messages out of release notes can feel too 
far outside of the PTL's focus areas or skills. At the same time, for technical content marketers, 
attempting to find the key features from release notes, ML posts, specs, Roadmap, etc., means 
interesting features are sometimes overlooked. Marketing teams don't have the latest on what 
features landed and with what caveats.

To address this gap, the Release team and Foundation marketing team propose collecting information as part of 
the release tagging process. Similar to the existing (unused) "highlights" field for an individual 
tag, we will collect some text in the deliverable file to provide highlights for the series (about 3 items). 
That text will then be used to build a landing page on release.openstack.org that shows the "key 
features" flagged by PTLs that marketing teams should be looking at during release communication times. 
The page will link to the release notes, so marketers can start there to gather additional information, 
eliminating repetitive asks of PTLs. The "pre selection" of features means marketers can spend more 
time diving into release note details and less sifting through them.

To supplement the written information, the marketing community is also going to work 
together to consolidate follow up questions and deliver them in "press corps" 
style (i.e. a single phone call to be asked questions from multiple parties vs. multiple 
phone calls from individuals).

We will provide more details about the implementation for the highlights page 
when that is ready, but want to gather feedback about both aspects of the plan 
early.

Thanks for your input,
Anne Bertucio and Sean McGinnis







Anne and Sean,

Thank you for starting this effort.  I have been amazed how many times I
have been asked for the same information since the end of Pike.  I think
that having a more automated process for this would be very helpful.

One request would be that the process allow for more than just a 'one 
liner' on key features.  If this is being targeted at marketing people, 
we will need to be able to provide more information.  So, if we could 
have something like little commit messages, where there is a summary of 
what the highlight is plus a somewhat more verbose explanation (a few 
sentences), I think that would make this addition more helpful.

I was planning on the usual "embed RST in YAML" pattern. The amount
of detail you leave will then be completely up to you.

Doug

+2  Sounds like a good plan to me.  Thank you!

I look forward to hearing more about this!

Jay




Re: [openstack-dev] [release][ptl] Improving the process for release marketing

2017-09-26 Thread Doug Hellmann
Excerpts from Jay S Bryant's message of 2017-09-26 18:39:34 -0500:
> On 9/26/2017 4:33 PM, Anne Bertucio wrote:
> 
> > Release marketing is a critical part of sharing what’s new in each release, 
> > and we want to rework how the marketing community and projects work 
> > together to make the release communications happen.
> >
> > Having multiple, repetitive demands to summarize "top features" during 
> > release time can be pestering and having to recollect the information each 
> > time isn't an effective use of time. Being asked to make polished, 
> > "press-friendly" messages out of release notes can feel too far outside of 
> > the PTL's focus areas or skills. At the same time, for technical content 
> > marketers, attempting to find the key features from release notes, ML 
> > posts, specs, Roadmap, etc., means interesting features are sometimes 
> > overlooked. Marketing teams don't have the latest on what features landed 
> > and with what caveats.
> >
> > To address this gap, the Release team and Foundation marketing team propose 
> > collecting information as part of the release tagging process. Similar to 
> > the existing (unused) "highlights" field for an individual tag, we will 
> > collect some text in the deliverable file to provide highlights for the 
> > series (about 3 items). That text will then be used to build a landing page 
> > on release.openstack.org that shows the "key features" flagged by PTLs that 
> > marketing teams should be looking at during release communication times. 
> > The page will link to the release notes, so marketers can start there to 
> > gather additional information, eliminating repetitive asks of PTLs. The 
> > "pre selection" of features means marketers can spend more time diving into 
> > release note details and less sifting through them.
> >
> > To supplement the written information, the marketing community is also 
> > going to work together to consolidate follow up questions and deliver them 
> > in "press corps" style (i.e. a single phone call to be asked questions from 
> > multiple parties vs. multiple phone calls from individuals).
> >
> > We will provide more details about the implementation for the highlights 
> > page when that is ready, but want to gather feedback about both aspects of 
> > the plan early.
> >
> > Thanks for your input,
> > Anne Bertucio and Sean McGinnis
> >
> >
> >
> >
> >
> >
> Anne and Sean,
> 
> Thank you for starting this effort.  I have been amazed how many times I 
> have been asked for the same information since the end of Pike.  I think 
> that having a more automated process for this would be very helpful.
> 
> One request would be that the process allow for more than just a 'one 
> liner' on key features. If this is being targeted at marketing people, 
> we will need to be able to provide more information. So, if we could 
> have something like little commit messages, where there is a summary of 
> what the highlight is plus a somewhat more verbose explanation (a few 
> sentences), I think that would make this addition more helpful.

I was planning on the usual "embed RST in YAML" pattern. The amount
of detail you leave will then be completely up to you.

Doug
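
A minimal sketch of the pattern, assuming a series-level highlights field
in the deliverable file (the exact field name and layout were still to be
settled at the time of writing):

# deliverables/queens/example.yaml (illustrative)
highlights: |
  * One-line summary of the first key feature.
  * A second highlight, with a slightly longer
    explanation in a follow-on sentence.

The block scalar holds RST, so lists, links and emphasis work as usual.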

> 
> I look forward to hearing more about this!
> 
> Jay
> 



Re: [openstack-dev] [docs][ptls][install] Install guide vs. tutorial

2017-09-26 Thread Jay S Bryant



On 9/25/2017 3:47 AM, Alexandra Settle wrote:
  
 > I completely agree consistency is more important than bikeshedding
 > over the name :)
 > To be honest, it would be easier to change everything to 'guide',
 > seeing as all our URLs are 'install-guide'.
 > But that's the lazy in me speaking.
 >
 > Industry wise, there does seem to be more of a trend towards 'guide'
 > rather than 'tutorial'. Although, that is at a cursory glance.
 >
 > I am happy to investigate further, if this matter is of some
 > contention to people?

This is the first time I'm hearing about "Install Tutorial". I'm also
lazy, +1 with sticking to install guide.

Just to clarify: https://docs.openstack.org/install-guide/ The link is
"install-guide" but the actual title on the page is "OpenStack
Installation Tutorial".

Apologies if I haven't been clear enough in this thread! Context always
helps :P

Oy!  The URL says guide but the page says tutorial?  That is even more 
confusing.  I think it would be good to make it consistent and just go 
with guide then.  All for your laziness when it leads to consistency.  :-)


Jay



Re: [openstack-dev] [release][ptl] Improving the process for release marketing

2017-09-26 Thread Jay S Bryant

On 9/26/2017 4:33 PM, Anne Bertucio wrote:


Release marketing is a critical part of sharing what’s new in each release, and 
we want to rework how the marketing community and projects work together to 
make the release communications happen.

Having multiple, repetitive demands to summarize "top features" during release time can 
be pestering and having to recollect the information each time isn't an effective use of time. 
Being asked to make polished, "press-friendly" messages out of release notes can feel too 
far outside of the PTL's focus areas or skills. At the same time, for technical content marketers, 
attempting to find the key features from release notes, ML posts, specs, Roadmap, etc., means 
interesting features are sometimes overlooked. Marketing teams don't have the latest on what 
features landed and with what caveats.

To address this gap, the Release team and Foundation marketing team propose collecting information as part of 
the release tagging process. Similar to the existing (unused) "highlights" field for an individual 
tag, we will collect some text in the deliverable file to provide highlights for the series (about 3 items). 
That text will then be used to build a landing page on release.openstack.org that shows the "key 
features" flagged by PTLs that marketing teams should be looking at during release communication times. 
The page will link to the release notes, so marketers can start there to gather additional information, 
eliminating repetitive asks of PTLs. The "pre selection" of features means marketers can spend more 
time diving into release note details and less sifting through them.

To supplement the written information, the marketing community is also going to work 
together to consolidate follow up questions and deliver them in "press corps" 
style (i.e. a single phone call to be asked questions from multiple parties vs. multiple 
phone calls from individuals).

We will provide more details about the implementation for the highlights page 
when that is ready, but want to gather feedback about both aspects of the plan 
early.

Thanks for your input,
Anne Bertucio and Sean McGinnis







Anne and Sean,

Thank you for starting this effort.  I have been amazed how many times I 
have been asked for the same information since the end of Pike.  I think 
that having a more automated process for this would be very helpful.


One request would be that the process allow for more than just a 'one 
liner' on key features.  If this is being targeted at marketing people, 
we will need to be able to provide more information.  So, if we could 
have something like little commit messages, where there is a summary of 
what the highlight is plus a somewhat more verbose explanation (a few 
sentences), I think that would make this addition more helpful.


I look forward to hearing more about this!

Jay




Re: [openstack-dev] OpenStack-Ansible and Trove support

2017-09-26 Thread Amy Marrich
Michael,

There are release notes for each release that go over what's new,
what's on its way out or even gone, as well as bug fixes and other
information. Here's a link to the Ocata release notes for
OpenStack-Ansible, which includes the announcement of the Trove role.

https://docs.openstack.org/releasenotes/openstack-ansible/ocata.html

Thanks,

Amy (spotz)

On Tue, Sep 26, 2017 at 6:04 PM, Michael Gale 
wrote:

> Hello,
>
>    Based on github and
> https://docs.openstack.org/openstack-ansible-os_trove/latest/ it looks
> like OpenStack-Ansible will support Trove under the Ocata release.
>
> Is that assumption correct? Is there a better method to determine when a
> software component will likely be included in a release?
>
> Michael
>


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Clint Byrum
Excerpts from Jonathan Proulx's message of 2017-09-26 16:01:26 -0400:
> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
> 
> :OpenStack is big. Big enough that a user will likely be fine with learning
> :a new set of tools to manage it.
> 
> New users in the startup sense of new, probably.
> 
> People with entrenched environments, I doubt it.
> 

Sorry no, I mean everyone who doesn't have an OpenStack already.

It's nice and all, if you're a Puppet shop, to get to use the puppet
modules. But it doesn't bring you any closer to the developers as a
group. Maybe a few use Puppet, but most don't. And that means you are
going to feel like OpenStack gets thrown over the wall at you once every
6 months.

> But OpenStack is big. Big enough I think all the major config systems
> are fairly well represented, so whether I'm right or wrong this
> doesn't seem like an issue to me :)
> 

They are. We've worked through it. But that doesn't mean potential
users are getting our best solution or feeling well integrated into
the community.

> Having common targets (constellations, reference architectures,
> whatever) so all the config systems build the same things (or a subset
> or superset of the same things) seems like it would have benefits all
> around.
> 

It will. It's a good first step. But I'd like to see a world where
developers are all well versed in how operators actually use OpenStack.



Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Samuel Cassiba

Michał Jastrzębski  wrote:


On 26 September 2017 at 13:54, Alex Schultz  wrote:
On Tue, Sep 26, 2017 at 2:34 PM, Michał Jastrzębski   
wrote:

In Kolla, during this PTG, we came up with the idea of scenario-based
testing+documentation. Basically what we want to do is provide a set
of kolla configurations, howtos and tempest configs to test out
different "constellations" or use cases. If, instead of in Kolla, we do
these in a cross-community manner (and just host kolla-specific things
in kolla), I think that would partially address what you're asking for
here.


So I'd like to point out that we have done a lot of these similar
deployments in puppet[0] and tripleo[1] for a while now, but more to get
the most coverage out of the fewest jobs in terms of CI.  They aren't
necessarily realistic deployment use cases. We can't actually fully
test deployment scenarios given the limited resources available.

The problem with trying to push the constellation concept to
deployment tools is that you're effectively saying that the
upstream isn't going to bother doing it and is relying on
understaffed (see chef/puppet people emails) groups to now implement
the thing you expect end users to use.  Simplification in openstack
needs to not be pushed off to someone else; we're all responsible
for it.  Have you seen the number of feature/configuration options the
upstream services have? Now multiply by 20-30. Welcome to OpenStack
configuration management.  Oh, and try to keep up with all the new ones
and the ones being deprecated every 6 months. /me cries

Honestly it's time to stop saying yes to things unless they have some
sort of minimum viability or it makes sense why we would force it on
the end user (as confirmed by the end user, not because it sounds like
a good idea).

OpenStack has always been pick your poison and construct your own
cloud. The problem is that the pieces used for building are getting
more complex, with even more inter-dependencies being added each
cycle, without a simple way for the operator to install or to
migrate between versions.

Thanks,
-Alex

[0] https://github.com/openstack/puppet-openstack-integration
[1]  
https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html


Right, I don't think anyone considers addressing *all* of them... But
if you break down actual use cases, most people want nova (qemu+kvm),
neutron (vxlan, potentially vlan), cinder+ceph... If we agree to
cover 90% of users, that'll boil down to 4-5 different
"constellations". If you want fancy networking, we will try our best
to make it possible, but not necessarily as easy as a 20-or-so-node
mini private cloud for VMs. I think if we could provide these 4 or 5
use cases, easy to deploy and manage, provide a testing suite so people
can validate the env, provide robust upgrades and so on, that alone
would make a lot of people happy.


I’ve been working to make OpenStack work in my local testing environment,
and it boiled down to this issue:
https://github.com/test-kitchen/test-kitchen/issues/873 - tl;dr being that
while everyone was generally +1, no paying customers pressed the issue
enough to allocate time from one of a small number of qualified people to
implement it. The main problem is the deficiency around machine
orchestration, which is not just a Chef problem. Look across the board and
you’ll see everyone has hacked their own way, which sorta works so long as
you don’t sneeze too hard near it. What works for one doesn’t work for the
other, and so on.

Why did I single out test-kitchen? It’s pluggable using community
resources, meaning that I can test Puppet, Ansible and Chef, on Ubuntu
16.04 and CentOS 7, all using the same tool on the same set of hardware.
I am, by no means, advocating burning down CI for it, but using an example
from my realm. An idempotent, repeatable, maintainable deployment would
make a lot of people happy, too.

The install docs still suggest hand-configuring machines in 2017. It’s
only after people fall down that snake pit that they find projects like
TripleO/Ansible/Puppet/Chef, and wonder why everyone doesn’t use this
stuff. The established shops that are already using one of those methods
will keep on keeping on, so long as the pain is tolerable. It’s not that
we have to pick one thing to the detriment of others, but simply make
people more aware of what is out there, so they don’t have to sacrifice
small children and animals to get a working cloud. The problem is, we keep
kicking the can on who owns that bullhorn, so it doesn’t get done.

However, I digress. The conversations about scenarios have happened in my
area, too, and while we agreed that it would be a worthwhile thing, there
was no one person who could reasonably take on such an undertaking. It’s
all grand until the elusive Nobody gets assigned all the work.





[openstack-dev] OpenStack-Ansible testing with OpenVSwitch

2017-09-26 Thread Michael Gale
Hello,

I am trying to build a Pike All-in-One instance for OpenStack-Ansible
testing; currently I have a few OpenStack versions being deployed using the
default Linux Bridge implementation.

However, I need a test environment to validate an OpenVSwitch
implementation. Is there a simple method to get such an AIO installed?

I tried following
https://medium.com/@travistruman/configuring-openstack-ansible-for-open-vswitch-b7e70e26009d
but Neutron is blowing up because it can't determine the name for the
Neutron server. I am not sure if that is my issue or not; a reference
implementation of an OpenStack AIO with OpenVSwitch would help me a lot.

Thanks
Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] Zuul v3 migration update

2017-09-26 Thread Monty Taylor

Hey everybody,

We got significantly further along with our Zuul v3 rollout today. We 
uncovered some fun bugs in the migration but were able to fix most of 
them rather quickly.


We've pretty much run out of daylight though for the majority of the 
team and there is a tricky zuul-cloner related issue to deal with, so 
we're not going to push things further tonight. We're leaving most of 
today's work in place, having gotten far enough that we feel comfortable 
not rolling back.


The project-config repo should still be considered frozen except for 
migration-related changes. Hopefully we'll be able to flip the final 
switch early tomorrow.


If you haven't yet, please see [1] for information about the transition.

[1] https://docs.openstack.org/infra/manual/zuulv3.html

Thanks,

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack-Ansible and Trove support

2017-09-26 Thread Michael Gale
Hello,

Based on GitHub and
https://docs.openstack.org/openstack-ansible-os_trove/latest/ it looks like
OpenStack-Ansible will support Trove under the Ocata release.

Is that assumption correct? Is there a better method to determine when a
software component will likely be included in a release?

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Newton End-Of-Life (EOL) next month (reminder #1)

2017-09-26 Thread Tony Breeds
On Tue, Sep 26, 2017 at 10:58:46AM -0600, Emilien Macchi wrote:
> Newton is officially EOL next month:
> https://releases.openstack.org/index.html#release-series
> 
> As an action from our weekly meeting, we decided to accelerate the
> reviews for stable/newton before it's too late.
> This email is a reminder and a last reminder will be sent out before
> we EOL for real.
> 
> If you need any help to get backport merged, please raise it here or
> ask on IRC as usual.

For projects that need to be integrated with upper-constraints
the deadline is this week, though given I tend to do my stable/* release
reviews on Mondays I'd accept anything that's ready for review then.

For projects that don't need to be integrated with upper-constraints
the deadline is Oct 11th.  I'll be generating my list of repos this
week for review by the community.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disk Image Builder for redhat 7.4

2017-09-26 Thread Tony Breeds
On Tue, Sep 26, 2017 at 10:19:45PM +0530, Amit Singla wrote:
> Hi,
> 
> Could you tell me how I can create a qcow2 image for RHEL 7.4 with disk
> image builder? I also want to install Oracle 12.2 on that image with DIB.
> Is that possible?

For the RHEL 7.4 side of things there is a rhel7 dib target that starts
with a guest image supplied by Red Hat and customises it for your needs.

No idea about Oracle 12.2.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Queens PTG: Thursday summary

2017-09-26 Thread Blair Bethwaite
Hi Belmiro,

On 20 Sep. 2017 7:58 pm, "Belmiro Moreira" <
moreira.belmiro.email.li...@gmail.com> wrote:
> Discovering the latest image release is hard. So we added an image
property "recommended"
> that we update when a new image release is available. Also, we patched
horizon to show
> the "recommended" images first.

There is built-in support in Horizon that allows displaying multiple image
category tabs, where each takes its contents from the list of images owned
by a specific project/tenant. In the Nectar research cloud this is what we
rely on to distinguish between "Public", "Project", "Nectar" (the base
images we maintain), and "Contributed" (images contributed by users who
wish them to be tested by us and effectively promoted as quality-assured).
When we update a "Nectar" or "Contributed" image the old version stays
public but is moved into a project for deprecated images of that category,
where eventually we can clean it up.
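
For anyone wanting to copy the model, here is a minimal sketch of what we
mean, in Horizon's local_settings.py. It assumes the
IMAGES_LIST_FILTER_TENANTS setting is the built-in support in question;
the labels and project IDs are placeholders, so check the dictionary keys
against the settings reference for your Horizon release:

# Hedged sketch: each dict adds one image category tab in the Images
# panel, fed by the images owned by the named project. All values below
# are placeholders.
IMAGES_LIST_FILTER_TENANTS = [
    {'text': 'Nectar',
     'tenant': '<project-id-owning-the-base-images>',
     'icon': 'fa-check'},
    {'text': 'Contributed',
     'tenant': '<project-id-owning-contributed-images>',
     'icon': 'fa-users'},
]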

> This helps our users to identify the latest image release but we continue
to show for
> each project the full list of public images + all personal user images.

Could you use the same model as us?

Cheers,
b1airo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 39

2017-09-26 Thread Chris Dent


(If you prefer, there's an html version.)

It has been a while since the last one of these that [had any
substance](https://anticdent.org/tc-report-33.html). The run up to the
[PTG](https://www.openstack.org/ptg) and travel to and fro meant
either that not much was happening or I didn't have time to write.
This week I'll attempt to catch up with TC activities (that I'm aware
of) from the PTG and this past week.

# Board Meeting

The Sunday before the PTG there was an all day meeting of the
Foundation Board, the Technical Committee, the User Committee and
members of the Interop and Product working groups. The
[agenda](https://wiki.openstack.org/wiki/Governance/Foundation/10Sep2017BoardMeeting)
was oriented towards updates on the current strategic focus
areas:

* Better communicate about OpenStack
* Community Health
* Requirements: Close the feedback loop
* Increase complementarity with adjacent technologies
* Simplify OpenStack

Each group gave an overview of the progress they've made since
[Boston](/openstack-pike-board-meeting-notes.html). [Mark
McLoughlin](https://crustyblaa.com/september-10-2017-openstack-foundation-board-meeting.html)
has a good overview of most of the topics covered.

I was on the hook to discuss what might be missing from the strategic
areas. In the "Community Health" section we often discuss making the
community inviting to new people, especially to under-represented
groups and making sure the community is capable of creating new
leaders. Both of these are very important (especially the first) but
what I felt was missing was attention to the experience of the regular
contributor to OpenStack who has been around for a while. A topic we
might call "developer happiness". There are a lot of dimensions to
that happiness, not all of which OpenStack is great at balancing.

It turns out that this was already a topic within the domain of
Community Health but had been set aside while progress was being made
on other topics. So now I've been drafted to be a member of that
group. I will start writing about it soon.

# PTG

The PTG was five days long, I intend to write a separate update about
the days in the API and Nova rooms, what follows are notes from the
TC-related sessions that I was able to attend.

As is the norm, there was an
[etherpad](https://etherpad.openstack.org/p/queens-PTG-TC-SWG) for the
whole week, which for at least some things has relatively good notes.
There's too much to report all that happened, so here are some
interesting highlights:

* To encourage community diversity and accept the reality of
  less-than-full time contributors it will become necessary to have
  more cores, even if they don't know everything there is to know
  about a project.
* Before the next TC election (coming soon: nominations start 29
  September) a report will be made on the progress made by the TC in
  the last 12 months, especially with regard to the goals expressed in
  the [vision
  statement](https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html).
  We should have been doing this all along, but it is perhaps an
  especially good idea now that [regular meetings have
  stopped](https://governance.openstack.org/tc/resolutions/20170425-drop-tc-weekly-meetings.html).
* The TC will take greater action to make sure that strategic
  priorities (in the sense of "these are some of the things the TC
  observes that OpenStack should care about") are effectively
  publicised. These are themes that fit neither in the urgency of the
  [Top 5
  list](https://governance.openstack.org/tc/reference/top-5-help-wanted.html)
  nor in the concreteness of [OpenStack-wide
  Goals](https://governance.openstack.org/tc/goals/index.html). One
  idea is to prepare a short list before each PTG to set the tone.
  Work remains to flesh this one out.

# The Past Week

The week after the PTG it's hard to get rolling, so there's not a
great deal to report from office hours or otherwise. The busiest day
in `#openstack-tc` was
[Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-09-21.log.html)
where the discussion was mostly about Glare's application to [be
official](https://review.openstack.org/#/c/479285/). This has raised a
lot of questions, many of which are in the IRC log or on the review.
As is often the case with contentious project applications, the
questions frequently reflect (as they should) the biases and goals the
reviewers have for OpenStack as a whole. For example I asked "Why
should Glare be an _OpenStack_ project rather than a more global
project (that happens to have support for keystone)?" while others
expressed concern for any overlap (or perception thereof) between
Glance and Glare and still others said the equivalent of "come on,
enough with this, let's just get on with it, there's enough work to go
around."

And with that I must end this for this week, as there's plenty of other
work to do.

--
Chris Dent

[openstack-dev] [release][ptl] Improving the process for release marketing

2017-09-26 Thread Anne Bertucio
Release marketing is a critical part of sharing what’s new in each release, and 
we want to rework how the marketing community and projects work together to 
make the release communications happen. 

Having multiple, repetitive demands to summarize "top features" during release 
time can feel like pestering, and having to recollect the information each time isn't 
an effective use of time. Being asked to make polished, "press-friendly" 
messages out of release notes can feel too far outside of the PTL's focus areas 
or skills. At the same time, for technical content marketers, attempting to 
find the key features from release notes, ML posts, specs, Roadmap, etc., means 
interesting features are sometimes overlooked. Marketing teams don't have the 
latest on what features landed and with what caveats.

To address this gap, the Release team and Foundation marketing team propose 
collecting information as part of the release tagging process. Similar to the 
existing (unused) "highlights" field for an individual tag, we will collect 
some text in the deliverable file to provide highlights for the series (about 3 
items). That text will then be used to build a landing page on 
release.openstack.org that shows the "key features" flagged by PTLs that 
marketing teams should be looking at during release communication times. The 
page will link to the release notes, so marketers can start there to gather 
additional information, eliminating repetitive asks of PTLs. The "pre 
selection" of features means marketers can spend more time diving into release 
note details and less sifting through them.

To supplement the written information, the marketing community is also going to 
work together to consolidate follow up questions and deliver them in "press 
corps" style (i.e. a single phone call to be asked questions from multiple 
parties vs. multiple phone calls from individuals).

We will provide more details about the implementation for the highlights page 
when that is ready, but want to gather feedback about both aspects of the plan 
early.

Thanks for your input,
Anne Bertucio and Sean McGinnis






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Michał Jastrzębski
On 26 September 2017 at 13:54, Alex Schultz  wrote:
> On Tue, Sep 26, 2017 at 2:34 PM, Michał Jastrzębski  wrote:
>> In Kolla, during this PTG, we came up with the idea of scenario-based
>> testing+documentation. Basically, what we want to do is to provide a set
>> of kolla configurations, howtos and tempest configs to test out
>> different "constellations" or use cases. If, instead of in Kolla, we do
>> these in a cross-community manner (and just host kolla-specific things
>> in kolla), I think that would partially address what you're asking for
>> here.
>>
>
> So I'd like to point out that we've been doing a lot of these similar
> deployments in puppet[0] and tripleo[1] for a while now, but more to get
> the most coverage out of the fewest jobs in terms of CI.  They aren't
> necessarily realistic deployment use cases. We can't actually fully
> test deployment scenarios given the limited resources available.
>
> The problem with trying to push the constellation concept to
> deployment tools is that you're effectively saying that the
> upstream isn't going to bother doing it and is relying on
> understaffed (see chef/puppet people emails) groups to now implement
> the thing you expect end users to use.  Simplification in openstack
> needs to not be pushed off to someone else, as we're all responsible
> for it.  Have you seen the number of feature/configuration options the
> upstream services have? Now multiply by 20-30. Welcome to OpenStack
> configuration management.  Oh, and try to keep up with all the new ones
> and the ones being deprecated every 6 months. /me cries
>
> Honestly, it's time to stop saying yes to things unless they have some
> sort of minimum viability or it makes sense why we would force it on
> the end user (as confirmed by the end user, not because it sounds like
> a good idea).
>
> OpenStack has always been pick your poison and construct your own
> cloud. The problem is that the pieces used for building are getting
> more complex, with even more inter-dependencies being added each
> cycle, without a simple way for the operator to install or
> migrate between versions.
>
> Thanks,
> -Alex
>
> [0] https://github.com/openstack/puppet-openstack-integration
> [1] 
> https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html

Right, I don't think anyone considers addressing *all* of them... But
if you break down actual use cases, most people want nova (qemu+kvm),
neutron (vxlan, potentially vlan), cinder+ceph ... if we agree to
cover 90% of users, that'll boil down to 4-5 different
"constellations". If you want fancy networking, we will try our best
to make it possible, but it won't necessarily be as easy as a
20-or-so-node mini private cloud for VMs. I think if we could provide
these 4 or 5 use cases, easy to deploy and manage, provide a testing
suite so people can validate the env, provide robust upgrades and so on,
that alone would make a lot of people happy.

>> On 26 September 2017 at 13:01, Jonathan Proulx  wrote:
>>> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
>>>
>>> :OpenStack is big. Big enough that a user will likely be fine with learning
>>> :a new set of tools to manage it.
>>>
>>> New users in the startup sense of new, probably.
>>>
>>> People with entrenched environments, I doubt it.
>>>
>>> But OpenStack is big. Big enough I think all the major config systems
>>> are fairly well represented, so whether I'm right or wrong this
>>> doesn't seem like an issue to me :)
>>>
>>> Having common targets (constellations, reference architectures,
>>> whatever) so all the config systems build the same things (or a subset
>>> or superset of the same things) seems like it would have benefits all
>>> around.
>>>
>>> -Jon
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Alex Schultz
On Tue, Sep 26, 2017 at 2:34 PM, Michał Jastrzębski  wrote:
> In Kolla, during this PTG, we came up with the idea of scenario-based
> testing+documentation. Basically, what we want to do is to provide a set
> of kolla configurations, howtos and tempest configs to test out
> different "constellations" or use cases. If, instead of in Kolla, we do
> these in a cross-community manner (and just host kolla-specific things
> in kolla), I think that would partially address what you're asking for
> here.
>

So I'd like to point out that we've been doing a lot of these similar
deployments in puppet[0] and tripleo[1] for a while now, but more to get
the most coverage out of the fewest jobs in terms of CI.  They aren't
necessarily realistic deployment use cases. We can't actually fully
test deployment scenarios given the limited resources available.

The problem with trying to push the constellation concept to
deployment tools is that you're effectively saying that the
upstream isn't going to bother doing it and is relying on
understaffed (see chef/puppet people emails) groups to now implement
the thing you expect end users to use.  Simplification in openstack
needs to not be pushed off to someone else, as we're all responsible
for it.  Have you seen the number of feature/configuration options the
upstream services have? Now multiply by 20-30. Welcome to OpenStack
configuration management.  Oh, and try to keep up with all the new ones
and the ones being deprecated every 6 months. /me cries

Honestly, it's time to stop saying yes to things unless they have some
sort of minimum viability or it makes sense why we would force it on
the end user (as confirmed by the end user, not because it sounds like
a good idea).

OpenStack has always been pick your poison and construct your own
cloud. The problem is that the pieces used for building are getting
more complex, with even more inter-dependencies being added each
cycle, without a simple way for the operator to install or
migrate between versions.

Thanks,
-Alex

[0] https://github.com/openstack/puppet-openstack-integration
[1] 
https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html

> On 26 September 2017 at 13:01, Jonathan Proulx  wrote:
>> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
>>
>> :OpenStack is big. Big enough that a user will likely be fine with learning
>> :a new set of tools to manage it.
>>
>> New users in the startup sense of new, probably.
>>
>> People with entrenched environments, I doubt it.
>>
>> But OpenStack is big. Big enough I think all the major config systems
>> are fairly well represented, so whether I'm right or wrong this
>> doesn't seem like an issue to me :)
>>
>> Having common targets (constellations, reference architectures,
>> whatever) so all the config systems build the same things (or a subset
>> or superset of the same things) seems like it would have benefits all
>> around.
>>
>> -Jon
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Jonathan Proulx
On Tue, Sep 26, 2017 at 01:34:14PM -0700, Michał Jastrzębski wrote:
:In Kolla, during this PTG, we came up with the idea of scenario-based
:testing+documentation. Basically, what we want to do is to provide a set
:of kolla configurations, howtos and tempest configs to test out
:different "constellations" or use cases. If, instead of in Kolla, we do
:these in a cross-community manner (and just host kolla-specific things
:in kolla), I think that would partially address what you're asking for
:here.

Yeas, that sounds like a great idea.

-Jon

:On 26 September 2017 at 13:01, Jonathan Proulx  wrote:
:> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
:>
:> :OpenStack is big. Big enough that a user will likely be fine with learning
:> :a new set of tools to manage it.
:>
:> New users in the startup sense of new, probably.
:>
:> People with entrenched environments, I doubt it.
:>
:> But OpenStack is big. Big enough I think all the major config systems
:> are fairly well represented, so whether I'm right or wrong this
:> doesn't seem like an issue to me :)
:>
:> Having common targets (constellations, reference architectures,
:> whatever) so all the config systems build the same things (or a subset
:> or superset of the same things) seems like it would have benefits all
:> around.
:>
:> -Jon
:>
:> __
:> OpenStack Development Mailing List (not for usage questions)
:> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
:> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
:
:__
:OpenStack Development Mailing List (not for usage questions)
:Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
:http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Michał Jastrzębski
In Kolla, during this PTG, we came up with the idea of scenario-based
testing+documentation. Basically, what we want to do is to provide a set
of kolla configurations, howtos and tempest configs to test out
different "constellations" or use cases. If, instead of in Kolla, we do
these in a cross-community manner (and just host kolla-specific things
in kolla), I think that would partially address what you're asking for
here.

On 26 September 2017 at 13:01, Jonathan Proulx  wrote:
> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
>
> :OpenStack is big. Big enough that a user will likely be fine with learning
> :a new set of tools to manage it.
>
> New users in the startup sense of new, probably.
>
> People with entrenched environments, I doubt it.
>
> But OpenStack is big. Big enough I think all the major config systems
> are fairly well represented, so whether I'm right or wrong this
> doesn't seem like an issue to me :)
>
> Having common targets (constellations, reference architectures,
> whatever) so all the config systems build the same things (or a subset
> or superset of the same things) seems like it would have benefits all
> around.
>
> -Jon
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Cinder][third-party][ci] Tintri Cinder CI failure

2017-09-26 Thread Apoorva Deshpande
I patched sos-ci and the logs are available now [1]. The first exception
occurrence I spotted in c-vol.txt is here [2].

[1] http://openstack-ci.tintri.com/tintri/refs-changes-59-507359-1/logs/
[2] http://paste.openstack.org/show/621983/

On Mon, Sep 25, 2017 at 11:32 PM, Silvan Kaiser  wrote:

> Hi Apoorva!
> The test run is sadly missing the service logs, probably because you're
> using a current DevStack (systemd based services) but an older sos-ci
> version? If you apply
> https://github.com/j-griffith/sos-ci/commit/f0f2ce2e2f2b12727ee5aa75a751376dcc1ea3a4
> you should be able to get the logs for new test runs. This will help
> with debugging this.
> Best
> Silvan
>
>
>
> 2017-09-26 1:54 GMT+02:00 Apoorva Deshpande :
>
>> Hello,
>>
>> Tintri's Cinder CI started failing around Sept 19, 2017. There are 29
>> tests failing [1] with the following errors [2][3][4]. The Tintri Cinder
>> driver inherits from the NFS Cinder driver and is available here [5].
>>
>> Please let me know if anyone has recently seen these failures or has any
>> pointers on how to fix.
>>
>> Thanks,
>> Apoorva
>>
>> IRC: Apoorva
>>
>> [1] http://openstack-ci.tintri.com/tintri/refs-changes-57-505357-1/testr_results.html
>> [2] http://paste.openstack.org/show/621886/
>> [3] http://paste.openstack.org/show/621858/
>> [4] http://paste.openstack.org/show/621857/
>> [5] https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/tintri.py
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Dr. Silvan Kaiser
> Quobyte GmbH
> Hardenbergplatz 2, 10623 Berlin - Germany
> +49-30-814 591 800 - www.quobyte.com
> Amtsgericht Berlin-Charlottenburg, HRB 149012B
> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Ironic 3rd Party CI Meetings

2017-09-26 Thread Rajini.Karthik


Hi all,
It was discussed in IRC, after the ironic meeting yesterday, that we
will have weekly/biweekly 3rd Party CI IRC meetings going forward.

The goal is to harden the third-party CI results for ironic and to share
ideas to make them robust and trustworthy.

Would like to know if this time slot works for you all?
http://eavesdrop.openstack.org/#Ironic/neutron_Integration_team_meeting
(not in use now), weekly on Monday at 1600 UTC in #openstack-meeting-4
(IRC webclient).

Regards
Rajini


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo][oslo.messaging][all] Notice: upcoming change to oslo.messaging RPC server

2017-09-26 Thread Ken Giusti
Hi Folks,

Just a heads-up:

In Queens the default access policy for RPC Endpoints will change from
LegacyRPCAccessPolicy to DefaultRPCAccessPolicy.  RPC calls to private
('_' prefix) methods will no longer be possible.  If you want to allow
RPC Clients to invoke private methods, you must explicitly set the
access_policy to LegacyRPCAccessPolicy when you call get_rpc_server()
or instantiate an RPCDispatcher.  This change [0] has been merged to
oslo.messaging master and will appear in the next release of
oslo.messaging.

"Umm What?"

Good question!  Here are the TL;DR details:

Since forever it's been possible for a client to make an RPC call
against _any_ method defined in the RPC Endpoint object.  And by "any"
we mean "all methods including private ones (method names prefixed by
'_' )"

Naturally this ability came as a surprise to many folks [1], including
yours truly and others on the oslo team [2].  It was agreed that
having this be the default behavior was indeed A Bad Thing.

So starting in Ocata oslo.messaging has provided a means for
controlling access to Endpoint methods [3].  Oslo.messaging now
defines three different "access control policies" that can be applied
to an RPC Server:

LegacyRPCAccessPolicy: original behavior - any method can be invoked
by an RPC client
DefaultRPCAccessPolicy: prevent RPC access to private '_' methods, all
others may be invoked
ExplicitRPCAccessPolicy: only allow access to those methods that have
been decorated with @expose decorator

See [4] for more details.
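
For illustration, here is a minimal sketch of an endpoint under the three
policies. This is my own example, not part of the change itself, and it
assumes the expose decorator is importable from oslo_messaging.rpc;
verify the import path against your oslo.messaging release:

from oslo_messaging import rpc


class Endpoint(object):
    @rpc.expose
    def create(self, ctxt, name):
        # Explicitly exposed: reachable under all three policies.
        return self._build(name)

    def update(self, ctxt, name):
        # Public method: reachable under Legacy and Default, but hidden
        # under ExplicitRPCAccessPolicy (no @expose decorator).
        return self._build(name)

    def _build(self, name):
        # '_'-prefixed: reachable under LegacyRPCAccessPolicy only.
        return {'name': name}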

In order not to break anything at the time the default access policy
was set to 'LegacyRPCAccessPolicy'.  This has been the default for
Ocata and Pike.

Starting in Queens this will no longer be the case.
DefaultRPCAccessPolicy will become the default if no access policy is
specified when calling get_rpc_server() or directly instantiating an
RPCDispatcher.  To keep the old behavior you must explicitly set the
access policy to LegacyRPCAccessPolicy:

from oslo_messaging.rpc import LegacyRPCAccessPolicy
...
server = get_rpc_server(transport, target, endpoints,
                        access_policy=LegacyRPCAccessPolicy)



Reply here if you have any questions or hit any issues, thanks!

-K

[0] https://review.openstack.org/#/c/500456/
[1] https://bugs.launchpad.net/oslo.messaging/+bug/1194279
[2] https://bugs.launchpad.net/oslo.messaging/+bug/1555845
[3] https://review.openstack.org/#/c/358359/
[4] https://docs.openstack.org/oslo.messaging/latest/reference/server.html
-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Jonathan Proulx
On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:

:OpenStack is big. Big enough that a user will likely be fine with learning
:a new set of tools to manage it.

New users in the startup sense of new, probably.

People with entrenched environments, I doubt it.

But OpenStack is big. Big enough I think all the major config systems
are fairly well represented, so whether I'm right or wrong this
doesn't seem like an issue to me :)

Having common targets (constellations, reference architectures,
whatever) so all the config systems build the same things (or a subset
or superset of the same things) seems like it would have benefits all
around.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Pike Retrospective & Status reporting

2017-09-26 Thread Giulio Fidente

On 09/26/2017 08:55 PM, Alex Schultz wrote:

On Mon, Sep 18, 2017 at 12:50 PM, Alex Schultz  wrote:

Hey folks,

We started off our PTG with a retrospective for Pike. The output of
which can be viewed here[0][1].

One of the recurring themes from the retrospective and the PTG was the
need for better communication during the cycle.  One of the ideas that
was mentioned was adding a section to the weekly meeting calling for
current status from the various tripleo squads[2].  Starting next week
(Sept 26th), I would like for folks who are members of one of the
squads be able to provide a brief status or a link to the current
status during the weekly meeting.  There will be a spot added to the
agenda to do a status roll call.


I forgot to do this during the meeting[0] this week. I will make sure
to add it for the meeting next week.  Please remember to have a person
prepare a squad status for next time.

As a reminder for those who didn't want to click the link, the listed
squads are:
ci
ui/cli
upgrade
validations
workflows
containers
networking
integration
python3

Thanks,
-Alex

[0] 
http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-09-26-14.00.html

great, thanks!

I think it will also help get more attention/feedback/reviews on the
various efforts.

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Jay Pipes

On 09/26/2017 02:04 AM, Blair Bethwaite wrote:

I've been watching this thread and I think we've already seen an
excellent and uncontroversial suggestion towards simplifying initial
deployment of OS - that was to push towards encoding Constellations
into the deployment and/or config management projects.


ack. +1 to this. I supported the constellation concept when it was 
originally proposed in the TC vision draft thing.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Dragonflow] Virtual PTG

2017-09-26 Thread Omer Anson
Hello, all.

Since the Dragonflow team didn't hold any meetings during the last PTG, we
thought of holding a virtual PTG with video streaming where others can join
remotely.

The dates are set for the 18th-19th October (2017, for those of us from the
future). The times are flexible for now. The schedule is being constructed
here[1]. So if a specific topic interests you, we can navigate the schedule
so that it will be during relatively comfortable hours. Feel free to add
topics, suggestions, requests, etc.

The technical specifics on how to connect will be sent later.

[1] https://etherpad.openstack.org/p/dragonflow-queens

Thanks,
Dragonflow team.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Clint Byrum
Excerpts from Samuel Cassiba's message of 2017-09-25 17:27:25 -0700:
> 
> > On Sep 25, 2017, at 16:52, Clint Byrum  wrote:
> > 
> > Excerpts from Jonathan D. Proulx's message of 2017-09-25 11:18:51 -0400:
> >> On Sat, Sep 23, 2017 at 12:05:38AM -0700, Adam Lawson wrote:
> >> 
> >> :Lastly, I do think GUI's make deployments easier and because of that, I
> >> :feel they're critical. There is more than one vendor whose built and
> >> :distributes a free GUI to ease OpenStack deployment and management. That's
> >> :a good start but those are the opinions of a specific vendor - not he OS
> >> :community. I have always been a big believer in a default cloud
> >> :configuration to ease the shock of having so many options for everything. 
> >> I
> >> :have a feeling however our commercial community will struggle with
> >> :accepting any method/project other than their own as being part a default
> >> :config. That will be a tough one to crack.
> >> 
> >> :Different people have different needs, so this is not meant to
> >> contradict Adam.
> >> 
> >> But :)
> >> 
> >> Any unique deployment tool would be of no value to me as OpenStack (or
> >> anyother infrastructure component) needs to fit into my environment.
> >> I'm not going to adopt something new that requires a new parallel
> >> management tool to what I use.
> >> 
> > 
> > You already have that if you run OpenStack.
> > 
> > The majority of development testing and gate testing happens via
> > Devstack. A parallel management tool to what most people use to actually
> > operate OpenStack.
> > 
> >> I think focusing on the existing configuration management projects is
> >> the way to go. Getting Ansible/Puppet/Chef/etc. to support a
> >> well-known set of "constellations" in an opinionated way would make
> >> deployment easy (for most people who are using one of those already)
> >> and, assuming the opinions are the same :), make consumption easier as
> >> well.
> >>
> >> As an example, when I started using OpenStack (Essex) we had recently
> >> switched to Ubuntu as our Linux platform and Puppet as our config
> >> management. Ubuntu had a "one click MAAS install of OpenStack" which
> >> was impossible as it made all sorts of assumptions about our
> >> environment and wanted control of most of them so it could provide a
> >> full deployment solution.  Puppet had a good integrated example config
> >> where I plugged in some local choices and used existing deploy
> >> methodologies.
> >>
> >> I fought with MAAS's "simple" install for a week.  When I gave up and
> >> went with Puppet I had live users on a substantial (for the time)
> >> cloud in less than 2 days.
> >>
> >> I don't think this has to do with the relative value of MAAS and
> >> Puppet at the time, but rather what fit my existing deploy workflows.
> >> 
> >> Supporting multiple config tools may not be simple from an upstream
> >> perspective, but we do already have these projects and it is simpler
> >> to consume for brown field deployers at least.
> >> 
> > 
> > I don't think anybody is saying we would slam the door in the face of
> > people who use any one set of tools.
> > 
> > But rather, we'd start promoting and using a single solution for the bulk
> > of community efforts. Right now we do that with devstack as a reference
> > implementation that nobody should use for anything but dev/test. But
> > it would seem like a good idea for us to promote a tool for going live
> > as well.
> 
> Except by that very statement, you slam the door in the face of tons
> of existing knowledge within organizations. This slope has a sheer face.
>
> Promoting a single solution would do as much harm as it would good, for
> all it’s worth. In such a scenario, the most advocated method would become the only
> understood method, in spite of all other deployment efforts. Each project that
> did not have the most mindshare would become more irrelevant than they are now
> and further slip into decay. For those that did not have the fortune or
> foresight to land on this hypothetical winning side, what for their efforts,
> evolve or gtfo?
> 
> I'm not saying Fuel or Salt or Chef or Puppet or Ansible needs to be the
> 'winner', because there isn't a competition, at least in my opinion. The way I
> see it, we're all working to get to the same place. Our downstream consumers
> don’t really care how that happens in the grand scheme, only that it does.
> 

I definitely think you're right, those that aren't chosen will be
relegated to the 'contrib' section and see less and less attention.

But they're already there. Everybody is already there. The suggestion
is that we shine a light on one so developers can at least have a more
realistic target to hit that they can have a conversation with users
about.

While I'm glad you have spent your time on Chef, I don't think it's the
best use of the community's time to learn all the tools. It is, however,
in the user's best interest that they have common ground.

Re: [openstack-dev] [TripleO][DIB] how create triplo overcloud image with latest kernel?

2017-09-26 Thread Ben Nemec



On 09/26/2017 05:43 AM, Moshe Levi wrote:

Hi all,

As part of the OVS Hardware Offload [1] [2], we need to create a new
CentOS/RHEL 7 image with the latest kernel/ovs/iproute.

We tried to use virt-customize to install the packages and we were able
to update iproute and ovs, but for the kernel there is not enough space.

We also tried with virt-customize to uninstall the old kernel, but had no
luck.

Are there other ways to replace the kernel package in an existing image?


Do you have to use an existing image?  The easiest way to do this would 
be to create a DIB element that installs what you want and just include 
that in the image build in the first place.  I don't think that would be 
too difficult to do now that we're keeping the image definitions in 
simple YAML files.




[1] - https://review.openstack.org/#/c/504911/
[2] - https://review.openstack.org/#/c/502313/





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Pike Retrospective & Status reporting

2017-09-26 Thread Alex Schultz
On Mon, Sep 18, 2017 at 12:50 PM, Alex Schultz  wrote:
> Hey folks,
>
> We started off our PTG with a retrospective for Pike. The output of
> which can be viewed here[0][1].
>
> One of the recurring themes from the retrospective and the PTG was the
> need for better communication during the cycle.  One of the ideas that
> was mentioned was adding a section to the weekly meeting calling for
> current status from the various tripleo squads[2].  Starting next week
> (Sept 26th), I would like for folks who are members of one of the
> squads be able to provide a brief status or a link to the current
> status during the weekly meeting.  There will be a spot added to the
> agenda to do a status roll call.

I forgot to do this during the meeting[0] this week. I will make sure
to add it for the meeting next week.  Please remember to have a person
prepare a squad status for next time.

As a reminder for those who didn't want to click the link, the listed
squads are:
ci
ui/cli
upgrade
validations
workflows
containers
networking
integration
python3

Thanks,
-Alex

[0] 
http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-09-26-14.00.html

> It was mentioned that folks may
> prefer to send a message to the ML and just be able to link to it
> similar to what the CI squad currently does[3].  We'll give this a few
> weeks and review how it works.
>
> Additionally it might be a good time to re-evaluate the squad
> breakdown as currently defined. I'm not sure we have anyone working on
> python3 items.
>
> Thanks,
> -Alex
>
> [0] http://people.redhat.com/aschultz/denver-ptg/tripleo-ptg-retro.jpg
> [1] https://etherpad.openstack.org/p/tripleo-ptg-queens-pike-retrospective
> [2] 
> https://github.com/openstack/tripleo-specs/blob/master/specs/policy/squads.rst#squads
> [3] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-September/121881.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] patches for simple typo fixes

2017-09-26 Thread Jay S Bryant



On 9/25/2017 7:24 AM, Sean Dague wrote:

On 09/25/2017 07:56 AM, Chris Dent wrote:

On Fri, 22 Sep 2017, Paul Belanger wrote:


This is not a good example of encouraging anybody to contribute to the
project.

Yes. This entire thread was a bit disturbing to read. Yes, I totally
agree that mass patches that do very little are a big cost to
reviewer and CI time but a lot of the responses sound like: "go away
you people who don't understand our special culture and our
important work".

That's not a good look.

Matt's original comment is good in and of itself: I saw a thing,
let's remember to curtail this stuff and do it in a nice way.

But then we generate a long thread about it. It's odd to me that
these threads sometimes draw more people out then discussions about
actually improving the projects.

It's also odd that if OpenStack were small and differently
structured, any self-respecting maintainer would be happy to see
a few typo fixes and generic cleanups. Anything to push the quality
forward is nice. But because of the way we do review and because of
the way we do CI these things are seen as expensive distractions[1].
We're old and entrenched enough now that our tooling enforces our
culture and our culture enforces our tooling.

[1] Note that I'm not denying they are expensive distractions nor
that they need to be managed as such. They are, but a lot of that
is on us.

I was trying to ignore the thread in the hopes it would die out quick.
But torches and pitchforks all came out from the far corners, so I'm
going to push back on that a bit.

I'm not super clear why there is always so much outrage about these
patches. They are fixing real things. When I encounter them, I just
approve them to get them merged quickly rather than backing up the review
queue and using more CI later if they need rebasing. Maybe there is a CI
cost, but the faster they are merged the less likely someone else is to
propose the same fix in the future, which keeps down the CI cost. And if
we have a culture of just fixing typos later, then we spend less CI time
the first time around on patches needing 2 or 3 extra iterations to catch
typos.
Thank you for saying what I failed to say in my most recent response.  I 
know some people don't care about typos, etc., but they are things that 
make us look like a lower-quality community.  It is stuff to fix, and I 
think we are wasting more resources on this discussion than just getting 
the patches through.

I think the concern is the ascribed motive for why people are putting
these up. That's fine to feel that people are stat padding (and that too
many things are driven off metrics). But, honestly, that's only
important if we make it important. Contributor stats are always going to
be pretty much junk stats. They are counting things to be the same which
are wildly variable in meaning (number of patches, number of Lines of
Code).

My personal view is just merge things that fix things that are wrong,
don't care why people are doing it. If it gets someone a discounted
ticket somewhere, so be it. It's really not any skin off our back in the
process.
+2  I am going to assume the voice of reason has been heard and not 
frustrate myself further with this thread.

If people are deeply concerned about CI resources, step one is to get
some better accounting into the existing system to see where resources
are currently spent, and how we could ensure that time is fairly spread
around to ensure maximum productivity by all developers.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Garbage patches for simple typo fixes

2017-09-26 Thread Jay S Bryant



On 9/23/2017 10:11 AM, Doug Hellmann wrote:

Excerpts from Huang Zhiteng's message of 2017-09-23 10:00:00 +0800:

On Sat, Sep 23, 2017 at 8:34 AM, Zhipeng Huang  wrote:

Hi Paul,

Unfortunately I know better on this matter, and it is not a matter of topic
dispute for the many people on this thread who have been disturbed and
annoyed by the padding/trolling.

So yes, I'm sticking with "stupid", because it hurts the OpenStack community
as a whole and hurts the reputation of the dev community from my country,
which by and large is made up of great people with good hearts and skills.

I'm not giving even an inch of the benefit of doubt to these padding
activities and people behind it.

Hi Zhipeng,

Not sure how much you have been involved in the dev community in
China, but it's now a good time to talk to those companies (in public
or private) and ask them to stop encouraging their developers to submit
such changes.

I would prefer to set up a system where we can have those sorts of
conversations in private, to encourage people to contribute
constructively instead of shaming them.

Doug

+2

This, in some cases, may be due to people trying to pad their numbers.  
Perhaps it is just people who do not yet know the best way to help out 
and want to do something.


I agree with Doug's comments about this needing to be done in private 
and with Ildiko's comments on providing mentoring.  This is something I 
will consider as I put together the on-boarding education for the Sydney 
Summit.


From a Cinder standpoint I have been trying to be inclusive and not 
block things unless they just appear to be blatantly pointless, trying to 
stay on the side of community inclusion.


Jay




On Sat, Sep 23, 2017 at 8:16 AM, Paul Belanger 
wrote:

On Fri, Sep 22, 2017 at 10:26:09AM +0800, Zhipeng Huang wrote:

Let's not forget the epic fail earlier on the "contribution.rst fix"
that almost melted down the community CI system.

For any companies that are doing what Matt mentioned, please be aware
that
the dev community of the country you belong to is getting hurt by your
stupid activity.

Stop patch trolling and do something meaningful.


Sorry, but I found this comment over the line. Just because you disagree
with
the $topic at hand, doesn't mean you should default to calling it
'stupid'. Give
somebody the benefit of not knowing any better.

This is not a good example of encouraging anybody to contribute to the
project.

-Paul


On Fri, Sep 22, 2017 at 10:21 AM, Matt Riedemann 
wrote:


I just wanted to highlight to people that there seems to be a series
of
garbage patches in various projects [1] which are basically doing
things
like fixing a single typo in a code comment, or very narrowly changing
http
to https in links within docs.

Also +1ing ones own changes.

I've been trying to snuff these out in nova, but I see it's basically
a
pattern widespread across several projects.

This is the boilerplate comment I give with my -1, feel free to employ
it
yourself.

"Sorry but this isn't really a useful change. Fixing typos in code
comments when the context is still clear doesn't really help us, and
mostly
seems like looking for padding stats on stackalytics. It's also a
drain on
our CI environment.

If you fixed all of the typos in a single module, or in user-facing
documentation, or error messages, or something in the logs, or
something
that actually doesn't make sense in code comments, then maybe, but
this
isn't one of those things."

I'm not trying to be a jerk here, but this is annoying to the point I
felt
the need to say something publicly.

[1] https://review.openstack.org/#/q/author:%255E.*inspur.*

--

Thanks,

Matt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Zhipeng (Howard) Huang

Standard 

[openstack-dev] [neutron] Rotating bugs deputy role

2017-09-26 Thread Miguel Lavalle
Hi Neutrinos,

As discussed during the Denver PTG, we have experienced difficulty in our
IRC meetings getting volunteers for the bugs deputy role, which involves
taking responsibility for triaging bug reports for one week (please see
https://docs.openstack.org/neutron/latest/contributor/policies/bugs.html#neutron-bug-deputy)
. As a consequence, we decided in the PTG to create a rotation roster for
this role. In other words, the people in the roster will take one week
turns playing the role of Neutron bugs deputy. The more volunteers we have,
the less frequently each one of us will be fulfilling this duty. So, please
help the Neutron community and add your name to this etherpad:
https://etherpad.openstack.org/p/neutron-rotating-bug-deputy-volunteers.
Based on the names in the etherpad, I will propose a roster over the next
few days in the "Bugs deputy" section of the Networking meeting wiki page (
https://wiki.openstack.org/wiki/Network/Meetings)

Best regards

Miguel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] patches for simple typo fixes

2017-09-26 Thread Joshua Harlow

Sean Dague wrote:

I think the concern is the ascribed motive for why people are putting
these up. That's fine to feel that people are stat padding (and that too
many things are driven off metrics). But, honestly, that's only
important if we make it important. Contributor stats are always going to
be pretty much junk stats. They are counting things to be the same which
are wildly variable in meaning (number of patches, number of Lines of
Code).


If this is a real thing (which I don't know if it is, but I could 
believe that it is) due to management or others connecting those stats to 
involvement (and likely, at some point, $$), why don't we just turn off 
http://stackalytics.com/, or make it require a launchpad login (to make it a 
little harder to access), or put a big warning banner on it that says 
these stats are not representative of much of anything...


The hard part is that it's not us as a community who decide whether it's 
'important if we make it important', because such motives are not directly 
associated with contributors but instead may or may not be connected to the 
management of said contributors (and management of the contributors that are 
paid to work on openstack has always been in the background somewhere, like a 
ghost...).


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] haproxy fails to receive datagram

2017-09-26 Thread Yipei Niu
Hi, Michael,

I think the Octavia code is the latest, since I pulled the up-to-date
Octavia repo manually to my server before installation.

Anyway, I run "sudo ip netns exec amphora-haproxy ip route show table 1" in
the amphora, and find that the route table exists. The info is listed as
follows.

default via 10.0.1.1 dev eth1 onlink

So I think it may not be the source of the problem.

Best regards,
Yipei
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] api.fault notification is never emitted

2017-09-26 Thread Matt Riedemann
Cross-posting to the operators list since they are the ones that would 
care about this.


Basically, the "notify_on_api_faults" config option hasn't worked since 
probably Kilo when the 2.1 microversion wsgi stack code was added.


Rackspace added it back in 2012:

https://review.openstack.org/#/c/13288/

Gibi has a patch proposed to remove it since it's dead code:

https://review.openstack.org/#/c/505164/

Given how long this has been regressed, and no one has noticed, it seems 
fair to just remove this.


Is anyone relying on this, and does anyone for some reason disagree with 
the bug and think we should try to fix this instead?
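
For operators who want a quick check of whether they even have this turned 
on anywhere, something like this should do (a rough sketch; the config path 
assumes a stock packaged layout):

  grep -rn 'notify_on_api_faults' /etc/nova/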


On 6/20/2017 7:22 AM, Balazs Gibizer wrote:

Hi,

I came across some questionable behavior of nova while I tried to use the 
notify_on_api_faults configuration option [0] while testing the related 
versioned notification transformation patch [1]. Based on the 
description of the config option and the code that uses it [2], nova 
sends an api.fault notification if the nova-api service encounters an 
unhandled exception. There is a FaultWrapper class [3] added to the 
pipeline of the REST request which catches every exception and triggers the 
notification sending.
Based on some debugging in devstack this FaultWrapper never catches any 
exception. I injected a ValueError to the beginning of 
nova.objects.aggregate.Aggregate.create method. This resulted in an 
HTTPInternalServerError exception and HTTP 500 error code but the 
exception handling part of the FaultWrapper [4] was never reached. So I 
dug a bit deeper and I think I found the reason. Every REST API method 
is decorated with the expected_errors decorator [5] which, as a last resort, 
translates the unexpected exception to HTTPInternalServerError. In the 
wsgi stack the actual REST api call is guarded with 
ResourceExceptionHandler context manager [7] which translates 
HTTPException to a Fault [8]. Then Fault is catched and translated to 
the REST response [7]. This way the exception never propagates back to 
the FaultWrapper in [6] and therefore the api.fault notification is 
never emitted.


You can see the api logs here [9] and the patch that I used to add the 
extra traces here [10]. Please note that there is a compute.exception 
notification visible in the log but that is a different notification 
emitted from wrap_exception decorator [11] used in compute.manager [12] 
and compute.api [13] only.


So my questions are:
1) Is it a bug in the nova wsgi or is it expected that the wsgi code 
catches everything?
2) Do we need FaultWrapper at all if the wsgi stack catches every 
exception?
3) Do we need api.fault notification at all? It seems nobody missed it 
so far.
4) If we want to have api.fault notification then what would be the good 
place to emit it? Maybe ResourceExceptionHandler at [8]?


I filed a bug for tracking purposes [14].

Cheers,
gibi


[0] 
https://github.com/openstack/nova/blob/e66e5822abf0e9f933cf6bd1b4c63007b170/nova/conf/notifications.py#L49 


[1] https://review.openstack.org/#/c/469038
[2] 
https://github.com/openstack/nova/blob/d68626595ed54698c7eb013a788ee3b98e068cdd/nova/notifications/base.py#L83 

[3] 
https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/__init__.py#L79 

[4] 
https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/__init__.py#L87 

[5] 
https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/extensions.py#L325 

[6] 
https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/extensions.py#L368 

[7] 
https://github.com/openstack/nova/blob/4a0fb6ae79acedabf134086d4dce6aae0e4f6209/nova/api/openstack/wsgi.py#L637 

[8] 
https://github.com/openstack/nova/blob/4a0fb6ae79acedabf134086d4dce6aae0e4f6209/nova/api/openstack/wsgi.py#L418 


[9] https://pastebin.com/Eu6rBjNN
[10] https://pastebin.com/en4aFutc
[11] 
https://github.com/openstack/nova/blob/master/nova/exception_wrapper.py#L57
[12] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L105

[13] https://github.com/openstack/nova/blob/master/nova/compute/api.py#L92
[14] https://bugs.launchpad.net/nova/+bug/1699115


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Newton End-Of-Life (EOL) next month (reminder #1)

2017-09-26 Thread Emilien Macchi
Newton is officially EOL next month:
https://releases.openstack.org/index.html#release-series

As an action from our weekly meeting, we decided to accelerate the
reviews for stable/newton before it's too late.
This email is a reminder and a last reminder will be sent out before
we EOL for real.

If you need any help to get backport merged, please raise it here or
ask on IRC as usual.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disk Image Builder for redhat 7.4

2017-09-26 Thread Amit Singla
Hi,

Could you tell me how I can create a qcow2 image for RHEL 7.4 with
diskimage-builder? I also want to install Oracle 12.2 on that image with
DIB. Is that possible?
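
To give an idea of what I'm after, is something along these lines the right
direction (element names taken from the diskimage-builder docs; the Oracle
install would presumably need a custom element, so this is just a sketch)?

  export DIB_LOCAL_IMAGE=rhel-guest-image-7.4.x86_64.qcow2
  disk-image-create -o rhel74.qcow2 rhel7 vm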


Regards,
Amit Singla
Cont. - 09990130499
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-26 Thread Michał Jastrzębski
On 26 September 2017 at 07:34, Attila Fazekas  wrote:
> Decompressing those registry tar.gz files takes ~0.5 min on a 2.2 GHz CPU.
>
> Fully pulling all containers takes something like ~4.5 min (from localhost,
> one leaf request at a time),
> but on the gate vm we usually have 4 cores,
> so it is possible to go below 2 min with a better pulling strategy,
> unless we hit some disk limit.

Check your 'docker info' output. If you kept the defaults, the storage driver
will be devicemapper on loopback, which is awfully slow and not very reliable.
Overlay2 is much better and should speed things up quite a bit. For me
deployment of 5 node openstack on vms similar to gate took 6min (I had
registry available in the same network). Also, if you pull a single image it
will download all the base images as well, so the next one will be
significantly faster.
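
For reference, switching is simple (a minimal sketch, assuming a reasonably
recent Docker and a kernel with overlay support; note that changing the
storage driver makes previously pulled images invisible until re-pulled):

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
docker info | grep -i 'storage driver'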

>
> On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski 
> wrote:
>>
>> On 22 September 2017 at 17:21, Paul Belanger 
>> wrote:
>> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
>> >> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> >> > "if DevStack gets custom images prepped to make its jobs
>> >> > run faster, won't Triple-O, Kolla, et cetera want the same and where
>> >> > do we draw that line?). "
>> >> >
>> >> > IMHO we can try to have only one big image per distribution,
>> >> > where the packages are the union of the packages requested by all
>> >> > team,
>> >> > minus the packages blacklisted by any team.
>> >> [...]
>> >>
>> >> Until you realize that some projects want packages from UCA, from
>> >> RDO, from EPEL, from third-party package repositories. Version
>> >> conflicts mean they'll still spend time uninstalling the versions
>> >> they don't want and downloading/installing the ones they do so we
>> >> have to optimize for one particular set and make the rest
>> >> second-class citizens in that scenario.
>> >>
>> >> Also, preinstalling packages means we _don't_ test that projects
>> >> actually properly declare their system-level dependencies any
>> >> longer. I don't know if anyone's concerned about that currently, but
>> >> it used to be the case that we'd regularly add/break the package
>> >> dependency declarations in DevStack because of running on images
>> >> where the things it expected were preinstalled.
>> >> --
>> >> Jeremy Stanley
>> >
>> > +1
>> >
>> > We spend a lot of effort trying to keep the 6 images we have in nodepool
>> > working
>> > today, I can't imagine how much work it would be to start adding more
>> > images per
>> > project.
>> >
>> > Personally, I'd like to audit things again once we roll out zuulv3, I am
>> > sure
>> > there are some tweaks we could make to help speed up things.
>>
>> I don't understand, why would you add images per project? We have all
>> the images there.. What I'm talking about is to leverage what we'll
>> have soon (registry) to lower time of gates/DIB infra requirements
>> (DIB would hardly need to refresh images...)
>>
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] patches for simple typo fixes

2017-09-26 Thread Amrith Kumar
Sean,

Each fix is a valid change, in and of itself. But what kind of lazy person
would fix (in three patches) the exact same thing, in three places in a
single project?

Or which person would submit the exact same kind of patch multiple times to
change URLs from http:// to https:// in places which are (literally)
comments in the code?

Or submit multiple changes to fix something that is a Python stylistic
thing, not enforced by pep8 or any project-wide style checks?
Typically, when I see changes like this, I ask the submitter to make a
corresponding change to the accepted tests (enable an otherwise disabled
change, and show that the tests pass for the whole project). Well, that's
real work and the change gets abandoned.

Those are the kinds of (for lack of a PC word) bad behavior that I think we
should, as a community, reject.




-amrith


On Mon, Sep 25, 2017 at 8:24 AM, Sean Dague  wrote:

> On 09/25/2017 07:56 AM, Chris Dent wrote:
> > On Fri, 22 Sep 2017, Paul Belanger wrote:
> >
> >> This is not a good example of encouraging anybody to contribute to the
> >> project.
> >
> > Yes. This entire thread was a bit disturbing to read. Yes, I totally
> > agree that mass patches that do very little are a big cost to
> > reviewer and CI time but a lot of the responses sound like: "go away
> > you people who don't understand our special culture and our
> > important work".
> >
> > That's not a good look.
> >
> > Matt's original comment is good in and of itself: I saw a thing,
> > let's remember to curtail this stuff and do it in a nice way.
> >
> > But then we generate a long thread about it. It's odd to me that
> > these threads sometimes draw more people out then discussions about
> > actually improving the projects.
> >
> > It's also odd that if OpenStack were small and differently
> > structured, any self-respecting maintainer would be happy to see
> > a few typo fixes and generic cleanups. Anything to push the quality
> > forward is nice. But because of the way we do review and because of
> > the way we do CI these things are seen as expensive distractions[1].
> > We're old and entrenched enough now that our tooling enforces our
> > culture and our culture enforces our tooling.
> >
> > [1] Note that I'm not denying they are expensive distractions nor
> > that they need to be managed as such. They are, but a lot of that
> > is on us.
>
> I was trying to ignore the thread in the hopes it would die out quick.
> But torches and pitchforks all came out from the far corners, so I'm
> going to push back on that a bit.
>
> I'm not super clear why there is always so much outrage about these
> patches. They are fixing real things. When I encounter them, I just
> approve them to get them merged quickly rather than backing up the review
> queue, using more CI later if they need rebasing. They are fixing real
> things. Maybe there is a CI cost, but the faster they are merged the
> less likely someone else is to propose it in the future, which keeps
> down the CI cost. And if we have a culture of just fixing typos later,
> then we spend less CI time on patches the first time around with 2 or 3
> iterations catching typos.
>
> I think the concern is the ascribed motive for why people are putting
> these up. That's fine to feel that people are stat padding (and that too
> many things are driven off metrics). But, honestly, that's only
> important if we make it important. Contributor stats are always going to
> be pretty much junk stats. They are counting things to be the same which
> are wildly variable in meaning (number of patches, number of Lines of
> Code).
>
> My personal view is just merge things that fix things that are wrong,
> don't care why people are doing it. If it gets someone a discounted
> ticket somewhere, so be it. It's really not any skin off our back in the
> process.
>
> If people are deeply concerned about CI resources, step one is to get
> some better accounting into the existing system to see where resources
> are currently spent, and how we could ensure that time is fairly spread
> around to ensure maximum productivity by all developers.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] patches for simple typo fixes

2017-09-26 Thread Amrith Kumar
Sean, I quantified it in 2016 for some of the patches that came in to
Trove: approximately 130 hours of CI time per patch, given the number of
hacks the person attempted before even getting pep8 to run.

Close to a release boundary, that had very bad results on an already
fragile Trove gate.


-amrith


On Mon, Sep 25, 2017 at 9:42 AM, Sean Dague  wrote:

> On 09/25/2017 09:28 AM, Doug Hellmann wrote:
> 
> > I'm less concerned with the motivation of someone submitting the
> > patches than I am with their effect. Just like the situation we had
> > with the bug squash days a year or so ago, if we had a poorly timed
> > set of these trivial patches coming in at our feature freeze deadline,
> > it would be extremely disruptive. So to me the fact that we're
> > seeing them in large batches means we have people who are not fully
> > engaged with the community and don't understand the impact they're
> > having. My goal is to reach out and try to improve that engagement,
> > and try to help them become more fully constructive contributors.
>
> I think that quantifying how big that impact is would be good before
> deciding it needs to be a priority to act upon. There are lots of things
> that currently swamp our system, and on my back of the envelope math and
> spot checking on resources used, these really aren't a big concern.
>
> But it's harder to see that until we really start accounting for CI time
> by project / person, and what kinds of things really do consume the system.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Samuel Cassiba

> On Sep 25, 2017, at 22:44, Adam Lawson  wrote:
> 
> Hey Jay,
> I think a GUI with a default config is a good start. Much would need to 
> happen to enable that of course but that's where my mind goes. Any talk about 
> 'default' kind of infringes on what we've all strived to embrace; a cloud 
> architecture without baked-in assumptions. A default-anything need not mean 
> other options are not available - only that a default gets them started. I 
> would never ever agree to a default that consists of KVM+Contrail+NetApp. 
> Something neutral would be great- easier said than done of course.
> 
> Samuel,
> Default configuration as I envision it != "Promoting a single solution". I 
> really hope a working default install would allow new users to get started 
> with OpenStack without promoting anything. OpenStack lacking a default install 
> results in an unfriendly deployment exercise. I know for a fact the entire 
> community at webhostingtalk.com ignores OS for the most part because of how 
> hard it is to deploy. They use Fuel or other third-party solutions because we 
> as an OS community continue to fail to acknowledge the importance of ease 
> of implementation. Imagine thousands of hosting providers deploying OpenStack 
> because we made it easy. That is money in the bank IMHO. I totally get the 
> thinking about avoiding the term default for the reasons you provided but 
> giving users a starting point does not necessarily mean we're trying to get 
> them to adopt that as their final design. Giving them a starting point must 
> take precedence over not giving them any starting point.
> 

I’ll pick on my own second job for a moment, Chef. We have an amazing single 
node deployment strategy, and we have a so-so multinode deployment strategy for 
the simple fact that the orchestration story for every configuration management 
flavor equates to a dumpster fire in the middle of a tire fire. Let me be clear 
up front: I say ‘we’ a lot, but in many cases, the ‘we’ comes down to really 
just me. Not to discredit my teammates, I sleep a _lot_ less.

I've said it in the past, but Chef consists of nothing but part-timers with much 
more pressing issues at $dayJob[0]. If the README.md doesn’t get updated, it’s 
because none of us have the time to dedicate to evangelism. We talked about 
spreading the word back when we were still having IRC meetings, but it all 
boiled down to E_NOTIME.

As time has gone on, the roles in the Chef OpenStack project have been shifting 
from facilitator to something more like circus barker. It’s coming down to almost begging 
people for feedback, if we can find them. What I can do is provide a means to 
get to OpenStack about 80-90% of the way, provided the consumer can grok the 
tooling, key phrase. That said, we don’t teach people to use Chef, merely how 
one might OpenStack with it should they choose to kick the tires. The problem 
is, those potential downstream consumers, for some reason or other, don’t file 
bugs or even communicate back with the maintainers to get an idea if their 
problem would/could be addressed. They just move on, sight unseen and a bit 
grumpier. I can’t change that by doing more work.

If I shift gears to working on an installation method abstracted behind a GUI, 
am I now expected to bring in bits of Xorg simply so I can run that installer 
from my remote systems? Are your security people okay with Xorg on servers? 
Will the bootstrapping now take place entirely from a laptop/workstation, 
outright ignoring existing development workflows and pipelines? Who’s writing 
this code? Is there a GitHub repo where I can start testing this pièce de 
résistance?

If you’ll excuse the morning snark and “poisonous” words, as you put it a few 
days ago, I don’t necessarily see how bundling the install process into a 
graphical installer would help. If anything, it might prove more distraction 
than it’s worth because now there have to be graphical installer experts within 
whatever team(s) may be doing this effort.

Maybe it’s because I’ve been using Chef, the tool, for as long as I have, but 
it isn’t exactly a mash of random, disparate tooling that we’re using over 
here. We use community-standard tooling bundled in the ChefDK for the basic 
building blocks, even to our detriment at times. For integration testing, we 
used chef-provisioning until it rotted away, now being replaced by test-kitchen 
and InSpec. If anything, we were the ones lagging behind because we number so 
few and are beholden to E_NOTIME. Is there a knowledge barrier to entry? Sure 
is, and you do have to be this tall to ride. Those that do find the IRC channel 
and stick around long enough for one of us to respond generally get the 
assistance they need, but we’re not omnipresent.

As an operator in the deployment space, my whole point of contributing back is 
to make things less complicated. As PTL, my job isn’t to make Chef, the 
software, less complicated, but how to get to 

Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-26 Thread Attila Fazekas
Decompressing those registry tar.gz files takes ~0.5 min on a 2.2 GHz CPU.

Fully pulling all containers takes something like ~4.5 min (from localhost,
one leaf request at a time),
but on the gate vm we usually have 4 cores,
so it is possible to go below 2 min with a better pulling strategy,
unless we hit some disk limit.
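
To sketch what I mean by a better pulling strategy (the image list file and
the parallelism of 4 are just examples):

# images.txt holds one image reference per line
xargs -P 4 -n 1 docker pull < images.txt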


On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski 
wrote:

> On 22 September 2017 at 17:21, Paul Belanger 
> wrote:
> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> >> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> >> > "if DevStack gets custom images prepped to make its jobs
> >> > run faster, won't Triple-O, Kolla, et cetera want the same and where
> >> > do we draw that line?). "
> >> >
> >> > IMHO we can try to have only one big image per distribution,
> >> > where the packages are the union of the packages requested by all
> team,
> >> > minus the packages blacklisted by any team.
> >> [...]
> >>
> >> Until you realize that some projects want packages from UCA, from
> >> RDO, from EPEL, from third-party package repositories. Version
> >> conflicts mean they'll still spend time uninstalling the versions
> >> they don't want and downloading/installing the ones they do so we
> >> have to optimize for one particular set and make the rest
> >> second-class citizens in that scenario.
> >>
> >> Also, preinstalling packages means we _don't_ test that projects
> >> actually properly declare their system-level dependencies any
> >> longer. I don't know if anyone's concerned about that currently, but
> >> it used to be the case that we'd regularly add/break the package
> >> dependency declarations in DevStack because of running on images
> >> where the things it expected were preinstalled.
> >> --
> >> Jeremy Stanley
> >
> > +1
> >
> > We spend a lot of effort trying to keep the 6 images we have in nodepool
> working
> > today, I can't imagine how much work it would be to start adding more
> images per
> > project.
> >
> > Personally, I'd like to audit things again once we roll out zuulv3, I am
> sure
> > there are some tweaks we could make to help speed up things.
>
> I don't understand, why would you add images per project? We have all
> the images there.. What I'm talking about is to leverage what we'll
> have soon (registry) to lower time of gates/DIB infra requirements
> (DIB would hardly need to refresh images...)
>
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Forum Reminder! Submissions due by Sept 29th

2017-09-26 Thread Thierry Carrez
Miguel Lavalle wrote:
> Is this the link we are supposed to use:
> http://forumtopics.openstack.org/cfp/create

Yes.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Forum Reminder! Submissions due by Sept 29th

2017-09-26 Thread Miguel Lavalle
Jimmy,

Is http://forumtopics.openstack.org/cfp/create the link we are supposed to
use? http://odsreg.openstack.org/ returns an internal server error.

Cheers

On Mon, Sep 25, 2017 at 8:25 AM, Jimmy McArthur  wrote:

> Hello!
>
> This is a friendly reminder that all proposed Forum session leaders must
> submit their abstracts at:
>
> http://odsreg.openstack.org
>
> before 11:59PM UTC on Friday, September 29th!
>
> Cheers,
> Jimmy
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][DIB] how to create a tripleo overcloud image with the latest kernel?

2017-09-26 Thread Jan Gutter
I've used the following trick in the past to enlarge the images:

qemu-img resize ${OVERCLOUD_IMG} +3G
LIBGUESTFS_BACKEND=direct virt-customize -m 2048 -a
${OVERCLOUD_IMG} --run-command 'xfs_growfs /dev/sda'

Please also double-check the filesystem: xfs_growfs only works on XFS, and
some overcloud images use ext4, for example (resize2fs would be the
equivalent there). After modifying the image, I then run virt-sparsify
to shrink them a bit.
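
With the extra space, the kernel swap from the original question could then
look roughly like this (a sketch, assuming a CentOS 7 guest image; the
package choice is illustrative):

LIBGUESTFS_BACKEND=direct virt-customize -m 2048 -a ${OVERCLOUD_IMG} \
    --install kernel \
    --run-command 'grub2-set-default 0' \
    --selinux-relabel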

On Tue, Sep 26, 2017 at 12:43 PM, Moshe Levi  wrote:
> Hi all,
>
>
>
> As part of the OVS Hardware Offload [1] [2],  we need to create new
> Centos/Redhat 7 image  with latest kernel/ovs/iproute.
>
> We tried to use virt-customize to install the packages and we were able to
> update iproute and ovs, but for the kernel there is no space.
>
> We also tried with virt-customize to uninstall the old kernel, but we had no
> luck.
>
> Are there other ways to replace the kernel package in an existing image?
>
>
>
>
>
>
>
> [1] - https://review.openstack.org/#/c/504911/
>
> [2] - https://review.openstack.org/#/c/502313/
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Jan Gutter
Embedded Networking Software Engineer

Netronome | First Floor Suite 1, Block A, Southdowns Ridge Office Park,
Cnr Nellmapius and John Vorster St, Irene, Pretoria, 0157
Phone: +27 (12) 665-4427 | Skype: jangutter |  www.netronome.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [Senlin] Senlin Queens Meetup

2017-09-26 Thread liu.xuefeng1
I will join.

If the time is changed to Oct 14th, 21st or 22nd, that is also OK with me :)









Original Message



From: 
To: 
Date: 2017-09-19 22:06
Subject: [openstack-dev] [Senlin] Senlin Queens Meetup






Hi all,
We are going to have a meetup to discuss the features and some other 
details about Senlin in Oct.
Tentative schedule:
Date: 15th Oct.
Location: Beijing, CHN


Please leave your comments if you have any suggestions or have a conflict with 
the date.

Sincerely,
ruijie
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vGPUs support for Nova

2017-09-26 Thread Sahid Orentino Ferdjaoui
On Mon, Sep 25, 2017 at 04:59:04PM +, Jianghua Wang wrote:
> Sahid,
> 
> Just share some background. XenServer doesn't expose vGPUs as mdev
> or pci devices.

That does not make any sense. There is a physical device (PCI) which
provides functions (vGPUs). These functions are exposed through the mdev
framework. What you need is the mdev UUID related to a specific vGPU,
and I'm sure that XenServer is going to expose it. Something which
XenServer may not expose is the NUMA node the physical device is
plugged into, but in such a situation you could still use sysfs.
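
For illustration, on a libvirt/KVM host the mdev framework exposes roughly
the following through sysfs (the PCI address and type name are made-up
examples per the kernel vfio-mediated-device docs; XenServer would need to
expose something equivalent):

ls /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types/
cat /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types/nvidia-35/available_instances
echo "$(uuidgen)" > /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types/nvidia-35/create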

> I proposed a spec about one year ago to make fake pci devices so
> that we can use the existing PCI mechanism to cover vGPUs. But
> that's not a good design and got strong objections. After that, we
> switched to use the resource providers by following the advice from
> the core team.
>
> Regards,
> Jianghua
> 
> -Original Message-
> From: Sahid Orentino Ferdjaoui [mailto:sferd...@redhat.com] 
> Sent: Monday, September 25, 2017 11:01 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] vGPUs support for Nova
> 
> On Mon, Sep 25, 2017 at 09:29:25AM -0500, Matt Riedemann wrote:
> > On 9/25/2017 5:40 AM, Jay Pipes wrote:
> > > On 09/25/2017 05:39 AM, Sahid Orentino Ferdjaoui wrote:
> > > > There is a desire to expose the vGPUs resources on top of Resource 
> > > > Provider which is probably the path we should be going in the long 
> > > > term. I was not there for the last PTG and you probably already 
> > > > made a decision about moving in that direction anyway. My personal 
> > > > feeling is that it is premature.
> > > > 
> > > > The nested Resource Provider work is not yet feature-complete and 
> > > > requires more reviewer attention. If we continue in the direction 
> > > > of Resource Provider, it will need at least 2 more releases to 
> > > > expose the vGPUs feature and that without the support of NUMA, and 
> > > > with the feeling of pushing something which is not 
> > > > stable/production-ready.
> > > > 
> > > > It's seems safer to first have the Resource Provider work well 
> > > > finalized/stabilized to be production-ready. Then on top of 
> > > > something stable we could start to migrate our current virt 
> > > > specific features like NUMA, CPU Pinning, Huge Pages and finally PCI 
> > > > devices.
> > > > 
> > > > I'm talking about PCI devices in general because I think we should 
> > > > implement the vGPU on top of our /pci framework which is 
> > > > production ready and provides the support of NUMA.
> > > > 
> > > > The hardware vendors building their drivers using mdev and the 
> > > > /pci framework currently understand only SRIOV but on a quick 
> > > > glance it does not seem complicated to make it support mdev.
> > > > 
> > > > In the /pci framework we will have to:
> > > > 
> > > > * Update the PciDevice object fields to accept NULL value for
> > > >    'address' and add new field 'uuid'
> > > > * Update PciRequest to handle a new tag like 'vgpu_types'
> > > > * Update PciDeviceStats to also maintain pool of vGPUs
> > > > 
> > > > The operators will have to create alias(-es) and configure 
> > > > flavors. Basically most of the logic is already implemented and 
> > > > the method 'consume_request' is going to select the right vGPUs 
> > > > according the request.
> > > > 
> > > > In /virt we will have to:
> > > > 
> > > > * Update the field 'pci_passthrough_devices' to also include GPUs
> > > >    devices.
> > > > * Update attach/detach PCI device to handle vGPUs
> > > > 
> > > > We have a few people interested in working on it, so we could 
> > > > certainly make this feature available for Queen.
> > > > 
> > > > I can take the lead updating/implementing the PCI and libvirt 
> > > > driver part, I'm sure Jianghua Wang will be happy to take the lead 
> > > > for the virt XenServer part.
> > > > 
> > > > And I trust Jay, Stephen and Sylvain to follow the developments.
> > > 
> > > I understand the desire to get something in to Nova to support 
> > > vGPUs, and I understand that the existing /pci modules represent the 
> > > fastest/cheapest way to get there.
> > > 
> > > I won't block you from making any of the above changes, Sahid. I'll 
> > > even do my best to review them. However, I will be primarily 
> > > focusing this cycle on getting the nested resource providers work 
> > > feature-complete for (at least) SR-IOV PF/VF devices.
> > > 
> > > The decision of whether to allow an approach that adds more to the 
> > > existing /pci module is ultimately Matt's.
> > > 
> > > Best,
> > > -jay
> > > 
> > > 
> > > __ OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: 
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > Nested resource 

[openstack-dev] [monasca] Stefano Canepa and Dobrosław Żybort in Monasca core team

2017-09-26 Thread Bedyk, Witold
Hello everyone,

I would like to nominate Stefano Canepa and Dobrosław Żybort to the Monasca 
core reviewers team.
Welcome, and thank you for your contributions to the project.


Best regards
Witek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] nodepool: statsd metric details?

2017-09-26 Thread Markus Zoeller
Is there a doc I didn't find which explains the metrics nodepool is
emitting to statsd?

Zuul has this:
https://docs.openstack.org/infra/zuul/admin/monitoring.html

Nodepool only documents this:
https://docs.openstack.org/infra/nodepool/installation.html?#statsd-and-graphite

Any pointer is appreciated.


-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] zuul/nodepool: ansible roles: statsd config?

2017-09-26 Thread Markus Zoeller
We use the Ansible roles for nodepool and zuul in our 3rd party CI.
We cannot set the *statsd* config in those roles, namely the environment
variables `STATSD_HOST` and `STATSD_PORT`. I didn't find the right place
to propose this, which is why I'm asking on this ML.

Is it possible to add that to those Ansible roles?
https://github.com/openstack/ansible-role-nodepool
https://github.com/openstack/ansible-role-zuul
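
For context, this is what we currently do by hand after the roles have run
(the unit name matches our local deployment, so treat it as a sketch):

mkdir -p /etc/systemd/system/nodepool-launcher.service.d
cat > /etc/systemd/system/nodepool-launcher.service.d/statsd.conf <<'EOF'
[Service]
Environment=STATSD_HOST=graphite.example.com
Environment=STATSD_PORT=8125
EOF
systemctl daemon-reload && systemctl restart nodepool-launcher

Having the roles template something like this out would save that manual step.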


-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][DIB] how to create a tripleo overcloud image with the latest kernel?

2017-09-26 Thread Moshe Levi
Hi all,

As part of the OVS Hardware Offload [1] [2],  we need to create new 
Centos/Redhat 7 image  with latest kernel/ovs/iproute.
We tried to use virt-customize to install the packages and we were able to 
update iproute and ovs, but for the kernel there is no space.
We also tried with virt-customize to uninstall the old kernel, but we had no luck.
Are there other ways to replace the kernel package in an existing image?



[1] - 
https://review.openstack.org/#/c/504911/
[2] - 
https://review.openstack.org/#/c/502313/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] OVS Hardware Offload

2017-09-26 Thread Moshe Levi
Hi all,

We are planning to add TripleO support for OVS Hardware Offload, which 
landed in the Pike release [1] [2] [3].
Here is a documentation commit which explains how to use the feature [4].
We already wrote a spec for TripleO [5] and some POC code [6] [7] [8], and we 
would appreciate it if we could get reviews.
Patches [6] [7] [8] add the support to the OVS mechanism driver, but we also 
plan to add patches to support ODL with OVS Hardware Offload.

[1] -https://review.openstack.org/#/c/275616/
[2] - https://review.openstack.org/#/c/452530/
[3] -https://review.openstack.org/#/c/398265/
[4] - https://review.openstack.org/#/c/504911/
[5] - https://review.openstack.org/#/c/502313/
[6] - https://review.openstack.org/#/c/502440/
[7] - https://review.openstack.org/#/c/507100/
[8] - https://review.openstack.org/#/c/507401/


Thanks,
Moshe Levi.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Configure SR-IOV VFs in tripleo

2017-09-26 Thread Moshe Levi
Hi  all,

While working on the tripleo-ovs-hw-offload 
work, I encountered the following issue with SR-IOV.

I added -e ~/heat-templates/environments/neutron-sriov.yaml -e 
~/heat-templates/environments/host-config-and-reboot.yaml to the 
overcloud-deploy.sh script.
The compute nodes are configured with the intel_iommu=on kernel option and the 
computes are rebooted as expected;
then tripleo::host::sriov [1] will create /etc/sysconfig/allocate_vfs to 
configure the SR-IOV VFs. It seems an additional reboot is required for the SR-IOV 
VFs to be created. Is that expected behavior? Am I doing something wrong?
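
For what it's worth, creating VFs at runtime normally works without a
reboot via sysfs, which is why the extra reboot surprised me (interface
name and VF count below are just examples):

echo 4 > /sys/class/net/ens2f0/device/sriov_numvfs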




[1] 
https://github.com/openstack/puppet-tripleo/blob/80e646ff779a0f8e201daec0c927809224ed5fdb/manifests/host/sriov.pp
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][mogan] Need help for replacing the current master

2017-09-26 Thread Zhenguo Niu
It would be very much appreciated if you could shed some light on what the
next steps would be to move this along.

On Sat, Sep 23, 2017 at 10:03 AM, Sheng Liu  wrote:

> Just confirmed the git history and compared the code with the current Mogan
> master, and it is OK! Thanks a lot to dims for helping with that. We would
> very much appreciate it if the Infra team could help us replace the current
> Mogan master.
>
> --
> Best Regards
> liusheng
>
> 2017-09-22 21:53 GMT+08:00 Zhenguo Niu :
>
>> Hi infra,
>>
>> In order to show respect to the original authors, we would like to
>> replace the current mogan master [1] with a new forked repo [2] which
>> includes the history of files which copied from other projects.
>>
>> The detailed discussion is here: http://lists.openstack.o
>> rg/pipermail/openstack-dev/2017-September/122470.html
>>
>> Thank you for your time!
>>
>> [1] https://github.com/openstack/mogan
>> [2] https://github.com/dims/mogan
>>
>> --
>> Best Regards,
>> Zhenguo Niu
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Zhenguo Niu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Cinder][third-party][ci] Tintri Cinder CI failure

2017-09-26 Thread Silvan Kaiser
Hi Apoorva!
The test run is sadly missing the service logs, probably because you're
using a current DevStack (systemd based services) but an older sos-ci
version? If you apply
https://github.com/j-griffith/sos-ci/commit/f0f2ce2e2f2b12727ee5aa75a751376dcc1ea3a4
you should be able to get the logs for new test runs. This will help
debugging this.
Best
Silvan



2017-09-26 1:54 GMT+02:00 Apoorva Deshpande :

> Hello,
>
> Tintri's Cinder CI started failing around Sept 19, 2017. There are 29
> tests failing [1] with the following errors [2][3][4]. The Tintri Cinder
> driver inherits from the NFS Cinder driver and is available here [5].
>
> Please let me know if anyone has recently seen these failures or has any
> pointers on how to fix.
>
> Thanks,
> Apoorva
>
> IRC: Apoorva
>
> [1] http://openstack-ci.tintri.com/tintri/refs-changes-57-505357-1/testr_
> results.html
> [2] http://paste.openstack.org/show/621886/
> [3] http://paste.openstack.org/show/621858/
> [4] http://paste.openstack.org/show/621857/
> [5] https://github.com/openstack/cinder/blob/master/
> cinder/volume/drivers/tintri.py
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Dr. Silvan Kaiser
Quobyte GmbH
Hardenbergplatz 2, 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Blair Bethwaite
I've been watching this thread and I think we've already seen an
excellent and uncontroversial suggestion towards simplifying initial
deployment of OS - that was to push towards encoding Constellations
into the deployment and/or config management projects.

On 26 September 2017 at 15:44, Adam Lawson  wrote:
> Hey Jay,
> I think a GUI with a default config is a good start. Much would need to
> happen to enable that of course but that's where my mind goes. Any talk
> about 'default' kind of infringes on what we've all strived to embrace; a
> cloud architecture without baked-in assumptions. A default-anything need not
> mean other options are not available - only that a default gets them
> started. I would never ever agree to a default that consists of
> KVM+Contrail+NetApp. Something neutral would be great- easier said than done
> of course.
>
> Samuel,
> Default configuration as I envision it != "Promoting a single solution". I
> really hope a working default install would allow new users to get started
> with OpenStack without promoting anything. OpenStack lacking a default
> install results in an unfriendly deployment exercise. I know for a fact the
> entire community at webhostingtalk.com ignores OS for the most part because
> of how hard it is to deploy. They use Fuel or other third-party solutions
> because we as an OS community continue to fail to acknowledge the importance
> of ease of implementation. Imagine thousands of hosting providers
> deploying OpenStack because we made it easy. That is money in the bank IMHO.
> I totally get the thinking about avoiding the term default for the reasons
> you provided but giving users a starting point does not necessarily mean
> we're trying to get them to adopt that as their final design. Giving them a
> starting point must take precedence over not giving them any starting point.
>
> Jonathan,
> "I'm not going to adopt something new that requires a new parallel
> management tool to what I use." I would hope not! :) I don't mean having a
> tool means the tool is required. Only that a user-friendly deployment tool
> is available. Isn't that better than giving them nothing at all?
>
> //adam
>
>
> Adam Lawson
>
> Principal Architect
> Office: +1-916-794-5706
>
> On Mon, Sep 25, 2017 at 5:27 PM, Samuel Cassiba  wrote:
>>
>>
>> > On Sep 25, 2017, at 16:52, Clint Byrum  wrote:
>> >
>> > Excerpts from Jonathan D. Proulx's message of 2017-09-25 11:18:51 -0400:
>> >> On Sat, Sep 23, 2017 at 12:05:38AM -0700, Adam Lawson wrote:
>> >>
>> >> :Lastly, I do think GUI's make deployments easier and because of that,
>> >> I
>> >> :feel they're critical. There is more than one vendor whose built and
>> >> :distributes a free GUI to ease OpenStack deployment and management.
>> >> That's
>> >> :a good start but those are the opinions of a specific vendor - not he
>> >> OS
>> >> :community. I have always been a big believer in a default cloud
>> >> :configuration to ease the shock of having so many options for
>> >> everything. I
>> >> :have a feeling however our commercial community will struggle with
>> >> :accepting any method/project other than their own as being part a
>> >> default
>> >> :config. That will be a tough one to crack.
>> >>
>> >> Different people have differnt needs, so this is not meant to
>> >> contradict Adam.
>> >>
>> >> But :)
>> >>
>> >> Any unique deployment tool would be of no value to me as OpenStack (or
>> >> anyother infrastructure component) needs to fit into my environment.
>> >> I'm not going to adopt something new that requires a new parallel
>> >> management tool to what I use.
>> >>
>> >
>> > You already have that if you run OpenStack.
>> >
>> > The majority of development testing and gate testing happens via
>> > Devstack. A parallel management tool to what most people use to actually
>> > operate OpenStack.
>> >
>> >> I think focusing on the existing configuration management projects is
>> >> the way to go. Getting Ansible/Puppet/Chef/etc. to support a well-
>> >> known set of "constellations" in an opinionated way would make
>> >> deployment easy (for most people who are using one of those already)
>> >> and, assuming the opinions are the same :), make consumption easier as
>> >> well.
>> >>
>> >> As an example, when I started using OpenStack (Essex) we had recently
>> >> switched to Ubuntu as our Linux platform and Puppet as our config
>> >> management. Ubuntu had a "one click MAAS install of OpenStack" which
>> >> was impossible as it made all sorts of assumptions about our
>> >> environment and wanted controll of most of them so it could provide a
>> >> full deployemnt solution.  Puppet had a good integrated example config
>> >> where I plugged in some local choices and and used existing deploy
>> >> methodologies.
>> >>
>> >> I fought with MAAS's "simple" install for a week.  When I gave up and
>> >> went with Puppet I had live users on a substantial (for the time)
>> >> cloud in less