Re: [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Slawomir Kaplonski
Hi,

I think it was published, see 
http://lists.openstack.org/pipermail/openstack/2018-November/047172.html

> Message written by Jeremy Freudberg on 
> 14.11.2018 at 06:12:
> 
> Hey Tony,
> 
> What's the reason for the results of the poll not being public?
> 
> Thanks,
> Jeremy
> On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds  wrote:
>> 
>> 
>> Hi everybody!
>> 
>> As the subject reads, the "T" release of OpenStack is officially
>> "Train".  Unlike recent choices Train was the popular choice so
>> congrats!
>> 
>> Thanks to everybody who participated and helped with the naming process.
>> 
>> Let's make OpenStack Train a release so awesome that people can't help
>> but choo-choo-choose to run it[1]!
>> 
>> 
>> Yours Tony.
>> [1] Too soon? Too much?
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Jeremy Freudberg
Hey Tony,

What's the reason for the results of the poll not being public?

Thanks,
Jeremy
On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds  wrote:
>
>
> Hi everybody!
>
> As the subject reads, the "T" release of OpenStack is officially
> "Train".  Unlike recent choices Train was the popular choice so
> congrats!
>
> Thanks to everybody who participated and helped with the naming process.
>
> Let's make OpenStack Train a release so awesome that people can't help
> but choo-choo-choose to run it[1]!
>
>
> Yours Tony.
> [1] Too soon? Too much?
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Tony Breeds

Hi everybody!

As the subject reads, the "T" release of OpenStack is officially
"Train".  Unlike recent choices Train was the popular choice so
congrats!

Thanks to everybody who participated and helped with the naming process.

Let's make OpenStack Train a release so awesome that people can't help
but choo-choo-choose to run it[1]!


Yours Tony.
[1] Too soon? Too much?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] no recheck / no workflow until gate is stable

2018-11-13 Thread Emilien Macchi
We have serious issues with the gate at this time; we believe it is a mix
of mirror errors (infra) and tempest timeouts (see
https://review.openstack.org/617845).

Until the situation is resolved, please do not recheck or approve any patches.
Thanks for your understanding,
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python3] Enabling py37 unit tests

2018-11-13 Thread Corey Bryant
On Wed, Nov 7, 2018 at 11:12 AM Clark Boylan  wrote:

> On Wed, Nov 7, 2018, at 4:47 AM, Mohammed Naser wrote:
> > On Wed, Nov 7, 2018 at 1:37 PM Doug Hellmann 
> wrote:
> > >
> > > Corey Bryant  writes:
> > >
> > > > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant <
> corey.bry...@canonical.com>
> > > > wrote:
> > > >
> > > > I'd like to start moving forward with enabling py37 unit tests for a
> subset
> > > > of projects. Rather than putting too much load on infra by enabling
> 3 x py3
> > > > unit tests for every project, this would just focus on enablement of
> py37
> > > > unit tests for a subset of projects in the Stein cycle. And just to
> be
> > > > clear, I would not be disabling any unit tests (such as py35). I'd
> just be
> > > > enabling py37 unit tests.
> > > >
> > > > As some background, this ML thread originally led to updating the
> > > > python3-first governance goal (
> https://review.openstack.org/#/c/610708/)
> > > > but has now led back to this ML thread for a +1 rather than updating
> the
> > > > governance goal.
> > > >
> > > > I'd like to get an official +1 here on the ML from parties such as
> the TC
> > > > and infra in particular but anyone else's input would be welcomed
> too.
> > > > Obviously individual projects would have the right to reject proposed
> > > > changes that enable py37 unit tests. Hopefully they wouldn't, of
> course,
> > > > but they could individually vote that way.
> > > >
> > > > Thanks,
> > > > Corey
> > >
> > > This seems like a good way to start. It lets us make incremental
> > > progress while we take the time to think about the python version
> > > management question more broadly. We can come back to the other
> projects
> > > to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out.
> >
> > What's the impact on upstream CI node consumption?
> >
>
> For the period from 2018-10-25 15:16:32,079 to 2018-11-07 15:59:04,994,
> openstack-tox-py35 jobs in aggregate represent 0.73% of our total capacity
> usage.
>
> I don't expect py37 to significantly deviate from that. Again the major
> resource consumption is dominated by a small number of projects/repos/jobs.
> Generally testing outside of that bubble doesn't represent a significant
> resource cost.
>
> I see no problem with adding python 3.7 unit testing from an
> infrastructure perspective.
>
> Clark
>
>
>
Thanks all for the input on this. It seems we have no objections to
moving forward, so I'll plan on getting started soon.
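
For anyone who wants to follow along, here is a rough sketch of what an
enablement patch might look like (this assumes the openstack-tox-py37 job is
already defined in openstack-zuul-jobs; exact job/template names may vary per
project, and real patches will follow whatever each project already does for
its py35/py36 jobs):

  # .zuul.yaml -- add the job to check and gate
  - project:
      check:
        jobs:
          - openstack-tox-py37
      gate:
        jobs:
          - openstack-tox-py37

  # tox.ini -- usually only the envlist needs touching; the py37 testenv
  # inherits everything else from [testenv]
  [tox]
  envlist = py35,py36,py37,pep8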

Thanks,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set

2018-11-13 Thread Satish Patel
Sean,

Thank you for the detailed explanation. I really hope we can backport
this to Queens; it would be hard for me to upgrade the cluster otherwise!

On Tue, Nov 13, 2018 at 8:42 AM Sean Mooney  wrote:
>
> On Tue, 2018-11-13 at 07:52 -0500, Satish Patel wrote:
> > Mike,
> >
> > Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920
> Actually this is a related but different bug based on the description below.
> Thanks for highlighting this to me.
> >
> > Cc'ing: Sean
> >
> > Sent from my iPhone
> >
> > On Nov 12, 2018, at 8:27 AM, Satish Patel  wrote:
> >
> > > Mike,
> > >
> > > I had the same issue a month ago when I rolled out SR-IOV in my cloud, and this is
> > > what I did to solve it. Set the
> > > following in the flavor:
> > >
> > > hw:numa_nodes=2
> > >
> > > It will spread instance vCPUs across NUMA nodes. Yes, there will be a small
> > > penalty, but if you tune your application
> > > accordingly, you are good.
> > >
> > > Yes, this is a bug. I have already opened a ticket, and I believe folks are
> > > working on it, but it's not a simple fix. They may
> > > release a new feature in a coming OpenStack release.
> > >
> > > Sent from my iPhone
> > >
> > > On Nov 11, 2018, at 9:25 PM, Mike Joseph  wrote:
> > >
> > > > Hi folks,
> > > >
> > > > It appears that the numa_policy attribute of a PCI alias is ignored for 
> > > > flavors referencing that alias if the
> > > > flavor also has hw:cpu_policy=dedicated set.  The alias config is:
> > > >
> > > > alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", 
> > > > "product_id": "1004", "numa_policy":
> > > > "preferred" }
> > > >
> > > > And the flavor config is:
> > > >
> > > > {
> > > >   "OS-FLV-DISABLED:disabled": false,
> > > >   "OS-FLV-EXT-DATA:ephemeral": 0,
> > > >   "access_project_ids": null,
> > > >   "disk": 10,
> > > >   "id": "221e1bcd-2dde-48e6-bd09-820012198908",
> > > >   "name": "vm-2",
> > > >   "os-flavor-access:is_public": true,
> > > >   "properties": "hw:cpu_policy='dedicated', 
> > > > pci_passthrough:alias='mlx:1'",
> > > >   "ram": 8192,
> > > >   "rxtx_factor": 1.0,
> > > >   "swap": "",
> > > >   "vcpus": 2
> > > > }
> Satish, in your case you were trying to use neutron's SR-IOV VNIC types such
> that the VF would be connected to a neutron
> network. In this case the Mellanox ConnectX-3 virtual functions are being
> passed to the guest using the PCI alias via
> the flavor, which means they cannot be used to connect to neutron networks, but
> they should be able to use affinity
> policies.
> > > >
> > > > In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) 
> > > > with 16 VFs configured.  We wish to expose
> > > > these VFs to VMs that schedule on the host.  However, the NIC is in 
> > > > NUMA region 0 which means that only half of
> > > > the compute node's CPU cores would be usable if we required VM affinity 
> > > > to the NIC's NUMA region.  But we don't
> > > > need that, since we are okay with cross-region access to the PCI device.
> > > >
> > > > However, we do need CPU pinning to work, in order to have efficient 
> > > > cache hits on our VM processes.  Therefore, we
> > > > still want to pin our vCPUs to pCPUs, even if the pins end up on on a 
> > > > NUMA region opposite of the NIC.  The spec
> > > > for numa_policy seem to indicate that this is exactly the intent of the 
> > > > option:
> > > >
> > > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html
> > > >
> > > > But, with the above config, we still get PCI affinity scheduling errors:
> > > >
> > > > 'Insufficient compute resources: Requested instance NUMA topology 
> > > > together with requested PCI devices cannot fit
> > > > the given host NUMA topology.'
> > > >
> > > > This strikes me as a bug, but perhaps I am missing something here?
> Yes, this does in fact seem like a new bug.
> Can you add myself and Stephen to the bug once you file it?
> In the bug, please include the version of OpenStack you were deploying.
>
> In the interim, setting hw:numa_nodes=2 will allow you to pin the guest
> without the error;
> however, the flavor and alias you have provided should have been enough.
>
> I'm hoping that we can fix both the alias- and neutron-based cases this cycle,
> but to do so we
> will need to repropose the original Queens spec for Stein and discuss whether
> we can backport any of the
> fixes or whether this would only be completed in Stein+. I would hope we could
> backport fixes for the flavor-based
> use case, but the neutron-based use case would likely be Stein+.
>
> regards
> sean
> > > >
> > > > Thanks,
> > > > MJ
> > > > ___
> > > > Mailing list: 
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > > > Post to : openst...@lists.openstack.org
> > > > Unsubscribe : 
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set

2018-11-13 Thread Sean Mooney
On Tue, 2018-11-13 at 07:52 -0500, Satish Patel wrote:
> Mike,
> 
> Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920
Actually this is a related but different bug based on the description below.
Thanks for highlighting this to me.
> 
> Cc'ing: Sean 
> 
> Sent from my iPhone
> 
> On Nov 12, 2018, at 8:27 AM, Satish Patel  wrote:
> 
> > Mike,
> > 
> > I had the same issue a month ago when I rolled out SR-IOV in my cloud, and this is
> > what I did to solve it. Set the
> > following in the flavor:
> > 
> > hw:numa_nodes=2
> > 
> > It will spread instance vCPUs across NUMA nodes. Yes, there will be a small
> > penalty, but if you tune your application
> > accordingly, you are good.
> > 
> > Yes, this is a bug. I have already opened a ticket, and I believe folks are working
> > on it, but it's not a simple fix. They may
> > release a new feature in a coming OpenStack release.
> > 
> > Sent from my iPhone
> > 
> > On Nov 11, 2018, at 9:25 PM, Mike Joseph  wrote:
> > 
> > > Hi folks,
> > > 
> > > It appears that the numa_policy attribute of a PCI alias is ignored for 
> > > flavors referencing that alias if the
> > > flavor also has hw:cpu_policy=dedicated set.  The alias config is:
> > > 
> > > alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", 
> > > "product_id": "1004", "numa_policy":
> > > "preferred" }
> > > 
> > > And the flavor config is:
> > > 
> > > {
> > >   "OS-FLV-DISABLED:disabled": false,
> > >   "OS-FLV-EXT-DATA:ephemeral": 0,
> > >   "access_project_ids": null,
> > >   "disk": 10,
> > >   "id": "221e1bcd-2dde-48e6-bd09-820012198908",
> > >   "name": "vm-2",
> > >   "os-flavor-access:is_public": true,
> > >   "properties": "hw:cpu_policy='dedicated', 
> > > pci_passthrough:alias='mlx:1'",
> > >   "ram": 8192,
> > >   "rxtx_factor": 1.0,
> > >   "swap": "",
> > >   "vcpus": 2
> > > }
Satish, in your case you were trying to use neutron's SR-IOV VNIC types such that
the VF would be connected to a neutron
network. In this case the Mellanox ConnectX-3 virtual functions are being
passed to the guest using the PCI alias via
the flavor, which means they cannot be used to connect to neutron networks, but
they should be able to use affinity
policies.
> > > 
> > > In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) with 
> > > 16 VFs configured.  We wish to expose
> > > these VFs to VMs that schedule on the host.  However, the NIC is in NUMA 
> > > region 0 which means that only half of
> > > the compute node's CPU cores would be usable if we required VM affinity 
> > > to the NIC's NUMA region.  But we don't
> > > need that, since we are okay with cross-region access to the PCI device.
> > > 
> > > However, we do need CPU pinning to work, in order to have efficient cache 
> > > hits on our VM processes.  Therefore, we
> > > still want to pin our vCPUs to pCPUs, even if the pins end up on on a 
> > > NUMA region opposite of the NIC.  The spec
> > > for numa_policy seem to indicate that this is exactly the intent of the 
> > > option:
> > > 
> > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html
> > > 
> > > But, with the above config, we still get PCI affinity scheduling errors:
> > > 
> > > 'Insufficient compute resources: Requested instance NUMA topology 
> > > together with requested PCI devices cannot fit
> > > the given host NUMA topology.'
> > > 
> > > This strikes me as a bug, but perhaps I am missing something here?
Yes, this does in fact seem like a new bug.
Can you add myself and Stephen to the bug once you file it?
In the bug, please include the version of OpenStack you were deploying.

In the interim, setting hw:numa_nodes=2 will allow you to pin the guest without
the error;
however, the flavor and alias you have provided should have been enough.
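
For reference, a minimal sketch of that interim workaround applied to the flavor
from the paste above (the flavor name is taken from Mike's output; adjust to your
environment):

  openstack flavor set vm-2 \
      --property hw:cpu_policy=dedicated \
      --property hw:numa_nodes=2 \
      --property "pci_passthrough:alias=mlx:1"

That spreads the guest across two NUMA nodes, so the pinned vCPUs and the VF no
longer have to land on the same node.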

I'm hoping that we can fix both the alias- and neutron-based cases this cycle,
but to do so we
will need to repropose the original Queens spec for Stein and discuss whether we
can backport any of the
fixes or whether this would only be completed in Stein+. I would hope we could
backport fixes for the flavor-based
use case, but the neutron-based use case would likely be Stein+.

regards
sean
> > > 
> > > Thanks,
> > > MJ
> > > ___
> > > Mailing list: 
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > > Post to : openst...@lists.openstack.org
> > > Unsubscribe : 
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question

2018-11-13 Thread Sean Mooney
On Tue, 2018-11-13 at 12:27 +0100, Matt Riedemann wrote:
> On 11/13/2018 4:45 AM, Chen CH Ji wrote:
> > Got it, this is what I am looking for .. thank you
> 
> Regarding what you can do with server create, I believe it's:
> 
> 1. don't specify anything for networking, you get a port on the network 
> available to you; if there are multiple networks, it's a failure and the 
> user has to specify one.
> 
> 2. specify a network, nova creates a port on that network
In this case I believe neutron allocates one IPv4 address and one IPv6 address,
assuming the network has
a subnet for each type.
> 
> 3. specify a port, nova uses that port and doesn't create anything in 
> neutron
In this case nova just reads the IPs that neutron has already allocated to the
port and lists those for the instance.
> 
> 4. specify a network and fixed IP, nova creates a port on that network 
> using that fixed IP.
And in this case nova will create the port in neutron using the fixed IP you
supplied, which will cause neutron to
attach the port to the correct subnet.
> 
> It sounds like you want #3 or #4.
> 

I think what is actually wanted is "openstack server create --nic net-id=<net-uuid>,v4-fixed-ip=<ipv4-address>".

We do not have a subnet-id option for --nic, so if you want to select the subnet
as part of the boot you have to supply
the IP. Similarly, if you want neutron to select the IP, you have to pre-create the
port and use the --port option when
creating the VM. So, as Matt said, #3 or #4 are the best solutions for your
request.
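
Concretely, something along these lines should work for #4 and #3 respectively
(the network/subnet/image/flavor values are placeholders):

  # 4: let nova create the port; the fixed IP implicitly selects the subnet
  openstack server create --image <image> --flavor <flavor> \
      --nic net-id=<net-uuid>,v4-fixed-ip=<ipv4-address> my-server

  # 3: pre-create the port on the desired subnet, then hand it to nova
  openstack port create --network <net-uuid> \
      --fixed-ip subnet=<subnet-uuid> my-port
  openstack server create --image <image> --flavor <flavor> \
      --port my-port my-server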



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][manila] PLEASE READ: Change of location for dinner ...

2018-11-13 Thread Jay S Bryant

Team,

The dinner has had to change locations.  Dicke Wirtin didn't get my 
online reservation and they are full.


NEW LOCATION: Joe's Restaurant and Wirsthaus -- Theodor-Heuss-Platz 10, 
14052 Berlin


The time is still 8 pm.

Please pass the word on!

Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] No IRC meeting this week

2018-11-13 Thread Ifat Afek
Hi,

We will not hold the Vitrage IRC meeting tomorrow, since some of our
contributors are in Berlin.
Our next meeting will be next Wednesday, November 21st.

Thanks,
Ifat.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question

2018-11-13 Thread Matt Riedemann

On 11/13/2018 4:45 AM, Chen CH Ji wrote:

Got it, this is what I am looking for .. thank you


Regarding what you can do with server create, I believe it's:

1. don't specify anything for networking, you get a port on the network 
available to you; if there are multiple networks, it's a failure and the 
user has to specify one.


2. specify a network, nova creates a port on that network

3. specify a port, nova uses that port and doesn't create anything in 
neutron


4. specify a network and fixed IP, nova creates a port on that network 
using that fixed IP.


It sounds like you want #3 or #4.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] No meeting this week

2018-11-13 Thread Ivan Kolodyazhny
Hi team,

Let's skip the meeting tomorrow due to the OpenStack Summit.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev