Sean,
Thank you for the detailed explanation. I really hope we can backport
this to Queens; it would be hard for me to upgrade the cluster otherwise!
On Tue, Nov 13, 2018 at 8:42 AM Sean Mooney wrote:
>
> On Tue, 2018-11-13 at 07:52 -0500, Satish Patel wrote:
> > Mike,
> >
> > Here is the bug which I
On Tue, 2018-11-13 at 07:52 -0500, Satish Patel wrote:
> Mike,
>
> Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920
actually this is a related but different bug, based on the description below.
thanks for highlighting this to me.
>
> Cc'ing: Sean
>
> Sent from my iPhone
On Nov 12, 2018, at 8:27 AM, Satish Patel wrote:

Mike,

I had the same issue a month ago when I rolled out SR-IOV in my cloud, and
this is what I did to solve it. Set the following in the flavor:

hw:numa_nodes=2

It will spread the instance's vCPUs across NUMA nodes. Yes, there will be a
small penalty, but if you tune your application accordingly you are good.
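For anyone following along, that extra spec can be applied with the OpenStack
client; a minimal sketch (the flavor name `sriov-flavor` here is just an
example, substitute your own):

```shell
# Spread the guest's vCPUs across two NUMA nodes.
# "sriov-flavor" is a hypothetical flavor name.
openstack flavor set sriov-flavor --property hw:numa_nodes=2

# Confirm the extra spec was recorded on the flavor.
openstack flavor show sriov-flavor -c properties
```

Existing instances keep their old topology; the spec only takes effect for
instances booted (or resized) after the flavor is updated.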
Hi folks,
It appears that the numa_policy attribute of a PCI alias is ignored for
flavors referencing that alias if the flavor also has
hw:cpu_policy=dedicated set. The alias config is:
alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3",
"product_id": "1004", "numa_policy":