Re: [Openstack-operators] Passing a flavor's extra_specs to libvirt

2017-04-13 Thread Stig Telfer
Hello Paco - 

> On 13 Apr 2017, at 11:10, Paco Bernabé  wrote:
> 
> Hi,
> 
> The issue is apparently solved; we found a solution here 
> https://www.stackhpc.com/tripleo-numa-vcpu-pinning.html where the libvirt and 
> qemu-kvm version requirements were indicated. The CentOS 7.3 repo has an 
> older qemu-kvm version (1.5.3) than the one needed (>= 2.1.0), so we added 
> the kvm-common repo, as recommended by that page. Now 1 host is returned 
> (Filter NUMATopologyFilter returned 1 hosts) and the guest VM has the desired 
> CPU topology.

I’d seen your mail on my way onto a plane, and wanted to get home and get my 
facts straight before responding.  Great to see you got there first, and that 
our post was helpful: made my day :-)

Share and enjoy,
Stig

> 
> -- 
> Met vriendelijke groeten / Best regards,
> Paco Bernabé
> Senior Systemsprogrammer | SURFsara | Science Park 140 | 1098XG Amsterdam | T 
> +31 610 961 785 | p...@surfsara.nl | www.surfsara.nl
> 
> 
> 
> 
>> On 13 Apr 2017, at 11:38, Paco Bernabé wrote:
>> 
>> Hi,
>> 
>> More info: in the log file of the nova-scheduler we see messages like the 
>> following (<compute-host> is the compute host name):
>> 
>>  • <compute-host>, <compute-host> fails NUMA topology requirements. No host 
>> NUMA topology while the instance specified one. host_passes 
>> /usr/lib/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py:100
>>  • Filter NUMATopologyFilter returned 0 hosts
>> 
>> So, we are not sure if the filters are ok in nova.conf:
>> 
>> scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,NUMATopologyFilter
>> 
>> -- 
>> Met vriendelijke groeten / Best regards,
>> Paco Bernabé
>> Senior Systemsprogrammer | SURFsara | Science Park 140 | 1098XG Amsterdam | 
>> T +31 610 961 785 | p...@surfsara.nl | www.surfsara.nl
>> 
>> 
>> 
>> 
>>> On 13 Apr 2017, at 09:34, Paco Bernabé wrote:
>>> 
>>> Hi,
>>> 
>>> After reading the following articles:
>>> 
>>> • https://docs.openstack.org/admin-guide/compute-flavors.html
>>> • http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
>>> • http://openstack-in-production.blogspot.nl/2015/08/numa-and-cpu-pinning-in-high-throughput.html
>>> • http://www.stratoscale.com/blog/openstack/cpu-pinning-and-numa-awareness/
>>> 
>>> We are not yet able to expose the NUMA configuration to the guest VM. This is the 
>>> configuration of one of our compute nodes:
>>> 
>>> # lscpu
>>> Architecture:  x86_64
>>> CPU op-mode(s):32-bit, 64-bit
>>> Byte Order:Little Endian
>>> CPU(s):48
>>> On-line CPU(s) list:   0-47
>>> Thread(s) per core:2
>>> Core(s) per socket:12
>>> Socket(s): 2
>>> NUMA node(s):  4
>>> Vendor ID: GenuineIntel
>>> CPU family:6
>>> Model: 79
>>> Model name:Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
>>> Stepping:  1
>>> CPU MHz:   2266.085
>>> BogoMIPS:  4404.00
>>> Virtualization:VT-x
>>> L1d cache: 32K
>>> L1i cache: 32K
>>> L2 cache:  256K
>>> L3 cache:  15360K
>>> NUMA node0 CPU(s): 0-5,24-29
>>> NUMA node1 CPU(s): 6-11,30-35
>>> NUMA node2 CPU(s): 12-17,36-41
>>> NUMA node3 CPU(s): 18-23,42-47
>>> 
>>> 
>>> And this is the flavour configuration:
>>> 
>>> OS-FLV-DISABLED:disabled   | False
>>> OS-FLV-EXT-DATA:ephemeral  | 2048
>>> disk                       | 30

Re: [Openstack-operators] Ceilometer and disk IO

2017-04-13 Thread Paras pradhan
Thanks for the reply. I have seen people recommending Ceph as a backend for
gnocchi, but we don't use Ceph yet, and it looks like gnocchi does not
support Cinder backends. We use a Dell EQ SAN for block devices.
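
Worth noting: gnocchi also has a plain file storage driver that can sit on
any mounted filesystem, such as a LUN carved from the SAN. A minimal sketch
of the relevant gnocchi.conf settings, using the crudini tool; the basepath
is a hypothetical mount point and the option names should be checked against
your gnocchi version:

    # Use gnocchi's file driver instead of Ceph/Swift
    crudini --set /etc/gnocchi/gnocchi.conf storage driver file
    # Hypothetical path on SAN-backed storage
    crudini --set /etc/gnocchi/gnocchi.conf storage file_basepath /var/lib/gnocchi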

-Paras.

On Tue, Apr 11, 2017 at 9:15 PM, Alex Hubner  wrote:

> Ceilometer can be a pain in the a* if not properly configured/designed,
> especially when things start to grow. I've already seen the exact same
> situation you described on two different installations. To make things more
> complicated, some OpenStack distributions use MongoDB as a storage backend
> and do not provide a dedicated infrastructure for Ceilometer, relegating
> this important service to live, by default, on the controller nodes...
> worse: offering no clear guidance on what should be done when the service
> starts to stall, other than simply adding more controller nodes... (yes Red
> Hat, I'm looking at you). You might consider using gnocchi and Ceph storage
> for telemetry, as was already suggested.
>
> For my 2 cents, here's a nice talk on the matter:
> https://www.openstack.org/videos/video/capacity-planning-saving-money-and-maximizing-efficiency-in-openstack-using-gnocchi-and-ceilometer
>
> []'s
> Hubner
>
> On Sat, Apr 8, 2017 at 2:00 PM, Paras pradhan 
> wrote:
>
>> Hello
>>
>> What kind of storage backend do you guys use if you see disk IO
>> bottlenecks when storing ceilometer events and metrics? In my current
>> configuration I am using 300 GB 10K SAS drives (in hardware RAID 1) and the
>> iostat report does not look good (up to 100% utilization), with ceilometer
>> consuming high CPU and memory. Would adding more spindles and moving to
>> RAID 10 help?
>>
>> Thanks!
>> Paras.
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] OpenStack Summit Boston Prices Increase Tomorrow!

2017-04-13 Thread Kendall Waters
Hi everyone, 

It’s your last chance to save on your OpenStack Summit Boston tickets before 
prices increase tomorrow, Friday, April 14 at 11:59pm PDT (April 15 at 6:59 UTC).

Haven’t booked your hotel yet? You can purchase a Full Access Summit ticket 
with a 4-night hotel stay at a hotel walkable to the Summit venue for just 
$1,599 USD! Hurry—purchase now before prices increase tomorrow! 

REGISTER AND BOOK YOUR HOTEL HERE: 
https://openstacksummit2017boston.eventbrite.com 
 

If you have a registration code, it must be redeemed by May 3. 
Note: Registration codes only apply to Weeklong Full Access Passes.

Contact sum...@openstack.org  if you have any 
questions. 

Cheers,
Kendall

Kendall Waters
OpenStack Marketing
kend...@openstack.org


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Passing a flavor's extra_specs to libvirt

2017-04-13 Thread Paco Bernabé
Hi,

The issue is apparently solved; we found a solution here 
https://www.stackhpc.com/tripleo-numa-vcpu-pinning.html where the libvirt and 
qemu-kvm version requirements were indicated. The CentOS 7.3 repo has an older 
qemu-kvm version (1.5.3) than the one needed (>= 2.1.0), so we added the 
kvm-common repo, as recommended by that page. Now 1 host is returned (Filter 
NUMATopologyFilter returned 1 hosts) and the guest VM has the desired CPU 
topology.
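
For reference, a minimal sketch of the kind of version check and repo change
involved; the repo baseurl and package names here are assumptions and should
be taken from the StackHPC post above:

    # Base CentOS 7.3 ships qemu-kvm 1.5.3; NUMA-aware scheduling needs >= 2.1.0
    rpm -q qemu-kvm
    # Hypothetical repo file for the CentOS Virt SIG kvm-common repository
    cat > /etc/yum.repos.d/kvm-common.repo <<'EOF'
    [kvm-common]
    name=CentOS-7 Virt SIG - kvm-common
    baseurl=http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
    enabled=1
    gpgcheck=0
    EOF
    # qemu-kvm-ev from this repo replaces the base qemu-kvm packages
    yum -y install qemu-kvm-ev
    rpm -q qemu-kvm-ev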


-- 
Met vriendelijke groeten / Best regards,
Paco Bernabé
Senior Systemsprogrammer | SURFsara | Science Park 140 | 1098XG Amsterdam | T 
+31 610 961 785 | p...@surfsara.nl  | www.surfsara.nl 





> On 13 Apr 2017, at 11:38, Paco Bernabé wrote:
> 
> Hi,
> 
> More info: in the log file of the nova-scheduler we see messages like the 
> following (<compute-host> is the compute host name):
> 
> <compute-host>, <compute-host> fails NUMA topology requirements. No host NUMA 
> topology while the instance specified one. host_passes 
> /usr/lib/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py:100
> Filter NUMATopologyFilter returned 0 hosts
> 
> So, we are not sure if the filters are ok in nova.conf:
> 
> scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,NUMATopologyFilter
> 
> -- 
> Met vriendelijke groeten / Best regards,
> Paco Bernabé
> Senior Systemsprogrammer | SURFsara | Science Park 140 | 1098XG Amsterdam | T 
> +31 610 961 785 | p...@surfsara.nl  | 
> www.surfsara.nl 
> 
> 
> 
> 
>> On 13 Apr 2017, at 09:34, Paco Bernabé wrote:
>> 
>> Hi,
>> 
>> After reading the following articles:
>> 
>> https://docs.openstack.org/admin-guide/compute-flavors.html
>> http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
>> http://openstack-in-production.blogspot.nl/2015/08/numa-and-cpu-pinning-in-high-throughput.html
>> http://www.stratoscale.com/blog/openstack/cpu-pinning-and-numa-awareness/
>> 
>> 
>> We are not yet able to expose the NUMA configuration to the guest VM. This is the 
>> configuration of one of our compute nodes:
>> 
>> # lscpu
>> Architecture:  x86_64
>> CPU op-mode(s):32-bit, 64-bit
>> Byte Order:Little Endian
>> CPU(s):48
>> On-line CPU(s) list:   0-47
>> Thread(s) per core:2
>> Core(s) per socket:12
>> Socket(s): 2
>> NUMA node(s):  4
>> Vendor ID: GenuineIntel
>> CPU family:6
>> Model: 79
>> Model name:Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
>> Stepping:  1
>> CPU MHz:   2266.085
>> BogoMIPS:  4404.00
>> Virtualization:VT-x
>> L1d cache: 32K
>> L1i cache: 32K
>> L2 cache:  256K
>> L3 cache:  15360K
>> NUMA node0 CPU(s): 0-5,24-29
>> NUMA node1 CPU(s): 6-11,30-35
>> NUMA node2 CPU(s): 12-17,36-41
>> NUMA node3 CPU(s): 18-23,42-47
>> 
>> 
>> And this is the flavour configuration:
>> 
>> OS-FLV-DISABLED:disabled   | False
>> OS-FLV-EXT-DATA:ephemeral  | 2048
>> disk                       | 30

[Openstack-operators] openstack newton fwaas

2017-04-13 Thread Ignazio Cassano
Hi all, I am trying to configure FWaaS on Newton.

I suppose there are some errors in the networking guide:

Enable FWaaS v2

- Enable the FWaaS plug-in in the /etc/neutron/neutron.conf file:

  service_plugins = firewall_v2

  [service_providers]
  ...
  service_provider = FIREWALL:Iptables:neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver:default

  [fwaas]
  driver = neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas_v2.IptablesFwaasDriver
  enabled = True


Note

On Ubuntu, modify the [fwaas] section in the /etc/neutron/fwaas_driver.ini
file instead of /etc/neutron/neutron.conf.
- Configure the FWaaS plug-in for the L3 agent.

  In the AGENT section of l3_agent.ini, make sure the FWaaS extension is
  loaded:

  [AGENT]
  extensions = fwaas

  Edit the FWaaS section in the /etc/neutron/neutron.conf file to indicate
  the agent version and driver:

  [fwaas]
  agent_version = v2
  driver = iptables
  enabled = True

As you can see above, it says to modify /etc/neutron/neutron.conf twice.

I am using CentOS 7. Can anyone help me configure FWaaS? Must I install the
package openstack-neutron-fwaas?
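
On CentOS 7 with RDO, the FWaaS plug-in does ship as a separate package; a
minimal sketch, assuming the RDO package name and the usual service names:

    # Install the FWaaS plug-in code (package name as in RDO)
    yum -y install openstack-neutron-fwaas
    # Restart so the new service_plugins / [fwaas] settings are picked up
    systemctl restart neutron-server neutron-l3-agent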
Regards,
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Passing a flavor's extra_specs to libvirt

2017-04-13 Thread Paco Bernabé
Hi,

More info: in the log file of the nova-scheduler we see messages like the 
following (<compute-host> is the compute host name):

<compute-host>, <compute-host> fails NUMA topology requirements. No host NUMA 
topology while the instance specified one. host_passes 
/usr/lib/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py:100
Filter NUMATopologyFilter returned 0 hosts

So, we are not sure if the filters are ok in nova.conf:

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,NUMATopologyFilter
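
(As the rest of this digest shows, the filter list itself turned out to be
fine; the "No host NUMA topology" message meant the compute node was not
reporting a NUMA topology at all. A hedged way to check this on a compute
host, using standard libvirt and nova commands; the grep patterns are only
illustrative:)

    # Does libvirt report NUMA cells for the host?
    virsh capabilities | grep -A 1 '<cells'
    # Old qemu-kvm (1.5.3) keeps nova from building a host NUMA topology;
    # check the hypervisor version nova recorded (admin credentials assumed)
    nova hypervisor-show <compute-host> | grep -i version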

-- 
Met vriendelijke groeten / Best regards,
Paco Bernabé
Senior Systemsprogrammer | SURFsara | Science Park 140 | 1098XG Amsterdam | T 
+31 610 961 785 | p...@surfsara.nl  | www.surfsara.nl 





> On 13 Apr 2017, at 09:34, Paco Bernabé wrote:
> 
> Hi,
> 
> After reading the following articles:
> 
> https://docs.openstack.org/admin-guide/compute-flavors.html
> http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
> http://openstack-in-production.blogspot.nl/2015/08/numa-and-cpu-pinning-in-high-throughput.html
> http://www.stratoscale.com/blog/openstack/cpu-pinning-and-numa-awareness/
> 
> 
> We are not yet able to expose the NUMA configuration to the guest VM. This is the 
> configuration of one of our compute nodes:
> 
> # lscpu
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):48
> On-line CPU(s) list:   0-47
> Thread(s) per core:2
> Core(s) per socket:12
> Socket(s): 2
> NUMA node(s):  4
> Vendor ID: GenuineIntel
> CPU family:6
> Model: 79
> Model name:Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
> Stepping:  1
> CPU MHz:   2266.085
> BogoMIPS:  4404.00
> Virtualization:VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  256K
> L3 cache:  15360K
> NUMA node0 CPU(s): 0-5,24-29
> NUMA node1 CPU(s): 6-11,30-35
> NUMA node2 CPU(s): 12-17,36-41
> NUMA node3 CPU(s): 18-23,42-47
> 
> 
> And this is the flavour configuration:
> 
> OS-FLV-DISABLED:disabled   | False
> OS-FLV-EXT-DATA:ephemeral  | 2048
> disk                       | 30
> extra_specs                | {"hw:numa_nodes": "8", "hw:numa_cpus.0": "0-5",
> "hw:numa_cpus.1": "6-11", "hw:numa_cpus.2": "12-17", "hw:numa_cpus.3":
> "18-23", "hw:numa_cpus.4": "24-29", "hw:numa_cpus.5": "30-35",
> "hw:numa_cpus.6": "36-41", "hw:numa_cpus.7": "42-45", "hw:numa_mem.7":
> "16384", "hw:numa_mem.6": "24576", "hw:numa_mem.5": "24576", "hw:numa_mem.4":
> "24576", "hw:numa_mem.3": "24576", "hw:numa_mem.2": "24576", "hw:numa_mem.1":
> "24576", "hw:numa_mem.0": "24576"}
> os-flavor-access:is_public | True

Re: [Openstack-operators] Boston Forum Schedule Online

2017-04-13 Thread Thierry Carrez
Tim Bell wrote:
> Yes, I agree it is difficult… I was asking as there was an option ‘watch 
> later’ on the summit schedule for the fishbowl sessions.
> 
> The option should probably be removed from the summit schedule web page so 
> people don’t get disappointed later if that is not too complicated.

Good point, I'll see if that mention can be swiftly removed.

-- 
Thierry

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Boston Forum Schedule Online

2017-04-13 Thread Tim Bell

Yes, I agree it is difficult… I was asking as there was an option ‘watch later’ 
on the summit schedule for the fishbowl sessions.

The option should probably be removed from the summit schedule web page so 
people don’t get disappointed later if that is not too complicated.

Tim

On 13.04.17, 09:48, "Thierry Carrez"  wrote:

Tim Bell wrote:
> Do you know if the Forum sessions will be video’d?

As far as I know they won't (same as old Design/Ops summit sessions).
It's difficult to produce, with people all over the room and not
necessarily using microphones.

The idea is to have the moderator post a follow-up thread for each
session, summarizing the outcome and opening up the discussion to
everyone who could not be present in person for one reason or another.

-- 
Thierry Carrez (ttx)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Boston Forum Schedule Online

2017-04-13 Thread Thierry Carrez
Tim Bell wrote:
> Do you know if the Forum sessions will be video’d?

As far as I know they won't (same as old Design/Ops summit sessions).
It's difficult to produce, with people all over the room and not
necessarily using microphones.

The idea is to have the moderator post a follow-up thread for each
session, summarizing the outcome and opening up the discussion to
everyone who could not be present in person for one reason or another.

-- 
Thierry Carrez (ttx)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Openstack Heat https links problems upgrading to Newton

2017-04-13 Thread Saverio Proto
Hello ops,

If anyone is interested: I am having problems with Heat and the Newton upgrade.

I sent an email about this here:
http://lists.openstack.org/pipermail/openstack-dev/2017-April/115412.html

If anyone has already faced this issue, any help would be appreciated!

thank you

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Passing a flavor's extra_specs to libvirt

2017-04-13 Thread Paco Bernabé
Hi,

After reading the following articles:

https://docs.openstack.org/admin-guide/compute-flavors.html
http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
http://openstack-in-production.blogspot.nl/2015/08/numa-and-cpu-pinning-in-high-throughput.html
http://www.stratoscale.com/blog/openstack/cpu-pinning-and-numa-awareness/


We are not yet able to expose the NUMA configuration to the guest VM. This is the 
configuration of one of our compute nodes:

# lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):48
On-line CPU(s) list:   0-47
Thread(s) per core:2
Core(s) per socket:12
Socket(s): 2
NUMA node(s):  4
Vendor ID: GenuineIntel
CPU family:6
Model: 79
Model name:Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Stepping:  1
CPU MHz:   2266.085
BogoMIPS:  4404.00
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  256K
L3 cache:  15360K
NUMA node0 CPU(s): 0-5,24-29
NUMA node1 CPU(s): 6-11,30-35
NUMA node2 CPU(s): 12-17,36-41
NUMA node3 CPU(s): 18-23,42-47


And this is the flavour configuration:

OS-FLV-DISABLED:disabled   | False
OS-FLV-EXT-DATA:ephemeral  | 2048
disk                       | 30
extra_specs                | {"hw:numa_nodes": "8", "hw:numa_cpus.0": "0-5",
"hw:numa_cpus.1": "6-11", "hw:numa_cpus.2": "12-17", "hw:numa_cpus.3": "18-23",
"hw:numa_cpus.4": "24-29", "hw:numa_cpus.5": "30-35", "hw:numa_cpus.6": "36-41",
"hw:numa_cpus.7": "42-45", "hw:numa_mem.7": "16384", "hw:numa_mem.6": "24576",
"hw:numa_mem.5": "24576", "hw:numa_mem.4": "24576", "hw:numa_mem.3": "24576",
"hw:numa_mem.2": "24576", "hw:numa_mem.1": "24576", "hw:numa_mem.0": "24576"}
os-flavor-access:is_public | True
ram                        | 188416
rxtx_factor                | 1.0
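
(Extra specs like these are normally applied with the OpenStack client; a
minimal sketch, using a hypothetical flavor name "numa8" and showing only the
first NUMA node's properties, since the remaining hw:numa_cpus.N /
hw:numa_mem.N pairs follow the same pattern:)

    # Attach NUMA extra_specs to a flavor (flavor name is hypothetical)
    openstack flavor set numa8 \
      --property hw:numa_nodes=8 \
      --property hw:numa_cpus.0=0-5 \
      --property hw:numa_mem.0=24576
    # Verify what was set
    openstack flavor show numa8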