Re: [Openstack-operators] [telecom-nfv] Boston Forum Topics

2017-03-23 Thread Shintaro Mizuno

Thank you for the reminder, Curtis,

I've added a few lines and some +1s to the etherpad.

Cheers,
Shintaro

On 2017/03/24 1:54, Curtis wrote:

Hi All,

In our meeting yesterday we talked about how there is a forum
submission process for the Boston Summit.

We looked over the brainstorming etherpad [1] for NFV and decided that
we didn't have enough time in the meeting to make any recommendations
for sessions, and that we would instead try to discuss it on the
mailing list and see whether we should submit anything, or perhaps add
to the etherpad.

There are not a lot of votes for items on the etherpad; most have
0, 1 or 2 votes, and one has 5. (I'm not much help here because I only
looked at that etherpad yesterday.)

Also, there are the LCOO and OPNFV groups, which have their own
processes and procedures and have likely considered various topics
for discussion at the Forum.

So if anyone has any suggestions or comments on what we should do
here, we'd love to hear them. :)

Thanks,
Curtis.

[1]: https://etherpad.openstack.org/p/BOS-UC-brainstorming-Telecom






--
Shintaro MIZUNO (水野伸太郎)
NTT Software Innovation Center
TEL: 0422-59-4977
E-mail: mizuno.shint...@lab.ntt.co.jp





Re: [Openstack-operators] [neutron] Modify Default Quotas

2017-03-23 Thread Joe Topjian
We run a similar kind of script.

I think in most cases, a Floating IP means a publicly routable IP, and
those are now scarce resources. Because of that, I agree with what's been
mentioned about a conservative floating IP quota.

Since the other resource types aren't restricted by external availability,
they could easily be set to a higher value. Of course, a small floating IP
quota might restrict what a user can do with the other resources.

The only network resources I've had users request an increase on are
security groups and rules. Users manage security groups and rules in a lot
of different ways. Some are very conservative and some make new groups for
*everything*.

On Thu, Mar 23, 2017 at 5:46 PM, Pierre Riteau  wrote:

> We’ve encountered the same issue in our cloud. I wouldn’t be surprised if
> it was quite common for systems with many tenants that are not active all
> the time.
>
> You may be interested by this OSOps script: https://git.openstack.org/cgit/openstack/osops-tools-generic/tree/neutron/orphan_tool/delete_orphan_floatingips.py
> The downside with this script is that it may delete a floating IP that was
> just allocated, if it runs just before the user attaches it to their
> instance.
>
> We have chosen to write a script that releases floating IPs held by
> tenants only if the tenant is inactive for a period of time. We define
> inactive by not having run any instance during this period.
> It is not a silver bullet though, because a tenant running only one
> instance can still keep 49 floating IPs unused, but we found that it helps
> a lot because most of the unused IPs were held by inactive tenants.
>
> Ideally Neutron would be able to track when a floating IP was last
> attached and release it automatically after a configurable period of time.
>
> > On 23 Mar 2017, at 12:47, Saverio Proto  wrote:
> >
> > Hello,
> >
> > floating IPs is the real issue.
> >
> > When using horizon it is very easy for users to allocate floating ips
> > but it is also very difficult to release them.
> >
> > In our production cloud we had to change the default from 50 to 2. We
> > have to be very conservative with floatingips quota because our
> > experience is that the user will never release a floating IP.
> >
> > A good starting point is to set the quota for the floatingips at the
> > the same quota for nova instances.
> >
> > Saverio
> >
> >
> > 2017-03-22 16:46 GMT+01:00 Morales, Victor :
> >> Hey there,
> >>
> >>
> >>
> >> I noticed that Ihar started working on a change to increase the default
> >> quotas values in Neutron[1].  Personally, I think that makes sense to
> change
> >> it but I’d like to complement it.  So, based on your experience, what
> should
> >> be the most common quota value for networks, subnets, ports, security
> >> groups, security rules, routers and Floating IPs per tenant?
> >>
> >>
> >>
> >> Regards/Saludos
> >>
> >> Victor Morales
> >>
> >> irc: electrocucaracha
> >>
> >>
> >>
> >> [1] https://review.openstack.org/#/c/444030
> >>
> >>
> >
>
>


Re: [Openstack-operators] [neutron] Modify Default Quotas

2017-03-23 Thread Pierre Riteau
We’ve encountered the same issue in our cloud. I wouldn’t be surprised if it 
was quite common for systems with many tenants that are not active all the time.

You may be interested in this OSOps script:
https://git.openstack.org/cgit/openstack/osops-tools-generic/tree/neutron/orphan_tool/delete_orphan_floatingips.py
The downside with this script is that it may delete a floating IP that was just 
allocated, if it runs just before the user attaches it to their instance.

We have chosen to write a script that releases floating IPs held by tenants
only if the tenant has been inactive for a period of time. We define inactive
as not having run any instance during this period.
It is not a silver bullet, though, because a tenant running only one instance
can still keep 49 floating IPs unused, but we found that it helps a lot because
most of the unused IPs were held by inactive tenants.

Ideally Neutron would be able to track when a floating IP was last attached and 
release it automatically after a configurable period of time.
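
For illustration, a minimal sketch of that kind of cleanup using openstacksdk
(it simplifies "inactive" to "no servers at the moment", the cloud name is a
placeholder, and the real script checks a time window instead):

    # Hypothetical sketch, not the production script described above.
    # Assumes openstacksdk, admin credentials, and a clouds.yaml entry
    # named "mycloud". "Inactive" is simplified to "no servers right now".
    import openstack

    conn = openstack.connect(cloud='mycloud')

    # Projects that currently have at least one server are treated as active.
    active_projects = {srv.project_id
                       for srv in conn.compute.servers(all_projects=True)}

    for fip in conn.network.ips():
        # port_id is None when the floating IP is not attached to anything.
        if fip.port_id is None and fip.project_id not in active_projects:
            print("Releasing %s from project %s"
                  % (fip.floating_ip_address, fip.project_id))
            conn.network.delete_ip(fip)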

> On 23 Mar 2017, at 12:47, Saverio Proto  wrote:
> 
> Hello,
> 
> floating IPs is the real issue.
> 
> When using horizon it is very easy for users to allocate floating ips
> but it is also very difficult to release them.
> 
> In our production cloud we had to change the default from 50 to 2. We
> have to be very conservative with floatingips quota because our
> experience is that the user will never release a floating IP.
> 
> A good starting point is to set the quota for the floatingips at the
> the same quota for nova instances.
> 
> Saverio
> 
> 
> 2017-03-22 16:46 GMT+01:00 Morales, Victor :
>> Hey there,
>> 
>> 
>> 
>> I noticed that Ihar started working on a change to increase the default
>> quotas values in Neutron[1].  Personally, I think that makes sense to change
>> it but I’d like to complement it.  So, based on your experience, what should
>> be the most common quota value for networks, subnets, ports, security
>> groups, security rules, routers and Floating IPs per tenant?
>> 
>> 
>> 
>> Regards/Saludos
>> 
>> Victor Morales
>> 
>> irc: electrocucaracha
>> 
>> 
>> 
>> [1] https://review.openstack.org/#/c/444030
>> 
>> 
> 




Re: [Openstack-operators] Memory usage of guest vms, ballooning and nova

2017-03-23 Thread Jean-Philippe Methot


On 2017-03-23 15:15, Edmund Rhudy (BLOOMBERG/ 120 PARK) wrote:
> What sort of memory overcommit value are you running Nova with? The
> scheduler looks at an instance's reservation rather than how much
> memory is actually being used by QEMU when making a decision, as far
> as I'm aware (but please correct me if I am wrong on this point). If
> the HV has 128GB of memory, the instance has a reservation of 96GB,
> you have 16GB reserved via reserved_host_memory_mb,
> ram_allocation_ratio is set to 1.0, and you try to launch an instance
> from a flavor with 32GB of memory, it will fail to pass RamFilter in
> the scheduler and the scheduler will not consider it a valid host for
> placement. (I am assuming you are using FilterScheduler still, as I
> know nothing about the new placement API or what parts of it do and
> don't work in Newton.)
The overcommit value is set to 1.5 in the scheduler. It's not the
scheduler that was preventing the instance from being provisioned; it
was QEMU reporting that there was not enough RAM when libvirt was trying
to provision the instance (that error was not handled well by OpenStack,
by the way, but that's something else). So the instance does pass every
filter. It just ends up in an error state when being provisioned on the
compute node because of a lack of RAM, with the actual full error
message only visible in the QEMU logs.
> As far as why the memory didn't automatically get reclaimed, maybe KVM
> will only reclaim empty pages and memory fragmentation in the guest
> prevented it from doing so? It might also not actively try to reclaim
> memory unless it comes under pressure to do so, because finding empty
> pages and returning them to the host may be a somewhat time-consuming
> operation.


That's entirely possible, but according to the documentation, libvirt is
supposed to have a memory balloon function that reclaims empty pages from
guest processes, or so I understand. Now, how this function works is not
exactly clear to me, or even whether nova uses it at all. Another user
suggested it might not be automatic, which is in accordance with what
you're conjecturing.

From: jp.met...@planethoster.info
Subject: Re: [Openstack-operators] Memory usage of guest vms, 
ballooning and nova


Hi,

This is indeed linux, CentOS 7 to be more precise, using qemu-kvm as
hypervisor. The used ram was in the used column. While we have made
adjustments by moving and resizing the specific guest that was using 96
GB (verified in top), the ram usage is still fairly high for the amount
of allocated ram.

Currently the ram usage looks like this:

              total        used        free      shared  buff/cache   available
Mem:           251G        190G         60G         42M        670M         60G
Swap:          952M        707M        245M

I have 188.5GB of ram allocated to 22 instances on this node. I believe
it's unrealistic to think that all these 22 instances have cached/are
using up all their ram at this time.

On 2017-03-23 13:07, Kris G. Lindgren wrote:
> Sorry for the super stupid question.
>
> But if this is linux are you sure that the memory is not actually being
> consumed via buffers/cache?
>
> free -m
>               total        used        free      shared  buff/cache   available
> Mem:         128751       27708        2796        4099       98246       96156
> Swap:          8191           0        8191
>
> Shows that of 128GB 27GB is used, but buffers/cache consumes 98GB of ram.
>
> ___
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> On 3/23/17, 11:01 AM, "Jean-Philippe Methot"
> wrote:
>
>  Hi,
>
>  Lately, on my production openstack Newton setup, I've ran into a
>  situation that defies my assumptions regarding memory management on
>  Openstack compute nodes and I've been looking for explanations.
>  Basically, we had a VM with a flavor that limited it to 96 GB of ram,
>  which, to be quite honest, we never thought we could ever reach. This is
>  a very important VM where we wanted to avoid running out of memory at
>  all cost. The VM itself generally uses about 12 GB of ram.
>
>  We were surprised when we noticed yesterday that this VM, which has been
>  running for several months, was using all its 96 GB on the compute host.
>  Despite that, in the guest, the OS was indicating a memory usage of
>  about 12 GB. The only explanation I see to this is that at some point in
>  time, the host had to allocate all the 96GB of ram to the VM process and
>  it never took back the allocated ram. This prevented the creation of
>  more guests on the node as it was showing it didn't have enough memory left.
>
>  Now, I was under the assumption that memory ballooning was integrated
>  into nova and that the amount of allocated memory to a specific guest
>  would deflate once that guest did not need the memory. After
>  verification, I've found blueprints for it, but I see no trace of any
>  implementation anywhere.
>
>  I

Re: [Openstack-operators] Memory usage of guest vms, ballooning and nova

2017-03-23 Thread Edmund Rhudy (BLOOMBERG/ 120 PARK)
What sort of memory overcommit value are you running Nova with? The scheduler 
looks at an instance's reservation rather than how much memory is actually 
being used by QEMU when making a decision, as far as I'm aware (but please 
correct me if I am wrong on this point). If the HV has 128GB of memory, the 
instance has a reservation of 96GB, you have 16GB reserved via 
reserved_host_memory_mb, ram_allocation_ratio is set to 1.0, and you try to 
launch an instance from a flavor with 32GB of memory, it will fail to pass 
RamFilter in the scheduler and the scheduler will not consider it a valid host 
for placement. (I am assuming you are using FilterScheduler still, as I know 
nothing about the new placement API or what parts of it do and don't work in 
Newton.)
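
To make the arithmetic concrete, here is a back-of-the-envelope sketch in
Python using the numbers from that scenario (variable names are illustrative,
not Nova internals):

    # Rough sketch of the RamFilter check for the scenario above
    # (illustration only, not Nova's actual implementation).
    total_mb = 128 * 1024                 # host memory
    reserved_host_memory_mb = 16 * 1024   # held back for the host itself
    claimed_mb = 96 * 1024                # existing instance reservation
    requested_mb = 32 * 1024              # flavor of the new instance
    ram_allocation_ratio = 1.0

    usable_mb = total_mb * ram_allocation_ratio - reserved_host_memory_mb
    free_mb = usable_mb - claimed_mb      # 16384 MB left for new instances
    print(free_mb >= requested_mb)        # False -> RamFilter rejects the host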

As far as why the memory didn't automatically get reclaimed, maybe KVM will 
only reclaim empty pages and memory fragmentation in the guest prevented it 
from doing so? It might also not actively try to reclaim memory unless it comes 
under pressure to do so, because finding empty pages and returning them to the 
host may be a somewhat time-consuming operation.

From: jp.met...@planethoster.info 
Subject: Re: [Openstack-operators] Memory usage of guest vms, ballooning and 
nova

Hi,

This is indeed linux, CentOS 7 to be more precise, using qemu-kvm as 
hypervisor. The used ram was in the used column. While we have made 
adjustments by moving and resizing the specific guest that was using 96 
GB (verified in top), the ram usage is still fairly high for the amount 
of allocated ram.

Currently the ram usage looks like this:

              total        used        free      shared  buff/cache   available
Mem:           251G        190G         60G         42M        670M         60G
Swap:          952M        707M        245M


I have 188.5GB of ram allocated to 22 instances on this node. I believe 
it's unrealistic to think that all these 22 instances have cached/are 
using up all their ram at this time.

On 2017-03-23 13:07, Kris G. Lindgren wrote:
> Sorry for the super stupid question.
>
> But if this is linux are you sure that the memory is not actually being 
> consumed via buffers/cache?
>
> free -m
>               total        used        free      shared  buff/cache   available
> Mem:         128751       27708        2796        4099       98246       96156
> Swap:          8191           0        8191
>
> Shows that of 128GB 27GB is used, but buffers/cache consumes 98GB of ram.
>
> ___
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> On 3/23/17, 11:01 AM, "Jean-Philippe Methot"  
> wrote:
>
>  Hi,
>  
>  Lately, on my production openstack Newton setup, I've ran into a
>  situation that defies my assumptions regarding memory management on
>  Openstack compute nodes and I've been looking for explanations.
>  Basically, we had a VM with a flavor that limited it to 96 GB of ram,
>  which, to be quite honest, we never thought we could ever reach. This is
>  a very important VM where we wanted to avoid running out of memory at
>  all cost. The VM itself generally uses about 12 GB of ram.
>  
>  We were surprised when we noticed yesterday that this VM, which has been
>  running for several months, was using all its 96 GB on the compute host.
>  Despite that, in the guest, the OS was indicating a memory usage of
>  about 12 GB. The only explanation I see to this is that at some point in
>  time, the host had to allocate all the 96GB of ram to the VM process and
>  it never took back the allocated ram. This prevented the creation of
>  more guests on the node as it was showing it didn't have enough memory 
> left.
>  
>  Now, I was under the assumption that memory ballooning was integrated
>  into nova and that the amount of allocated memory to a specific guest
>  would deflate once that guest did not need the memory. After
>  verification, I've found blueprints for it, but I see no trace of any
>  implementation anywhere.
>  
>  I also notice that on most of our compute nodes, the amount of ram used
>  is much lower than the amount of ram allocated to VMs, which I do
>  believe is normal.
>  
>  So basically, my question is, how does openstack actually manage ram
>  allocation? Will it ever take back the unused ram of a guest process?
>  Can I force it to take back that ram?
>  
>  --
>  Jean-Philippe Méthot
>  Openstack system administrator
>  PlanetHoster inc.
>  www.planethoster.net
>  
>  
>  
>

-- 
Jean-Philippe Méthot
Openstack system administrator

Re: [Openstack-operators] Milan Ops Midcycle - Cinder session

2017-03-23 Thread Sean McGinnis
Thank you to everyone who attended the Cinder session at the ops
midcycle. I found it very helpful to get some feedback and hear
concerns, and I hope everyone else was able to take something useful
away from the session and the event.

There wasn't much, but a few questions in the etherpad (link below) were
expanded on or answered.

Feel free to add more notes in the etherpad, and definitely feel free to
reach out to me directly if there are any Cinder related issues you have
questions or concerns about.

Thanks again to all attendees, and to everyone involved in hosting and
organizing the event. I found it well worth the trip.

Sean

On Mon, Mar 13, 2017 at 01:35:22PM -0500, Sean McGinnis wrote:
> The start of the Cinder session etherpad is available here:
> 
> https://etherpad.openstack.org/p/MIL-ops-cinder-rolling-upgrade
> 
> Please add whatever info you would like to it.
> 
> I think the main interest was in rolling upgrades, but feel free
> to add any other general Cinder topics you would like to discuss
> to the end and we can see how far we can get.
> 
> Thanks!
> 
> Sean
> 



Re: [Openstack-operators] [neutron] Modify Default Quotas

2017-03-23 Thread Saverio Proto
Hello,

Floating IPs are the real issue.

When using Horizon it is very easy for users to allocate floating IPs,
but it is also very difficult to release them.

In our production cloud we had to change the default from 50 to 2. We
have to be very conservative with the floating IP quota because our
experience is that users will never release a floating IP.

A good starting point is to set the floating IP quota to the same value
as the quota for nova instances.
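
For reference, the per-project defaults under discussion live in the [quotas]
section of neutron.conf; a minimal illustration (the numbers are examples only,
not recommendations from this thread):

    [quotas]
    # Illustrative values; tune them to your own capacity planning.
    quota_floatingip = 2
    quota_network = 10
    quota_subnet = 10
    quota_port = 50
    quota_router = 10
    quota_security_group = 10
    quota_security_group_rule = 100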

Saverio


2017-03-22 16:46 GMT+01:00 Morales, Victor :
> Hey there,
>
>
>
> I noticed that Ihar started working on a change to increase the default
> quotas values in Neutron[1].  Personally, I think that makes sense to change
> it but I’d like to complement it.  So, based on your experience, what should
> be the most common quota value for networks, subnets, ports, security
> groups, security rules, routers and Floating IPs per tenant?
>
>
>
> Regards/Saludos
>
> Victor Morales
>
> irc: electrocucaracha
>
>
>
> [1] https://review.openstack.org/#/c/444030
>
>
>



Re: [Openstack-operators] Memory usage of guest vms, ballooning and nova

2017-03-23 Thread Jean-Philippe Methot

Hi,

This is indeed linux, CentOS 7 to be more precise, using qemu-kvm as 
hypervisor. The used ram was in the used column. While we have made 
adjustments by moving and resizing the specific guest that was using 96 
GB (verified in top), the ram usage is still fairly high for the amount 
of allocated ram.


Currently the ram usage looks like this:

              total        used        free      shared  buff/cache   available
Mem:           251G        190G         60G         42M        670M         60G
Swap:          952M        707M        245M


I have 188.5GB of ram allocated to 22 instances on this node. I believe 
it's unrealistic to think that all these 22 instances have cached/are 
using up all their ram at this time.


On 2017-03-23 13:07, Kris G. Lindgren wrote:

Sorry for the super stupid question.

But if this is linux are you sure that the memory is not actually being 
consumed via buffers/cache?

free -m
              total        used        free      shared  buff/cache   available
Mem:         128751       27708        2796        4099       98246       96156
Swap:          8191           0        8191

Shows that of 128GB 27GB is used, but buffers/cache consumes 98GB of ram.

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 3/23/17, 11:01 AM, "Jean-Philippe Methot"  
wrote:

 Hi,
 
 Lately, on my production openstack Newton setup, I've ran into a

 situation that defies my assumptions regarding memory management on
 Openstack compute nodes and I've been looking for explanations.
 Basically, we had a VM with a flavor that limited it to 96 GB of ram,
 which, to be quite honest, we never thought we could ever reach. This is
 a very important VM where we wanted to avoid running out of memory at
 all cost. The VM itself generally uses about 12 GB of ram.
 
 We were surprised when we noticed yesterday that this VM, which has been

 running for several months, was using all its 96 GB on the compute host.
 Despite that, in the guest, the OS was indicating a memory usage of
 about 12 GB. The only explanation I see to this is that at some point in
 time, the host had to allocate all the 96GB of ram to the VM process and
 it never took back the allocated ram. This prevented the creation of
 more guests on the node as it was showing it didn't have enough memory 
left.
 
 Now, I was under the assumption that memory ballooning was integrated

 into nova and that the amount of allocated memory to a specific guest
 would deflate once that guest did not need the memory. After
 verification, I've found blueprints for it, but I see no trace of any
 implementation anywhere.
 
 I also notice that on most of our compute nodes, the amount of ram used

 is much lower than the amount of ram allocated to VMs, which I do
 believe is normal.
 
 So basically, my question is, how does openstack actually manage ram

 allocation? Will it ever take back the unused ram of a guest process?
 Can I force it to take back that ram?
 
 --

 Jean-Philippe Méthot
 Openstack system administrator
 PlanetHoster inc.
 www.planethoster.net
 
 
 



--
Jean-Philippe Méthot
Openstack system administrator
PlanetHoster inc.
www.planethoster.net




Re: [Openstack-operators] Memory usage of guest vms, ballooning and nova

2017-03-23 Thread Chris Friesen

On 03/23/2017 11:01 AM, Jean-Philippe Methot wrote:


So basically, my question is, how does openstack actually manage ram allocation?
Will it ever take back the unused ram of a guest process? Can I force it to take
back that ram?


I don't think nova will automatically reclaim memory.

I'm pretty sure that if you have CONF.libvirt.mem_stats_period_seconds set 
(which it is by default) then you can manually tell libvirt to reclaim some 
memory via the "virsh setmem" command.
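
For anyone wanting to try that by hand, a small hypothetical sketch using the
libvirt Python bindings (the domain name and target size are made up; "virsh
dommemstat" and "virsh setmem --live" are the shell equivalents):

    # Hypothetical sketch: inspect a guest's balloon stats and shrink its
    # balloon manually. Requires libvirt-python and a working balloon driver
    # in the guest (e.g. mem_stats_period_seconds set in nova.conf).
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')   # made-up domain name

    # Balloon statistics in KiB: 'actual' is the current balloon size,
    # 'rss' is the host-side resident memory of the qemu process.
    print(dom.memoryStats())

    # Deflate the balloon of the running guest to 16 GiB
    # (roughly: virsh setmem instance-00000001 16G --live).
    dom.setMemoryFlags(16 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    conn.close()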


Chris



[Openstack-operators] Memory usage of guest vms, ballooning and nova

2017-03-23 Thread Jean-Philippe Methot

Hi,

Lately, on my production openstack Newton setup, I've run into a
situation that defies my assumptions regarding memory management on 
Openstack compute nodes and I've been looking for explanations. 
Basically, we had a VM with a flavor that limited it to 96 GB of ram, 
which, to be quite honest, we never thought we could ever reach. This is 
a very important VM where we wanted to avoid running out of memory at 
all cost. The VM itself generally uses about 12 GB of ram.


We were surprised when we noticed yesterday that this VM, which has been 
running for several months, was using all its 96 GB on the compute host. 
Despite that, in the guest, the OS was indicating a memory usage of 
about 12 GB. The only explanation I see to this is that at some point in 
time, the host had to allocate all the 96GB of ram to the VM process and 
it never took back the allocated ram. This prevented the creation of 
more guests on the node as it was showing it didn't have enough memory left.


Now, I was under the assumption that memory ballooning was integrated 
into nova and that the amount of allocated memory to a specific guest 
would deflate once that guest did not need the memory. After 
verification, I've found blueprints for it, but I see no trace of any 
implementation anywhere.


I also notice that on most of our compute nodes, the amount of ram used 
is much lower than the amount of ram allocated to VMs, which I do 
believe is normal.


So basically, my question is, how does openstack actually manage ram 
allocation? Will it ever take back the unused ram of a guest process? 
Can I force it to take back that ram?


--
Jean-Philippe Méthot
Openstack system administrator
PlanetHoster inc.
www.planethoster.net




[Openstack-operators] Changing default volume backend

2017-03-23 Thread Ignazio Cassano
Hi all,
on Newton I have two storage backends: nfs and vmax.
At this time the default is vmax, but I'd like to switch to nfs.
Changing default_volume_type in cinder.conf and resynchronizing the cinder db
does not change the default.
cinder type-default always shows vmax.
Could anyone help me?
Thanks
Ignazio
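
For reference, the option being discussed is a plain config option in
cinder.conf, read when the cinder services start, so a restart is needed for a
change to take effect; an illustrative snippet (type names taken from this
thread):

    [DEFAULT]
    # Illustrative: make the "nfs" volume type the default reported by
    # "cinder type-default". Restart the cinder services after changing it.
    default_volume_type = nfs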


Re: [Openstack-operators] [nova] Anyone using libvirt driver port filtering with neutron?

2017-03-23 Thread Mathieu Gagné
On Thu, Mar 23, 2017 at 10:08 AM,   wrote:
> The nova libvirt driver provides support for ebtables-based port
> filtering (using libvirt's nwfilter) to prevent things like MAC, IP
> and/or ARP spoofing. I've been looking into deprecating this as part of
> the move to deprecate all things nova-network'y, but it appears that,
> in some scenarios, it is possible to use this feature with neutron.

Isn't ARP spoofing support now part of Neutron, at least for the
Linuxbridge mechanism driver?
https://review.openstack.org/#/c/196986/

We do use the feature you mentioned, but there are too many hacks or
code changes needed to benefit from it.
Especially in our case, as you can't use both the Neutron network manager
(with security groups, allowed address pairs, etc.) and the Nova iptables
driver to benefit from libvirt's nwfilter anti-ARP-spoofing protection.

We are still running Kilo and will be migrating to Mitaka, which has
the ARP spoofing protection built into Neutron. So no, in our case, I
don't see a reason to keep this feature around, as you can get the same
protection with the Neutron port-security extension.
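
As a quick way to see whether the port-security extension is already covering
your ports, a hypothetical sketch using openstacksdk (the cloud name is a
placeholder):

    # Hypothetical sketch: list ports that have port security disabled,
    # i.e. ports that get no MAC/IP anti-spoofing rules from Neutron.
    import openstack

    conn = openstack.connect(cloud='mycloud')   # assumes a clouds.yaml entry

    for port in conn.network.ports():
        if not port.is_port_security_enabled:
            print(port.id, port.device_owner, port.fixed_ips)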



Re: [Openstack-operators] Rabbitmq cluster_status alarms

2017-03-23 Thread Matteo Panella
Hi,

On 22/03/2017 14:48, Andreas Vallin wrote:
> Cluster status of node 'rabbit@Infra1-rabbit-mq-container-2590dd44' ...
> [{nodes,[{disc,['rabbit@Infra1-rabbit-mq-container-2590dd44',
> 'rabbit@Infra2-rabbit-mq-container-ff24b66b',
> 'rabbit@Infra3-rabbit-mq-container-bf7948a7']}]},
>  {running_nodes,['rabbit@Infra3-rabbit-mq-container-bf7948a7',
>  'rabbit@Infra2-rabbit-mq-container-ff24b66b',
>  'rabbit@Infra1-rabbit-mq-container-2590dd44']},
>  {cluster_name,<<"rabbitmq_osa_prod">>},
>  {partitions,[]},
>  {alarms,[{'rabbit@Infra3-rabbit-mq-container-bf7948a7',[]},
>   {'rabbit@Infra2-rabbit-mq-container-ff24b66b',[]},
>   {'rabbit@Infra1-rabbit-mq-container-2590dd44',[]}]}]

AFAIR, once all alarms are cleared, the node names remain in the
cluster_status output but their alarm lists become empty, which seems to
be the case for your cluster.

Regards,
-- 
Matteo Panella
INFN CNAF
Via Ranzani 13/2 c - 40127 Bologna, Italy
Phone: +39 051 609 2903





Re: [Openstack-operators] [multisite] weekly meeting of Mar.23

2017-03-23 Thread joehuang
Hello, Gerald,

Thank you for the mail. We will discuss in the next weekly meeting whether we
can help.

Before the multisite part is rewritten, can the old one still be kept in place?
The update can be done gradually. Someone has already complained about this
guide being missing.

Best Regards
Chaoyi Huang (joehuang)

From: Kunzmann, Gerald [kunzm...@docomolab-euro.com]
Sent: 22 March 2017 23:54
To: joehuang; opnfv-tech-discuss
Cc: openstack-operators@lists.openstack.org; Heidi Joy Tretheway
Subject: RE: [multisite] weekly meeting of Mar.23

Dear multisite-Team,

FYI, the arch guide [1] in OpenStack is being rewritten, and the multi-site
docs that were there are no longer available (see the archived manuals in [2]).
So there are currently no official docs on multi-site/region in OpenStack. In
the Telco/NFV meeting it was suggested that they could use some help writing
those.

[1] https://docs.openstack.org/arch-design/
[2] 
https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/arch-design-to-archive/source

It seems the plan is to rewrite the arch guide piece by piece and they just
haven't gotten to the multisite part yet, and they could probably use some
help. There is also potentially an NFV section to it.

Is there someone from the multi-site team interested in working on this? Maybe
you could discuss it in your meeting tomorrow.

I am not sure who would be the right person to contact; maybe Heidi from
OpenStack (heidi...@openstack.org)?

Best regards,
Gerald


From: opnfv-tech-discuss-boun...@lists.opnfv.org 
[mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] On Behalf Of joehuang
Sent: Mittwoch, 22. März 2017 06:04
To: opnfv-tech-discuss 
Subject: [opnfv-tech-discuss] [multisite]weekly meeting of Mar.23


Hello, team,



HA PTL FuQiao was invited to talk about multi-site requirements in this weekly 
meeting



Mar.23 2017 Agenda:

* CMCC multi-site requirements

* Functest issue.

* E-Release discussion

* OPNFV Beijing summit preparation

* Open discussion


IRC: http://webchat.freenode.net/?channels=opnfv-meeting (8:00-9:00 UTC; during
winter time this means 9:00 AM CET).

Other topics are also welcome in the weekly meeting; please reply to this mail.


Best Regards
Chaoyi Huang (joehuang)


Re: [Openstack-operators] [openstack-dev] [User-committee] Boston Forum - Formal Submission Now Open!

2017-03-23 Thread Thierry Carrez
Eoghan Glynn wrote:
> Thanks for putting this together!
> 
> But one feature gap is some means to tag topic submissions, e.g.
> tagging the project-specific topics by individual project relevance.
> That could be a basis for grouping topics, to allow folks to better
> manage their time during the Forum.
> 
> (e.g. if someone was mostly interested in say networking issues, they
> could plan to attend all the neutron- and kuryr-tagged topics more
> easily if those slots were all scheduled in a near-contiguous block
> with minimal conflicts)

That is a good point! The tooling we are using this time is pretty basic
(a resurrection of good old odsreg for submission / selection, with
manual scheduling to the summit scheduling system in the end), and we'll
certainly improve it for future Forums.

For this first one, we'll likely rely on the Forum selection committee
to add the relevant tags when they manually schedule the session. It's
probably doable since there are a lot fewer sessions (only 3 parallel
Forum rooms * 4 days, compared to the ~20 * 5 we had in "Design Summits").

So if you have specific attendance needs that should be tagged in a
session, I encourage you to mention it in the description, for them to
pick it up at scheduling time. Also, once the schedule is up, if you see
a missing tag, feel free to reach out to them so that they add it.

-- 
Thierry Carrez (ttx)
