Thank you for the reminder, Curtis,
I've added a few lines and some +1s to the etherpad.
Cheers,
Shintaro
On 2017/03/24 1:54, Curtis wrote:
Hi All,
In our meeting yesterday we talked about how there is a forum
submission process for the Boston Summit.
We looked over the brainstorming etherpad
We run a similar kind of script.
I think in most cases, a Floating IP means a publicly routable IP, and
those are now scarce resources. Because of that, I agree with what's been
mentioned about a conservative floating IP quota.
Since the other resource types aren't restricted by external
We’ve encountered the same issue in our cloud. I wouldn’t be surprised if it
was quite common for systems with many tenants that are not active all the time.
You may be interested in this OSOps script:
On 2017-03-23 15:15, Edmund Rhudy (BLOOMBERG/ 120 PARK) wrote:
What sort of memory overcommit value are you running Nova with? The scheduler
looks at an instance's reservation rather than how much memory is actually
being used by QEMU when making a decision, as far as I'm aware (but please
correct me if I am wrong on this point). If the HV has 128GB of
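As an illustrative sketch of that scheduler arithmetic: the filter compares the flavor's reservation against overcommitted capacity, not against what QEMU actually touches. The ratio of 1.5, the host-reserved amount, and the numbers below are assumptions for illustration, not values from this thread.

```shell
# Sketch of nova-scheduler's memory accounting (all values illustrative).
# Capacity = physical RAM * ram_allocation_ratio - host-reserved RAM,
# minus memory already *reserved* by existing instances (not their RSS).
total_mb=131072      # 128 GB hypervisor
ratio_x10=15         # assumed ram_allocation_ratio of 1.5 (scaled by 10)
reserved_host_mb=512 # assumed reserved_host_memory_mb
used_mb=98304        # 96 GB already reserved by flavors

available=$(( total_mb * ratio_x10 / 10 - reserved_host_mb - used_mb ))
echo "$available"    # prints 97792: a 64 GB flavor would still fit
```

So with a 1.5 overcommit, a host whose flavors already reserve 96 GB can still accept a 64 GB instance, regardless of actual QEMU usage.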
Thank you to everyone who attended the Cinder session at the ops
midcycle. I found it very helpful to get some feedback and hear
concerns, and I hope everyone else was able to take away something
useful from the session and the event.
There weren't many, but there were a few questions in the etherpad
Hello,
floating IPs is the real issue.
When using Horizon it is very easy for users to allocate floating IPs,
but it is also very difficult to release them.
In our production cloud we had to change the default from 50 to 2. We
have to be very conservative with the floating IP quota because our
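For reference, the per-tenant default can be lowered cloud-wide; a sketch, assuming Neutron quotas are in use (the value mirrors the 50-to-2 change described above):

```ini
# neutron.conf (sketch): lower the default floating IP quota for all
# projects; per-project overrides can still raise it where needed.
[quotas]
quota_floatingip = 2
```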
Hi,
This is indeed Linux, CentOS 7 to be more precise, using qemu-kvm as
the hypervisor. The used RAM was in the used column. While we have made
adjustments by moving and resizing the specific guest that was using 96
GB (verified in top), the RAM usage is still fairly high for the amount
of
On 03/23/2017 11:01 AM, Jean-Philippe Methot wrote:
So basically, my question is: how does OpenStack actually manage RAM allocation?
Will it ever take back the unused RAM of a guest process? Can I force it to take
back that RAM?
I don't think nova will automatically reclaim memory.
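One way to see the gap from the hypervisor side, as a sketch using standard libvirt tooling (the domain name below is hypothetical):

```shell
# Compare what the guest was given vs. what it actually occupies:
# "actual" is the balloon size, "rss" is resident memory on the host.
virsh dommemstat instance-0000002a

# If the guest has a memballoon device, the allocation can be shrunk
# by hand; nova itself does not do this automatically.
virsh setmem instance-0000002a 16G --live
```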
I'm
Hi,
Lately, on my production OpenStack Newton setup, I've run into a
situation that defies my assumptions regarding memory management on
OpenStack compute nodes, and I've been looking for explanations.
Basically, we had a VM with a flavor that limited it to 96 GB of RAM,
which, to be quite
Hi all,
On Newton I have two storage backends: NFS and VMAX.
At the moment the default is VMAX, but I'd like to switch to NFS.
Changing default_volume_type in cinder.conf and resynchronizing the
Cinder DB does not change the default.
cinder type-default always shows vmax.
Could anyone help me?
Thanks
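For what it's worth, the default type is read from configuration at service startup rather than from the database, so a sketch of the change (a restart of the Cinder services is assumed):

```ini
# cinder.conf (sketch): cinder-api reads this at startup, so restart
# the cinder services after editing; resyncing the DB alone does not
# change what "cinder type-default" reports. The name must match an
# existing volume type exactly.
[DEFAULT]
default_volume_type = nfs
```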
On Thu, Mar 23, 2017 at 10:08 AM, wrote:
> The nova libvirt driver provides support for ebtables-based port
> filtering (using libvirt's nwfilter) to prevent things like MAC, IP
> and/or ARP spoofing. I've been looking into deprecating this as part of
> the move to deprecate
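For context, a sketch of the mechanism in question: libvirt ships a built-in "clean-traffic" filter covering MAC/IP/ARP anti-spoofing, attached per NIC in the guest's domain XML (the interface details here are illustrative, not from this thread).

```xml
<!-- Sketch: libvirt nwfilter attached to a guest NIC. "clean-traffic"
     is bundled with libvirt; bridge name is illustrative. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <filterref filter='clean-traffic'/>
</interface>
```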
Hi,
On 22/03/2017 14:48, Andreas Vallin wrote:
> Cluster status of node 'rabbit@Infra1-rabbit-mq-container-2590dd44' ...
> [{nodes,[{disc,['rabbit@Infra1-rabbit-mq-container-2590dd44',
> 'rabbit@Infra2-rabbit-mq-container-ff24b66b',
>
Hello, Gerald,
Thank you for the mail. We will discuss in the next weekly meeting whether we
can help.
Before rewriting the multisite part, can the old one still be kept in place? The
update can be done gradually. People are already complaining that this guide
is missing.
Best Regards
Chaoyi Huang
Eoghan Glynn wrote:
> Thanks for putting this together!
>
> But one feature gap is some means to tag topic submissions, e.g.
> tagging the project-specific topics by individual project relevance.
> That could be a basis for grouping topics, to allow folks to better
> manage their time during the