It’s already been merged.
https://github.com/openstack/osops-tools-contrib/blob/master/multi/user-info.py
So, yes it can! :)
On 4/5/16, 7:53 AM, "Valery Tschopp" wrote:
>Hello Edgar,
>
>Happy to hear that you think it is a great tool. Can it be included in
I would have to agree with Matt. The ability to handle failures should reside
either within the application or in tooling around the application. Having the
infrastructure handle failures is, I believe, a slippery slope that is starting
to appear more and more.
I
Clayton,
This is really good information.
I’m wondering how we can help support you and get the necessary dev support to
get this resolved sooner rather than later. I totally agree with you that this
should be backported to at least Liberty.
Please let me know how I and others can help!
—Joe
The instance would still require a floating IP. That is the only way the host
would get outside of the tenant network.
We do this for some of our tenants to ensure that outbound connections are
controlled only through floating IPs.
On Jan 15, 2016, at 6:55 PM, Akshay Kumar
You could put an interface from your first network onto R2. That will allow
connectivity between the two routers.
Then, on R1, you can set up host routes to point all traffic to the interface
on R2, and you’ll most likely need to do the reverse on R2 to point traffic
back to R1.
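A rough sketch of what those host routes look like in practice: Neutron takes them on a subnet as a list of destination/nexthop pairs. All CIDRs, addresses, and names below are invented for illustration, and the openstacksdk call shown in the comments is only one way to apply them.

```python
# Hypothetical sketch of building Neutron host routes for the R1/R2 setup.
# Every address and name here is illustrative, not from the thread.

def make_host_routes(destinations, nexthop):
    """Build the host_routes payload Neutron accepts on a subnet:
    a list of {'destination': CIDR, 'nexthop': IP} dicts."""
    return [{"destination": d, "nexthop": nexthop} for d in destinations]

# Send traffic for the network behind R2 to R2's interface on the shared
# subnet (10.0.0.2 is an assumed address):
routes_for_r1_side = make_host_routes(["10.20.0.0/24"], "10.0.0.2")

# Applying them might look like this with openstacksdk (needs credentials):
# import openstack
# conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry
# conn.network.update_subnet("subnet-behind-r1",
#                            host_routes=routes_for_r1_side)
```

The mirror-image routes would go on the subnet behind R2, pointing back at R1’s interface.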
We
At this point, we use Keystone and UUIDs for our setup, but we don’t store the
UUID tokens in the database; we use Memcache for that. Actually, we use
McRouter in front of Memcache to make sure any node in our control plane can
validate the token.
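For reference, a keystone.conf fragment for that kind of memcache-backed UUID token setup looked roughly like the following in the Liberty era. Treat it as a sketch, not a drop-in: option names moved between releases, and the server address assumes Keystone talks to a local mcrouter that fans writes out to the shared Memcache pool.

```ini
# Sketch only -- verify option names against your release's docs.
[token]
provider = uuid
driver = memcache

[memcache]
# Local mcrouter endpoint; mcrouter replicates to the memcache pool so
# any control-plane node can validate a token.
servers = 127.0.0.1:11211
```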
—Joe
From: Ajaya Agrawal
Hi,
These would make great additions to the OpenStack Operators github repos that
we have setup. It would probably fit under the
http://github.com/OpenStack/osops-tools-generic repo.
Thanks
Joe
On Dec 9, 2015, at 9:48 PM, Hieu LE wrote:
Dear
We haven’t seen the bad namespaces issue, but we have experienced an issue
where our node eventually started to see soft lockups like these:
kernel: BUG: soft lockup - CPU#0 stuck for 22s!
We noticed it once we hit a high number of namespaces. It was definitely over
400, as we didn’t realize
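As a generic aside (not from the thread): `ip netns` keeps named namespaces as entries under /var/run/netns, so a quick way to watch the count on a network node is to list that directory. The path below is the conventional iproute2 location; adjust if your distribution differs.

```python
# Count named network namespaces by listing iproute2's netns directory.
import os

def count_netns(netns_dir="/var/run/netns"):
    """Return the number of named network namespaces on this host."""
    try:
        return len(os.listdir(netns_dir))
    except FileNotFoundError:
        return 0  # directory absent: no named namespaces created yet

# e.g. alert once count_netns() climbs past a few hundred
```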
Meeting Notes - 11/18/2015 - 1900 UTC
IRC Room: #openstack-meeting-4
Attendees:
- j^2
- mdorman
- klindgren
- raginbajin
- balajin
# Pre-Agenda Topics
- Group agreed to update the name of the working group to contain OSOps.
- Group agreed to use the OSOps wiki page to host notes and
We manage two regions using a single portal. We do this by utilizing a single
Keystone that is stretched across the two regions. This allows any user to go
into whichever portal they like and manage their tenants in either region.
All services other than Keystone (for us) are independent of
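Conceptually, the resulting service catalog is one shared identity endpoint plus per-region endpoints for everything else. A tiny illustrative sketch of that lookup (all hostnames and URLs invented):

```python
# Illustrative only: one Keystone shared across regions, per-region
# endpoints for other services. None of these URLs are real.
catalog = {
    "identity": {"shared": "https://keystone.example.com:5000/v3"},
    "compute": {
        "RegionOne": "https://nova.r1.example.com:8774/v2.1",
        "RegionTwo": "https://nova.r2.example.com:8774/v2.1",
    },
}

def endpoint_for(service, region):
    """Resolve a service endpoint: region-specific if present, else shared."""
    endpoints = catalog[service]
    return endpoints.get(region) or endpoints["shared"]
```

Any portal can then authenticate against the one Keystone and pick the region-appropriate endpoint per service.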
From: j...@chef.io
Date: Wednesday, September 16, 2015 at 2:19 PM
To: "Bajin, Joseph", OpenStack Operators Mailing List
Subject: Re: [Openstack-operators] [openstack-operators][osops] It's alive!
Awesome. This is great.
I cou
I’d definitely like to be a part of this. I think this is one of the items
that fits under the Monitoring/Ops Tools Working Group topics that we discussed
at the Summit. Not sure if you were able to attend, but this was the etherpad
on the topics discussed: