Don't feed the troll. :)
On Monday, August 25, 2014 12:39 PM, Joshua Harlow <harlo...@outlook.com> wrote:
So, to see if we can get something useful from this thread:
what was your internal analysis, and can it be published? Even negative
analysis is useful for making OpenStack better...
We manage a fairly large nova-baremetal installation at Yahoo, and while
we've developed tools to hit the nova-bm API, we're planning to move to
Ironic without any support for the nova-bm API. Definitely no interest in
the proxy API from our end.
Sometimes you just need to let a thing die.
We drive the "VM=Cattle" message pretty hard. Part of onboarding a
property to our cloud, and allowing them to serve traffic from VMs, is
explaining the transient nature of VMs. I broadcast the message that all
compute resources die, and if your day/week/month is ruined because of a
Do the backing file and instance directory exist? -drive
implies both that the directory exists and that the instance image is
contained within it.
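In case it helps, here's a quick sanity check you can run on the compute
host. It's a minimal sketch: the per-instance directory layout, the "disk"
filename, and the qcow2 backing chain are assumptions about a typical
nova/libvirt deployment, not guaranteed paths.

import json
import os
import subprocess
import sys

def check_instance_disk(instance_dir):
    # e.g. /var/lib/nova/instances/<uuid> on a typical deployment
    disk = os.path.join(instance_dir, "disk")
    if not os.path.isdir(instance_dir):
        return "instance directory missing: %s" % instance_dir
    if not os.path.isfile(disk):
        return "instance disk missing: %s" % disk
    # qemu-img reports the backing file recorded in the qcow2 header.
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--output=json", disk]))
    backing = (info.get("full-backing-filename")
               or info.get("backing-filename"))
    if backing and not os.path.exists(backing):
        return "backing file missing: %s" % backing
    return "ok"

if __name__ == "__main__":
    print(check_instance_disk(sys.argv[1]))

If that reports a missing backing file, the shared base image was probably
cleaned up out from under the instance.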
On Tuesday, May 13, 2014 6:22 AM, sonia verma wrote:
>I see a clear line between something that handles the creation of all
>ancillary resources needed to boot a VM and then the creation of the VM
I agree. To me, the line is the difference between creating a top-level
resource and adding a host to that resource.
For example, I do expect a
On Thu, Sep 24, 2015 at 2:22 PM, Sam Morrison wrote:
> Yes, an AZ may not be considered a failure domain in terms of control
> infrastructure; I think all operators understand this. If you want control
> infrastructure failure domains, use regions.
> However, from a resource
> At the risk of getting too off-topic, I think there's an alternate solution
> to doing this in Nova or on the client side. I think we're missing some sort
> of OpenStack API and service that can handle this. Nova is a low-level
> infrastructure API and service; it is not designed to handle these
>Affinity is mostly meaningless with baremetal. It's entirely a
>virtualization-related thing. If you try to group things by TOR, or
>chassis, or anything else, it's going to start meaning something entirely
>different than it means in Nova,
I disagree; in fact, we need TOR and power
On Wed, Dec 16, 2015 at 12:40 PM, James Penick <jpen...@gmail.com> wrote:
> >Affinity is mostly meaningless with baremetal. It's entirely a
> >virtualization-related thing. If you try to group things by TOR, or
> >chassis, or anything else, it's going to start meaning something entirely
> >different than it means in Nova,
>at that time. Right
>now, this is the best plan I have, that we can commit to completing in a
I respect that you're trying to solve the problem we have right now to make
operators' lives Suck Less. But I think that a short-term decision made now
would hurt a lot more later.
I'm very much against it.
In my environment we're going to be depending heavily on the nova
scheduler for affinity/anti-affinity of physical datacenter constructs:
TOR, power, etc. Like other operators, we also need a concept of
host aggregates and availability zones for our baremetal as
the options to me.
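To make the ask concrete, this is roughly the shape of it on the
virtualized side today. A sketch using openstacksdk: the cloud name,
image/flavor/network names, host name, and aggregate/AZ names are all
placeholders, and it assumes the older "policies" list form of server
groups rather than the newer policy/rules microversion.

import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical clouds.yaml entry

# Anti-affinity server group: the scheduler keeps members on separate hosts.
group = conn.compute.create_server_group(
    name="web-anti-affinity", policies=["anti-affinity"])

# Boot a member of the group (the cloud layer passes the group
# scheduler hint for us).
server = conn.create_server(
    name="web-0",
    image="IMAGE_NAME_OR_ID",
    flavor="FLAVOR_NAME_OR_ID",
    network="NETWORK_NAME_OR_ID",
    group=group.id,
    wait=True,
)

# Host aggregate exposed as an availability zone, e.g. one per power feed.
agg = conn.compute.create_aggregate(
    name="power-feed-a", availability_zone="az-power-a")
conn.compute.add_host_to_aggregate(agg, "compute-host-01")

What we want is for that same vocabulary (server groups, aggregates, AZs)
to keep working when the host underneath is baremetal, with TOR and power
as the failure domains.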
On Mon, Dec 14, 2015 at 5:28 PM, Jim Rollenhagen <j...@jimrollenhagen.com> wrote:
> On Mon, Dec 14, 2015 at 04:15:42PM -0800, James Penick wrote:
> > I'm very much against it.
>rather than making progress on OpenStack, we'll spend the next 4 years
>bikeshedding broadly about which bits, if any, should be rewritten in Go.
100% agreed, and well said.
On Tue, Jun 7, 2016 at 12:00 PM, Monty Taylor wrote:
> This text is in my vote, but as I'm sure
On Mon, Feb 22, 2016 at 8:32 AM, Matt Fischer wrote:
> Cross-post to openstack-operators...
> As an operator, there's value in me attending some of the design summit
> sessions to provide feedback and guidance. But I don't really need to be in
> the room for a week
mriedem pointed me to the vendordata code, which shows some fields are
passed (such as project ID) and that SSL is supported. So that's good.
The docs on vendordata suck, but I think it'll do what you're looking for.
Michael Still wrote up a helpful post titled "Nova vendordata
deployment, an excessively detailed guide".
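In case it saves someone the digging: a dynamic vendordata (v2) target is
just an HTTP service that nova POSTs to. Here's a minimal sketch; the
nova.conf snippet in the comments and the target name "example" are
assumptions based on that post, and the dns-servers payload is made up.

# Assumes nova.conf points at this service, along the lines of:
#   [api]
#   vendordata_providers = StaticJSON,DynamicJSON
#   vendordata_dynamic_targets = example@http://127.0.0.1:8080/
# Nova POSTs a JSON body (project-id, instance-id, hostname, ...) and
# whatever JSON we return shows up in vendor_data2.json under "example".
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def vendordata():
    body = request.get_json(force=True)
    return jsonify({
        # Echo a couple of the fields nova passes, plus site-specific data.
        "project-id": body.get("project-id"),
        "hostname": body.get("hostname"),
        "dns-servers": ["10.0.0.53"],  # hypothetical payload
    })

if __name__ == "__main__":
    app.run(port=8080)  # put SSL termination in front of this in production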
On Thu, Dec 14, 2017 at 7:07 AM, Dan Smith wrote:
> Agreed. The first reaction I had to this proposal was pretty much what
> you state here: that now the 20% person has a 365-day window in which
> they have to keep their head in the game, instead of a 180-day one.
Big +1 to re-evaluating this. In my environment we have many users
deploying and managing a number of different apps in different tenants.
Some of our users, such as Yahoo Mail service engineers, could be in up to
40 different tenants. Those service engineers may change products as their
>This sounds like it is based on the customizations done at Oath, which to
>my recollection did not use the actual federation implementation in
>keystone due to its reliance on Athenz (I think?) as an identity manager.
>Something similar can be accomplished in standard keystone with the