Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread James Penick
Hey Colleen,

>This sounds like it is based on the customizations done at Oath, which to
my recollection did not use the actual federation implementation in
keystone due to its reliance on Athenz (I think?) as an identity manager.
Something similar can be accomplished in standard keystone with the mapping
API in keystone which can cause dynamic generation of a shadow user,
project and role assignments.

You're correct, this was more about the general design of asymmetrical
token-based authentication rather than our exact implementation with
Athenz. We didn't use the shadow users because Athenz authentication in our
implementation is done via an 'ntoken', which is Athenz's older method for
identification, so it was more straightforward for us to resurrect the
PKI driver. The new way is via mTLS, where the user can identify themselves
via a client cert. I imagine we'll need to move our implementation to use
shadow users as a part of that change.

>We have historically pushed back hard against allowing setting a project
ID via the API, though I can see predictable-but-not-settable as less
problematic.

Yup, predictable-but-not-settable is what we need. Basically as long as the
uuid is a hash of the string, we're good. I definitely don't want to be
able to set a user ID or project ID via API, because of the security and
operability problems that could arise. In my mind this would just be a
config setting.
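
To make that concrete, here's roughly what I have in mind -- just a sketch,
not keystone code, and the namespace value is something a deployer would
pick and set identically in every region's config:

    import uuid

    # Hypothetical: one namespace UUID shared by every region, set in config.
    PROJECT_ID_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, 'cloud.example.com')

    def predictable_project_id(domain_name, project_name):
        # The same (domain, project) name pair always hashes to the same ID,
        # so independently deployed regions agree on project IDs without ever
        # letting a caller set an arbitrary ID through the API.
        return uuid.uuid5(PROJECT_ID_NAMESPACE,
                          domain_name + '/' + project_name).hex

Since keystone IDs are already 32-character hex strings, nothing downstream
should even notice the difference.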

>One of the use cases from the past was being able to use the same token in
different regions, which is problematic from a security perspective. Is
that the idea here? Or could someone provide more details on why this is
needed?

Well, sorta. As far as we're concerned you can authenticate to keystone
in each region independently using your credential from the IdP. Our use
cases are more about simplifying federation of other systems, like Glance.
Say I create an image and a member list for that image. I'd like to be able
to copy that image *and* all of its metadata straight across to another
cluster and have things Just Work without needing to look up and resolve
the new UUIDs on the new cluster.

However, for deployers who wish to use Keystone as their IdP, they'll need
to use that keystone credential to establish a credential in the keystone
cluster in that region.

-James

On Wed, Sep 26, 2018 at 2:10 AM Colleen Murphy  wrote:

> Thanks for the summary, Ildiko. I have some questions inline.
>
> On Tue, Sep 25, 2018, at 11:23 AM, Ildiko Vancsa wrote:
>
> 
>
> >
> > We agreed to prefer federation for Keystone and came up with two work
> > items to cover missing functionality:
> >
> > * Keystone to trust a token from an ID Provider master and when the auth
> > method is called, perform an idempotent creation of the user, project
> > and role assignments according to the assertions made in the token
>
> This sounds like it is based on the customizations done at Oath, which to
> my recollection did not use the actual federation implementation in
> keystone due to its reliance on Athenz (I think?) as an identity manager.
> Something similar can be accomplished in standard keystone with the mapping
> API in keystone which can cause dynamic generation of a shadow user,
> project and role assignments.
>
> > * Keystone should support the creation of users and projects with
> > predictable UUIDs (eg.: hash of the name of the users and projects).
> > This greatly simplifies Image federation and telemetry gathering
>
> I was in and out of the room and don't recall this discussion exactly. We
> have historically pushed back hard against allowing setting a project ID
> via the API, though I can see predictable-but-not-settable as less
> problematic. One of the use cases from the past was being able to use the
> same token in different regions, which is problematic from a security
> perspective. Is that the idea here? Or could someone provide more details
> on why this is needed?
>
> Were there any volunteers to help write up specs and work on the
> implementations in keystone?
>
> 
>
> Colleen (cmurphy)
>


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-15 Thread James Penick
On Thu, Dec 14, 2017 at 7:07 AM, Dan Smith  wrote:

>
>
> Agreed. The first reaction I had to this proposal was pretty much what
> you state here: that now the 20% person has a 365-day window in which
> they have to keep their head in the game, instead of a 180-day one.
>
> Assuming doubling the length of the cycle has no impact on the
> _importance_ of the thing the 20% person is working on, relative to
> project priorities, then the longer cycle just means they have to
> continuously rebase for a longer period of time.
>

+1, I see yearly releases as something that will inevitably hinder project
velocity, not help it.

-James


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-10-30 Thread James Penick
Big +1 to re-evaluating this. In my environment we have many users
deploying and managing a number of different apps in different tenants.
Some of our users, such as Yahoo Mail service engineers, could be in up to
40 different tenants. Those service engineers may change products as their
careers develop. Having to re-deploy part of an application stack because
Sally SE changed products would be unnecessarily disruptive.

 I regret that I missed the bus on this back in June. But at Oath we've
built a system (called Copper Argos) on top of Athenz (it's open source:
www.athenz.io) to provide instance identity in a way that is unique but
doesn't have all of the problems of a static persistent identity.

 The really really really* high level overview is:
1. Users pass application identity data to Nova as metadata during the boot
process.
2. Our vendor-data driver works with a service called HostSignd to validate
that data and create a one time use attestation document which is injected
into the instance's config drive.
3. On boot an agent within the instance will use that time-limited host
attestation document to identify itself to the Athenz identity service,
which will then exchange the document for a unique certificate containing
the application data passed in the boot call.
4. From then on the instance identity (TLS certificate) is periodically
exchanged by the agent for a new certificate.
5. The host attestation document and the instance TLS certificate can each
only be used a single time to exchange for another certificate. The
attestation document has a very short ttl, and the instance identity is set
to live slightly longer than the planned rotation frequency. So if you
rotate your certificates once an hour, the ttl on the cert should be 2
hours. This gives some wiggle room in the event the identity service is
down for any reason.

The agent is also capable of supporting SSH CA by passing the SSH host key
up to be re-signed whenever it exchanges the TLS certificate. All instances
leveraging Athenz identity can communicate with one another using TLS mutual
auth.
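
To make steps 4 and 5 a bit more concrete, the agent loop is conceptually
something like the sketch below. This is illustrative only -- the endpoint,
paths and field names are made up rather than the actual Athenz/Copper Argos
code, and the real agent generates a fresh keypair/CSR on every rotation
instead of reusing the same key:

    import time
    import requests  # illustrative; the real agent is not a 20-line script

    IDENTITY_URL = 'https://identity.example.com/v1/instance/refresh'  # made up
    CERT, KEY = '/var/lib/identity/cert.pem', '/var/lib/identity/key.pem'
    ROTATION_INTERVAL = 3600            # rotate hourly...
    CERT_TTL = 2 * ROTATION_INTERVAL    # ...with a 2h cert, per the wiggle room above

    def rotate_once():
        # Present the current, still-valid cert over mTLS. The identity service
        # marks that cert's ID as spent and returns a replacement good for
        # CERT_TTL seconds; a second exchange with the same cert is refused.
        resp = requests.post(IDENTITY_URL, cert=(CERT, KEY),
                             json={'requested_ttl': CERT_TTL})
        resp.raise_for_status()
        with open(CERT, 'w') as f:
            f.write(resp.json()['certificate'])

    while True:
        rotate_once()
        time.sleep(ROTATION_INTERVAL)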

If there's any interest I'd be happy to go into more detail here on the ML
and/or at the summit in Sydney.

-James
* With several more zoolander-style Really's thrown in for good measure.


On Tue, Oct 10, 2017 at 12:34 PM, Fox, Kevin M  wrote:

> Big +1 for reevaluating the bigger picture. We have a pile of api's that
> together don't always form the most useful of api's due to lack of big
> picture analysis.
>
> +1 to thinking through the dev's/devops use case.
>
> Another one to really think over is single user that != application
> developer. IE, Pure user type person deploying cloud app in their tenant
> written by a dev not employed by the user's company. User shouldn't have to go
> to Operator to provision service accounts and other things. App dev should
> be able to give everything needed to let OpenStack launch say a heat
> template that provisions the service accounts for the User, not making the
> user twiddle the api themselves. It should be a "here, launch this" kind of
> thing, and they fill out the heat form, and out pops a working app. If they
> have to go provision a bunch of stuff themselves before passing stuff to
> the form, game over. Likewise, if they have to look at yaml, game over. How
> do app credentials fit into this?
>
> Thanks,
> Kevin
>
> 
> From: Zane Bitter [zbit...@redhat.com]
> Sent: Monday, October 09, 2017 9:39 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [keystone][nova] Persistent application
> credentials
>
> On 12/09/17 18:58, Colleen Murphy wrote:
> > While it's fresh in our minds, I wanted to write up a short recap of
> > where we landed in the Application Credentials discussion in the BM/VM
> > room today. For convenience the (as of yet unrevised) spec is here:
>
> Thanks so much for staying on this Colleen, it's tremendously helpful to
> have someone from the core team keeping an eye on it :)
>
> > http://specs.openstack.org/openstack/keystone-specs/
> specs/keystone/backlog/application-credentials.html
> >
> > Attached are images of the whiteboarded notes.
> >
> > On the contentious question of the lifecycle of an application
> > credential, we re-landed in the same place we found ourselves in when
> > the spec originally landed, which is that the credential becomes invalid
> > when its creating user is disabled or deleted. The risk involved in
> > allowing a credential to continue to be valid after its creating user
> > has been disabled is not really surmountable, and we are basically
> > giving up on this feature. The benefits we still get from not having to
> > embed user passwords in config files, especially for LDAP or federated
> > users, is still a vast improvement over the situation today, as is the
> > ability to rotate credentials.
>
> OK, there were lots of smart people in the room so I trust that y'all

Re: [openstack-dev] Supporting SSH host certificates

2017-10-05 Thread James Penick
Hey Pino,

mriedem pointed me to the vendordata code [1] which shows some fields are
passed (such as project ID) and that SSL is supported. So that's good.

The docs on vendordata suck. But I think it'll do what you're looking for.
Michael Still wrote up a helpful post titled "Nova vendordata deployment,
an excessively detailed guide"[2] and he's written a vendordata service
example[3] which even shows keystone integration.

At Oath, we have a system that provides a unique x509 certificate for each
host, including the ability to sign host SSH keys against an HSM. In our
case what we do is have Nova call the service, which generates and returns
a signed (and time limited) host bootstrap document, which is injected into
the instance. When the instance boots it calls our identity service and
provides its bootstrap document as a bearer certificate. The identity
service trusts this one-time document to attest the instance, and will then
provide an x509 certificate as well as sign the host's SSH keys. After the
initial bootstrap the host will rotate its keys frequently, by providing
its last certificate in exchange for a new one. The service tracks all host
document and certificate IDs which have been exchanged until their expiry,
so that a cert cannot be re-used.

This infrastructure relies on Athenz [4] as the AuthNG system for all
principals (users, services, roles, domains, etc) as well as an internal
signatory service which signs x509 certificates and SSH host keys using an
HSM infrastructure.

Instead, you could write a vendordata service which, when called, would
generate an ssh host keypair, sign it, and return those files as encoded
data, which can be expanded into files in the correct locations on first
boot. I strongly suggest using not only using keystone auth, but that you
ensure all calls from vendordata to the microservice are encrypted with TLS
mutual auth.
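
To sketch what that microservice could look like (heavily simplified; every
path and field name here is an assumption, and in a real deployment you'd
sign through an HSM-backed CA rather than a key on disk, with keystone auth
and mTLS in front of it as described above):

    import base64
    import subprocess
    import tempfile

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    CA_KEY = '/etc/ssh-ca/ca'  # hypothetical path to the SSH CA signing key

    @app.route('/', methods=['POST'])
    def host_keys():
        # Nova's dynamic vendordata driver POSTs instance details (project id,
        # instance uuid, hostname, ...) as JSON; whatever JSON we return ends
        # up in vendor_data2.json for cloud-init or an agent to install.
        info = request.get_json()
        hostname = info.get('hostname', 'unknown')
        out = {}
        with tempfile.TemporaryDirectory() as tmp:
            keyfile = tmp + '/ssh_host_ed25519_key'
            # Generate a host keypair, then sign the public half with the CA
            # (-h marks it as a host cert, -n sets the principal).
            subprocess.check_call(['ssh-keygen', '-q', '-t', 'ed25519', '-N', '',
                                   '-f', keyfile])
            subprocess.check_call(['ssh-keygen', '-s', CA_KEY, '-h',
                                   '-I', hostname, '-n', hostname,
                                   keyfile + '.pub'])
            for name in ('ssh_host_ed25519_key',
                         'ssh_host_ed25519_key.pub',
                         'ssh_host_ed25519_key-cert.pub'):
                with open(tmp + '/' + name, 'rb') as f:
                    out[name] = base64.b64encode(f.read()).decode()
        return jsonify(out)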

-James


1:
https://github.com/openstack/nova/blob/master/nova/api/metadata/vendordata_dynamic.py#L77
2: https://www.stillhq.com/openstack/22.html
3: https://github.com/mikalstill/vendordata
4: https://athenz.io


On Fri, Sep 29, 2017 at 5:17 PM, Fox, Kevin M  wrote:

> https://review.openstack.org/#/c/93/
> --
> *From:* Giuseppe de Candia [giuseppe.decan...@gmail.com]
> *Sent:* Friday, September 29, 2017 1:05 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] Supporting SSH host certificates
>
> Ihar, thanks for pointing that out - I'll definitely take a close look.
>
> Jon, I'm not very familiar with Barbican, but I did assume the full
> implementation would use Barbican to store private keys. However, in terms
> of actually getting a private key (or SSH host cert) into a VM instance,
> Barbican doesn't help. The instance needs permission to access secrets
> stored in Barbican. The main question of my e-mail is: how do you inject a
> credential in an automated but secure way? I'd love to hear ideas - in the
> meantime I'll study Ihar's link.
>
> thanks,
> Pino
>
>
>
> On Fri, Sep 29, 2017 at 2:49 PM, Ihar Hrachyshka 
> wrote:
>
>> What you describe (at least the use case) seems to resemble
>> https://review.openstack.org/#/c/456394/ This work never moved
>> anywhere since the spec was posted though. You may want to revive the
>> discussion in scope of the spec.
>>
>> Ihar
>>
>> On Fri, Sep 29, 2017 at 12:21 PM, Giuseppe de Candia
>>  wrote:
>> > Hi Folks,
>> >
>> >
>> >
>> > My intent in this e-mail is to solicit advice for how to inject SSH host
>> > certificates into VM instances, with minimal or no burden on users.
>> >
>> >
>> >
>> > Background (skip if you're already familiar with SSH certificates):
>> without
>> > host certificates, when clients ssh to a host for the first time (or
>> after
>> > the host has been re-installed), they have to hope that there's no man
>> in
>> > the middle and that the public key being presented actually belongs to
>> the
>> > host they're trying to reach. The host's public key is stored in the
>> > client's known_hosts file. SSH host certificates eliminate the
>> possibility of
>> > Man-in-the-Middle attack: a Certificate Authority public key is
>> distributed
>> > to clients (and written to their known_hosts file with a special syntax
>> and
>> > options); the host public key is signed by the CA, generating an SSH
>> > certificate that contains the hostname and validity period (among other
>> > things). When negotiating the ssh connection, the host presents its SSH
>> host
>> > certificate and the client verifies that it was signed by the CA.
>> >
>> >
>> >
>> > How to support SSH host certificates in OpenStack?
>> >
>> >
>> >
>> > First, let's consider doing it by hand, instance by instance. The only
>> > solution I can think of is to VNC to the instance, copy the public key
>> to my
>> > CA server, sign it, and then write the certificate back 

[openstack-dev] OpenStack upstream specs - Invitation to edit

2016-10-18 Thread James Penick (via Google Docs)

I've shared an item with you:

OpenStack upstream specs
https://docs.google.com/document/d/1oZzTLFBASwSydy2NNUXJlBA1nRNB-qvoJF_H49utNc4/edit?usp=sharing=CJOpnqIJ=58069599

It's not an attachment -- it's stored online. To open this item, just click  
the link above.


Re: [openstack-dev] Reasoning behind my vote on the Go topic

2016-06-07 Thread James Penick
>rather than making progress on OpenStack, we'll spend the next 4 years
bikeshedding broadly about which bits, if any, should be rewritten in Go.

100% agreed, and well said.


On Tue, Jun 7, 2016 at 12:00 PM, Monty Taylor  wrote:

> This text is in my vote, but as I'm sure there are people who do not
> read all of the gerrit comments for governance changes, I'm posting it
> here so that my thoughts are clear.
>
> Please know that this has actually kept me up at night. I cast my vote
> on this neither glibly or superficially. I have talked to everyone I can
> possibly think of on the topic, and at the end, the only thing I can do
> is use my judgment and vote to the best of my ability. I apologize from
> the bottom of my heart to the people I find myself in disagreement with.
> I have nothing but the utmost respect for you all.
>
> I vote against allowing Go as an official language of OpenStack.
>
> "The needs of the many outweigh the needs of the few, or the one"
>
> I'm super unhappy about both possible votes here.
>
> I think go is a wonderful language. I think hummingbird is a well
> considered solution to a particular problem. I think that lack of
> flexibility is broadly speaking not a problem we have in OpenStack
> currently. I'm more worried about community cohesion in a post-Big Tent
> world than I am about specific optimization.
>
> I do not think that adding Go as a language to OpenStack today is enough
> of a win to justify the cost, so I don't like accepting it.
>
> I do not think that this should preclude serious thought about
> OpenStack's technology underpinnings, so I don't like rejecting it.
>
> "only a great fool would reach for what he was given..."
>
> I think that one of OpenStack's biggest and most loudly spoken about
> problems is too many per-project solutions and not enough holistic
> solutions. Because of that, and the six years of experience we have
> seeing where that gets us, I do not think that adding Go into the mix
> and "seeing what happens" is going to cause anything other than chaos.
>
> If we want to add Go, or any other language, into the mix for server
> projects, I think it should be done with the intent that we are going to
> do it because it's a markedly better choice across the board, that we
> are going to rewrite literally everything, and I believe that we should
> consider the cost associated with retraining 2000 developers as part of
> considering that. Before you think that that's me throwing the baby out
> with the bathwater...
>
> In a previous comment, Deklan says:
>
> "If Go was accepted as an officially supported language in the OpenStack
> community, I'd be the first to start to rewrite as much code as possible
> in Go."
>
> That is, in fact, EXACTLY the concern. That rather than making progress
> on OpenStack, we'll spend the next 4 years bikeshedding broadly about
> which bits, if any, should be rewritten in Go. It took Juju YEARS to
> rewrite from Python to Go and to hit feature parity. The size of that
> codebase was much smaller and they even had a BDFL (which people keep
> telling us makes things go quicker)
>
> It could be argued that we could exercise consideration about which
> things get rewritten in Go so as to avoid that, but I'm pretty sure that
> would just mean that the only conversation the TC would have for the
> next two years would be "should X be in Go or Python" - and we'd have
> strong proponents from each project on each side of the argument.
>
> David Goetz says "you aren’t doing the community any favors by deciding
> for them how they do their jobs". I get that, and can respect that point
> of view. However, for the most part, the negative feedback we get as
> members of the TC is actually that we're too lax, not that we're too
> strict.
>
> I know that it's a popular sentiment with some folks to say "let devs
> use whatever tool they want to." However, that has never been our
> approach with OpenStack. It has been suggested multiple times and
> aligning on limited chosen tech has always been the thing we've chosen.
> I tend to align in my personal thinking more with Dan McKinley in:
>
> http://mcfunley.com/choose-boring-technology
>
> I have effectively been arguing his point for as long as I've been
> involved in OpenStack governance - although probably not as well as he
> does. I don't see any reason to reverse myself now.
>
> I'd rather see us focus energy on Python3, asyncio and its pluggable
> event loops. The work in:
>
> http://magic.io/blog/uvloop-blazing-fast-python-networking/
>
> is a great indication in an actual apples-to-apples comparison of what
> can be accomplished in python doing IO-bound activities by using modern
> Python techniques. I think that comparing python2+eventlet to a fresh
> rewrite in Go isn't 100% of the story. A TON of work has gone in to
> Python that we're not taking advantage of because we're still supporting
> Python2. So what I'd love to see in the realm of 

Re: [openstack-dev] [Openstack-operators] [all] A proposal to separate the design summit

2016-02-22 Thread James Penick
On Mon, Feb 22, 2016 at 8:32 AM, Matt Fischer  wrote:

> Cross-post to openstack-operators...
>
> As an operator, there's value in me attending some of the design summit
> sessions to provide feedback and guidance. But I don't really need to be in
> the room for a week discussing minutiae of implementations. So I probably
> can't justify 2 extra trips just to give a few hours of
> feedback/discussion. If this is indeed the case for some other folks we'll
> need to do a good job of collecting operator feedback at the operator
> sessions (perhaps hopefully with reps from each major project?). We don't
> want projects operating in a vacuum when it comes to major decisions.
>

If there's one thing I've learned from design summits, it's that there
should be operators in nearly every session. In my experience the core
developers for each project have been overwhelmingly encouraging of Ops
feedback. I'm hoping that, if anything, this split would encourage operators
and deployers to participate more in the design sessions.



>
>
> Also where do the current operators design sessions and operators midcycle
> fit in here?
>
> (apologies for not replying directly to the first message, gmail seems to
> have lost it).
>
>
>
> On Mon, Feb 22, 2016 at 8:24 AM, Russell Bryant 
> wrote:
>
>>
>>
>> On Mon, Feb 22, 2016 at 10:14 AM, Thierry Carrez 
>> wrote:
>>
>>> Hi everyone,
>>>
>>> TL;DR: Let's split the events, starting after Barcelona.
>>>
>>
>> This proposal sounds fantastic.  Thank you very much to those that help
>> put it together.
>>
>> --
>> Russell Bryant
>>


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread James Penick
>Affinity is mostly meaningless with baremetal. It's entirely a
>virtualization related thing. If you try and group things by TOR, or
>chassis, or anything else, it's going to start meaning something entirely
>different than it means in Nova,

I disagree; in fact, we need TOR and power affinity/anti-affinity for VMs
as well as baremetal. As an example, there are cases where certain compute
resources move significant amounts of data between one or two other
instances, but you want to ensure those instances are not on the same
hypervisor. In that scenario it makes sense to have instances on different
hypervisors, but on the same TOR to reduce unnecessary traffic across the
fabric.

>and it would probably be better to just
>make lots of AZ's and have users choose their AZ mix appropriately,
>since that is the real meaning of AZ's.

Yes, at some level certain things should be expressed in the form of an AZ;
power seems like a good candidate for that. But expressing something like
a TOR as an AZ in an environment with hundreds of thousands of physical
hosts would not scale. Further, it would require users to have a deeper
understanding of datacenter topology, which is exactly the opposite of why
IaaS exists.

The whole point of a service-oriented infrastructure is to be able to give
the end user the ability to boot compute resources that match a variety of
constraints, and have those resources selected and provisioned for them. IE
"Give me 12 instances of m1.blah, all running Linux, and make sure they're
spread across 6 different TORs and 2 different power domains in network
zone Blah."







On Wed, Dec 16, 2015 at 10:38 AM, Clint Byrum  wrote:

> Excerpts from Jim Rollenhagen's message of 2015-12-16 08:03:22 -0800:
> > Nobody is talking about running a compute per flavor or capability. All
> > compute hosts will be able to handle all ironic nodes. We *do* still
> > need to figure out how to handle availability zones or host aggregates,
> > but I expect we would pass along that data to be matched against. I
> > think it would just be metadata on a node. Something like
> > node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
> > you. Ditto for host aggregates - add the metadata to ironic to match
> > what's in the host aggregate. I'm honestly not sure what to do about
> > (anti-)affinity filters; we'll need help figuring that out.
> >
>
> Affinity is mostly meaningless with baremetal. It's entirely a
> virtualization related thing. If you try and group things by TOR, or
> chassis, or anything else, it's going to start meaning something entirely
> different than it means in Nova, and it would probably be better to just
> make lots of AZ's and have users choose their AZ mix appropriately,
> since that is the real meaning of AZ's.
>


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread James Penick
Someone else expressed this more gracefully than I:

'Because sans Ironic, compute-nodes still have physical characteristics
that make grouping on them attractive for things like anti-affinity. I
don't really want my HA instances "not on the same compute node", I want
them "not in the same failure domain". It becomes a way for all
OpenStack workloads to have more granularity than "availability zone".'
(
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg14891.html
)

^That guy definitely has a good head on his shoulders ;)

-James


On Wed, Dec 16, 2015 at 12:40 PM, James Penick <jpen...@gmail.com> wrote:

> >Affinity is mostly meaningless with baremetal. It's entirely a
> >virtualization related thing. If you try and group things by TOR, or
> >chassis, or anything else, it's going to start meaning something entirely
> >different than it means in Nova,
>
> I disagree, in fact, we need TOR and power affinity/anti-affinity for VMs
> as well as baremetal. As an example, there are cases where certain compute
> resources move significant amounts of data between one or two other
> instances, but you want to ensure those instances are not on the same
> hypervisor. In that scenario it makes sense to have instances on different
> hypervisors, but on the same TOR to reduce unnecessary traffic across the
> fabric.
>
> >and it would probably be better to just
> >make lots of AZ's and have users choose their AZ mix appropriately,
> >since that is the real meaning of AZ's.
>
> Yes, at some level certain things should be expressed in the form of an
> AZ; power seems like a good candidate for that. But expressing something
> like a TOR as an AZ in an environment with hundreds of thousands of
> physical hosts, would not scale. Further, it would require users to have a
> deeper understanding of datacenter toplogy, which is exactly the opposite
> of why IaaS exists.
>
> The whole point of a service-oriented infrastructure is to be able to give
> the end user the ability to boot compute resources that match a variety of
> constraints, and have those resources selected and provisioned for them. IE
> "Give me 12 instances of m1.blah, all running Linux, and make sure they're
> spread across 6 different TORs and 2 different power domains in network
> zone Blah."
>
>
>
>
>
>
>
> On Wed, Dec 16, 2015 at 10:38 AM, Clint Byrum <cl...@fewbar.com> wrote:
>
>> Excerpts from Jim Rollenhagen's message of 2015-12-16 08:03:22 -0800:
>> > Nobody is talking about running a compute per flavor or capability. All
>> > compute hosts will be able to handle all ironic nodes. We *do* still
>> > need to figure out how to handle availability zones or host aggregates,
>> > but I expect we would pass along that data to be matched against. I
>> > think it would just be metadata on a node. Something like
>> > node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
>> > you. Ditto for host aggregates - add the metadata to ironic to match
>> > what's in the host aggregate. I'm honestly not sure what to do about
>> > (anti-)affinity filters; we'll need help figuring that out.
>> >
>>
>> Affinity is mostly meaningless with baremetal. It's entirely a
>> virtualization related thing. If you try and group things by TOR, or
>> chassis, or anything else, it's going to start meaning something entirely
>> different than it means in Nova, and it would probably be better to just
>> make lots of AZ's and have users choose their AZ mix appropriately,
>> since that is the real meaning of AZ's.
>>


Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-16 Thread James Penick
>We actually called out this problem in the Ironic midcycle and the Tokyo
>summit - we decided to report Ironic's total capacity from each compute
>host (resulting in over-reporting from Nova), and real capacity (for
>purposes of reporting, monitoring, whatever) should be fetched by
>operators from Ironic (IIRC, you specifically were okay with this
>limitation). This is still wrong, but it's the least wrong of any option
>(yes, all are wrong in some way). See the spec[1] for more details.

I do recall that discussion, but the merged spec says:

"In general, a nova-compute running the Ironic virt driver should expose
(total resources)/(number of compute services). This allows for resources
to be
sharded across multiple compute services without over-reporting resources."

I agree that what you said via email is Less Awful than what I read on the
spec (Did I misread it? Am I full of crazy?)

>We *do* still
>need to figure out how to handle availability zones or host aggregates,
>but I expect we would pass along that data to be matched against. I
>think it would just be metadata on a node. Something like
>node.properties['availability_zone'] = 'rackspace-iad-az3' or what have
>you. Ditto for host aggregates - add the metadata to ironic to match
>what's in the host aggregate. I'm honestly not sure what to do about
>(anti-)affinity filters; we'll need help figuring that out.
&
>Right, I didn't mean gantt specifically, but rather "splitting out the
>scheduler" like folks keep talking about. That's why I said "actually
>exists". :)

 I think splitting out the scheduler isn't going to be realistic. My
feeling is, if Nova is going to fulfill its destiny of being The Compute
Service, then the scheduler will stay put and the VM pieces will split out
into another service (Which I think should be named "Seamus" so I can refer
to it as "The wee baby Seamus").

(re: ironic maintaining host aggregates)
>Yes, and yes, assuming those things are valuable to our users. The
>former clearly is, the latter will clearly depend on the change but I
>expect we will evolve to continue to fit Nova's model of the world
>(after all, fitting into Nova's model is a huge chunk of what we do, and
>is exactly what we're trying to do with this work).

It's a lot easier to fit into the nova model if we just use what's there
and don't bother trying to replicate it.

>Again, the other solutions I'm seeing that *do* solve more problems are:
>* Rewrite the resource tracker

>Do you have an entire team (yes, it will take a relatively large team,
>especially when you include some cores dedicated to reviewing the code)
>that can dedicate a couple of development cycles to one of these?

 We can certainly help.

>I sure
>don't. If and when we do, we can move forward on that and deprecate this
>model, if we find that to be a useful thing to do at that time. Right
>now, this is the best plan I have, that we can commit to completing in a
>reasonable timeframe.

I respect that you're trying to solve the problem we have right now to make
operators' lives Suck Less. But I think that a short-term decision made now
would hurt a lot more later on.

-James

On Wed, Dec 16, 2015 at 8:03 AM, Jim Rollenhagen <j...@jimrollenhagen.com>
wrote:

> On Tue, Dec 15, 2015 at 05:19:19PM -0800, James Penick wrote:
> > > getting rid of the raciness of ClusteredComputeManager in my
> > >current deployment. And I'm willing to help other operators do the same.
> >
> >  You do alleviate race, but at the cost of complexity and
> > unpredictability.  Breaking that down, let's say we go with the current
> > plan and the compute host abstracts hardware specifics from Nova.  The
> > compute host will report (sum of resources)/(sum of managed compute).  If
> > the hardware beneath that compute host is heterogenous, then the
> resources
> > reported up to nova are not correct, and that really does have
> significant
> > impact on deployers.
> >
> >  As an example: Let's say we have 20 nodes behind a compute process.
> Half
> > of those nodes have 24T of disk, the other have 1T.  An attempt to
> schedule
> > a node with 24T of disk will fail, because Nova scheduler is only aware
> of
> > 12.5T of disk.
>
> We actually called out this problem in the Ironic midcycle and the Tokyo
> summit - we decided to report Ironic's total capacity from each compute
> host (resulting in over-reporting from Nova), and real capacity (for
> purposes of reporting, monitoring, whatever) should be fetched by
> operators from Ironic (IIRC, you specifically were okay with this
> limitation). This is still wrong, but it's the least wrong of any option
> (yes, all are wrong in some way). See the spec[1] for more de

Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-15 Thread James Penick
> getting rid of the raciness of ClusteredComputeManager in my
>current deployment. And I'm willing to help other operators do the same.

 You do alleviate the race, but at the cost of complexity and
unpredictability.  Breaking that down, let's say we go with the current
plan and the compute host abstracts hardware specifics from Nova.  The
compute host will report (sum of resources)/(sum of managed compute).  If
the hardware beneath that compute host is heterogeneous, then the resources
reported up to nova are not correct, and that really does have significant
impact on deployers.

 As an example: Let's say we have 20 nodes behind a compute process.  Half
of those nodes have 24T of disk, the other have 1T.  An attempt to schedule
a node with 24T of disk will fail, because Nova scheduler is only aware of
12.5T of disk.
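
Spelled out, the arithmetic is just:

    # 20 ironic nodes behind one compute process: 10 with 24T, 10 with 1T.
    nodes_disk_tb = [24] * 10 + [1] * 10
    per_node_report = sum(nodes_disk_tb) / float(len(nodes_disk_tb))
    print(per_node_report)   # 12.5 -- so a request for a 24T node never fits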

 Ok, so one could argue that you should just run two compute processes per
type of host (N+1 redundancy).  If you have different raid levels on two
otherwise identical hosts, you'll now need a new compute process for each
variant of hardware.  What about host aggregates or availability zones?
This sounds like an N^2 problem.  A mere 2 host flavors spread across 2
availability zones means 8 compute processes.

I have hundreds of hardware flavors, across different security, network,
and power availability zones.

>None of this precludes getting to a better world where Gantt actually
>exists, or the resource tracker works well with Ironic.

It doesn't preclude it, no. But Gantt is dead[1], and I haven't seen any
movement to bring it back.

>It just gets us to an incrementally better model in the meantime.

 I strongly disagree. Will Ironic manage its own concept of availability
zones and host aggregates?  If Nova changes its model, will Ironic
change to mirror it?  If not, I now need to model the same topology in two
different ways.

 In that context, breaking out scheduling and "hiding" ironic resources
behind a compute process is going to create more problems than it will
solve, and is not the "Least bad" of the options to me.

-James
[1] http://git.openstack.org/cgit/openstack/gantt/tree/README.rst

On Mon, Dec 14, 2015 at 5:28 PM, Jim Rollenhagen <j...@jimrollenhagen.com>
wrote:

> On Mon, Dec 14, 2015 at 04:15:42PM -0800, James Penick wrote:
> > I'm very much against it.
> >
> >  In my environment we're going to be depending heavily on the nova
> > scheduler for affinity/anti-affinity of physical datacenter constructs,
> > TOR, Power, etc. Like other operators we need to also have a concept of
> > host aggregates and availability zones for our baremetal as well. If
> these
> > decisions move out of Nova, we'd have to replicate that entire concept of
> > topology inside of the Ironic scheduler. Why do that?
> >
> > I see there are 3 main problems:
> >
> > 1. Resource tracker sucks for Ironic.
> > 2. We need compute host HA
> > 3. We need to schedule compute resources in a consistent way.
> >
> >  We've been exploring options to get rid of RT entirely. However, melwitt
> > pointed out that by improving RT itself, and changing it from a pull
> > model to a push, we skip a lot of these problems. I think it's an
> excellent
> > point. If RT moves to a push model, Ironic can dynamically register nodes
> > as they're added, consumed, claimed, etc and update their state in Nova.
> >
> >  Compute host HA is critical for us, too. However, if the compute hosts
> are
> > not responsible for any complex scheduling behaviors, it becomes much
> > simpler to move the compute hosts to being nothing more than dumb workers
> > selected at random.
> >
> >  With this model, the Nova scheduler can still select compute resources
> in
> > the way that it expects, and deployers can expect to build one system to
> > manage VM and BM. We get rid of RT race conditions, and gain compute HA.
>
> Right, so Deva mentioned this here. Copied from below:
>
> > > > Some folks are asking us to implement a non-virtualization-centric
> > > > scheduler / resource tracker in Nova, or advocating that we wait for
> the
> > > > Nova scheduler to be split-out into a separate project. I do not
> believe
> > > > the Nova team is interested in the former, I do not want to wait for
> the
> > > > latter, and I do not believe that either one will be an adequate
> solution
> > > > -- there are other clients (besides Nova) that need to schedule
> workloads
> > > > on Ironic.
>
> And I totally agree with him. We can rewrite the resource tracker, or we
> can break out the scheduler. That will take years - what do you, as an
> operator, plan to do in the meantime? As an operator of ironic myself,

Re: [openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-14 Thread James Penick
I'm very much against it.

 In my environment we're going to be depending heavily on the nova
scheduler for affinity/anti-affinity of physical datacenter constructs,
TOR, Power, etc. Like other operators we need to also have a concept of
host aggregates and availability zones for our baremetal as well. If these
decisions move out of Nova, we'd have to replicate that entire concept of
topology inside of the Ironic scheduler. Why do that?

I see there are 3 main problems:

1. Resource tracker sucks for Ironic.
2. We need compute host HA
3. We need to schedule compute resources in a consistent way.

 We've been exploring options to get rid of RT entirely. However, melwitt
pointed out that by improving RT itself, and changing it from a pull
model to a push, we skip a lot of these problems. I think it's an excellent
point. If RT moves to a push model, Ironic can dynamically register nodes
as they're added, consumed, claimed, etc and update their state in Nova.

 Compute host HA is critical for us, too. However, if the compute hosts are
not responsible for any complex scheduling behaviors, it becomes much
simpler to move the compute hosts to being nothing more than dumb workers
selected at random.

 With this model, the Nova scheduler can still select compute resources in
the way that it expects, and deployers can expect to build one system to
manage VM and BM. We get rid of RT race conditions, and gain compute HA.

-James

On Thu, Dec 10, 2015 at 4:42 PM, Jim Rollenhagen 
wrote:

> On Thu, Dec 10, 2015 at 03:57:59PM -0800, Devananda van der Veen wrote:
> > All,
> >
> > I'm going to attempt to summarize a discussion that's been going on for
> > over a year now, and still remains unresolved.
> >
> > TLDR;
> > 
> >
> > The main touch-point between Nova and Ironic continues to be a pain
> point,
> > and despite many discussions between the teams over the last year
> resulting
> > in a solid proposal, we have not been able to get consensus on a solution
> > that meets everyone's needs.
> >
> > Some folks are asking us to implement a non-virtualization-centric
> > scheduler / resource tracker in Nova, or advocating that we wait for the
> > Nova scheduler to be split-out into a separate project. I do not believe
> > the Nova team is interested in the former, I do not want to wait for the
> > latter, and I do not believe that either one will be an adequate solution
> > -- there are other clients (besides Nova) that need to schedule workloads
> > on Ironic.
> >
> > We need to decide on a path of least pain and then proceed. I really want
> > to get this done in Mitaka.
> >
> >
> > Long version:
> > -
> >
> > During Liberty, Jim and I worked with Jay Pipes and others on the Nova
> team
> > to come up with a plan. That plan was proposed in a Nova spec [1] and
> > approved in October, shortly before the Mitaka summit. It got significant
> > reviews from the Ironic team, since it is predicated on work being done
> in
> > Ironic to expose a new "reservations" API endpoint. The details of that
> > Ironic change were proposed separately [2] but have deadlocked.
> Discussions
> > with some operators at and after the Mitaka summit have highlighted a
> > problem with this plan.
> >
> > Actually, more than one, so to better understand the divergent viewpoints
> > that result in the current deadlock, I drew a diagram [3]. If you haven't
> > read both the Nova and Ironic specs already, this diagram probably won't
> > make sense to you. I'll attempt to explain it a bit with more words.
> >
> >
> > [A]
> > The Nova team wants to remove the (Host, Node) tuple from all the places
> > that this exists, and return to scheduling only based on Compute Host.
> They
> > also don't want to change any existing scheduler filters (especially not
> > compute_capabilities_filter) or the filter scheduler class or plugin
> > mechanisms. And, as far as I understand it, they're not interested in
> > accepting a filter plugin that calls out to external APIs (eg, Ironic) to
> > identify a Node and pass that Node's UUID to the Compute Host.  [[ nova
> > team: please correct me on any point here where I'm wrong, or your
> > collective views have changed over the last year. ]]
> >
> > [B]
> > OpenStack deployers who are using Nova + Ironic rely on a few things:
> > - compute_capabilities_filter to match node.properties['capabilities']
> > against flavor extra_specs.
> > - other downstream nova scheduler filters that do other sorts of hardware
> > matching
> > These deployers clearly and rightly do not want us to take away either of
> > these capabilities, so anything we do needs to be backwards compatible
> with
> > any current Nova scheduler plugins -- even downstream ones.
> >
> > [C] To meet the compatibility requirements of [B] without requiring the
> > nova-scheduler team to do the work, we would need to forklift some parts
> of
> > the nova-scheduler code into Ironic. But I think that's terrible, and 

Re: [openstack-dev] Compute API (Was Re: [nova][cinder] how to handle AZ bug 1496235?)

2015-09-28 Thread James Penick
>I see a clear line between something that handles the creation of all
ancillary resources needed to boot a VM and then the creation of the VM
itself.

I agree. To me the line is the difference between creating a top level
resource, and adding a host to that resource.

For example, I do expect a top level compute API to:
-Request an IP from Neutron
-Associate an instance with a volume
-Add an instance to a network security group in Neutron
-Add a real server to a VIP in Neutron

But I don't expect Nova to:
-Create tenant/provider networks in Neutron
-Create a volume (boot from volume is a weird case)
-Create a neutron security group
-Create a load balancer

Also, if Nova is the API for all things compute, then there are some things
it will need to support that are not specific to VMs. For example, with
Ironic my users expect to use the Nova API/CLI to boot a baremetal compute
resource, and specify raid configuration as well as drive layout. My
understanding is there has been pushback on adding that to Nova, since it
doesn't make sense to have RAID config when building a VM. But, if Nova is
the compute abstraction layer, then we'll need a graceful way to express
this.

-James



On Mon, Sep 28, 2015 at 9:12 AM, Andrew Laski  wrote:

> On 09/28/15 at 10:19am, Sean Dague wrote:
>
>> On 09/28/2015 10:11 AM, Andrew Laski wrote:
>>
>>> On 09/28/15 at 08:50am, Monty Taylor wrote:
>>>
 On 09/28/2015 07:58 AM, Sylvain Bauza wrote:

>>> 
>>
>>>
 Specifically, I want "nova boot" to get me a VM with an IP address. I
 don't want it to do fancy orchestration - I want it to not need fancy
 orchestration, because needing fancy orchestration to get a VM  on a
 network is not a feature.

>>>
>>> In the networking case there is a minimum of orchestration because the
>>> time required to allocate a port is small.  What has been requiring
>>> orchestration is the creation of volumes because of the requirement of
>>> Cinder to download an image, or be on a backend that support fast
>>> cloning and rely on a cache hit.  So the question under discussion is
>>> when booting an instance relies on another service performing a long
>>> running operation where is a good place to handle that.
>>>
>>> My thinking for a while has been that we could use another API that
>>> could manage those things.  And be the central place you're looking for
>>> to pass a simple "nova boot" with whatever options are required so you
>>> don't have to manage the complexities of calls to
>>> Neutron/Cinder/Nova(current API).  What's become clear to me from this
>>> thread is that people don't seem to oppose that idea, however they don't
>>> want their users/clients to need to switch what API they're currently
>>> using(Nova).
>>>
>>> The right way to proceed with this idea seems to be to by evolving the
>>> Nova API and potentially creating a split down the road.  And by split I
>>> more mean architectural within Nova, and not necessarily a split API.
>>> What I imagine is that we follow the model of git and have a plumbing
>>> and porcelain API and each can focus on doing the right things.
>>>
>>
>> Right, and I think that's a fine approach. Nova's job is "give me a
>> working VM". Working includes networking, persistent storage. The API
>> semantics for "give me a working VM" should exist in Nova.
>>
>> It is also fine if there are lower level calls that tweak parts of that,
>> but nova boot shouldn't have to be a multi step API process for the
>> user. Building one working VM you can do something with is really the
>> entire point of Nova.
>>
>
> What I'm struggling with is where do we draw the line in this model?  For
> instance we don't allow a user to boot an instance from a disk image on
> their local machine via the Nova API, that is a multi step process.  And
> which parameters do we expose that can influence network and volume
> creation, if not all of them?  It would be helpful to establish guidelines
> on what is a good candidate for inclusion in Nova.
>
> I see a clear line between something that handles the creation of all
> ancillary resources needed to boot a VM and then the creation of the VM
> itself.  I don't understand why the creation of the other resources should
> live within Nova but as long as we can get to a good split between
> responsibilities that's a secondary concern.
>
>
>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>

Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread James Penick
On Thu, Sep 24, 2015 at 2:22 PM, Sam Morrison  wrote:

>
> Yes an AZ may not be considered a failure domain in terms of control
> infrastructure, I think all operators understand this. If you want control
> infrastructure failure domains use regions.
>
> However from a resource level (eg. running instance/ running volume) I
> would consider them some kind of failure domain. It’s a way of saying to a
> user if you have resources running in 2 AZs you have a more available
> service.
>
> Every cloud will have a different definition of what an AZ is, a
> rack/collection of racks/DC etc. openstack doesn’t need to decide what that
> is.
>
> Sam
>

This seems to map more closely to how we use AZs.

Turning it around to the user perspective:
 My users want to be sure that when they boot compute resources, they can
do so in such a way that their application will be immune to a certain
amount of physical infrastructure failure.

Use cases I get from my users:
1. "I want to boot 10 instances, and be sure that if a single leg of power
goes down, I won't lose more than 2 instances"
2. "My instances move a lot of network traffic. I want to ensure that I
don't have more than 3 of my instances per rack, or else they'll saturate
the ToR"
3. "Compute room #1 has been overrun by crazed ferrets. I need to boot new
instances in compute room #2."
4. "I want to boot 10 instances, striped across at least two power domains,
under no less than 5 top of rack switches, with access to network security
zone X."

For my users, abstractions for availability and scale of the control plane
should be hidden from their view. I've almost never been asked by my users
whether or not the control plane is resilient. They assume that my team, as
the deployers, have taken adequate steps to ensure that the control plane
is deployed in a resilient and highly available fashion.

I think it would be good for the operator community to come to an agreement
on what an AZ should be from the perspective of those who deploy both
public and private clouds and bring that back to the dev teams.

-James
:)=


[openstack-dev] Compute API (Was Re: [nova][cinder] how to handle AZ bug 1496235?)

2015-09-24 Thread James Penick
>
>
> At risk of getting too offtopic I think there's an alternate solution to
> doing this in Nova or on the client side.  I think we're missing some sort
> of OpenStack API and service that can handle this.  Nova is a low level
> infrastructure API and service, it is not designed to handle these
> orchestrations.  I haven't checked in on Heat in a while but perhaps this
> is a role that it could fill.
>
> I think that too many people consider Nova to be *the* OpenStack API when
> considering instances/volumes/networking/images and that's not something I
> would like to see continue.  Or at the very least I would like to see a
> split between the orchestration/proxy pieces and the "manage my
> VM/container/baremetal" bits


(new thread)
 You've hit on one of my biggest issues right now: as far as many deployers
and consumers are concerned (and it's definitely what I tell my users within
Yahoo), the value of an OpenStack value-stream (compute, network, storage)
is to provide a single, consistent API for abstracting and managing those
infrastructure resources.

 Take networking: I can manage firewalls, switches, IP selection, SDN, etc.
through Neutron. But for compute, if I want a VM I go through Nova, for
baremetal I can -mostly- go through Nova, and for containers I would talk
to Magnum or use something like the nova docker driver.

 This means that, by default, Nova -is- the closest thing to a top level
abstraction layer for compute. But if that is explicitly against Nova's
charter, and Nova isn't going to be the top level abstraction for all
things Compute, then something else needs to fill that space. When that
happens, all things common to compute provisioning should come out of Nova
and move into that new API. Availability zones, Quota, etc.

-James


Re: [openstack-dev] On an API proxy from baremetal to ironic

2014-09-11 Thread James Penick
We manage a fairly large nova-baremetal installation at Yahoo. And while we've 
developed tools to hit the nova-bm API, we're planning to move to ironic 
without any support for the nova BM API. Definitely no interest in the proxy 
API from our end. 
Sometimes you just need to let a thing die. 
-James
 :)= 

 On Wednesday, September 10, 2014 12:51 PM, Ben Nemec 
openst...@nemebean.com wrote:
   


On 09/10/2014 02:26 PM, Dan Smith wrote:
 1) Is this tested anywhere?  There are no unit tests in the patch
 and it's not clear to me that there would be any Tempest coverage
 of this code path.  Providing this and having it break a couple
 of months down the line seems worse than not providing it at all.
 This is obviously fixable though.
 
 AFAIK, baremetal doesn't have any tempest-level testing at all
 anyway. However, I don't think our proxy code breaks, like, ever. I
 expect that unit tests for this stuff is plenty sufficient.

Right, but this would actually be running against Ironic, which does
have Tempest testing.  It might require some client changes to be able
to hit a Baremetal API instead of Ironic though.

 
 2) If we think maintaining compatibility for existing users is
 that important, why aren't we proxying everything?  Is it too 
 difficult/impossible due to the differences between Baremetal
 and Ironic?  And if they're that different, does it still make
 sense to allow one to look like the other?  As it stands, this
 isn't going to let deployers use their existing tools without
 modification anyway.
 
 Ideally we'd proxy everything, based on our current API
 guarantees. However, I think the compromise of just the show/index
 stuff came about because it would be extremely easy to do, provide
 some measure of continuity, and provide us a way to return
 something nicer for the create/update operations than a 500. It
 seemed like a completely fair and practical balance.

Fair enough.  I'm still not crazy about it, but since it already
exists and you say these interfaces don't require much maintenance I
guess that takes care of my major concerns.
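For anyone else skimming the thread, the compromise boils down to roughly
this (a simplified, hypothetical sketch -- the class and helper names are
made up, not the actual patch):

# Hypothetical sketch of a show/index-only baremetal proxy backed by Ironic.
import webob.exc


class BareMetalNodeController(object):
    def __init__(self, ironic_client):
        self.ironic = ironic_client

    def index(self, req):
        # GET /os-baremetal-nodes: list Ironic nodes, reshaped to look
        # like the old nova-baremetal node documents.
        return {'nodes': [self._to_bm(n) for n in self.ironic.node.list()]}

    def show(self, req, node_id):
        # GET /os-baremetal-nodes/{id}
        return {'node': self._to_bm(self.ironic.node.get(node_id))}

    def create(self, req, *args, **kwargs):
        # Mutations aren't proxied; return a clear error instead of a 500.
        raise webob.exc.HTTPBadRequest(
            explanation='nova-baremetal writes are gone; use the Ironic API')

    update = delete = create

    @staticmethod
    def _to_bm(node):
        return {'uuid': node.uuid, 'provision_state': node.provision_state}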

-Ben



Re: [openstack-dev] What does NASA not using OpenStack mean to OS's future

2014-08-26 Thread James Penick
Don't feed the troll. :)
 
:)=


On Monday, August 25, 2014 12:39 PM, Joshua Harlow harlo...@outlook.com wrote:
 


So, to see if we can get something useful from this thread:

What was your internal analysis, and can it be published? Even negative
analysis is useful for making OpenStack better...

It'd be nice to have some details on what you found and what you didn't
find, so that we can all improve...

After all, that is what it's all about.

-Josh


On Aug 25, 2014, at 11:13 AM, Aryeh Friedman aryeh.fried...@gmail.com wrote:

If I were doing that then I would be promoting the platform by name (which I
am not). I was just pointing out that in our own internal analysis OS came in
dead last among all the open source IaaS/PaaSes (the current version of mine
is not #1, btw)




On Mon, Aug 25, 2014 at 2:03 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

On 25 August 2014 10:34, Aryeh Friedman aryeh.fried...@gmail.com wrote:

Do you call Marten Mickos someone with no clue... he is the one that leveled
the second-worst criticism after mine... or is Eucalyptus not one of the
founding members of OpenStack (after all, many of the glance commands still
use its name)



You appear to be trolling, and throwing around amazingly easy-to-disprove 
'factoids', in an inappropriate forum, in order to drum up support for your 
own competing open source cloud platform.  Please stop.

Your time would be much better spent improving your platform rather than 
coming up with frankly bizarre criticism of the competitors.
-- 
Ian.





-- 

Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org






Re: [openstack-dev] running VM manually using openstack

2014-05-13 Thread James Penick
Hey Sonia,
 Do the backing file and instance directory exist? -drive
file=/opt/stack/data/nova/instances/3765ec4d-48d0-4f0b-9f7d-2f73ee3a3852/disk
implies both that the directory exists and that the instance image is
contained within it.
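 A quick way to check (Python for convenience; the path is taken from your
command line above, so adjust the UUID to whichever instance you're
launching):

# Sanity-check the instance directory and its disk before running kvm by hand.
import os
import subprocess

inst_dir = '/opt/stack/data/nova/instances/3765ec4d-48d0-4f0b-9f7d-2f73ee3a3852'
disk = os.path.join(inst_dir, 'disk')

print('instance dir exists: %s' % os.path.isdir(inst_dir))
print('disk image exists:   %s' % os.path.exists(disk))

if os.path.exists(disk):
    # 'qemu-img info' also shows the qcow2 backing file, which must exist too.
    subprocess.check_call(['qemu-img', 'info', disk])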
-James

On Tuesday, May 13, 2014 6:22 AM, sonia verma soniaverma9...@gmail.com wrote:
 
Hi,

I have installed OpenStack using devstack. I'm using a multinode setup with
one controller node and one compute node. I'm able to launch a VM from the
OpenStack dashboard onto the controller node.
At the controller node, when I do ps -ef | grep nova, the following appears:

 ps -ef | grep nova
root   799   798  0 11:48 pts/22   00:00:00 sg libvirtd 
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
shiv   801   799  0 11:48 pts/22   00:00:55 /usr/bin/python 
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
shiv  1233  1232  1 11:48 pts/23   00:03:45 /usr/bin/python 
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
shiv  1457  1233  0 11:48 pts/23   00:00:45 /usr/bin/python 
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
shiv  1458  1233  0 11:48 pts/23   00:00:45 /usr/bin/python 
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
shiv  1459  1233  0 11:48 pts/23   00:00:45 /usr/bin/python 
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
shiv  1460  1233  0 11:48 pts/23   00:00:45 /usr/bin/python 
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
shiv  1578  1577  0 11:48 pts/24   00:00:09 /usr/bin/python 
/usr/local/bin/nova-cert --config-file /etc/nova/nova.conf
shiv  1880  1879  0 11:48 pts/25   00:00:10 /usr/bin/python 
/usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
shiv  2221  2219  0 11:48 pts/26   00:00:05 /usr/bin/python 
/usr/local/bin/nova-novncproxy --config-file /etc/nova/nova.conf --web 
/opt/stack/noVNC
shiv  2431  2429  0 11:48 pts/27   00:00:00 /usr/bin/python 
/usr/local/bin/nova-xvpvncproxy --config-file /etc/nova/nova.conf
shiv  2670  2669  0 11:48 pts/28   00:00:09 /usr/bin/python 
/usr/local/bin/nova-consoleauth --config-file /etc/nova/nova.conf
shiv  2897  2895  0 11:48 pts/29   00:00:00 /usr/bin/python 
/usr/local/bin/nova-objectstore --config-file /etc/nova/nova.conf
122  13484 1 19 15:43 ?    00:00:02 /usr/bin/kvm -S -M pc-1.0 
-enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name instance-0003 
-uuid cec324c7-2398-49b8-844b-08c6cca7fff3 -smbios 
type=1,manufacturer=OpenStack Foundation,product=OpenStack 
Nova,version=2014.2,serial=44454c4c-5600-1039-804c-c7c04f595831,uuid=cec324c7-2398-49b8-844b-08c6cca7fff3
 -nodefconfig -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0003.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew 
-no-kvm-pit-reinjection -no-hpet -no-shutdown -drive 
file=/opt/stack/data/nova/instances/cec324c7-2398-49b8-844b-08c6cca7fff3/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
 -device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -drive
 
file=/opt/stack/data/nova/instances/cec324c7-2398-49b8-844b-08c6cca7fff3/disk.config,if=none,media=cdrom,id=drive-ide0-1-1,readonly=on,format=raw,cache=none
 -device ide-drive,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev 
tap,fd=19,id=hostnet0,vhost=on,vhostfd=20 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:9b:09:3b,bus=pci.0,addr=0x3 
-chardev 
file,id=charserial0,path=/opt/stack/data/nova/instances/cec324c7-2398-49b8-844b-08c6cca7fff3/console.log
 -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 
-device isa-serial,chardev=charserial1,id=serial1 -usb -vnc 127.0.0.1:1 -k 
en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
shiv 14867 20602  0 15:43 pts/7    00:00:00 grep --color=auto nova
shiv 31444 31443  1 11:47 pts/16   00:03:45 /usr/bin/python 
/usr/local/bin/nova-api
shiv 31469 31444  0 11:47 pts/16   00:00:00 /usr/bin/python 
/usr/local/bin/nova-api
shiv 31470 31444  0 11:47 pts/16   00:00:00 /usr/bin/python 
/usr/local/bin/nova-api
shiv 31471 31444  0 11:47 pts/16   00:00:00 /usr/bin/python 
/usr/local/bin/nova-api

I see the VM instance booted using the KVM utility on the controller node.
However, when I try to launch a VM manually using a similar command, but
changing the instance id and name as above:

sudo /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 512 -smp 
1,sockets=1,cores=1,threads=1 -name instance-0002 -uuid 
3765ec4d-48d0-4f0b-9f7d-2f73ee3a3852 -smbios type=1,manufacturer=OpenStack 
Foundation,product=OpenStack 
Nova,version=2014.2,serial=44454c4c-4400-1034-8042-c7c04f313032,uuid=3765ec4d-48d0-4f0b-9f7d-2f73ee3a3852
 -nodefconfig -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0002.monitor,server,nowait
 -mon 

Re: [openstack-dev] deliver the vm-level HA to improve the business continuity with openstack

2014-04-14 Thread James Penick
Enterprise!

We drive the "VM = Cattle" message pretty hard. Part of onboarding a
property to our cloud, and allowing them to serve traffic from VMs, is
explaining the transient nature of VMs. I broadcast the message that all
compute resources die, and if your day/week/month is ruined because of a
single compute instance going away, then you're doing something Very
Wrong. :)

-James
:)=






On 4/14/14, 11:05 AM, Jay Pipes jaypi...@gmail.com wrote:

On Mon, 2014-04-14 at 10:56 -0700, Steven Dake wrote:
 On 04/14/2014 10:46 AM, Tim Bell wrote:
  Can Heat control/monitor a VM which it has not created and restart it
(potentially on a different hypervisor with live migration) ?
 
  Tim
 Tim,
 
 No it sure can't.

But a 10-20 line shell script could, calling nova CLI commands.
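Roughly something like this (an untested sketch, written here in Python
shelling out to the nova CLI; the instance name and poll interval are made
up):

# Untested sketch: watch one pet VM and hard-reboot it if it goes down.
import subprocess
import time

INSTANCE = 'my-pet-vm'   # made-up name

def nova(*args):
    return subprocess.check_output(('nova',) + args).decode()

while True:
    # 'nova show' prints a key/value table; pull out the status row.
    status = next(line.split('|')[2].strip()
                  for line in nova('show', INSTANCE).splitlines()
                  if '| status ' in line)
    if status in ('SHUTOFF', 'ERROR'):
        # 'nova evacuate' would be the equivalent move if the whole
        # hypervisor died rather than just the guest.
        nova('reboot', '--hard', INSTANCE)
    time.sleep(60)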

I think the point here is that we see pushes like this to treat VMs as
pets, and so far we've been able to push back successfully on these
(anti)feature requests. This kind of thing generally is antithetical to a
utility cloud model and to the horizontal scale-out architecture that cloud
espouses (i.e. don't have a single point of failure that requires this kind
of setup).

/me waits for someone to use the word enterprise.

-jay

