Re: [openstack-dev] [TripleO][Heat] reverting the HOT migration? // dealing with lockstep changes

2014-08-13 Thread James Slagle
On Tue, Aug 12, 2014 at 7:10 PM, Robert Collins
 wrote:
> On 13 August 2014 11:05, Robert Collins  wrote:
>
>> I've reproduced the problem with zane's fix for the validation error -
>> and it does indeed still break:
>> "| stack_status_reason  | StackValidationFailed: Property error :
>> NovaCompute6:
>> |  | key_name Value must be a string
>>
>>
>>  "
>
> Filed https://bugs.launchpad.net/heat/+bug/1356097 to track this.
>
> Since this makes it impossible to upgrade a pre-HOT-migration merged
> stack, I'm going to push forward on toggling back to non-HOT, at least
> until we can figure out whether this is a shallow or deep problem in
> Heat. (Following our 'rollback then fix' stock approach to issues).

The backwards compatibility spec is yet to be approved. This is partly
why I pushed for the stable branches last cycle -- because TripleO has
no backwards compatibility guarantee (yet).

Regardless, I'd hate to see the migration to HOT reverted. That will
cause a lot of churn for folks, especially the parts of the Tuskar
work that depend on this migration -- and then disrupt everyone again
when we try to merge it next time.

I think working through the Heat bugs is less churn. Steve Baker
proposed a fix for the latest issue:
https://review.openstack.org/#/c/113739/ but it's marked WIP. I'd
rather push on getting these fixes merged vs. reverting. Since we
actually don't know if it's a shallow vs. deep problem, let's not
assume deep and automatically cause a bunch of churn for everyone.
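
For context, the property involved is wired up roughly like this -- a
minimal HOT sketch, not the actual tripleo-heat-templates, with
illustrative parameter and property values. The validation failure
above suggests that, for pre-migration stacks, the value reaching
key_name on update is no longer a plain string:

heat_template_version: 2013-05-23
parameters:
  KeyName:
    type: string
    default: default
resources:
  NovaCompute6:
    type: OS::Nova::Server
    properties:
      # Heat validates this property as a string
      key_name: {get_param: KeyName}
      image: overcloud-compute
      flavor: baremetal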

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] Specs and approvals

2014-08-19 Thread James Slagle
On Tue, Aug 19, 2014 at 5:31 AM, Robert Collins
 wrote:
> Hey everybody - https://wiki.openstack.org/wiki/TripleO/SpecReviews
> seems pretty sane as we discussed at the last TripleO IRC meeting.
>
> I'd like to propose that we adopt it with the following tweak:
>
> 19:46:34  so I propose that +2 on a spec is a commitment to
> review it over-and-above the core review responsibilities
> 19:47:05  if its not important enough for a reviewer to do
> that thats a pretty strong signal
> 19:47:06  lifeless: +1, I thought we already agreed to that
> at the meetup
> 19:47:17  yea, sounds fine to me
> 19:47:20  +1
> 19:47:30  dprince: it wasn't clear whether it was
> part-of-responsibility, or additive, I'm proposing we make it clearly
> additive
> 19:47:52  and separately I think we need to make surfacing
> reviews-for-themes a lot better
>
> That is - +1 on a spec review is 'sure, I like it', +2 is specifically
> "I will review this *over and above* my core commitment" - the goal
> here is to have some very gentle choke on concurrent WIP without
> needing the transition to a managed pull workflow that Nova are
> discussing - which we didn't have much support for during the meeting.
>
> Obviously, any core can -2 for any of the usual reasons - this motion
> is about opening up +A to the whole Tripleo core team on specs.
>
> Reviewers, and other interested kibbitzers, please +1 / -1 as you feel fit :)

+1 from me. I've also added the +1/+2 distinction under the Reviewer
workload bullet item on the wiki page.

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] Specs and approvals

2014-08-19 Thread James Slagle
On Tue, Aug 19, 2014 at 11:47 AM, Daniel P. Berrange
 wrote:
> On Tue, Aug 19, 2014 at 09:31:48PM +1200, Robert Collins wrote:
>> Hey everybody - https://wiki.openstack.org/wiki/TripleO/SpecReviews
>> seems pretty sane as we discussed at the last TripleO IRC meeting.
>>
>> I'd like to propose that we adopt it with the following tweak:
>>
>> 19:46:34  so I propose that +2 on a spec is a commitment to
>> review it over-and-above the core review responsibilities
>> 19:47:05  if its not important enough for a reviewer to do
>> that thats a pretty strong signal
>> 19:47:06  lifeless: +1, I thought we already agreed to that
>> at the meetup
>> 19:47:17  yea, sounds fine to me
>> 19:47:20  +1
>> 19:47:30  dprince: it wasn't clear whether it was
>> part-of-responsibility, or additive, I'm proposing we make it clearly
>> additive
>> 19:47:52  and separately I think we need to make surfacing
>> reviews-for-themes a lot better
>>
>> That is - +1 on a spec review is 'sure, I like it', +2 is specifically
>> "I will review this *over and above* my core commitment" - the goal
>> here is to have some very gentle choke on concurrent WIP without
>> needing the transition to a managed pull workflow that Nova are
>> discussing - which we didn't have much support for during the meeting.
>
> If it is considered to be a firm commitment to review the code, then
> people will have to be quite conservative when approving blueprints
> lest take accept too much work. It is hard to predict just how much
> work will be involved in the code review from the spec though, and
> even harder to predict /when/ during the dev cycle the code will be
> posted.

The commitment is not just about planning to optimize your review
load across a cycle. Anyone is free to review anything at any time,
including +2'ing patches implementing specs that you didn't +2. Nor do
I think patches that implement a spec will be rejected just because
they come from someone who isn't the primary/secondary assignee. The
review commitment is roughly the same as the assignee commitment.

There's likely to be some set of specs where, as a reviewer, you're
going to prioritize reviewing the implementing patches regardless of
how busy you are...that's what the commitment is about. Should core
reviewers really be +2'ing specs whose patches they *don't* intend to
commit to review? I suppose the individual projects will have to
decide what they each think will work for them.

To me, a +2 on a spec implies a couple of things, one of which is that
you agree that some amount of the project's resources should work on
it in terms of implementation and reviews. If you're +2'ing a spec,
who are you signing up to do that review workload if not yourself?

That being said, there are always going to be human factors at play.
Not all commitments can always be met. I don't think we're going to
"punish" people who are acting in good faith.


-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] Specs and approvals

2014-08-19 Thread James Slagle
On Tue, Aug 19, 2014 at 1:58 PM, Joe Gordon  wrote:
>
> While I cannot speak for the dynamics of the tripleo team, if this were to
> be adopted in nova I would not +2 any blueprints as I don't think I can
> commit to *guaranteeing* I will have even more review bandwidth, I can make
> best efforts but a personally cannot guarantee.

I'm pretty sure we're all humans here, although there may be some bots
among us :). We're all working on "best effort"; there are no 100%
guarantees.

It's a commitment...no point arguing the semantics of guarantee vs
commitment. But it's not about seeking to punish people who are acting
in good faith and may be unable to deliver on that commitment for
whatever reason.

This is just an attempt to codify and record who is signing up to do
the review workload on a given spec. Why +2 a spec yet not commit to
doing a best effort at reviewing the patches? Is there just hope that
the review burden is going to get "absorbed" by others in the
community?

I think that's the approach that is clearly not working, hence the
different ideas floating around the list about how to limit the scope
of the changes currently in flight. You could say, well, we'll only
have X slots for approved specs at any one time, but that doesn't
address the question of who, if anyone, is going to be giving their
best effort to review those changes. I suppose if things are limited
sufficiently, everyone will have enough time to review everything. The
approach we're suggesting here just looks at the problem differently:
it gives folks the opportunity to say, I'm going to focus on reviewing
these changes without dropping all my other reviews. And of course
this is just one of many criteria laid out on the wiki page for
approving a spec.
-- 
-- James Slagle
--



Re: [openstack-dev] [tripleo] Puppet elements support

2014-09-10 Thread James Slagle
On Mon, Sep 8, 2014 at 7:11 PM, Emilien Macchi
 wrote:
> Hi TripleO community,
>
> I would be really interested in helping to bring Puppet elements support
> in TripleO.
> So far I've seen this work:
> https://github.com/agroup/tripleo-puppet-elements/tree/puppet_dev_heat
> which is a very good bootstrap but really outdated.
> After some discussion with Greg Haynes on IRC, we came up with the idea
> to create a repo (that would be moved into Stackforge or OpenStack git) and
> push the bits from what has been done by HP folks, with updates &
> improvements.
>
> I started a basic repo
> https://github.com/enovance/tripleo-puppet-elements that could be moved
> right now on Stackforge to let the community start the work.
>
> My proposal is:
> * move this repo (or create a new one directly on
> github/{stackforge,openstack?})
> * push some bits from "agroup" original work.
> * continue the contributions, updates & improvements.
>
> Any thoughts?

Sounds good to me. I'm +1 on seeing some integration between TripleO
and the existing openstack-puppet-modules. The tripleo-puppet-elements
repo under agroup was kind of a POC that mostly Red Hat folks were
working on. I can try and answer any questions about that if you have
any.

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-10 Thread James Slagle
On Tue, Sep 9, 2014 at 2:32 PM, Gregory Haynes  wrote:
> Hello everyone!
>
> I have been working on a meta-review of StevenK's reviews and I would
> like to propose him as a new member of our core team.

+1. Steven has also been doing great work on os-cloud-config, I think
he'd make a good addition to the core team.



-- 
-- James Slagle
--



Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and "ready state" orchestration

2014-09-15 Thread James Slagle
On Mon, Sep 15, 2014 at 7:44 AM, Steven Hardy  wrote:
> All,
>
> Starting this thread as a follow-up to a strongly negative reaction by the
> Ironic PTL to my patches[1] adding initial Heat->Ironic integration, and
> subsequent very detailed justification and discussion of why they may be
> useful in this spec[2].
>
> Back in Atlanta, I had some discussions with folks interesting in making
> "ready state"[3] preparation of bare-metal resources possible when
> deploying bare-metal nodes via TripleO/Heat/Ironic.

After a cursory reading of the references, it seems there are a couple
of issues:
- whether the features to move hardware to a "ready state" are even
going to be in Ironic proper, meaning in ironic at all or just in
contrib.
- assuming some of the features are there, whether Heat should have any
Ironic resources, given that Ironic's API is admin-only.

>
> The initial assumption is that there is some discovery step (either
> automatic or static generation of a manifest of nodes), that can be input
> to either Ironic or Heat.

I think it makes a lot of sense to use Heat to do the bulk
registration of nodes via Ironic. I somewhat understand the argument
that the Ironic API should be "admin-only" for the non-TripleO case,
but for TripleO, we only have admins interfacing with the undercloud.
The user of a TripleO undercloud is the deployer/operator, and in some
scenarios this may not be the undercloud admin. So, talking about
TripleO, I don't really buy that the Ironic API is admin-only.

Therefore, why not have some declarative Heat resources for things
like Ironic nodes, that the deployer can make use of in a Heat
template to do bulk node registration?

The alternative listed in the spec:

"Don’t implement the resources and rely on scripts which directly
interact with the Ironic API, prior to any orchestration via Heat."

would just be a bit silly IMO. That goes against one of the main
drivers of TripleO, which is to use OpenStack wherever possible. Why
go off and write some other thing that is going to parse a
json/yaml/csv of nodes and orchestrate a bunch of Ironic api calls?
Why would it be ok for that other thing to use Ironic's "admin-only"
API yet claim it's not ok for Heat on the undercloud to do so?


> Following discovery, but before an undercloud deploying OpenStack onto the
> nodes, there are a few steps which may be desired, to get the hardware into
> a state where it's ready and fully optimized for the subsequent deployment:
>
> - Updating and aligning firmware to meet requirements of qualification or
>   site policy
> - Optimization of BIOS configuration to match workloads the node is
>   expected to run
> - Management of machine-local storage, e.g configuring local RAID for
>   optimal resilience or performance.
>
> Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
> of these steps possible, but there's no easy way to either encapsulate the
> (currently mostly vendor specific) data associated with each step, or to
> coordinate sequencing of the steps.
>
> What is required is some tool to take a text definition of the required
> configuration, turn it into a correctly sequenced series of API calls to
> Ironic, expose any data associated with those API calls, and declare
> success or failure on completion.  This is what Heat does.
>
> So the idea is to create some basic (contrib, disabled by default) Ironic
> heat resources, then explore the idea of orchestrating ready-state
> configuration via Heat.
>
> Given that Devananda and I have been banging heads over this for some time
> now, I'd like to get broader feedback of the idea, my interpretation of
> "ready state" applied to the tripleo undercloud, and any alternative
> implementation ideas.

My opinion is that if the features are in Ironic, they should be
exposed via Heat resources for orchestration. If the TripleO case is
too much of a one-off (which I don't really think it is), then sure,
keep it all in contrib so that no one gets confused about why the
resources are there.

-- 
-- James Slagle
--



Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and "ready state" orchestration

2014-09-15 Thread James Slagle
On Mon, Sep 15, 2014 at 12:59 PM, Clint Byrum  wrote:
> Excerpts from James Slagle's message of 2014-09-15 08:15:21 -0700:
>> On Mon, Sep 15, 2014 at 7:44 AM, Steven Hardy  wrote:
>> > Following discovery, but before an undercloud deploying OpenStack onto the
>> > nodes, there are a few steps which may be desired, to get the hardware into
>> > a state where it's ready and fully optimized for the subsequent deployment:
>> >
>> > - Updating and aligning firmware to meet requirements of qualification or
>> >   site policy
>> > - Optimization of BIOS configuration to match workloads the node is
>> >   expected to run
>> > - Management of machine-local storage, e.g configuring local RAID for
>> >   optimal resilience or performance.
>> >
>> > Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
>> > of these steps possible, but there's no easy way to either encapsulate the
>> > (currently mostly vendor specific) data associated with each step, or to
>> > coordinate sequencing of the steps.
>> >
>> > What is required is some tool to take a text definition of the required
>> > configuration, turn it into a correctly sequenced series of API calls to
>> > Ironic, expose any data associated with those API calls, and declare
>> > success or failure on completion.  This is what Heat does.
>> >
>> > So the idea is to create some basic (contrib, disabled by default) Ironic
>> > heat resources, then explore the idea of orchestrating ready-state
>> > configuration via Heat.
>> >
>> > Given that Devananda and I have been banging heads over this for some time
>> > now, I'd like to get broader feedback of the idea, my interpretation of
>> > "ready state" applied to the tripleo undercloud, and any alternative
>> > implementation ideas.
>>
>> My opinion is that if the features are in Ironic, they should be
>> exposed via Heat resources for orchestration. If the TripleO case is
>> too much of a one-off (which I don't really think it is), then sure,
>> keep it all in contrib so that no one gets confused about why the
>> resources are there.
>>
>
> And I think if this is a common thing that Ironic users need to do,
> then Ironic should do it, not Heat.

I would think Heat would be well suited for the case where you want to
orchestrate a workflow on top of existing Ironic APIs for managing
the infrastructure lifecycle, of which attaining ready state is one
such use case.

It's a fair point that if these things are common enough to all users,
they should just be done in Ironic. It would be hard to tell to what
extent such an API in Ironic would just end up orchestrating other
Ironic APIs the same way Heat might do it. That seems like the added
complexity in my view, vs. a set of simple Heat resources and taking
advantage of all the orchestration that Heat already offers.

I know this use case isn't just about enrolling nodes (apologies if I
implied that in my earlier response). That was just one such use that
jumped out at me in which it might be nice to use Heat. I think about
how os-cloud-config registers nodes today. It has to create the node,
then create the port (2 separate calls). And, it also needs the
ability to update registered nodes[1]. This logic is going to end up
living in os-cloud-config.

And perhaps the answer is "no", but it seems to me Heat could already do
this sort of thing more easily if it had the resources defined to do so.
It'd be neat to have a yaml file of all your defined nodes, and use
stack-create to register them in Ironic. When you need to add some new
ones, update the yaml, and then stack-update.

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/043782.html
(thread crossed into Sept)
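
To make that concrete, here's a sketch of what such a template might
look like. The OS::Ironic::Node and OS::Ironic::Port resource types
are hypothetical -- they're exactly what's being proposed and don't
exist in Heat today -- and the driver details are illustrative:

heat_template_version: 2013-05-23
description: Bulk registration of Ironic nodes (hypothetical resources)
resources:
  node0:
    type: OS::Ironic::Node      # hypothetical resource type
    properties:
      driver: pxe_ipmitool
      driver_info:
        ipmi_address: 10.0.0.10
        ipmi_username: admin
  node0_port:
    type: OS::Ironic::Port      # hypothetical resource type
    properties:
      node: {get_resource: node0}
      address: "aa:bb:cc:dd:ee:f0"

Adding hardware would then just mean appending resources (or growing a
ResourceGroup) and running stack-update.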

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks

2018-11-02 Thread James Slagle
On Fri, Nov 2, 2018 at 9:39 AM Dan Prince  wrote:
>
> I pushed a patch[1] to update our containerized deployment
> architecture docs yesterday. There are 2 new fairly useful sections we
> can leverage with TripleO's stepwise deployment. They appear to be
> used somewhat sparingly so I wanted to get the word out.
>
> The first is 'deploy_steps_tasks' which gives you a means to run
> Ansible snippets on each node/role in a stepwise fashion during
> deployment. Previously it was only possible to execute puppet or
> docker commands where as now that we have deploy_steps_tasks we can
> execute ad-hoc ansible in the same manner.
>
> The second is 'external_deploy_tasks' which allows you to run
> Ansible snippets on the Undercloud during stepwise deployment. This is
> probably most useful for driving an external installer but might also
> help with some complex tasks that need to originate from a single
> Ansible client.

+1
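
For anyone who hasn't used it yet, the interface looks roughly like
this in a service template -- the service name and task below are made
up for illustration, but the step gating (when: step|int == ...) is
the convention the real templates use:

outputs:
  role_data:
    value:
      service_name: example_service
      deploy_steps_tasks:
        # runs on each node in the role at step 2 of the deployment
        - name: ensure the example service state directory exists
          when: step|int == 2
          file:
            path: /var/lib/example_service
            state: directory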


> The only downside I see to these approaches is that both appear to be
> implemented with Ansible's default linear strategy. I saw shardy's
> comment here [2] that the :free strategy does not yet apparently work
> with the any_errors_fatal option. Perhaps we can reach out to someone
> in the Ansible community in this regard to improve running these
> things in parallel like TripleO used to work with Heat agents.

It's effectively parallel across one role at a time at the moment, up
to the number of configured forks (default: 25). The reason it won't
parallelize across roles is that a different task file is used with
import_tasks for each role. Ansible won't run that in parallel since
the task list is different.

I was able to make this parallel across roles for the pre and post
deployments by making the task file the same for each role, and
controlling the difference with group and host vars:
https://review.openstack.org/#/c/574474/

From Ansible's perspective, the task list is now the same for each
host, although different things will be done depending on the value of
vars for each host.
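
A minimal sketch of that pattern (file names and vars are
illustrative, not lifted from the review above): every host gets the
identical task list, and the per-role difference lives entirely in
vars:

# playbook.yaml -- one play covering every role's hosts
- hosts: overcloud
  tasks:
    - import_tasks: common_deploy_tasks.yaml

# common_deploy_tasks.yaml -- the same task list for all hosts
- name: run this role's deploy command, if it defines one
  command: "{{ role_deploy_command }}"
  when: role_deploy_command is defined

# group_vars/Controller.yaml -- what actually differs per role
role_deploy_command: /usr/local/bin/controller-deploy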

It's possible a similar approach could be done with the other
interfaces you point out here.

In addition to the any_errors_fatal issue when using strategy: free,
you'd also lose the grouping of the task output per role after each
task finishes. This is mostly cosmetic, but using free does create a
lot noisier output IMO.

-- 
-- James Slagle
--



[openstack-dev] [TripleO] Edge squad meeting this week and next week

2018-11-07 Thread James Slagle
I won't be around to run the Edge squad meeting this week and next
week. If someone else wants to pick it up, that would be great.
Otherwise, consider it cancelled :). Thanks!

https://etherpad.openstack.org/p/tripleo-edge-squad-status

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-28 Thread James Slagle
On Wed, Nov 28, 2018 at 12:31 PM Bogdan Dobrelya  wrote:
> Long story short, we cannot shoot both rabbits with a single shot, not
> with puppet :) Maybe we could with ansible replacing puppet fully...
> So splitting config and runtime images is the only choice yet to address
> the raised security concerns. And let's forget about edge cases for now.
> Tossing around a pair of extra bytes over 40,000 WAN-distributed
> computes ain't gonna be our biggest problem for sure.

I think it's this last point that is the crux of this discussion. We
can agree to disagree about the merits of this proposal and whether
it's a pre-optimization or micro-optimization, which I admit are
somewhat subjective terms. Ultimately, it's the "why do we need to do
this" that seems to be why the conversation is going in circles a bit.

I'm all for reducing container image size, but the reality is that
this proposal doesn't necessarily help us with the Edge use cases we
are talking about trying to solve.

Why would we even run the exact same puppet binary + manifest
individually 40,000 times so that we can produce the exact same set of
configuration files that differ only by things such as IP address,
hostnames, and passwords? Maybe we should instead be thinking about
how we can do that *1* time centrally, and produce a configuration
that can be reused across 40,000 nodes with little effort. The
opportunity for a significant impact on how we can scale TripleO is
much larger if we approach these problems with a wider net of what we
could do. There's opportunity for a lot better reuse in TripleO;
configuration is just one area. The plan and Heat stack (within the
ResourceGroup) are some other areas.

At the same time, if some folks want to work on smaller optimizations
(such as container image size), with an approach that can be agreed
upon, then they should do so. We just ought to be careful about how we
justify those changes so that we can carefully weigh the effort vs the
payoff. In this specific case, I don't personally see this proposal
helping us with Edge use cases in a meaningful way given the scope of
the changes. That's not to say there aren't other use cases that could
justify it though (such as the security points brought up earlier).

-- 
-- James Slagle
--


[openstack-dev] [TripleO] Easier way of trying TripleO

2013-11-19 Thread James Slagle
I'd like to propose an idea around a simplified and complementary version of
devtest that makes it easier for someone to get started and try TripleO.

The goal is to get people using TripleO as a way to experience the
deployment of OpenStack, and not necessarily as a way to experience a
usable OpenStack cloud itself.

To that end, we could:

1) Provide an undercloud vm image so that you could effectively skip the entire
   seed setup.
2) Provide pre-built downloadable images for the overcloud and deployment
   kernel and ramdisk.
3) Instructions on how to use these images to deploy a running
   overcloud.

Images could be provided for Ubuntu and Fedora, since both those work fairly
well today.

The instructions would look something like:

1) Download all the images.
2) Perform initial host setup.  This would be much smaller than what is
   required for devtest and off the top of my head would mostly be:
   - openvswitch bridge setup
   - libvirt configuration
   - ssh configuration (for the baremetal virtual power driver)
3) Start the undercloud vm.  It would need to be bootstrapped with an initial
   static json file for the heat metadata, same as the seed works today.
4) Any last mile manual configuration, such as nova.conf edits for the virtual
   power driver user.
5) Use tuskar+horizon (running on the undercloud) to deploy the overcloud.
6) Overcloud configuration (don't see this being much different than what is
   there today).

All the openstack clients, heat templates, etc., are on the undercloud vm, and
that's where they're used from, as opposed to from the host (results in less
stuff to install/configure on the host).

We could also provide instructions on how to configure the undercloud vm to
provision baremetal.  I assume this would be possible, given the correct
bridged networking setup.

It could make sense to use an all-in-one overcloud for this as well, given
that the goal is simplification.

Obviously, this approach implies some image management on the community's part,
and I think we'd document and use all the existing tools (dib, elements) to
build images, etc.

Thoughts on this approach?  

--
-- James Slagle
--



Re: [openstack-dev] [TripleO] Easier way of trying TripleO

2013-11-21 Thread James Slagle
On Tue, Nov 19, 2013 at 6:57 PM, Robert Collins
 wrote:
> On 20 November 2013 10:40, James Slagle  wrote:
>> I'd like to propose an idea around a simplified and complimentary version of
>> devtest that makes it easier for someone to get started and try TripleO.
>
> I think its a grand idea (in fact it's been floated many times). For a
> while we ran our own jenkins with such downloadable images.
>
> Right now I think we need to continue the two primary arcs we have:
> CI/CD integration and a CD HA overcloud so that we are being tested,
> and then work on making the artifacts from those tests available.

Yes, understood.

There are people focused on CI/CD work currently, and I don't think this effort
would take away from that focus, other than the time it takes to do patch
reviews; and I don't think that should be a reason not to do it.

It'd be nice to have the images delivered as output from a well tested CD run,
but I'd like to not delay until we have that.  I think we could make this
available quicker than we could get to that point.

Plus, I think if we make this easier to try, we might get more community
participation.  Right now, I don't think we're attracting people who want to
try a test/development tripleo based deployment with devtest.  We're really
only attracting people who want to contribute and develop on tripleo.  That
may well be by design at this point.  But I feel we have a lot of
positive momentum coming out of summit, so it makes sense to me to try and
give people something easier to try.

Given that, I think the next steps would be:
 - add a bit more detail in a blueprint and get some feedback on that
 - open up cards to track the work in the tripleo trello
 - start the work on it :).  If it's just me working on it, I'm fine with
   that.  I expect there may be 1 or 2 other folks that might work on it
   as well, but these aren't folks that are looking at the CI/CD stories right
   now.

Any opposition to that approach or other thoughts?


-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-27 Thread James Slagle
On Wed, Nov 27, 2013 at 8:39 AM, Jaromir Coufal  wrote:

> V0: basic slick installer - flexibility and control first
> - enable user to auto-discover (or manual register) nodes
> - let user decide, which node is going to be controller, which is going to
> be compute or storage
> - associate images with these nodes
> - deploy
>

I think you've made some good points about the user experience helping
drive the design of what Tuskar is targeting.  I think the conversation
around how to let the user pick what to deploy where should continue.
I wonder though, would it be possible to not have that in a V0?

Basically, make your V0 above even smaller (eliminating the middle 2
bullets), and just let nova figure it out, the same as what happens now
when we run "heat stack-create " from the CLI.

I see 2 possible reasons for trying this:
- Gets us to something people can try even sooner
- It may turn out we want this option in the long run ... a "figure it
  all out for me" type of approach, so it wouldn't be wasted effort.


-- 
-- James Slagle
--



Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-04 Thread James Slagle
On Wed, Dec 4, 2013 at 2:12 AM, Robert Collins
 wrote:
> In this months review:
>  - Ghe Rivero for -core

+1.  Has been doing very good reviews.

>  - Jan Provaznik for removal from -core
>  - Jordan O'Mara for removal from -core
>  - Martyn Taylor for removal from -core
>  - Jiri Tomasek for removal from -core
>  - Jamomir Coufal for removal from -core
>
> Jan, Jordan, Martyn, Jiri and Jaromir are still actively contributing
> to TripleO and OpenStack, but I don't think they are tracking /
> engaging in the code review discussions enough to stay in -core: I'd
> be delighted if they want to rejoin as core - as we discussed last
> time, after a shorter than usual ramp up period if they get stuck in.

What's the shorter than usual ramp up period?

In general, I agree with your points about removing folks from core.

We do have a situation though where some folks weren't reviewing as
frequently when the Tuskar UI/API development slowed a bit post-merge.
Since that is getting ready to pick back up, my concern with removing
this group of folks is that it leaves fewer people on core who are
deeply familiar with that code base.  Maybe that's ok, especially if
the fast track process to get them back on core is reasonable.

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] capturing build details in images

2013-12-05 Thread James Slagle
On Wed, Dec 4, 2013 at 5:19 PM, Robert Collins
 wrote:
> This is a follow up to https://review.openstack.org/59621 to get
> broader discussion..
>
> So at the moment we capture a bunch of details in the image - what
> parameters the image was built with and some environment variables.
>
> Last week we were capturing everything, which there is broad consensus
> was too much, but it seems to me that that is based on two things:
>  - the security ramifications of unanticipated details being baked
> into the image
>  - many variables being irrelevant most of the time
>
> I think those are both good points. But... the problem with diagnostic
> information is you don't know that you need it until you don't have
> it.
>
> I'm particularly worried that things like bad http proxies, and third
> party elements that need variables we don't know about will be
> undiagnosable. Forcing everything through a DIB_FOO variable thunk
> seems like just creating work for ourselves - I'd like to avoid that.
>
> Further, some variables we should capture (like http_proxy) have
> passwords embedded in them, so even whitelisting what variables to
> capture doesn't solve the general problem.
>
> So - what about us capturing this information outside the image: we
> can create a uuid for the build, and write a file in the image with
> that uuid, and outside the image we can write:
>  - all variables (no security ramifications now as this file can be
> kept by whomever built the image)
>  - command line args
>  - version information for the toolchain etc.

+1.  I like this idea a lot.

What about making the file written outside of the image be in json
format so it's easily machine-parseable?

Something like:

dib-<uuid>.json would contain:

{
  "environment": {
    "DIB_NO_TMPFS": "1",
    ...
  },
  "dib": {
    "command-line": "<dib command line args>",
    "version": "<dib version>"
  }
}

Could keep adding additional things like list of elements used, build time, etc.

-- 
-- James Slagle
--



Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-05 Thread James Slagle
On Wed, Dec 4, 2013 at 2:10 PM, Robert Collins
 wrote:
> On 5 December 2013 06:55, James Slagle  wrote:
>> On Wed, Dec 4, 2013 at 2:12 AM, Robert Collins
>>> Jan, Jordan, Martyn, Jiri and Jaromir are still actively contributing
>>> to TripleO and OpenStack, but I don't think they are tracking /
>>> engaging in the code review discussions enough to stay in -core: I'd
>>> be delighted if they want to rejoin as core - as we discussed last
>>> time, after a shorter than usual ramp up period if they get stuck in.
>>
>> What's the shorter than usual ramp up period?
>
> You know, we haven't actually put numbers on it. But I'd be
> comfortable with a few weeks of sustained involvement.

+1.  Sounds reasonable.

>
>> In general, I agree with your points about removing folks from core.
>>
>> We do have a situation though where some folks weren't reviewing as
>> frequently when the Tuskar UI/API development slowed a bit post-merge.
>>  Since that is getting ready to pick back up, my concern with removing
>> this group of folks, is that it leaves less people on core who are
>> deeply familiar with that code base.  Maybe that's ok, especially if
>> the fast track process to get them back on core is reasonable.
>
> Well, I don't think we want a situation where when a single org
> decides to tackle something else for a bit, that noone can comfortably
> fix bugs in e.g. Tuskar / or worse the whole thing stalls - thats why
> I've been so keen to get /everyone/ in Tripleo-core familiar with the
> entire collection of codebases we're maintaining.
>
> So I think after 3 months that other cores should be reasonably familiar too 
> ;).

Well, it's not so much about just fixing bugs.  I'm confident our set
of cores could fix bugs in almost any OpenStack related project, and
in fact most do.  It was more just a comment around people who worked
on the initial code being removed from core.  But it seems others don't
share that concern, and in fact Ladislav's comment about having
confidence in the number of tuskar-ui guys still on core pretty much
mitigates my concern :).

> That said, perhaps we should review these projects.
>
> Tuskar as an API to drive deployment and ops clearly belongs in
> TripleO - though we need to keep pushing features out of it into more
> generalised tools like Heat, Nova and Solum. TuskarUI though, as far
> as I know all the other programs have their web UI in Horizon itself -
> perhaps TuskarUI belongs in the Horizon program as a separate code
> base for now, and merge them once Tuskar begins integration?

IMO, I'd like to see Tuskar UI stay in tripleo for now, given that we
are very focused on the deployment story.  And our reviewers are
likely to have strong opinions on that :).  Not that we couldn't go
review in Horizon if we wanted to, but I don't think we need the churn
of making that change right now.

So, I'll send my votes on the other folks after giving them a little
more time to reply.

Thanks.

-- 
-- James Slagle
--



Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread James Slagle
On Thu, Dec 5, 2013 at 11:10 AM, Clint Byrum  wrote:
> Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
>> Why not just use glance?
>>
>
> I've asked that question a few times, and I think I can collate the
> responses I've received below. I think enhancing glance to do these
> things is on the table:

I'm actually interested in the use cases laid out by Heater from both
a template perspective and image perspective.  For the templates, as
Robert mentioned, Tuskar needs a solution for this requirement, since
it's deploying using templates.  For the images, we have the concept
of a "golden" image in TripleO and are heavily focused on image based
deployments.  Therefore, it seems to make sense that TripleO also
needs a way to version/tag known good images.

Given that, I think it makes sense  to do this in a way so that it's
consumable for things other than just templates.  In fact, you can
almost s/template/image/g on the Heater wiki page, and it pretty well
lays out what I'd like to see for images as well.

> 1. Glance is for big blobs of data not tiny templates.
> 2. Versioning of a single resource is desired.
> 3. Tagging/classifying/listing/sorting
> 4. Glance is designed to expose the uploaded blobs to nova, not users
>
> My responses:
>
> 1: Irrelevant. Smaller things will fit in it just fine.
>
> 2: The swift API supports versions. We could also have git as a
> backend.

I would definitely like to see a git backend for versioning.  No
reason to reimplement a different solution for what already works
well.  I'm not sure we'd want to put a whole image into git though.
Perhaps just its manifest (installed components, software versions,
etc) in json format would go into git, and that would be associated
back to the binary image via uuid.  That would even make it easy to
diff changes between versions, etc.
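
Purely as an illustration of that idea (every field name and version
below is made up), such a git-tracked manifest might look like:

{
  "image_uuid": "c5d3a63e-0241-4d26-b0b5-12ab64e46a2f",
  "distro": "fedora",
  "elements": ["base", "nova-compute", "neutron-openvswitch-agent"],
  "packages": {
    "openstack-nova-compute": "2013.2.1",
    "openssl": "1.0.1e"
  }
}

A plain git diff between two commits of that file would then show
exactly what changed between image versions.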

> This feels like something we can add as an optional feature
> without exploding Glance's scope and I imagine it would actually be a
> welcome feature for image authors as well. Think about Ubuntu maintaining
> official images. If they can keep the ID the same and just add a version
> (allowing users to lock down to a version if updated images cause issue)
> that seems like a really cool feature for images _and_ templates.
>
> 3: I'm sure glance image users would love to have those too.
>
> 4: Irrelevant. Heat will need to download templates just like nova, and
> making images publicly downloadable is also a thing in glance.
>
> It strikes me that this might be a silo problem instead of an
> actual design problem. Folk should not be worried about jumping into
> Glance and adding features. Unless of course the Glance folk have
> reservations? (adding glance tag to the subject)

I'm +1 for adding these types of features to glance, or at least
something common, instead of making it specific to Heat templates.


-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] UI Wireframes - close to implementation start

2013-12-05 Thread James Slagle
On Tue, Dec 3, 2013 at 4:37 AM, Jaromir Coufal  wrote:
> I am sorry for mistake in tag - fixed in this reply and keeping the original
> text below.
>
> On 2013/03/12 10:25, Jaromir Coufal wrote:
>
> Hey folks,
>
> I opened 2 issues on UX discussion forum with TripleO UI topics:
>
> Resource Management:
> http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/
> - this section was already reviewed before; there are not many surprises,
> just smaller updates
> - we are about to implement this area
>
> Deployment Management:
> http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/
> - these are completely new views and they need a lot of attention so that in
> time we don't change direction drastically
> - any feedback here is welcome
>

Jarda,

This stuff looks really great. I like the flow quite a bit. One thing
that stuck out to me though is that I didn't see anything around
troubleshooting a Deployment. Since stuff does occasionally go wrong
:), it might be nice to mock up how troubleshooting might look.

I think we already have the logs in Horizon, but bubbling that up to
the user on the deployment screen might be nice.  Or, perhaps links
back to the logs if something failed.

In the pdf for deployment[1], slide 13, a failed node is shown.  But I
don't see much there in terms of getting at how/why it failed.

I know I'm probably thinking way far out here :), but there was some
good discussion at the summit on Heat making it easier to debug stack
failures, adding retries, etc.  I know that stuff is still early, but
it might make sense to think about hooking into it once available
[2].  There was also an idea of adopting a stack [3].  This might be a
way for folks who already have a deployment to bring it under the
control of Tuskar, so to speak.

Anyway, it doesn't look like anything in the mockups would make it
difficult to hook into these new APIs once available, but I just
thought I would mention them.  Thanks!

[1] 
http://people.redhat.com/~jcoufal/openstack/tripleo/2013-12-03_tripleo-ui_03-deployment.pdf
[2] 
https://blueprints.launchpad.net/heat/+spec/troubleshooting-low-level-control
[3] https://blueprints.launchpad.net/heat/+spec/adopt-stack

> We need to get into implementation ASAP. It doesn't mean that we have
> everything perfect from the very beginning, but that we have direction and
> we move forward by enhancements.
>
> Therefor implementation of above mentioned areas should start very soon.
>
> If at all possible, I will try to record a walkthrough with further
> explanations.
> If you have any questions or feedback, please follow the threads on
> ask-openstackux.
>
> Thanks
> -- Jarda



-- 
-- James Slagle
--



Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread James Slagle
On Wed, Dec 4, 2013 at 2:12 AM, Robert Collins
 wrote:
>  - Jan Provaznik for removal from -core
>  - Jordan O'Mara for removal from -core
>  - Martyn Taylor for removal from -core
>  - Jiri Tomasek for removal from -core
>  - Jamomir Coufal for removal from -core

Four responded and said they'd try to be more active in reviews, or at
least would like/try to be.

However, given that:

- Robert has been very consistent in reviewing core membership every month,
including giving folks plenty of heads up about removal.
- There is now a defined shorter ramp up period to get back on core
- the *average* of 1 review/day is a very low bar

I think it's prudent and can't really object to removing these
individuals from core, so +1 for the removal.


-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread James Slagle
Mainn,

Thanks for pulling this together.

> * NODES
>    * Management node (where triple-o is installed)
>      * created as part of undercloud install process

I think getting the undercloud installed/deployed should be a
requirement for Icehouse.  I'm not sure if you meant that or were
assuming that it would already be done :).  I'd like to see a simpler
process than building the seed vm, starting it, deploying undercloud,
etc.  But, that's something we can work to define if others agree as
well.

>    * can create additional management nodes (F)

By this, do you mean using the undercloud to scale itself?  e.g.,
using nova on the undercloud to launch an additional undercloud
compute node, etc.  I like that concept, and don't see any reason why
that wouldn't be technically possible.

> * DEPLOYMENT ACTION
>* Heat template generated on the fly
>   * hardcoded images
>  * allow image selection (F)

So, I think this may be what Robert was getting at, but I think this
one should be M or possibly even committed to Icehouse.  I think it's
very likely we're going to need to update which image is used to do
the deployment, e.g., if you build a new image to pick up a security
update.

IIRC, the image is just referenced by name in the template.  So,
maybe the process is just:

* build the new image
* rename/delete the old image
* upload the new image with the required name (overcloud-compute,
overcloud-control)

However, having a nicer image selection process would be nice.
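
To illustrate why the rename works (a minimal fragment, not the actual
overcloud template -- resource names and values here are illustrative):
the template refers to the image only by name, so repointing that name
in glance changes what gets deployed on the next stack operation:

resources:
  NovaCompute0:
    type: OS::Nova::Server
    properties:
      image: overcloud-compute   # looked up by name in glance
      flavor: baremetal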


-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread James Slagle
On Fri, Dec 6, 2013 at 4:55 PM, Matt Wagner  wrote:
>> - As an infrastructure administrator, Anna expects that the
>> management node for the deployment services is already up and running
>> and the status of this node is shown in the UI.
>
> The 'management node' here is the undercloud node that Anna is
> interacting with, as I understand it. (Someone correct me if I'm wrong.)
> So it's not a bad idea to show its status, but I guess the mere fact
> that she's using it will indicate that it's operational.

That's how I read it as well, which assumes that you're using the
undercloud to manage itself.

FWIW, based on the OpenStack personas I think that Anna would be the
one doing the undercloud setup.  So, maybe this use case should be:

- As an infrastructure administrator, Anna wants to install the
undercloud so she can use the UI.

That piece is going to be a pretty big part of the entire deployment
process, so I think having a use case for it makes sense.

Nice work on the use cases Liz, thanks for pulling them together.

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread James Slagle
 well be true :).

> 4) Keep python-tuskarclient thin, but build a separate CLI app that would
> provide same integration features as Tuskar UI does. (This would lead to
> code duplication. Depends on the actual amount of logic to duplicate if this
> is bearable or not.)

-1

>
>
> Which of the options you see as best? Did i miss some better option? Am i
> just being crazy and trying to solve a non-issue? Please tell me :)

Not at all, definitely some very good points raised.

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread James Slagle
On Wed, Dec 11, 2013 at 10:35 AM, James Slagle  wrote:
> On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský  wrote:
>> 1) Make a thicker python-tuskarclient and put the business logic there. Make
>> it consume other python-*clients. (This is an unusual approach though, i'm
>> not aware of any python-*client that would consume and integrate other
>> python-*clients.)
>
> python-openstackclient consumes other clients :).  Ok, that's probably
> not a great example :).
>
> This approach makes the most sense to me.  python-tuskarclient would
> make the decisions about if it can call the heat api directly, or the
> tuskar api, or some other api.  The UI and CLI would then both use
> python-tuskarclient.

Another example:

Each python-*client also uses keystoneclient to do auth and get
endpoints.  So, it's not like each client has reimplemented the code
to make HTTP requests to keystone; they reuse the keystone Client
class object.

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread James Slagle
This is really helpful, thanks for pulling it together.

comment inline...

On Wed, Dec 11, 2013 at 2:15 PM, Tzu-Mainn Chen  wrote:
> * NODE - a physical general purpose machine capable of running in many
>   roles. Some nodes may have hardware layout that is particularly
>   useful for a given role.
>
>   * REGISTRATION - the act of creating a node in Ironic
>
>   * ROLE - a specific workload we want to map onto one or more nodes.
>     Examples include 'undercloud control plane', 'overcloud control
>     plane', 'overcloud storage', 'overcloud compute' etc.
>
>     * MANAGEMENT NODE - a node that has been mapped with an undercloud role
>     * SERVICE NODE - a node that has been mapped with an overcloud role
>       * COMPUTE NODE - a service node that has been mapped to an
>         overcloud compute role
>       * CONTROLLER NODE - a service node that has been mapped to an
>         overcloud controller role
>       * OBJECT STORAGE NODE - a service node that has been mapped to an
>         overcloud object storage role
>       * BLOCK STORAGE NODE - a service node that has been mapped to an
>         overcloud block storage role
>
>   * UNDEPLOYED NODE - a node that has not been mapped with a role

This begs the question (for me anyway): why not call it UNMAPPED NODE?
If not, can we s/mapped/deployed/ in the descriptions above instead?

It might make sense then to define mapped and deployed in technical
terms as well.  Is mapped just the act of associating a node with a
role in the UI, or does it mean that bits have actually been
transferred across the wire to the node's disk and it's now running?

>     * another option - UNALLOCATED NODE - a node that has not been
>       allocated through nova scheduler (?)
>       - (after reading lifeless's explanation, I agree that
>         "allocation" may be a misleading term under TripleO, so I
>         personally vote for UNDEPLOYED)
>
>   * INSTANCE - A role deployed on a node - this is where work actually
>     happens.
>
> * DEPLOYMENT
>
>   * SIZE THE ROLES - the act of deciding how many nodes will need to be
>     assigned to each role
>     * another option - DISTRIBUTE NODES (?)
>       - (I think the former is more accurate, but perhaps there's a
>         better way to say it?)
>
>   * SCHEDULING - the process of deciding which role is deployed on
>     which node
>
>   * SERVICE CLASS - a further categorization within a service role for
>     a particular deployment.
>
>     * NODE PROFILE - a set of requirements that specify what attributes
>       a node must have in order to be mapped to a service class
>



-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] [Tuskar] [UI] Icehouse Requirements - Summary, Milestones

2013-12-13 Thread James Slagle
On Fri, Dec 13, 2013 at 03:04:09PM +0100, Imre Farkas wrote:
> On 12/13/2013 11:36 AM, Jaromir Coufal wrote:
> >
> >*VERSION 0*
> >===
> >Enable user to deploy OpenStack with the simpliest TripleO way, no
> >difference between hardware.
> >
> >Target:
> >- end of icehouse-2
> >
> >Features we need to get in:
> >- Enable manual nodes registration (Ironic)
> >- Get images available for user (Glance)
> >- Node roles (hardcode): Controller, Compute, Object Storage, Block Storage
> >- Design deployment (number of nodes per role)
> >- Deploy (Heat + Nova)
> 
> One note to deploy: It's not done only by Heat and Nova. If we
> expect a fully functional OpenStack installation as a result, we are
> missing a few steps like creating users, initializing and
> registering the service endpoints with Keystone. In TripleO this is
> done by the init-keystone and setup-endpoints scripts. Check devtest
> for more details: 
> http://docs.openstack.org/developer/tripleo-incubator/devtest_undercloud.html

Excellent point Imre, as the deployment isn't really usable until those steps
are done.  The link to the overcloud setup steps is actually:
http://docs.openstack.org/developer/tripleo-incubator/devtest_overcloud.html
It's very similar to what is done for the undercloud.

I think most of that logic could be reimplemented as direct API calls
using the client libs rather than a CLI.  I'm not sure about
"keystone-manage pki_setup" though; I would need to look into that.

--
-- James Slagle
--



Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-18 Thread James Slagle
mised in the beginning by using an internal concept that wasn't
necessarily clear, we're then even worse off.  For example, isn't there
a push now to update the usage of "tenant" in some places?  I know
we're not calling that term out specifically, but it's just an example.


> 
> There is not only the UI; sysadmins will work with the CLI, using
> Openstack services and Openstack naming. So naming it differently
> will be confusing.
>
> Btw. I would never hire a sysadmin to manage my cloud of 100s of
> nodes who has no idea what is happening under the hood. :-D
> 
> Ladislav
--
-- James Slagle
--



[openstack-dev] [TripleO] icehouse-1 test disk images setup

2013-12-24 Thread James Slagle
I built some vm image files for testing with TripleO based off of the
icehouse-1 milestone tarballs for Fedora and Ubuntu.  If folks are
interested in giving them a try you can find a set of instructions and
how to download the images at:

https://gist.github.com/slagle/981b279299e91ca91bd9

The steps are similar to the devtest process, but you use the prebuilt
vm images for the undercloud and overcloud and don't need a seed vm.
When the undercloud vm is started it uses the OpenStack Configuration
Drive as a data source for cloud-init.  This eliminates some of the
manual configuration that would otherwise be needed.  To that end, the
steps currently use some different git repos for some of the tripleo
tooling since not all of that functionality is upstream yet.  I can
submit those upstream, but they didn't make a whole lot of sense
without the background, so I wanted to provide that first.

At the very least, this could be an easier way for developers to get
setup with tripleo to do a test overcloud deployment to develop on
things like Tuskar.


-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] icehouse-1 test disk images setup

2013-12-24 Thread James Slagle
On Tue, Dec 24, 2013 at 12:26 PM, Clint Byrum  wrote:
> Excerpts from James Slagle's message of 2013-12-24 08:50:32 -0800:
>> I built some vm image files for testing with TripleO based off of the
>> icehouse-1 milestone tarballs for Fedora and Ubuntu.  If folks are
>> interested in giving them a try you can find a set of instructions and
>> how to download the images at:
>>
>> https://gist.github.com/slagle/981b279299e91ca91bd9
>>
>
> This is great, thanks for working hard to make the onramp shorter. :)
>
>> The steps are similar to the devtest process, but you use the prebuilt
>> vm images for the undercloud and overcloud and don't need a seed vm.
>> When the undercloud vm is started it uses the OpenStack Configuration
>> Drive as a data source for cloud-init.  This eliminates some of the
>> manual configuration that would otherwise be needed.  To that end, the
>> steps currently use some different git repos for some of the tripleo
>> tooling since not all of that functionality is upstream yet.  I can
>> submit those upstream, but they didn't make a whole lot of sense
>> without the background, so I wanted to provide that first.
>>
>
> Why would config drive be easier than putting a single json file in
> /var/lib/heat-cfntools/cfn-init-data the way the seed works?
>
> Do you experience problems with that approach that we haven't discussed?

That approach works fine if you're going to build the seed image.   In
devtest, you modify the cfn-init-data with a sed command, then include
it in your build seed image.  So, everyone that runs devtest ends up
with a unique seed image pretty much.

In this approach, everyone uses the same undercloud vm image.  In
order to make that work, there's a script to build the config drive
iso and that is then used to make config changes at boot time to the
undercloud.  Specifically, there's cloud-init data on the config drive
iso to update the virtual power manager user and ssh key, and sets the
user's ssh key in authorized keys.
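
For anyone curious, a minimal sketch of how such an iso can be put together,
assuming the standard config drive layout cloud-init looks for (a 'config-2'
volume label with openstack/latest/user_data inside; the key and uuid below
are placeholders):

    import json, os, subprocess, tempfile

    tmp = tempfile.mkdtemp()
    latest = os.path.join(tmp, 'openstack', 'latest')
    os.makedirs(latest)

    # Minimal metadata plus cloud-config user data.
    with open(os.path.join(latest, 'meta_data.json'), 'w') as f:
        json.dump({'uuid': 'undercloud'}, f)
    with open(os.path.join(latest, 'user_data'), 'w') as f:
        f.write('#cloud-config\n'
                'ssh_authorized_keys:\n'
                '  - ssh-rsa AAAA... user@host\n')

    # cloud-init identifies the drive by its 'config-2' volume label.
    subprocess.check_call(['genisoimage', '-output', 'config.iso',
                           '-volid', 'config-2', '-joliet', '-rock', tmp])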

>
> If I were trying to shrink devtest from 3 clouds to 2, I'd eliminate the
> undercloud, not the seed. The seed is basically an undercloud in a VM
> with a static configuration. That is what you have described but done
> in a slightly different way. I am curious what the benefits of this
> approach are.

True, there's not a whole lot of difference between eliminating the
seed or the undercloud.  You eliminate either one, then call your
first cloud whichever you want.  To me, the seed has always seemed
short lived, once you use it to deploy the undercloud it can go away
(eventually, anyway).  So, that's why I am calling the first cloud
here the undercloud.  Plus, since it will eventually include Tuskar
and deploy the overcloud, it seemed more inline with the current
devtest flow to call it an undercloud.

>
>> At the very least, this could be an easier way for developers to get
>> setup with tripleo to do a test overcloud deployment to develop on
>> things like Tuskar.
>>
>
> Don't let my questions discourage you. This is great as-is!

Great, thanks.  I appreciate the feedback!


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] icehouse-1 test disk images setup

2013-12-27 Thread James Slagle
On Tue, Dec 24, 2013 at 4:28 PM, Clint Byrum  wrote:
> Excerpts from James Slagle's message of 2013-12-24 10:40:23 -0800:
>> In this approach, everyone uses the same undercloud vm image.  In
>> order to make that work, there's a script to build the config drive
>> iso and that is then used to make config changes at boot time to the
>> undercloud.  Specifically, there's cloud-init data on the config drive
>> iso to update the virtual power manager user and ssh key, and sets the
>> user's ssh key in authorized keys.
>>
>
> Is this because it is less work to build an iso than to customize an
> existing seed image? How hard would it be to just mount the guest image
> and drop the json file in it?

It might take a little while longer to customize a built seed image,
but it would still most likely be under a minute.  Building the iso
takes about a second.  Either approach would be fine.  I just chose
the config drive because it seemed more like the cloud-init way to
bootstrap an image that didn't have a dynamic runtime provided
datasource.  Of course, I ran into a few bugs in cloud-init testing
out the config drive approach, so just modifying the image with
libguestfs or qemu-nbd probably would have been just as easy.
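
For reference, a rough sketch of the libguestfs route, assuming a simple
single-partition image (file names are illustrative):

    import guestfs

    g = guestfs.GuestFS()
    g.add_drive_opts('undercloud.qcow2', format='qcow2')
    g.launch()
    root = g.inspect_os()[0]   # first detected OS root
    g.mount(root, '/')
    # Drop the pre-built config in place of what devtest does with sed.
    g.upload('cfn-init-data.json',
             '/var/lib/heat-cfntools/cfn-init-data')
    g.shutdown()
    g.close()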

>
> Anyway I like the approach, though I generally do not like config drive.
> :)
>
>> >
>> > If I were trying to shrink devtest from 3 clouds to 2, I'd eliminate the
>> > undercloud, not the seed. The seed is basically an undercloud in a VM
>> > with a static configuration. That is what you have described but done
>> > in a slightly different way. I am curious what the benefits of this
>> > approach are.
>>
>> True, there's not a whole lot of difference between eliminating the
>> seed or the undercloud.  You eliminate either one, then call your
>> first cloud whichever you want.  To me, the seed has always seemed
>> short lived, once you use it to deploy the undercloud it can go away
>> (eventually, anyway).  So, that's why I am calling the first cloud
>> here the undercloud.  Plus, since it will eventually include Tuskar
>> and deploy the overcloud, it seemed more inline with the current
>> devtest flow to call it an undercloud.
>>
>
> The more I think about it the more I think we should just take the three
> cloud approach. The seed can be turned off as soon as the undercloud is
> running, but it allows testing and modification of the seed to undercloud
> transfer, which is something we are going to need to put work in to at
> some point. It would be a shame to force developers to switch gears and
> use something entirely different when they need to get into that.

Yea, that certainly makes sense.  Also part of my motivation to not
have a seed is the memory requirements on the host you're using for
devtest.   I'm not sure if 8gb is even enough anymore, as I haven't
tried a full devtest run that recently.  Especially if you're using
your main development laptop with other stuff running.   If you were
able to shut down the seed though after deploying the undercloud, that
would definitely help.  I think there would be a couple of challenges
with that for devtest:

- If you had to reboot the undercloud, I think you'd need the seed
there for the undercloud's metadata.
- The seed vm is the only one in devtest with the 2 network
interfaces, default and brbm
- The seed handles routing all the traffic for 192.0.2.0/24

> Perhaps we could just use your config drive approach for the seed all
> the time. Then users can start with pre-built images, but don't have to
> change everything when they want to start changing said images.
>
> I'm not 100% convinced that it is needed, but I'd rather have one path
> than two if we can manage that and not drive away potential
> contributors.

Agreed, I'd like to see it as one path.  Similar to how devtest offers
different options down that path today, these could be additional
options to not have a seed (or be able to just shutdown your seed vm),
or use pre built vm's, etc.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
Hi,

I'd like to discuss some possible ways we could install the OpenStack
components from packages in tripleo-image-elements.  As most folks are
probably aware, there is a "fork" of tripleo-image-elements called
tripleo-puppet-elements which does install using packages, but it does
so using Puppet to do the installation and for managing the
configuration of the installed components.  I'd like to kind of set
that aside for a moment and just discuss how we might support
installing from packages using tripleo-image-elements directly and not
using Puppet.

One idea would be to add support for a new type (or likely 2 new
types: rpm and dpkg) to the source-repositories element.
source-repositories already knows about the git, tar, and file types,
so it seems somewhat natural to have additional types for rpm and
dpkg.

A complication with that approach is that the existing elements assume
they're setting up everything from source.  So, if we take a look at
the nova element, and specifically install.d/74-nova, that script does
stuff like install a nova service, adds a nova user, creates needed
directories, etc.  All of that wouldn't need to be done if we were
installing from rpm or dpkg, b/c presumably the package would take
care of all that.

We could fix that by making the install.d scripts only run if you're
installing a component from source.  In that sense, it might make
sense to add a new hook, source-install.d and only run those scripts
if the type is a source type in the source-repositories configuration.
 We could then have a package-install.d to handle the installation
from the packages type.   The install.d hook could still exist to do
things that might be common to the 2 methods.

Thoughts on that approach or other ideas?

I'm currently working on a patchset I can submit to help prove it out.
 But, I'd like to start discussion on the approach now to see if there
are other ideas or major opposition to that approach.
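
To sketch the layout (the 'package' type and these hook names are only the
proposal, not anything that exists yet):

    elements/nova/source-install.d/74-nova    # runs only for git/tar/file types
    elements/nova/package-install.d/74-nova   # runs only for the package type
    elements/nova/install.d/75-nova-common    # runs for both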

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 3:22 PM, Fox, Kevin M  wrote:
> Sounds very useful. Would there be a diskimage-builder flag then to say you 
> prefer packages over source? Would it fall back to source if you specified 
> packages and there were only source-install.d for a given element?

Yes, you could pick which you wanted via environment variables.
Similar to the way you can pick if you want git head, a specific
gerrit review, or a released tarball today via $DIB_REPOTYPE_,
etc.  See: 
https://github.com/openstack/diskimage-builder/blob/master/elements/source-repositories/README.md
for more info about that.

If you specified something that didn't exist, it should probably fail
with an error.  The default behavior would still be installing from
git master source if you specified nothing though.
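
A toy sketch of that lookup and failure behavior (the helper and the
'package' type are hypothetical; the real logic lives in the
source-repositories element's shell code):

    import os

    SUPPORTED = ('git', 'tar', 'file', 'package')

    def repo_type(component, default='git'):
        # Mirrors the $DIB_REPOTYPE_<name> convention described above;
        # anything unrecognized fails loudly rather than falling back.
        rtype = os.environ.get('DIB_REPOTYPE_%s' % component, default)
        if rtype not in SUPPORTED:
            raise ValueError('unsupported repo type %r for %s'
                             % (rtype, component))
        return rtype

    print(repo_type('nova'))   # -> 'git' unless DIB_REPOTYPE_nova is set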


>
> Thanks,
> Kevin
> ________
> From: James Slagle [james.sla...@gmail.com]
> Sent: Tuesday, January 07, 2014 12:01 PM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [TripleO] Installing from packages in  
> tripleo-image-elements
>
> Hi,
>
> I'd like to discuss some possible ways we could install the OpenStack
> components from packages in tripleo-image-elements.  As most folks are
> probably aware, there is a "fork" of tripleo-image-elements called
> tripleo-puppet-elements which does install using packages, but it does
> so using Puppet to do the installation and for managing the
> configuration of the installed components.  I'd like to kind of set
> that aside for a moment and just discuss how we might support
> installing from packages using tripleo-image-elements directly and not
> using Puppet.
>
> One idea would be to add support for a new type (or likely 2 new
> types: rpm and dpkg) to the source-repositories element.
> source-repositories already knows about the git, tar, and file types,
> so it seems somewhat natural to have additional types for rpm and
> dpkg.
>
> A complication with that approach is that the existing elements assume
> they're setting up everything from source.  So, if we take a look at
> the nova element, and specifically install.d/74-nova, that script does
> stuff like install a nova service, adds a nova user, creates needed
> directories, etc.  All of that wouldn't need to be done if we were
> installing from rpm or dpkg, b/c presumably the package would take
> care of all that.
>
> We could fix that by making the install.d scripts only run if you're
> installing a component from source.  In that sense, it might make
> sense to add a new hook, source-install.d and only run those scripts
> if the type is a source type in the source-repositories configuration.
>  We could then have a package-install.d to handle the installation
> from the packages type.   The install.d hook could still exist to do
> things that might be common to the 2 methods.
>
> Thoughts on that approach or other ideas?
>
> I'm currently working on a patchset I can submit to help prove it out.
>  But, I'd like to start discussion on the approach now to see if there
> are other ideas or major opposition to that approach.
>
> --
> -- James Slagle
> --
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 3:23 PM, Clint Byrum  wrote:
> What would be the benefit of using packages?

We're building images on top of different distributions of Linux.
Those distributions themselves offer packaged and supported OpenStack
components.  So, one benefit is that you'd be using what's "blessed"
by your distro if you chose to.  I think that's a fairly common way
people are going to be used to installing components. The OpenStack
Installation guide says to install from packages, fwiw.

> We've specifically avoided packages because they complect[1] configuration
> and system state management with software delivery. The recent friction
> we've seen with MySQL is an example where the packages are not actually
> helping us, they're hurting us because they encode too much configuration
> instead of just delivering binaries.

We're trying to do something fairly specific with the read only /
partition.  You're right, most packages aren't going to handle that
well.  So, yes you have a point from that perspective.

However, there are many examples of when packages help you.
Dependency resolution, version compatibility, methods of verification,
knowing what's installed, etc.  I don't think that's really an
argument or discussion worth having, because you either want to use
packages or you want to build it all from source.  There are
advantages/disadvantages to both methods, and what I'm proposing is
that we support both methods, and not require everyone to only be able
to install from source.

> Perhaps those of us who have been involved a bit longer haven't done
> a good job of communicating our reasons. I for one believe in the idea
> that image based updates eliminate a lot of the entropy that comes along
> with a package based updating system. For that reason alone I tend to
> look at any packages that deliver configurable software as potentially
> dangerous (note that I think they're wonderful for libraries, utilities,
> and kernels. :)

Using packages wouldn't prevent you from using the image based update
mechanism.  Anecdotally, I think image based updates could be a bit
heavy handed for something like picking up a quick security or bug fix
or the like.  That would be a scenario where a package update could
really be handy.  Especially if someone else (e.g., your distro) is
maintaining the package for you.

For this proposal though, I was only talking about installation of the
components at image build time.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 3:48 PM, Chris Jones  wrote:
> Hi
>
> Assuming we want to do this, but not necessarily agreeing that we do want to, 
> I would suggest:
>
> 1) I think it would be nice if we could avoid separate dpkg/rpm types by 
> having a package type and reusing the package map facility.

Indeed, I'd like to see one package type as well. I think we could
start with that route, and only split it out if there was a proven
technical need.

> 2) Clear up the source-repositories inconsistency by making it clear that 
> multiple repositories of the same type do not work in 
> source-repositories-nova (this would be a behaviour change, but would mesh 
> more closely with the docs, and would require refactoring the 4 elements we 
> ship atm with multiple git repos listed)

Could you expand on this a bit?  I'm not sure what inconsistency
you're referring to.

> 3) extend arg_to_element to parse element names like "nova/package", 
> "nova/tar", "nova/file" and "nova/source" (defaulting to source), storing the 
> choice for later.
>
> 4) When processing the nova element, apply only the appropriate entry in 
> source-repositories-nova
>
> 5) Keep install.d as-is and make the scripts be aware of the previously 
> stored choice of element origin in the elements (as they add support for a 
> package origin)
>
> 6) Probably rename source-repositories to something more appropriate.

All good ideas.  I like the mechanism to specify the type as well.  I
wonder if we could also have a global build option that said to use
packages or source, or whatever, for all components that support that
type.  That way you wouldn't have to specify each individually.

> As for whether we should do this or not... like Clint I want to say no, but 
> I'm also worried about people forking t-i-e and not pushing their 
> fixes/improvements and new elements back up to us because we're too diverged.

I feel that not offering a choice will only turn people off from using
t-i-e. Only offering an install from source option is not likely to
cause large groups of people to suddenly decide that only installing
from source is the way to go and then start using t-i-e exclusively.
So, that's why I'd really like to see support for packages in the main
repo itself.

> If this is a real customer need, I would come down in favour of doing it if 
> the cost of the above implementation (or an alternate one) isn't too high.

+1.  Installing from source (master) would still be the default.  And
any implementations that allowed something different would have to not
disrupt that.  Similar to how we've added new install options in the
past (source-repositories, tar, etc) and have kept disruptions to a
minimum.



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
>> advantages/disadvantages to both methods, and what I'm proposing is
>> that we support both methods, and not require everyone to only be able
>> to install from source.
>>
>
> "Install from source" is probably not the right way to put this. We're
> installing the virtualenvs from tarballs downloaded from pypi. We're
> also installing 99.9% python, so we're not really going "from source",
> we're just going "from git".

Yes, "from source" basically means "from git".  But, I fail to see the
distinction you're making in this context. Yes, the python source is
for all intents the same as the executable/library that is used at
runtime. Why are you opposed to installing that "source" from a
package vs a git repo?  What does using the git repo buy you?  If it's
avoiding the added complexity of the package, I already pointed out
some advantages to packaging as to why some people choose to use them
and thus accept that complexity

> But anyway, I see your point and will capitulate that it is less weird
> for people and thus may make the pill a little easier to swallow. But if
> I could have it my way, I'd suggest that the packages be built to mirror
> the structure of the element end-products as much as possible so that
> they can be used with minimal change to elements.

I don't know to what degree that's possible.  However, I think most
distros try to be as "upstream" friendly as possible.  In this
context, if the OpenStack community says "this is how we recommend you
install our stuff, and our reference is t-i-e", then I think packagers
and distros are inclined to follow that closely as much as possible.

>
>> > Perhaps those of us who have been involved a bit longer haven't done
>> > a good job of communicating our reasons. I for one believe in the idea
>> > that image based updates eliminate a lot of the entropy that comes along
>> > with a package based updating system. For that reason alone I tend to
>> > look at any packages that deliver configurable software as potentially
>> > dangerous (note that I think they're wonderful for libraries, utilities,
>> > and kernels. :)
>>
>> Using packages wouldn't prevent you from using the image based update
>> mechanism.  Anecdotally, I think image based updates could be a bit
>> heavy handed for something like picking up a quick security or bug fix
>> or the like.  That would be a scenario where a package update could
>> really be handy.  Especially if someone else (e.g., your distro) is
>> maintaining the package for you.
>>
>> For this proposal though, I was only talking about installation of the
>> components at image build time.
>>
>
> The entire point of image based updates is that they are heavy handed.
> The problem we're trying to solve is that you have a data center of (n)
> machines and you don't want (n) unique sets of software,  where each
> machine might have some hot fixes and not others. At 1000 machines it
> becomes critical. At 1 machines, the entropy matrix starts to get
> scary.

I think I addressed this point earlier up.  But, using packages
doesn't mean (n) unique sets of software.  And using images doesn't
mean 1 unique set of software.  Certainly, we can come up with
solutions that aim to make it such that entropy is not introduced
accidentally.  And I think an image approach is a very good mechanism
for that.  But, even that does not protect completely against a well
meaning operator accidentally deploying the wrong thing, or making a
change that is not reflected in the image.  Just like we can't protect
against them logging in and running "yum update foo" inconsistently
across their deployment.

Just to clarify, I'm not arguing against the image based update
approach.  I'm proposing a way to install the OpenStack components
from packages at image build time.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 6:04 PM, Clint Byrum  wrote:
> Excerpts from Chris Jones's message of 2014-01-07 14:43:31 -0800:
>> Hi
>>
>> > On 7 Jan 2014, at 22:18, Clint Byrum  wrote:
>> > Packages do the opposite,
>> > and encourage entropy by promising to try and update software
>>
>> Building with packages doesn't require updating running systems with 
>> packages and more than building with git requires updating running systems 
>> with git pull.
>> One can simply build (and test!) a new image with updated packages and 
>> rebuild/takeover nodes.
>>
>
> Indeed, however one can _more_ simply build an image without package
> tooling...  and they will be more similar across multiple platforms.
>
> My question still stands, what are the real advantages? So far the only
> one that matters to me is "makes it easier for people to think about
> using it."

I'm reminded of when I first started looking at TripleO there were a
few issues with installing from git (I'll say that from now on :)
related to all the python distribute -> setuptools migration.  Things
like if your base cloud image had the wrong version of pip you
couldn't migrate to setuptools cleanly.  Then you had to run the
setuptools update twice, once to get the distribute legacy wrapper and
then again to latest setuptools.  If I recall there were other
problems with virtualenv incompatibilities as well.

Arguably, installing from packages would have made that easier and less complex.

Sure, the crux of the problem was likely that versions in the distro
were too old and they needed to be updated.  But unless we take on
building the whole OS from source/git/whatever every time, we're
always going to have that issue.  So, an additional benefit of
packages is that you can install a known good version of an OpenStack
component that is known to work with the versions of dependent
software you already have installed.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
On Tue, Jan 7, 2014 at 6:12 PM, Clint Byrum  wrote:
> Excerpts from Fox, Kevin M's message of 2014-01-07 13:11:13 -0800:
>> I was going to stay silent on this one, but since you asked...
>>
>> /me puts his customer hat on
>>
>> We source OpenStack from RDO for the packages and additional integration 
>> testing that comes from the project instead of using OpenStack directly. I 
>> was a little turned off from Triple-O when I saw it was source only. The 
>> feeling being that it is "too green" for our tastes. Which may be 
>> inaccurate. While I might be convinced to use source, its a much harder sell 
>> to us currently then using packages.
>>
>
> Kevin, thanks for sharing. I think I understand that it is just a new
> way of thinking and that makes it that much harder to consume.
>
> We have good reasons for not using packages. And we're not just making
> this up as a new crazy idea, we're copying what companies like Netflix
> have published about running at scale. We need to do a better job at
> making sure why we're doing some of the things we're doing.

Do you have a link for the publication handy? I know they use a
blessed AMI approach.  But I'm curious about the "not using packages"
part, and the advantages they get from that.  All I could find from
googling is people trying to install netflix from packages to watch
movies :).

Their AMI build tool seems to indicate they package their apps:
https://github.com/Netflix/aminator

As does this presentation:
http://www.slideshare.net/garethbowles/building-netflixstreamingwithjenkins-juc




-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-07 Thread James Slagle
... weeks into a new development cycle.

Meaning, I have some confidence that OpenStack as a community has done
some testing.  We have qa, gates, unit and functional tests, etc.  I
don't think that confidence is misplaced or unnecessary.

If a distro says a set of OpenStack packages work with a given version
of their OS, then that confidence is not misplaced either IMO.

> We provide test
> suites to users and we will encourage users to test their own things. I
> imagine some will also ship packaged products based on TripleO that will
> also be tested as a whole, not as individual packages.

This is a new and rather orthogonal point.  I'm not talking about
testing individual packages.  You're right, that makes little sense.
The context is about testing the deployed image as a whole, and the
set of packages that are advertised or purported  to work together to
make the image work.  I don't think any distro says "hi, we've tested
these 1,000 packages individually, but not running together, come use
our stuff".

> We can stop all of those things with or without packages. My point isn't
> to say packages are harmful in an image-build-only context, it is to
> say that I don't see the benefit.

Other benefits have been mentioned in the thread, I won't rehash.

> I think you're basically saying they're worth complexity because somebody
> else tests them for you. I disagree, but I definitely would understand
> if people said I sound crazy and still want those packages.

I didn't say that at all.  Merely that a set of packages that is
advertised to work on a given OS version and is supported adds
benefit.

In fact, I don't think they really add that much complexity compared
to what installing from git does.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread James Slagle
On Tue, Jan 7, 2014 at 11:20 PM, Robert Collins
 wrote:
> On 8 January 2014 12:18, James Slagle  wrote:
>> Sure, the crux of the problem was likely that versions in the distro
>> were too old and they needed to be updated.  But unless we take on
>> building the whole OS from source/git/whatever every time, we're
>> always going to have that issue.  So, an additional benefit of
>> packages is that you can install a known good version of an OpenStack
>> component that is known to work with the versions of dependent
>> software you already have installed.
>
> The problem is that OpenStack is building against newer stuff than is
> in distros, so folk building on a packaging toolchain are going to
> often be in catchup mode - I think we need to anticipate package based
> environments running against releases rather than CD.

I just don't see anyone not building on a packaging toolchain, given
that we're all running the distro of our choice and pip/virtualenv/etc
are installed from distro packages.  Trying to isolate the building of
components with pip installed virtualenvs was still a problem.  Short
of uninstalling the build tools packages from the cloud image and then
wget'ing the pip tarball, I don't think there would have been a good
way around this particular problem.  Which, that approach may
certainly make some sense for a CD scenario.

Agreed that packages against releases makes sense.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-*-config in tripleo repositories

2014-01-09 Thread James Slagle
On Thu, Jan 9, 2014 at 1:56 PM, Dan Prince  wrote:
> I'm not the biggest fan of having multiple venv's for each component though. 
> Especially now that we have a global requirements.txt file where we can 
> target a common baseline. Multiple venvs causes lots of duplicated libraries 
> and increased image build time. Is anyone planning on making consolidated 
> venv's an option? Or perhaps even just using a consolidated venv as the 
> default where possible.

I'm not planning on it, but I like the idea quite a bit :).



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread James Slagle
On Fri, Jan 10, 2014 at 10:27 AM, Jay Dobies  wrote:
>> There's few pieces of concepts which I think is missing from the list:
>> - overclouds: after Heat successfully created the stack, Tuskar needs to
>> keep track whether it applied the post configuration steps (Keystone
>> initialization, registering services, etc) or not. It also needs to know
>> the name of the stack (only 1 stack named 'overcloud' for Icehouse).
>
>
> I assumed this sort of thing was captured by the resource status, though I'm
> far from a Heat expert. Is it not enough to assume that if the resource
> started successfully, all of that took place?

Not currently.  Those steps are done separately from a different host
after Heat reports the stack as completed and running.  In the Tuskar
model, that host would be the undercloud.  Tuskar would have to know
what steps to run to do the post configuration/setup of the overcloud.

I believe It would be possible to instead automate that so that it
happens as part of the os-refresh-config cycle that runs scripts at
boot time in an image.  At the end of the initial os-refresh-config
run there is a callback to Heat to indicate success.  So, if we did
that, the Overcloud would basically configure itself then callback to
Heat to indicate it all worked.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread James Slagle
On Fri, Jan 10, 2014 at 11:01 AM, Imre Farkas  wrote:
> On 01/10/2014 04:27 PM, Jay Dobies wrote:
>>> There's few pieces of concepts which I think is missing from the list:
>>> - overclouds: after Heat successfully created the stack, Tuskar needs to
>>> keep track whether it applied the post configuration steps (Keystone
>>> initialization, registering services, etc) or not. It also needs to know
>>> the name of the stack (only 1 stack named 'overcloud' for Icehouse).
>>
>>
>> I assumed this sort of thing was captured by the resource status, though
>> I'm far from a Heat expert. Is it not enough to assume that if the
>> resource started successfully, all of that took place?
>>
>
> I am also far from a Heat expert, I just had some really hard times when I
> previously expected from my Tuskar deployed overcloud that it's ready to
> use. :-)
>
> In short, having the resources started is not enough, Heat stack-create is
> only a part of the deployment story. There was a few emails on the mailing
> list about this:
> http://lists.openstack.org/pipermail/openstack-dev/2013-December/022217.html
> http://lists.openstack.org/pipermail/openstack-dev/2013-December/022887.html
>
> There was also a discussion during the last TripleO meeting in December,
> check the topic 'After heat stack-create init operations (lsmola)'
> http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html

Thanks for posting the links :) Very helpful.  There are some really
good points there in the irc log about *not* doing what I suggested
with the local machine os-refresh-config scripts :).

So, I think it's likely that Tuskar will need to orchestrate this
setup in some fashion.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-15 Thread James Slagle
On Mon, Jan 13, 2014 at 4:47 AM, Jaromir Coufal  wrote:
>> - For editing a role, does it make a new image with the changes to what
>> services are deployed each time it's saved?
>
> So, there are two things - one thing is provisioning image. We are not
> dealing with image builder at the moment. So the image already contains
> services which we should be able to discover (what OpenStack services are
> included there). And then you go to service tab and enable/disable which
> services are provided within a role + their configuration.

This does not seem quite right to me.  I don't think we want to be
enabling or disabling different services in images at deployment time.
That's one of the reasons that we have the element nature of
diskimage-builder, so that you can build an image with the software
components you want. Any service that you include in the image build
is automatically enabled. If you want that image with a service
disabled, you build a new image.

I think the implementation would be a little weird as well. How would
you disable a service in the image? You could mount and modify the
image before deploying, but that's quite a bit of added complexity of
downloading an image, modifying it, and saving it.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-15 Thread James Slagle
... on this train of thought:
>
> - I'm afraid of the idea of applying changes immediately for the same
> reasons I'm worried about a few other things. Very little of what we do will
> actually finish executing immediately and will instead be long running
> operations. If I edit a few roles in a row, we're looking at a lot of
> outstanding operations executing against other OpenStack pieces (namely
> Heat).
>
> The idea of immediately also suffers from a sort of "Oh shit, that's not
> what I meant" when hitting save. There's no way for the user to review what
> the larger picture is before deciding to make it so.

+1

> - Also falling into this category is the image creation. This is not
> something that finishes immediately, so there's a period between when the
> resource category is saved and the new image exists.

Since I don't think Tuskar should be an image building service, and no
other one currently exists, I think we should require the
administrator to build their images and load them into glance as a
prerequisite before using them in a deployment.

> If the image is immediately created, what happens if the user tries to
> change the resource category counts while it's still being generated? That
> question applies both if we automatically update existing nodes as well as
> if we don't and the user is just quick moving around the UI.
>
> What do we do with old images from previous configurations of the resource
> category? If we don't clean them up, they can grow out of hand. If we
> automatically delete them when the new one is generated, what happens if
> there is an existing deployment in process and the image is deleted while it
> runs?

Both these points are not as relevant given my earlier statement.
But, if I turn out to be wrong about that :), then I'd say that we
don't want to clean up old images automatically.  I don't like
surprises, even if I can configure how many old images to keep.  I
think that deleting should require manual intervention.

> We need some sort of task tracking that prevents overlapping operations from
> executing at the same time. Tuskar needs to know what's happening instead of
> simply having the UI fire off into other OpenStack components when the user
> presses a button.
>
> To rehash an earlier argument, this is why I advocate for having the
> business logic in the API itself instead of at the UI. Even if it's just a
> queue to make sure they don't execute concurrently (that's not enough IMO,
> but for example), the server is where that sort of orchestration should take
> place and be able to understand the differences between the configured state
> in Tuskar and the actual deployed state.
>
> I'm off topic a bit though. Rather than talk about how we pull it off, I'd
> like to come to an agreement on what the actual policy should be. My
> concerns focus around the time to create the image and get it into Glance
> where it's available to actually be deployed. When do we bite that time off
> and how do we let the user know it is or isn't ready yet?

I think this becomes simpler if you're not worried about building
images. Even so, some task tracking will likely be needed. TaskFlow[3]
and Mistral[4] may be relevant.


> - Editing a node is going to run us into versioning complications. So far,
> all we've entertained are ways to map a node back to the resource category
> it was created under. If the configuration of that category changes, we have
> no way of indicating that the node is out of sync.
>
> We could store versioned resource categories in the Tuskar DB and have the
> version information also find its way to the nodes (note: the idea is to use
> the metadata field on a Heat resource to store the res-cat information, so
> including version is possible). I'm less concerned with eventual reaping of
> old versions here since it's just DB data, though we still hit the question
> of when to delete old images.

Is resource category the same as role?  Sorry :), I probably need to
go back and re-read the terminology thread. If so, I think versioning
them in the Tuskar db makes sense. That way you know what's been
deployed and what hasn't, as well as any differences.

> - For the comment on a generic image with service configuration, the first
> thing that came to mind was the thread on creating images from packages [1].
> It's not the exact same problem, but see Clint Byrum's comments in there
> about drift. My gut feeling is that having specific images for each res-cat
> will be easier to manage than trying to edit what services are running on a
> node.

+1.

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-August/013122.html
[2] https://wiki.openstack.org/wiki/NovaImageBuilding
[3] https://wiki.openstack.org/wiki/TaskFlow
[4] https://wiki.openstack.org/wiki/Mistral

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] milestone-proposed branches

2014-01-16 Thread James Slagle
At last summit, we talked about doing stable branches and releases for
the TripleO projects for Icehouse.

I'd like to propose doing a milestone-proposed branch[1] and tagged
release for icehouse milestones 2 and 3. Sort of as dry run and
practice, as I think it could help tease out some things we might not
have considered when we do try to do icehouse stable branches.

The icehouse milestone 2 date is January 23rd [2]. So, if there is
consensus to do this, we probably need to get the branches created
soon, and then do any bugfixes in the branches (master too of course)
up until the 23rd.

I think it'd be nice if we had a working devtest to use with the
released tarballs.  This raises a couple of points:
 - We probably need a way in devtest to let people use a different
branch (or tarball) of the stuff that is git cloned.
- What about tripleo-incubator itself? We've said in the past we don't
want to attempt to stabilize or release that due to its "incubator
nature".  But, if we don't have a stable set of devtest instructions
(and accompanying scripts like setup-endpoints, etc), then using an
ever changing devtest with the branches/tarballs is not likely to work
for very long.

And yes, I'm volunteering to do the work to support the above, and the
release work :).

Thoughts?

[1] https://wiki.openstack.org/wiki/BranchModel
[2] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-01-17 Thread James Slagle
On Thu, Jan 16, 2014 at 7:29 PM, Clint Byrum  wrote:
> Note that tripleo-incubator is special and should not be released. It
> is intentionally kept unfrozen and unreleased to make sure there is no
> illusion of stability.

I think it would be nice if we could point people at a devtest that
they could use with our other released stuff. Without that, we might
make a change to devtest, such as showing the use of a new heat
parameter in our templates, and if they're trying to follow along with
a released tripleo-heat-templates then they would have a problem.

Without a branch of incubator, there's no story or documentation
around using any of our released stuff.  You could follow along with
devtest to get an idea of how it's supposed to work and indeed it
might even work, but I don't think that's good enough. There is
tooling in incubator that has proved its usefulness. Take an example
like setup-endpoints, what we're effectively saying without allowing
people to use that is that there is a useful tool that will setup
endpoints for you, but don't use it with our released stuff because
it's not guaranteed to work and instead make these 10'ish calls to
keystone via some other method. Then you'd also end up with a
different but parallel set of instructions for using our released
stuff vs. not.

This is prohibitive to someone who may want to setup a tripleo CI/CD
cloud deploying stable icehouse or from milestone branches. I think
people would just create their own fork of tripleo-incubator and use
that.

> If there are components in it that need releasing, they should be moved
> into relevant projects or forked into their own projects.

I'd be fine with that approach, except that's pretty much everything
in incubator: the scripts, templates, generated docs, etc. Instead of
creating a new forked repo, why don't we just rename tripleo-incubator
to tripleo-deployment and have some stable branches that people could
use with our releases?

I don't feel like that precludes tripleo from continuing to offer no
stability at all on the master branch.

> Excerpts from Ryan Brady's message of 2014-01-16 07:42:33 -0800:
>> +1 for releases.
>>
>> In the past I requested a tag for tripleo-incubator to make it easier to 
>> build a package and test.
>>
>> In my case a common tag would be easier to track than trying to gather all 
>> of the commit hashes where
>> the projects are compatible.
>>
>> Ryan
>>
>> - Original Message -
>> From: "James Slagle" 
>> To: "OpenStack Development Mailing List" 
>> Sent: Thursday, January 16, 2014 10:13:58 AM
>> Subject: [openstack-dev] [TripleO] milestone-proposed branches
>>
>> At last summit, we talked about doing stable branches and releases for
>> the TripleO projects for Icehouse.
>>
>> I'd like to propose doing a milestone-proposed branch[1] and tagged
>> release for icehouse milestones 2 and 3. Sort of as dry run and
>> practice, as I think it could help tease out some things we might not
>> have considered when we do try to do icehouse stable branches.
>>
>> The icehouse milestone 2 date is January 23rd [2]. So, if there is
>> consensus to do this, we probably need to get the branches created
>> soon, and then do any bugfixes in the branches (master too of course)
>> up until the 23rd.
>>
>> I think it'd be nice if we had a working devtest to use with the
>> released tarballs.  This raises a couple of points:
>>  - We probably need a way in devtest to let people use a different
>> branch (or tarball) of the stuff that is git cloned.
>> - What about tripleo-incubator itself? We've said in the past we don't
>> want to attempt to stabilize or release that due to its "incubator
>> nature".  But, if we don't have a stable set of devtest instructions
>> (and accompanying scripts like setup-endpoints, etc), then using an
>> ever changing devtest with the branches/tarballs is not likely to work
>> for very long.
>>
>> And yes, I'm volunteering to do the work to support the above, and the
>> release work :).
>>
>> Thoughts?
>>
>> [1] https://wiki.openstack.org/wiki/BranchModel
>> [2] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
>>
>> --
>> -- James Slagle
>> --
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-01-22 Thread James Slagle
On Thu, Jan 16, 2014 at 10:32 AM, Thierry Carrez  wrote:
> James Slagle wrote:
>> [...]
>> And yes, I'm volunteering to do the work to support the above, and the
>> release work :).
>
> Let me know if you have any question or need help. The process and tools
> used for the integrated release are described here:
>
> https://wiki.openstack.org/wiki/Release_Team/How_To_Release

Thanks Thierry, I wanted to give this a go for icehouse milestone 2,
but given that those were cut yesterday and there are still some
outstanding doc updates in review, I'd like to shoot for milestone 3
instead. Is there anything additional we need to do to make that
happen?

I read through that wiki page. I did have a couple of questions:

Who usually runs through the steps there? You? or a project member?

When repo_tarball_diff.sh is run, are there any acceptable missing
files? I'm seeing an AUTHORS and ChangeLog file showing up in the
output from our repos; those are automatically generated, so I assume
those are ok. There are also some egg_info files showing up, which I
also think can be safely ignored.  (I submitted a patch that updates
the grep command used in the script:
https://review.openstack.org/#/c/68471/ )

Thanks.

>
> Also note that we were considering switching from using
> milestone-proposed to using proposed/*, to avoid reusing branch names:
>
> https://review.openstack.org/#/c/65103/
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] icehouse-1 test disk images setup

2014-01-28 Thread James Slagle
Sorry to revive an old thread, but I've updated these images based on
the icehouse-2 milestone.

I've updated the instructions with the new download links:
https://gist.github.com/slagle/981b279299e91ca91bd9

To reiterate, the point here is to give people an easier on ramp to
getting a tripleo setup. This is especially important as more
developers start to get involved with Tuskar in particular. There is
definitely a lot of value in going through the whole devtest process
yourself, but it can be a bit daunting initially, and since this
eliminates the image building step with pre-built images, I think
there's less that can go wrong.

Given Clint's earlier feedback, I could see working the seed vm back
into these steps and getting the config drive setup into incubator so
that everyone that goes through devtest doesn't have to have a custom
seed. Then giving folks the option to use prebuilt vm images vs
building from scratch. Also, given the pending patches around an all
in one Overcloud, we could work the seed back into this, and still be
at just 3 vm's.

Any other feedback welcome.


On Tue, Dec 24, 2013 at 11:50 AM, James Slagle  wrote:
> I built some vm image files for testing with TripleO based off of the
> icehouse-1 milestone tarballs for Fedora and Ubuntu.  If folks are
> interested in giving them a try you can find a set of instructions and
> how to download the images at:
>
> https://gist.github.com/slagle/981b279299e91ca91bd9
>
> The steps are similar to the devtest process, but you use the prebuilt
> vm images for the undercloud and overcloud and don't need a seed vm.
> When the undercloud vm is started it uses the OpenStack Configuration
> Drive as a data source for cloud-init.  This eliminates some of the
> manual configuration that would otherwise be needed.  To that end, the
> steps currently use some different git repos for some of the tripleo
> tooling since not all of that functionality is upstream yet.  I can
> submit those upstream, but they didn't make a whole lot of sense
> without the background, so I wanted to provide that first.
>
> At the very least, this could be an easier way for developers to get
> setup with tripleo to do a test overcloud deployment to develop on
> things like Tuskar.
>
>
> --
> -- James Slagle
> --



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] icehouse-1 test disk images setup

2014-01-29 Thread James Slagle
On Tue, Jan 28, 2014 at 9:05 PM, Robert Collins
 wrote:
> So, thoughts...
>
> I do see this as useful, but I don't see an all-in-one overcloud as
> useful for developers of tuskar (or pretty much anything). It's just
> not realistic enough.

True.

I do, however, see the all-in-one as useful for testing that your deployment
"infrastructure" is working: PXE is set up right, iscsi is going to
work, etc. Networking, on some level, is working. No need to start two
vm's to see it fail twice.

>
> I'm pro having downloadable images, long as we have rights to do that
> for whatever OS we're based on. Ideally we'd have images for all the
> OSes we support (except those with restrictions like RHEL and SLES).
>
> Your instructions at the moment need to be refactored to support
> devtest_testenv and other recent improvements :)

Indeed, my goal would be to work what I have into devtest, not the
other way around.

> BTW the MTU note you have will break folks actual environments unless
> they have jumbo frames on everything- I *really wouldn't do that* -
> instead work on bug https://bugs.launchpad.net/neutron/+bug/1270646

Good point, I wasn't actually sure if what I was seeing was that bug
or not.  I'll look into it.

Thanks, I appreciate the feedback.


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] devtest thoughts

2014-01-30 Thread James Slagle
devtest, our TripleO setup, has been rapidly evolving. We've added a
fair amount of configuration options for stuff like using actual
baremetal, and (soon) HA deployments by default. Also, the scripts
(which the docs are generated from) are being used for both CD and CI.

This is all great progress.

However, due to these changes,  I think that devtest no longer works
great as a tripleo developer setup. You haven't been able to complete
a setup following our docs for >1 week now. The patches are in review
to fix that, and they need to be properly reviewed and I'm not saying
they should be rushed. Just that it's another aspect of the problem of
trying to use devtest for CI/CD and a dev setup.

I think it might be time to have a developer setup vs. devtest, which
is more of a documented tripleo setup at this point.

In irc earlier this week (sorry if I'm misquoting the intent here), I
saw mention of getting set up more easily by just using a seed to deploy an
overcloud.  I think that's a great idea.  We are all already probably
doing it :). Why not document that in some sort of fashion?

There would be some initial trade offs, around folks not necessarily
understanding the full devtest process. But, you don't necessarily
need to understand all of that to hack on the upgrade story, or
tuskar, or ironic.

These are just some additional thoughts around the process and mail I
sent earlier this week:
http://lists.openstack.org/pipermail/openstack-dev/2014-January/025726.html
But, I thought this warranted a broader discussion.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] devtest thoughts

2014-01-31 Thread James Slagle
On Thu, Jan 30, 2014 at 2:39 PM, Clint Byrum  wrote:
> Excerpts from James Slagle's message of 2014-01-30 07:28:01 -0800:
>> However, due to these changes,  I think that devtest no longer works
>> great as a tripleo developer setup. You haven't been able to complete
>> a setup following our docs for >1 week now. The patches are in review
>> to fix that, and they need to be properly reviewed and I'm not saying
>> they should be rushed. Just that it's another aspect of the problem of
>> trying to use devtest for CI/CD and a dev setup.
>>
>
> I wonder, if we have a gate which runs through devtest entirely, would
> that reduce the instances where we've broken everybody? Seems like it
> would, but the gate isn't going to read the docs, it is going to run the
> script, so maybe it will still break sometimes.

That would certainly help. Though it could be hard to tell whether a
failure is due to the devtest process *itself* (e.g., someone forgot
to document a step) or to a change in one of the upstream OpenStack
projects. Whereas if the process itself is less complex, I think it's
less likely to break.

>
> What if we just focus on breaking devtest less often? Seems like that is
> achievable and then we don't diverge from CI.

I'm sure it's achievable, but I'm not sure it's worth the cost. It's
difficult to anticipate how hard it's going to be in the future to
continue to bend devtest to do all of "the things" really well (CI,
CD, dev manual/scripted setup, doc generation).

That being said, there's also a cost associated with maintaining a
separate dev setup. I hope that whatever we come up with, though,
would keep that cost fairly minimal.

>> In irc earlier this week (sorry if i misquoting the intent here), I
>> saw mention of getting setup easier by just using a seed to deploy an
>> overcloud.  I think that's a great idea.  We are all already probably
>> doing it :). Why not document that in some sort of fashion?
>>
>
> +1. I think a note at the end of devtest_seed which basically says "If
> you are not interested in testing HA baremetal, set these variables like
> so and skip to devtest_overcloud. Great idea actually, as thats what I
> do often when I know I'll be tearing down my setup later.

Agreed, I think this is an easy short-term win. I'll probably look at
getting that update submitted soon.
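
For anyone wanting to try the short path in the meantime, the flow
would look roughly like this (a sketch only; the toggle that tells
devtest_overcloud to target the seed is an assumption until the doc
update lands):

    # run the usual variable setup, then build just the seed
    source tripleo-incubator/scripts/devtest_variables.sh
    devtest_seed.sh
    # hypothetical toggle: point the overcloud scripts at the seed
    # instead of a dedicated undercloud
    export OVERCLOUD_FROM_SEED=1
    devtest_overcloud.sh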

>> There would be some initial trade offs, around folks not necessarily
>> understanding the full devtest process. But, you don't necessarily
>> need to understand all of that to hack on the upgrade story, or
>> tuskar, or ironic.
>>
>
> Agreed totally. The processes are similar enough that when the time
> comes that a user needs to think about working on things which impact
> the undercloud they can back up to seed and then do that.

Thanks for the feedback.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][baremetal] Support multiple image write workers in nova bare-metal

2014-02-07 Thread James Slagle
On Fri, Feb 7, 2014 at 2:59 PM, Robert Collins
 wrote:
> On 7 February 2014 23:02, Taurus Cheung  wrote:
>> Hi,
>>
>>
>>
>> I am working on deploying images to bare-metal machines using nova
>> bare-metal. In existing implementation in nova-baremetal-deploy-helper.py,
>> there's only 1 worker to write image to bare-metal machines. If there is a
>> number of bare-metal instances to deploy, they need to queue up and wait to
>> be served by the single worker. Would the future implementation be improved
>> to support multiple workers?
>
> I think we'd all like to do multiple deploys at once, but there are
> significant thrashing risks in just running concurrent dd's - for
> instance, datacentre networks are faster than single disks (so
> cloud-scale architectures have - paradoxically to most folk :)) more
> network bandwidth than persistent IO bandwidth. In fact this patch
> (https://review.openstack.org/#/c/71219/) reduces a source of
> thrashing (based on testing on our prod hardware) to improve overall
> performance.

I agree here; I saw that exact behavior when I experimented with
adding multiple workers to nova-baremetal-deploy-helper, and there was
almost no overall performance improvement. Starting 2 workers just
resulted in each individual deploy taking roughly twice as long.

>
> Longer term with Ironic I can see multicast/bittorrent/that sort of
> thing being used to achieve efficient concurrency when deploying many
> identical images.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] in-instance update hooks

2014-02-12 Thread James Slagle
On Tue, Feb 11, 2014 at 12:22 AM, Clint Byrum  wrote:
> Hi, so in the previous thread about rolling updates it became clear that
> having in-instance control over updates is a more fundamental idea than
> I had previously believed. During an update, Heat does things to servers
> that may interrupt the server's purpose, and that may cause it to fail
> subsequent things in the graph.
>
> Specifically, in TripleO we have compute nodes that we are managing.
> Before rebooting a machine, we want to have a chance to live-migrate
> workloads if possible, or evacuate in the simpler case, before the node
> is rebooted. Also in the case of a Galera DB where we may even be running
> degraded, we want to ensure that we have quorum before proceeding.
>
> I've filed a blueprint for this functionality:
>
> https://blueprints.launchpad.net/heat/+spec/update-hooks
>
> I've cobbled together a spec here, and I would very much welcome
> edits/comments/etc:
>
> https://etherpad.openstack.org/p/heat-update-hooks

I like this approach.

Could this also work for the no-reboot, incremental update (via rsync
or whatever) idea that's been discussed? It'd be nice to have a model
that works for both the rebuild case and the incremental case.

What if there were an additional action type under action_hooks called
update or incremental (I'm not sure if there is a term for this in
Heat today), in addition to the rebuild and delete action choices that
are already there?

When the instance sees that this action type is pending, it can
perform the update, and then use the wait condition handle to indicate
to Heat that the update is complete vs. using the wait condition
handle to indicate to proceed with the rebuild.

I suppose the instance might need some additional data (such as the
new Image Id) in order to perform the incremental update. Could this
be made available somehow in the metadata structure?
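
To make the idea concrete, an instance-side handler might look
something like the sketch below. Everything here is an assumption
about the eventual interface: the metadata keys (pending_action,
image_id, wait_handle) and the on-disk location of the collected
metadata are illustrative only, not anything Heat defines today.

    #!/bin/bash
    # Hypothetical os-refresh-config hook reacting to a pending
    # "update" action instead of a rebuild.
    set -eu
    md=/var/lib/os-collect-config/heat_local.json   # assumed location
    action=$(jq -r '.action_hooks.pending_action // empty' "$md")
    if [ "$action" = "update" ]; then
        image_id=$(jq -r '.action_hooks.image_id' "$md")
        # ... rsync the new content into place, restart services ...
        # Signal the wait condition so Heat proceeds without a rebuild.
        curl -X PUT -H 'Content-Type: application/json' \
            -d "{\"Status\": \"SUCCESS\", \"UniqueId\": \"$(hostname)\", \"Reason\": \"update done\", \"Data\": \"\"}" \
            "$(jq -r '.action_hooks.wait_handle' "$md")"
    fi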

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-13 Thread James Slagle
g about doing configuration changes, which
I think are well within the scope of stuff TripleO should do. But most
(if not all) package managers allow and support configuration changes
to config files without complaining.

All that being said, assuming we go with "A", I think we could likely
come up with some more elegant solutions to account for some of the
differences between the package and source installs. It's mostly
procedural at the moment, handled by if/elses. I'm sure we could come
up with something a bit more declarative to minimize the pain.

In fact, you're pretty much going to need the same code to exist
whether we go with A or B, it's just whether that code accounts for
the differences that exist, or attempts to reconcile them. Either way,
we have to express the differences.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-15 Thread James Slagle
expect for many installs B
> won't be an option. /mnt/state is 100% technical, as no other options
> exist - none of the Linux distro 'read only root' answers today answer
> the problem /mnt/state solves in a way compatible with Nova.
>
>> In the end I think option A is the way we have to go. Is it more work... 
>> maybe. But in the end users will like us for it. And there is always the 
>> case that by not reimplementing some of the tools and mechanisms which 
>> already exist in distros that this ends up being less work anyways. I do 
>> hope so...
>
> Certainly we're trying very hard to keep things we reimplement minimal
> and easily swap-outable (like o-r-c which I expect some deployments
> will want to replace with chef/puppet).

As I said above, swap-outable defaults are, I think, the real key here.

We work hard to have a great upstream architecture, but we allow for
variation with opt-in and opt-out. That doesn't necessarily mean all
variants have to live in the openstack TripleO git repos. But where
there's a cross-section of folks wanting a particular variant, and
there are people in the community volunteering to do the work and
support it, I think it makes sense to consider those variants on their
individual merits.

/mnt/state is a great example of that. There was nothing consistent
across distros, so TripleO did its own relatively straightforward
implementation that could be used on any distro. But TripleO shouldn't
necessarily *enforce* that it's used. It doesn't have to be an
all-or-nothing type of approach IMO.


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] mid-cycle meetup?

2014-02-18 Thread James Slagle
On Fri, Jan 31, 2014 at 4:42 AM, Robert Collins
 wrote:
> Ok location:
> HP's office in Sunnyvale. Dates:  Monday 3rd March through friday 7th March.
>
> All welcome, please RSVP to me cc cody.somerville at hp.com so that we
> can arrange a suitable sized room @ the venue.
>
> I figure we'll do a everyone-together dinner on tuesday night, but the
> rest of the time dinner will be your own problem :). We'll figure
> something out for lunches everyday - the office is in the middle of
> nothing, so we won't be popping out, but there is a great cafe in the
> building, or we might get catering in the meeting room.
>
> Cheers,
> Rob
>
> On 24 January 2014 08:47, Robert Collins  wrote:
>> Hi, sorry for proposing this at *cough* the mid-way point [christmas
>> shutdown got in the way of internal acks...], but who would come if
>> there was a mid-cycle meetup? I'm thinking the HP sunnyvale office as
>> a venue.


Hi, I started an etherpad to track some logistics:
https://etherpad.openstack.org/p/tripleo-icehouse-sprint

It's a bit sparse at the moment while some details are still being
worked out. If you know you're coming, feel free to add yourself to
the etherpad.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-02-28 Thread James Slagle
On Wed, Jan 22, 2014 at 6:46 PM, Thierry Carrez 
wrote:
> James Slagle wrote:
>> I read through that wiki page. I did have a couple of questions:
>>
>> Who usually runs through the steps there? You? or a project member?
>
> Me for integrated projects (and most incubated ones). A project member
> for everything else.
>
>> When repo_tarball_diff.sh is run, are there any acceptable missing
>> files? I'm seeing an AUTHORS and ChangeLog file showing up in the
>> output from our repos, those are automatically generated, so I assume
>> those are ok. There are also some egg_info files showing up, which I
>> also think can be safely ignored. (I submitted a patch that updates
>> the grep command used in the script:
>> https://review.openstack.org/#/c/68471/ )
>
> Yes, there is a number of "normal" things appearing there, like the
> autogenerated AUTHORS, Changelog, ignored files and egg_info stuff. The
> goal of the script is to spot any unusual thing.
>

Hi Thierry,

I'd like to ask that the following repositories for TripleO be included in
next week's cutting of icehouse-3:

http://git.openstack.org/openstack/tripleo-incubator
http://git.openstack.org/openstack/tripleo-image-elements
http://git.openstack.org/openstack/tripleo-heat-templates
http://git.openstack.org/openstack/diskimage-builder
http://git.openstack.org/openstack/os-collect-config
http://git.openstack.org/openstack/os-refresh-config
http://git.openstack.org/openstack/os-apply-config

Are you willing to run through the steps on the How_To_Release wiki for
these repos, or should I do it next week? Just let me know how or what to
coordinate. Thanks.

-- 
-- James Slagle
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-03-04 Thread James Slagle
On Tue, Mar 4, 2014 at 2:08 AM, Thierry Carrez  wrote:
> Robert Collins wrote:
>> On 3 March 2014 23:12, Thierry Carrez  wrote:
>>> James Slagle wrote:
>>>> I'd like to ask that the following repositories for TripleO be included
>>>> in next week's cutting of icehouse-3:
>>>>
>>>> http://git.openstack.org/openstack/tripleo-incubator
>>>> http://git.openstack.org/openstack/tripleo-image-elements
>>>> http://git.openstack.org/openstack/tripleo-heat-templates
>>>> http://git.openstack.org/openstack/diskimage-builder
>>>> http://git.openstack.org/openstack/os-collect-config
>>>> http://git.openstack.org/openstack/os-refresh-config
>>>> http://git.openstack.org/openstack/os-apply-config
>>>>
>>>> Are you willing to run through the steps on the How_To_Release wiki for
>>>> these repos, or should I do it next week? Just let me know how or what
>>>> to coordinate. Thanks.
>>>
>>> I looked into more details and there are a number of issues as TripleO
>>> projects were not really originally configured to be "released".
>>>
>>> First, some basic jobs are missing, like a tarball job for
>>> tripleo-incubator.
>>
>> Do we need one? tripleo-incubator has no infrastructure to make
>> tarballs. So that has to be created de novo, and its not really
>> structured to be sdistable - its a proving ground. This needs more
>> examination. Slagle could however use a git branch effectively.
>
> I'd say you don't need such a job, but then I'm not the one asking for
> that repository to "be included in next week's cutting of icehouse-3".
>
> James asks if I'd be OK to "run through the steps on the How_To_Release
> wiki", and that wiki page is all about publishing tarballs.
>
> So my answer is, if you want to run the release scripts for
> tripleo-incubator, then you need a tarball job.
>
>>> Then the release scripts are made for integrated projects, which follow
>>> a number of rules that TripleO doesn't follow:
>>>
>>> - One Launchpad project per code repository, under the same name (here
>>> you have tripleo-* under tripleo + diskimage-builder separately)
>>
>> Huh? diskimage-builder is a separate project, with a separate repo. No
>> conflation. Same for os-*-config, though I haven't made a LP project
>> for os-cloud-config yet (but its not a dependency yet either).
>
> Just saying that IF you want to use the release scripts (and it looks
> like you actually don't want that), you'll need a 1:1 LP <-> repo match.
> Currently in LP you have "tripleo" (covering tripleo-* repos),
> "diskimage-builder", and the os-* projects (which I somehow missed). To
> reuse the release scripts you'd have to split tripleo in LP into
> multiple projects.
>
>>> Finally the person doing the release needs to have "push annotated tags"
>>> / "create reference" permissions over refs/tags/* in Gerrit. This seems
>>> to be missing for a number of projects.
>>
>> We have this for all the projects we release; probably not incubator
>> because *we don't release it*- and we had no intent of doing releases
>> for tripleo-incubator - just having a stable branch so that there is a
>> thing RH can build rpms from is the key goal.
>
> I agree with you. I only talked about it because James mentioned it in
> his "to be released" list.
>
>>> In all cases I'd rather limit myself to incubated/integrated projects,
>>> rather than extend to other projects, especially on a busy week like
>>> feature freeze week. So I'd advise that for icehouse-3 you follow the
>>> following simplified procedure:
>>>
>>> - Add missing tarball-creation jobs
>>> - Add missing permissions for yourself in Gerrit
>>> - Skip milestone-proposed branch creation
>>> - Push tag on master when ready (this will result in tarballs getting
>>> built at tarballs.openstack.org)
>>>
>>> Optionally:
>>> - Create icehouse series / icehouse-3 milestone for projects in LP
>>> - Manually create release and upload resulting tarballs to Launchpad
>>> milestone page, under the projects that make the most sense (tripleo-*
>>> under tripleo, etc)
>>>
>>> I'm still a bit confused with the goals here. My original understanding
>>> was that TripleO was explicitly NOT following the release cycle. How
>>> much of the integrat

Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-11-28 Thread James Slagle
On Thu, Nov 27, 2014 at 1:29 PM, Sullivan, Jon Paul
 wrote:
>> -Original Message-
>> From: Ben Nemec [mailto:openst...@nemebean.com]
>> Sent: 26 November 2014 17:03
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [diskimage-builder] Tracing levels for
>> scripts (119023)
>>
>> On 11/25/2014 10:58 PM, Ian Wienand wrote:
>> > Hi,
>> >
>> > My change [1] to enable a consistent tracing mechanism for the many
>> > scripts diskimage-builder runs during its build seems to have hit a
>> > stalemate.
>> >
>> > I hope we can agree that the current situation is not good.  When
>> > trying to develop with diskimage-builder, I find myself constantly
>> > going and fiddling with "set -x" in various scripts, requiring me
>> > re-running things needlessly as I try and trace what's happening.
>> > Conversely some scripts set -x all the time and give output when you
>> > don't want it.
>> >
>> > Now nodepool is using d-i-b more, it would be even nicer to have
>> > consistency in the tracing so relevant info is captured in the image
>> > build logs.
>> >
>> > The crux of the issue seems to be some disagreement between reviewers
>> > over having a single "trace everything" flag or a more fine-grained
>> > approach, as currently implemented after it was asked for in reviews.
>> >
>> > I must be honest, I feel a bit silly calling out essentially a
>> > four-line patch here.
>>
>> My objections are documented in the review, but basically boil down to
>> the fact that it's not a four line patch, it's a 500+ line patch that
>> does essentially the same thing as:
>>
>> set +e
>> set -x
>> export SHELLOPTS
>
> I don't think this is true, as there are many more things in SHELLOPTS than 
> just xtrace.  I think it is wrong to call the two approaches equivalent.
>
>>
>> in disk-image-create.  You do lose set -e in disk-image-create itself on
>> debug runs because that's not something we can safely propagate,
>> although we could work around that by unsetting it before calling hooks.
>>  FWIW I've used this method locally and it worked fine.
>
> So this does say that your alternative implementation has a difference from 
> the proposed one.  And that the difference has a negative impact.
>
>>
>> The only drawback is it doesn't allow the granularity of an if block in
>> every script, but I don't personally see that as a particularly useful
>> feature anyway.  I would like to hear from someone who requested that
>> functionality as to what their use case is and how they would define the
>> different debug levels before we merge an intrusive patch that would
>> need to be added to every single new script in dib or tripleo going
>> forward.
>
> So currently we have boilerplate to be added to all new elements, and that 
> boilerplate is:
>
> set -eux
> set -o pipefail
>
> This patch would change that boilerplate to:
>
> if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
>     set -x
> fi
> set -eu
> set -o pipefail
>
> So it's adding 3 lines.  It doesn't seem onerous, especially as most people 
> creating a new element will either copy an existing one or copy/paste the 
> header anyway.
>
> I think that giving control over what is effectively debug or non-debug 
> output is a desirable feature.

I don't think it's debug vs. non-debug. Script writers who have
explicitly used set -x have operated under the assumption that they
don't need to add any other useful logging, since everything is traced
anyway. In that case, this patch is actually harmful.

>
> We have a patch that implements that desirable feature.
>
> I don't see a compelling technical reason to reject that patch.

I'm not specifically -2 on this patch based on the implementation.
It's more that I don't think this patch addresses the problem in a
meaningful way. The problem seems to be that dib either logs too much
or not enough information.
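
For context, under the proposal tracing becomes opt-in per run, along
the lines of:

    DIB_DEBUG_TRACE=1 disk-image-create -o test-image vm fedora

(the variable name comes from the review; the element list is just an
example).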

Also, it's a change to the current behavior that could be unexpected.
diskimage-builder has rather poor logging as-is. We don't use echoes
enough to actually say what's going on; most script writers have just
relied on set -x to log everything, so they never added any useful
echo or log output. This patch turns off all tracing unless it's
specifically requested via $DIB_DEBUG_TRACE. Also, not all

Re: [openstack-dev] [TripleO] nominating James Polley for tripleo-core

2015-01-15 Thread James Slagle
On Thu, Jan 15, 2015 at 1:39 PM, Clint Byrum  wrote:
> In about 24 hours we've seen 9 core +1's, one non-core +1, and only one
> dissenting opinion from James himself which I think we have properly
> dismissed. With my nomination counting as an additional +1, that is 10,
> which is 50% of the 20 cores active the last 90 days.
>
> I believe this vote has carried. Please welcome James Polley to the
> TripleO core reviewer team. :)

I'm a little late to the party, but it was a +1 from me as well. James
has been doing really valuable reviews for a while now.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Summit] Topic review for Atlanta

2014-04-28 Thread James Slagle
On Mon, Apr 28, 2014 at 6:04 PM, Ben Nemec  wrote:
> On 04/28/2014 01:04 AM, Robert Collins wrote:
>>
>> I've collated the votes and put a proposed selection of talks (some
>> sessions merged) up; I'm going to push a draft timetable as soon as I
>> finish clicking on the clicky thing .:).
>>
>> If your session has been selected you now need to:
>>   - ensure there is an etherpad for it
>>   - link it into the global list of etherpads
>
>
> Okay, etherpad newbie here.  Where is the global list of etherpads?  Or do

Create an etherpad and link to it from here:
https://wiki.openstack.org/wiki/Summit/Juno/Etherpads


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] TripleO and Docker session etherpad

2014-04-29 Thread James Slagle
Hi, I thought I'd send out the link to the etherpad I've put together
for the summit session I proposed on TripleO and Docker. If there's
any initial discussion or something you'd like to see added, please
reply here or add directly to the etherpad.

https://etherpad.openstack.org/p/juno-summit-tripleo-and-docker

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Spec Template Change Proposal

2014-05-21 Thread James Slagle
On Wed, May 21, 2014 at 4:37 PM, Jay Dobies  wrote:
> Currently, there is the following in the template:
>
>
>
> Proposed change
> ===
>
> [snip]
>
> Alternatives
> 
>
> [snip]
>
> Security impact
> ---
>
>
>
> The unit tests assert the top and second level sections are standard, so if
> I add a section at the same level as Alternatives under Proposed Change, the
> tests will fail. If I add a third level section using ^, they pass.
>
> The problem is that you can't add a ^ section under Proposed Change. Sphinx
> complains about a title level inconsistency since I'm skipping the second
> level and jumping to the third. But I can't add a second-level section
> directly under Proposed Change because it will break the unit tests that
> validate the structure.
>
> The proposed change is going to be one of the beefier sections of a spec, so
> not being able to subdivide it is going to make the documentation messy and
> removes the ability to link directly to a portion of a proposed change.
>
> I propose we add a section at the top of Proposed Change called Overview
> that will hold the change itself. That will allow us to use third level
> sections in the change itself while still having the first and section
> section structure validated by the tests.
>
> I have no problem making the change to the templates, unit tests, and any
> existing specs (I don't think we have any yet), but before I go through
> that, I wanted to make sure there wasn't a major disagreement.
>

I'm a bit ambivalent, to be honest, but adding a section for Overview
doesn't really do much IMO. Just give an overview in the first couple
of sentences under "Proposed Change". If I go back and add an Overview
section to my spec in review, I'm just going to slap everything in
Proposed Change into one Overview section :). To me, Work Items is
where more of the detail goes (and it does support arbitrary
subsections with ^^^).

In general, though, I think the unit tests are too rigid and
pedantic. Plus, having to go back and update old specs whenever we
change the unit tests seems strange (no biggie right now, but we do
have a couple of specs in review), unless we write the unit tests to
be backwards compatible. This feels a bit like engineering for the
sake of it. Maybe we need a spec on it :).

I was a bit surprised to see that we don't have the Data Model section
in our specs; when I included one, the unit tests failed. We actually
do have data model stuff in Tuskar, and JSON structures in TripleO.

Anyway, just my $0.02.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleO] Should #tuskar business be conducted in the #tripleo channel?

2014-05-29 Thread James Slagle
On Thu, May 29, 2014 at 12:25 PM, Anita Kuno  wrote:
> As I was reviewing this patch today:
> https://review.openstack.org/#/c/96160/
>
> It occurred to me that the tuskar project is part of the tripleo
> program:
> http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n247
>
> I wondered if business, including bots posting to irc for #tuskar is
> best conducted in the #tripleo channel. I spoke with Chris Jones in
> #tripleo and he said the topic hadn't come up before. He asked me if I
> wanted to kick off the email thread, so here we are.
>
> Should #tuskar business be conducted in the #tripleo channel?

I'd say yes. I don't think the additional traffic would be a large
distraction at all to normal TripleO business.

I can, however, see how it might be nice to have #tuskar for talking
about tuskar-api and tuskar-ui in the same channel. Do folks usually
do that? Or is tuskar-ui conversation already happening in
#openstack-horizon?

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread James Slagle
On Mon, Jun 16, 2014 at 12:19 PM, Tomas Sedovic  wrote:
> All,
>
> After having proposed some changes[1][2] to tripleo-heat-templates[3],
> reviewers suggested adding a deprecation period for the merge.py script.
>
> While TripleO is an official OpenStack program, none of the projects
> under its umbrella (including tripleo-heat-templates) have gone through
> incubation and integration nor have they been shipped with Icehouse.
>
> So there is no implicit compatibility guarantee and I have not found
> anything about maintaining backwards compatibility neither on the
> TripleO wiki page[4], tripleo-heat-template's readme[5] or
> tripleo-incubator's readme[6].
>
> The Release Management wiki page[7] suggests that we follow Semantic
> Versioning[8], under which prior to 1.0.0 (t-h-t is ) anything goes.
> According to that wiki, we are using a stronger guarantee where we do
> promise to bump the minor version on incompatible changes -- but this
> again suggests that we do not promise to maintain backwards
> compatibility -- just that we document whenever we break it.
>
> According to Robert, there are now downstreams that have shipped things
> (with the implication that they don't expect things to change without a
> deprecation period) so there's clearly a disconnect here.
>
> If we do promise backwards compatibility, we should document it
> somewhere and if we don't we should probably make that more visible,
> too, so people know what to expect.
>
> I prefer the latter, because it will make the merge.py cleanup easier
> and every published bit of information I could find suggests that's our
> current stance anyway.
>
> Tomas
>
> [1]: https://review.openstack.org/#/c/99384/
> [2]: https://review.openstack.org/#/c/97939/
> [3]: https://github.com/openstack/tripleo-heat-templates
> [4]: https://wiki.openstack.org/wiki/TripleO
> [5]:
> https://github.com/openstack/tripleo-heat-templates/blob/master/README.md
> [6]: https://github.com/openstack/tripleo-incubator/blob/master/README.rst
> [7]: https://wiki.openstack.org/wiki/TripleO/ReleaseManagement
> [8]: http://semver.org/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi Tomas,

By and large, I think you are correct in your conclusions about the
current state of backwards compatibility in TripleO.

Much of this is the reason why I pushed for the stable branches that
we cut for icehouse. I'm not sure which "downstreams that have shipped
things" are being referred to, but perhaps those needs could be served
by the stable/icehouse branches that exist today? I know that at least
for the RDO downstream, the packages are being built off of releases
done from the stable branches. So, honestly, from that point of view
I'm not that concerned about your proposed changes to rip stuff out
without any deprecation :).

That being said, even though TripleO has taken the stance that
backwards compatibility is not guaranteed, I agree with some of the
other sentiments in this thread: we should at least try where there
are easy things we can do.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-10 Thread James Slagle
On Mon, Mar 10, 2014 at 6:10 AM, Jiří Stránský  wrote:
> On 7.3.2014 14:50, Imre Farkas wrote:
>>
>> On 03/07/2014 10:30 AM, Jiří Stránský wrote:
>>>
>>> Hi,
>>>
>>> there's one step in cloud initialization that is performed over SSH --
>>> calling "keystone-manage pki_setup". Here's the relevant code in
>>> keystone-init [1], here's a review for moving the functionality to
>>> os-cloud-config [2].
>>>
>>> The consequence of this is that Tuskar will need passwordless ssh key to
>>> access overcloud controller. I consider this suboptimal for two reasons:
>>>
>>> * It creates another security concern.
>>>
>>> * AFAIK nova is only capable of injecting one public SSH key into
>>> authorized_keys on the deployed machine, which means we can either give
>>> it Tuskar's public key and allow Tuskar to initialize overcloud, or we
>>> can give it admin's custom public key and allow admin to ssh into
>>> overcloud, but not both. (Please correct me if i'm mistaken.) We could
>>> probably work around this issue by having Tuskar do the user key
>>> injection as part of os-cloud-config, but it's a bit clumsy.
>>>
>>>
>>> This goes outside the scope of my current knowledge, i'm hoping someone
>>> knows the answer: Could pki_setup be run by combining powers of Heat and
>>> os-config-refresh? (I presume there's some reason why we're not doing
>>> this already.) I think it would help us a good bit if we could avoid
>>> having to SSH from Tuskar to overcloud.
>>
>>
>> Yeah, it came up a couple times on the list. The current solution is
>> because if you have an HA setup, the nodes can't decide on its own,
>> which one should run pki_setup.
>> Robert described this topic and why it needs to be initialized
>> externally during a weekly meeting in last December. Check the topic
>> 'After heat stack-create init operations (lsmola)':
>>
>> http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html
>
>
> Thanks for the reply Imre. Yeah i vaguely remember that meeting :)
>
> I guess to do HA init we'd need to pick one of the controllers and run the
> init just there (set some parameter that would then be recognized by
> os-refresh-config). I couldn't find if Heat can do something like this on
> its own; probably we'd need to deploy one of the controller nodes with a
> different parameter set, which feels a bit weird.
>
> Hmm so unless someone comes up with something groundbreaking, we'll probably
> keep doing what we're doing.

Agreed, I think what you've done here is fine.

As you keep churning through init-keystone, keep in mind that there
are some recent changes in review[1] that switch that script over to
use openstackclient instead of keystoneclient. That was needed because
the keystone v3 API is required to create a domain for the heat stack
users. A backwards-compatibility fallback was added to Heat to allow
the existing behavior to still work, but I don't see a reason for you
to reimplement the old way of doing things in os-cloud-config. There
is a helper script[2] in Heat that shows how the domain should be
created.

[1] https://review.openstack.org/#/c/78020/
[2] http://git.openstack.org/cgit/openstack/heat/tree/tools/create_heat_domain
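
For reference, the v3 setup that helper performs boils down to
something like the following sketch, using the openstack CLI (the
domain and user names and the password variable are placeholders; the
script above remains the authoritative version):

    export OS_IDENTITY_API_VERSION=3
    # dedicated domain to hold the users Heat creates for stacks
    openstack domain create heat \
        --description "Owns users created by Heat stacks"
    openstack user create --domain heat \
        --password "$DOMAIN_ADMIN_PASSWORD" heat_domain_admin
    openstack role add --domain heat --user heat_domain_admin admin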

> Having the ability to inject multiple keys to
> instances [1] would help us get rid of the Tuskar vs. admin key issue i
> mentioned in the initial e-mail. We might try asking a fellow Nova developer
> to help us out here.
>
>
> Jirka
>
> [1] https://bugs.launchpad.net/nova/+bug/917850
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-13 Thread James Slagle
On Thu, Mar 13, 2014 at 2:51 AM, Robert Collins
 wrote:
> So we already have pretty high requirements - its basically a 16G
> workstation as minimum.
>
> Specifically to test the full story:
>  - a seed VM
>  - an undercloud VM (bm deploy infra)
>  - 1 overcloud control VM
>  - 2 overcloud hypervisor VMs
> 
>5 VMs with 2+G RAM each.
>
> To test the overcloud alone against the seed we save 1 VM, to skip the
> overcloud we save 3.
>
> However, as HA matures we're about to add 4 more VMs: we need a HA
> control plane for both the under and overclouds:
>  - a seed VM
>  - 3 undercloud VMs (HA bm deploy infra)
>  - 3 overcloud control VMs (HA)
>  - 2 overcloud hypervisor VMs
> 
>9 VMs with 2+G RAM each == 18GB
>
> What should we do about this?
>
> A few thoughts to kick start discussion:
>  - use Ironic to test across multiple machines (involves tunnelling
> brbm across machines, fairly easy)
>  - shrink the VM sizes (causes thrashing)
>  - tell folk to toughen up and get bigger machines (ahahahahaha, no)
>  - make the default configuration inline the hypervisors on the
> overcloud with the control plane:
>- a seed VM
>- 3 undercloud VMs (HA bm deploy infra)
>- 3 overcloud all-in-one VMs (HA)
>   
>  7 VMs with 2+G RAM each == 14GB
>
>
> I think its important that we exercise features like HA and live
> migration regularly by developers, so I'm quite keen to have a fairly
> solid systematic answer that will let us catch things like bad
> firewall rules on the control node preventing network tunnelling
> etc... e.g. we benefit the more things are split out like scale
> deployments are. OTOH testing the micro-cloud that folk may start with
> is also a really good idea


The idea I was thinking of was to make a testenv host available to
TripleO ATCs. Or perhaps make it a bit more locked down and only
available to a new group of TripleO folk, sitting somewhere between
the privileges of TripleO ATCs and tripleo-cd-admins. We could
document how you use the cloud (Red Hat's or HP's) rack to start up an
instance to run devtest on one of the compute hosts, request and lock
yourself a testenv environment on one of the testenv hosts, etc.
Basically, how our CI works. Although I think we'd want different
testenv hosts for development vs. what runs the CI, and we'd need to
make sure everything was locked down appropriately security-wise.

Some other ideas:

- Allow an option to get rid of the seed VM, or make it so that you
can shut it down after the undercloud is up. This only really gets rid
of 1 VM though, so it doesn't buy you much, nor does it solve any
long-term problem.

- Make it easier to see how you'd use virsh against any libvirt host
you might have lying around. We already have the setting exposed, but
we could make it more public and call it out in the docs. I've
actually never tried it myself, but I've been meaning to.
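
Something as small as this in the docs would probably do it
(LIBVIRT_DEFAULT_URI is standard libvirt; whether every devtest script
honors it is exactly what I'd want to verify first):

    # point virsh, and anything using libvirt's default connection,
    # at a remote host over ssh
    export LIBVIRT_DEFAULT_URI=qemu+ssh://you@big-kvm-box/system
    virsh list --all   # sanity-check the connection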

- I'm really reaching now, and this may be entirely unrealistic :),
but somehow use the fake baremetal driver and expose a mechanism to
let the developer specify an already-set-up undercloud/overcloud
environment ahead of time.
For example:
* Build your undercloud images with the vm element since you won't be
PXE booting it
* Upload your images to a public cloud, and boot instances for them.
* Use this new mechanism when you run devtest (presumably running from
another instance in the same cloud) to say "I'm using the fake
baremetal driver, and here are the IPs of the undercloud instances".
* Repeat steps for the overcloud (e.g., configure undercloud to use
fake baremetal driver, etc).
* Maybe it's not the fake baremetal driver, and instead a new driver
that is a noop for the pxe stuff, and the power_on implementation
powers on the cloud instances.
* Obviously if your aim is to test the pxe and disk deploy process
itself, this wouldn't work for you.
* Presumably said public cloud is OpenStack, so we've also achieved
another layer of "On OpenStack".


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Alternating meeting time for more TZ friendliness

2014-03-18 Thread James Slagle
Our current meeting time is Tuesdays at 19:00 UTC. I think this works
OK for most folks in and around North America.

It was proposed during today's meeting to see if there is interest in
an alternating meeting time every other week, so that we can be a bit
more friendly to those folks who currently can't attend.
If that interests you, speak up :).

For reference, the current meeting schedules are at:
https://wiki.openstack.org/wiki/Meetings

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] PTL candidacy

2014-04-03 Thread James Slagle
evstack, and also
implementations now in TripleO. I'd like to see the elements become more
universal, to where they could be used outside of an image-building context and
perhaps even in devstack. tripleo-image-elements then just becomes additional
image-specific logic for TripleO that can be layered on top of the install
elements.

My commits: https://review.openstack.org/#/q/owner:slagle,n,z

My reviews: https://review.openstack.org/#/q/reviewer:slagle,n,z

http://www.russellbryant.net/openstack-stats/tripleo-reviewers-180.txt

Thank you for your consideration!

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread James Slagle
On Thu, Apr 3, 2014 at 7:02 AM, Robert Collins
 wrote:
> Getting back in the swing of things...
>
> Hi,
> like most OpenStack projects we need to keep the core team up to
> date: folk who are not regularly reviewing will lose context over
> time, and new folk who have been reviewing regularly should be trusted
> with -core responsibilities.
>
> In this months review:
>  - Dan Prince for -core
>  - Jordan O'Mara for removal from -core
>  - Jiri Tomasek for removal from -core
>  - Jamomir Coufal for removal from -core

+1 to all.


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] stable/icehouse branches cut

2014-04-04 Thread James Slagle
The stable/icehouse branches for:

tripleo-image-elements
tripleo-heat-templates
tuskar

Have been created from the latest tags, which I just tagged and
released yesterday.

The stable/icehouse branch for tripleo-incubator was cut from the
latest sha as of this afternoon (since we don't tag and release this
repo).

For now, we need some ACL overrides in gerrit to allow folks to review
appropriately. I've submitted that change to openstack-infra/config:
https://review.openstack.org/85485

For questions about committing/proposing/backporting changes to the
stable branches, this link[1] has a lot of good info. It talks about
milestone-proposed branches, but the process would be the same for our
stable/icehouse branches.

[1] 
https://wiki.openstack.org/wiki/GerritJenkinsGithub#Authoring_Changes_for_milestone-proposed
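
The short version of proposing a backport, for the impatient (the
wiki above is the authoritative reference):

    git fetch origin
    git checkout -b my-fix-backport origin/stable/icehouse
    # -x records the original master commit id in the backport message
    git cherry-pick -x <sha-of-fix-on-master>
    git review stable/icehouse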

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread James Slagle
ould go about adding additional parameters to the templates that they
wanted exposed. Perhaps even give them a way to "merge" their custom
template snippets that add Parameters into those from
tripleo-heat-templates.

And, in fact, I think in most cases it *wouldn't* be a case of users
wanting to expose stuff so that they can have multiple values for it
in their environment. If that were true, we would likely have
identified those parameters already and exposed them, because they
would be useful to us as well.

Instead, they likely need something exposed just to set it
differently, where that different value is the *same* across their
environment. In that case, they could just use a custom element that
provides some override JSON/YAML for os-collect-config during their
image builds.
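
As a sketch of what such an element might do at image-build time: drop
a static JSON file where the local collector will pick it up. The path
and merge behavior here are assumptions about how you'd wire up
os-collect-config, not a documented interface:

    # install.d script in a hypothetical "site-overrides" element
    mkdir -p /var/lib/os-collect-config/local-data
    printf '%s\n' '{"nova": {"some_unexposed_option": "site-value"}}' \
        > /var/lib/os-collect-config/local-data/site.json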

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread James Slagle
On Mon, Apr 7, 2014 at 7:50 PM, Robert Collins
 wrote:
> tl;dr: 3 more core members to propose:
> bnemec
> greghaynes
> jdob

+1 to all. I've valued the feedback from these individuals as both
fellow reviewers and on my submitted patches.



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][design] review based conceptual design process

2014-04-15 Thread James Slagle
On Tue, Apr 15, 2014 at 2:44 PM, Robert Collins
 wrote:
> I've been watching the nova process, and I think its working out well
> - it certainly addresses:
>  - making design work visible
>  - being able to tell who has had input
>  - and providing clear feedback to the designers
>
> I'd like to do the same thing for TripleO this cycle..
>
> I'm thinking we can just add docs to incubator, since thats already a
> repository separate to our production code - what do folk think?

+1 from me.

Think I'd prefer a separate repo for tripleo-specs though.

One thing that I don't think I saw called out specifically in the
nova-specs thread was about keeping these spec and design documents
updated. I'm guessing the plan around that would just be to submit
updates in gerrit as patches, and then we can all review the updates
as well. I think it's important that we try to keep them up to date
and accurate as possible.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread James Slagle
On Wed, Sep 24, 2014 at 9:16 AM, Jay Pipes  wrote:
> On 09/24/2014 03:19 AM, Robert Collins wrote:
>>
>> On 24 September 2014 16:38, Jay Pipes  wrote:
>>>
>>> On 09/23/2014 10:29 PM, Steven Dake wrote:
>>>>
>>>>
>>>> There is a deployment program - tripleo is just one implementation.
>>>
>>>
>>> Nope, that is not correct. Like it or not (I personally don't), Triple-O
>>> is
>>> *the* Deployment Program for OpenStack:
>>>
>>>
>>> http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n284
>>>
>>> Saying Triple-O is just one implementation of a deployment program is
>>> like
>>> saying Heat is just one implementation of an orchestration program. It
>>> isn't. It's *the* implemenation of an orchestration program that has been
>>> blessed by the TC:
>>>
>>>
>>> http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n112
>>
>>
>> Thats not what Steve said. He said that the codebase they are creating
>> is a *project* with a target home of the OpenStack Deployment
>> *program*, aka TripleO. The TC blesses social structure and code
>> separately: no part of TripleO has had its code blessed by the TC yet
>> (incubation/graduation), but the team was blessed.
>
>
> There are zero programs in the OpenStack governance repository that have
> competing implementations for the same thing.
>
> Like it or not, the TC process of blessing these "teams" has effectively
> blessed a single implementation of something.

And it looks to me like what's being proposed here is that there is a
group of folks who intend to work on Knoll, and they are indicating
that they plan to participate and would like to be a part of that
"team". Personally, as a TripleO "team" member, I welcome that
approach and their willingness to participate and share experience
with the Deployment program.

Meaning: exactly what you seem to claim is not possible due to some
perceived blessing, is indeed in fact happening, or trying to come
about.

It would be great if Heat was already perfect and great at doing
container orchestration *really* well. I'm not saying Kubernetes is
either, but I'm not going to dismiss it just b/c it might "compete"
with Heat. I see lots of other integration points with OpenStack
services  (using heat/nova/ironic to deploy kubernetes host, still
using ironic to deploy baremetal storage nodes due to the iscsi issue,
etc).


>
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread James Slagle
On Wed, Sep 24, 2014 at 9:41 AM, James Slagle  wrote:

> And it looks to me like what's being proposed here is that there is a
> group of folks who intend to work on Knoll, and they are indicating

Oops, I meant Kolla, obviously :-).




-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-24 Thread James Slagle
On Wed, Sep 24, 2014 at 10:03 AM, Jay Pipes  wrote:
> On 09/24/2014 09:41 AM, James Slagle wrote:
>> Meaning: exactly what you seem to claim is not possible due to some
>> perceived blessing, is indeed in fact happening, or trying to come
>> about.
>
>
> :) Talking about something on the ML is not the same thing as having that
> thing happen in real life.

Hence the "trying to come about". And the only thing proposed for real
life right now is a project under stackforge whose long term goal is
to merge into the Deployment program. I don't get the opposition to a
long term goal.

> Kolla folks can and should discuss their end goal
> of being in the openstack/ code namespace and offering an alternate
> implementation for deploying OpenStack. That doesn't mean that the Technical
> Committee will allow this, though.

Certainly true. Perhaps the mission statement for the Deployment
program needs some tweaking. Perhaps it will be covered by whatever
plays out within the larger OpenStack changes that are being discussed
about the future of programs/projects/etc.

Personally, I think there is some room for interpretation in the
existing mission statement around the "wherever possible" phrase.
Where it's not possible, OpenStack does not have to be used. So again,
we probably need to update it for clarity. I think the Deployment
program should work with the TC to help define what it wants to be.

> Which is what I'm saying... the real
> world right now does not match this perception that a group can just state
> where they want to end up in the openstack/ code namespace and by just
> "being up front about it", that magically happens.

I'm not sure who you are arguing against that has that perception :).

I've reread the thread, and I see desires being voiced to join an
existing program, and some initial support offered in favor of that,
minus your responses ;-). Obviously patches would have to be proposed
to the governance repo to add projects under the program, and those
would have to be approved by people with +2 in governance, etc. No one
claims it will be magically done.

>> It would be great if Heat was already perfect and great at doing
>> container orchestration *really* well. I'm not saying Kubernetes is
>> either, but I'm not going to dismiss it just b/c it might "compete"
>> with Heat. I see lots of other integration points with OpenStack
>> services  (using heat/nova/ironic to deploy kubernetes host, still
>> using ironic to deploy baremetal storage nodes due to the iscsi issue,
>> etc).
>
>
> Again, I'm not dismissing Kolla whatsoever. I think it's a great initiative.
> I'd point out that Fuel has been doing deployment with Docker containers for
> a while now, also out in the open, but on stackforge. Would the deployment
> program welcome Fuel into the openstack/ code namespace as well? Something
> to think about.

Based on what you're saying about the Deployment program, you seem to
indicate the TC would say No.

I don't speak for the program. In the past, I've personally expressed
support for alternative implementations where they make sense for
OpenStack as a whole, and I still feel that way.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] PTL Candidacy

2014-09-24 Thread James Slagle
I'd like to announce my candidacy for TripleO PTL.

I think most folks who have worked in the TripleO community probably know me.
For those who don't, I work for Red Hat, and over the last year and a half that
I've been involved with TripleO I've worked in different areas. My focus has
been on improvements to the frameworks to support things such as other distros,
packages, and offering deployment choices. I've also worked on stabilization
and documentation.

I stand by what I said in my last candidacy announcement[1], so I'm not going
to repeat all of that here :-).

One of the reasons I've been so active in reviewing changes to the project is
because I want to help influence the direction and move progress forward for
TripleO. The spec process was new for TripleO during the Juno cycle, and I also
helped define that. I think that process is working well and will continue to
evolve during Kilo as we find what works best.

The TripleO team has made a lot of great progress towards full HA deployments,
CI improvements, rearchitecting Tuskar as a deployment planning service, and
driving features in Heat to support our use cases. I support this work
continuing in Kilo.

I continue to believe in TripleO's mission to use OpenStack itself.  I think
the feedback provided by TripleO to other projects is very valuable. Given the
complexity to deploy OpenStack, TripleO has set a high bar for other
integrated projects to meet to achieve this goal. The resulting new features
and bug fixes that have surfaced as a result has been great for all of
OpenStack.

Given that TripleO is the Deployment program though, I also support alternative
implementations where they make sense. Those implementations may be in
TripleO's existing projects themselves, new projects entirely, or pulling in
existing projects under the Deployment program where a desire exists. Not every
operator is going to deploy OpenStack the same way, and some organizations
already have entrenched and accepted tooling.

To that end, I would also encourage integration with other deployment tools.
Puppet is one such example and already has wide support in the broader
OpenStack community. I'd also like to see TripleO support different update
mechanisms potentially with Heat's SoftwareConfig feature, which didn't yet
exist when TripleO first defined an update strategy.

The tripleo-image-elements repository is a heavily used part of our process,
and I've seen some recurring themes that I'd like to see addressed. Element
idempotence comes up often, as does the ability to edit already-built images.
I'd also like to see our elements made more generally applicable to installing
OpenStack, vs. only installing OpenStack in an image-building context.
Personally, I support these features, but mostly I'd like to drive to a
consensus on those points during Kilo.

I'd love to see more people developing and using TripleO where they can, and
providing feedback. To enable that, I'd like easier developer setups to be a
focus during Kilo, so that it's simpler for people to contribute without such
a large initial learning-curve investment. Downloadable prebuilt images could
be one way to make that process easier.

There have been a handful of mailing list threads recently about the
organization of OpenStack and how TripleO/Deployment may fit into that going
forward. One thing is clear: the team has made a ton of great progress since
its inception. I think we should continue on the mission of OpenStack owning
its own production deployment story, regardless of how programs may be
organized in the future, or what different paths that story may take.

Thanks for your consideration!

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-April/031772.html


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-26 Thread James Slagle
On Fri, Sep 26, 2014 at 4:50 PM, Jay Pipes  wrote:
> Heh, I just got off the phone with Monty talking about this :) Comments
> inline...
>
> On 09/22/2014 03:11 PM, Tim Bell wrote:
>>
>> The quality designation is really important for the operator
>> community who are trying to work out what we can give to our end
>> users.
>
>
> So, I think it's important to point out here that there are three different
> kinds of operators/deployers:
>
>  * Ones who use a distribution of OpenStack (RDO, UCA, MOS, Nebula, Piston,
> etc)
>  * Ones who use Triple-O
>  * Ones who go it alone and install (via source, a mixture of source and
> packages, via config management like Chef or Puppet, etc)

I'm not sure TripleO fits in this list. It is not just a collection of
prescriptive OpenStack bits used to do a deployment. TripleO is
tooling to build OpenStack to deploy OpenStack. You can use whatever
"source" (packages, distribution, released tarballs, straight from
git) you want to build that OpenStack. TripleO could deploy your first
or third bullet item.

>
> In reality, you are referring to the last group, since operators in the
> first group are saying "we are relying on a distribution to make informed
> choices about what is ready for prime time because we tested these things
> together". Operators in the second group are really only HP right now,
> AFAICT, and Triple-O's "opinion" on the production readiness of the things
> it deploys in the undercloud are roughly equal to "all of the integrated
> release that the TC defines".

FWIW, TripleO supports deploying from distributions, for example by
installing packages from the RDO repositories. There's nothing
RDO-specific about it though; any packaged OpenStack distribution could
be installed with the TripleO tooling. RDO is just likely the most well
tested.

Even when not installing via a distribution, and instead installing
directly from trunk or the integrated release tarballs, I don't know
that any TripleO opinion enters into it. TripleO uses the integrated
projects of OpenStack to deploy an overcloud. In an overcloud, you may
see support for some incubated projects, depending on whether there's
interest from the community for that support.


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Triple-O] Openstack Onboarding

2014-10-22 Thread James Slagle
On Tue, Oct 21, 2014 at 1:08 PM, Clint Byrum  wrote:
> So Tuskar would be a part of that deployment cloud, and would ask you
> things about your hardware, your desired configuration, and help you
> get the inventory loaded.
>
> So, ideally our gate would leave the images we test as part of the
> artifacts for build, and we could just distribute those as part of each
> release. That probably wouldn't be too hard to do, but those images
> aren't exactly small so I would want to have some kind of strategy for
> distributing them and limiting the unique images users are exposed to so
> we're not encouraging people to run CD by downloading each commit's image.

I think the downloadable images would be great. We've done such a
thing for RDO.  And, correct me if I'm wrong, but I think the Helion
community distro does so as well? If that's the use case that works
well downstream, it'd be nice to have a similar model upstream as
well.

For a bootstrap process just to try things out or get setup for
development, we could skip one layer and go straight from the seed to
the overcloud. In such a scenario, it may make sense to refer to the
seed as the undercloud since it's also your deployment cloud, and so
tuskar would likely be installed there.

We could also explore using the same image for the seed and
undercloud, and that would give folks one less image to
build/download.

Agreed that this will be good to discuss at the contributor meetup at
the summit. I think the first 2 bullet points on the etherpad are
pretty related actually (Adam's OpenStack Setup and James Polley's
Pathways into TripleO).

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-06-27 Thread James Slagle
Things are a bit confusing right now, especially with what's been
proposed.  Let me try and clarify (even if just for my own sake).

Currently the choices offered are:

1. mysql percona with the percona tarball
2. mariadb galera with mariadb.org packages
3. mariadb galera with rdo packages

And, we're proposing to add

4. mysql percona with percona packages: https://review.openstack.org/#/c/90134
5. mariadb galera with fedora packages https://review.openstack.org/#/c/102815/

4 replaces 1, but only for Ubuntu/Debian; it doesn't work on Fedora/RH
5 replaces 3 (neither of which work on Ubuntu/Debian, obviously)

Do we still need 1? Fedora/RH + percona tarball.  I personally don't think so.

Do we still need 2? Fedora/RH or Ubuntu/Debian with galera packages
from mariadb.org. For the Fedora/RH case, I doubt it; people will just
use 5.

3 will be gone (replaced by 5).

So, yes, I'd like to see 5 as the default for Fedora/RH and 4 as the
default for Ubuntu/Debian, and both those tested in CI. And get rid of
(or deprecate) 1-3.
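
To make that concrete, the choice surfaces at image build time as a
diskimage-builder element. Something like the following (a sketch only;
the element name for 5 is an assumption -- it may well end up reusing
the existing mariadb element):

    # illustrative: build a Fedora-based image that pulls MariaDB
    # Galera from the distro repositories (option 5 above)
    disk-image-create -o overcloud-control fedora vm mariadb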





On Thu, Jun 26, 2014 at 5:30 PM, Giulio Fidente  wrote:
> On 06/26/2014 11:11 AM, Jan Provaznik wrote:
>>
>> On 06/25/2014 06:58 PM, Giulio Fidente wrote:
>>>
>>> On 06/16/2014 11:14 PM, Clint Byrum wrote:
>>>>
>>>> Excerpts from Gregory Haynes's message of 2014-06-16 14:04:19 -0700:
>>>>>
>>>>> Excerpts from Jan Provazník's message of 2014-06-16 20:28:29 +:
>>>>>>
>>>>>> Hi,
>>>>>> MariaDB is now included in Fedora repositories, this makes it
>>>>>> easier to
>>>>>> install and more stable option for Fedora installations. Currently
>>>>>> MariaDB can be used by including mariadb (use mariadb.org pkgs) or
>>>>>> mariadb-rdo (use redhat RDO pkgs) element when building an image. What
>>>>>> do you think about using MariaDB as default option for Fedora when
>>>>>> running devtest scripts?
>>>>
>>>>
>>>> (first, I believe Jan means that MariaDB _Galera_ is now in Fedora)
>>>
>>>
>>> I think so too.
>>>
>>>>> I'd like to give this a try. This does start to change us from being a
>>>>> deployment of OpenStack to being a deployment per distro but IMO that's a
>>>>> reasonable position.
>>>>>
>>>>> I'd also like to propose that if we decide against doing this then these
>>>>> elements should not live in tripleo-image-elements.
>>>>
>>>>
>>>> I'm not so sure I agree. We have lio and tgt because lio is on RHEL but
>>>> everywhere else is still using tgt IIRC.
>>>>
>>>> However, I also am not so sure that it is actually a good idea for
>>>> people
>>>> to ship on MariaDB since it is not in the gate. As it diverges from
>>>> MySQL
>>>> (starting in earnest with 10.x), there will undoubtedly be subtle issues
>>>> that arise. So I'd say having MariaDB get tested along with Fedora will
>>>> actually improve those users' test coverage, which is a good thing.
>>>
>>>
>>> I am favourable to the idea of switching to mariadb for fedora based
>>> distros.
>>>
>>> Currently the default mysql element seems to be switching [1], yet for
>>> ubuntu/debian only, from the percona provided binary tarball of mysql to
>>> the percona provided packaged version of mysql.
>>>
>>> In theory we could further update it to use percona provided packages of
>>> mysql on fedora too but I'm not sure there is much interest in using
>>> that combination where people gets mariadb and galera from the official
>>> repos.
>>>
>>
>> IIRC fedora packages for percona xtradb cluster are not provided (unless
>> something has changed recently).
>
>
> I see, so on fedora it will be definitely easier and safer to just use the
> mariadb/galera packages provided in the official repo ... and this further
> reinforces my idea that it is the best option to use that by default for
> fedora
>
>
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-06-27 Thread James Slagle
On Fri, Jun 27, 2014 at 4:13 PM, Clint Byrum  wrote:
> Excerpts from James Slagle's message of 2014-06-27 12:59:36 -0700:
>> Things are a bit confusing right now, especially with what's been
>> proposed.  Let me try and clarify (even if just for my own sake).
>>
>> Currently the choices offered are:
>>
>> 1. mysql percona with the percona tarball
>
> Percona Xtradb Cluster, not "mysql percona"
>
>> 2. mariadb galera with mariadb.org packages
>> 3. mariadb galera with rdo packages
>>
>> And, we're proposing to add
>>
>> 4. mysql percona with percona packages: 
>> https://review.openstack.org/#/c/90134
>> 5. mariadb galera with fedora packages 
>> https://review.openstack.org/#/c/102815/
>>
>> 4 replaces 1, but only for Ubuntu/Debian; it doesn't work on Fedora/RH
>> 5 replaces 3 (neither of which work on Ubuntu/Debian, obviously)
>>
>> Do we still need 1? Fedora/RH + percona tarball.  I personally don't think 
>> so.
>>
>> Do we still need 2? Fedora/RH or Ubuntu/Debian with galera packages
>> from mariadb.org. For the Fedora/RH case, I doubt it; people will just
>> use 5.
>>
>> 3 will be gone (replaced by 5).
>>
>> So, yes, I'd like to see 5 as the default for Fedora/RH and 4 as the
>> default for Ubuntu/Debian, and both those tested in CI. And get rid of
>> (or deprecate) 1-3.
>>
>
> I'm actually more confused now than before I read this. The use of
> numbers is just making my head spin.

There are 5 choices, some of which are not totally clear.  Hence the
need to clean things up.

>
> It can be stated this way I think:
>
> On RPM systems, use MariaDB Galera packages.
> If packages are in the distro, use distro packages. If packages are
> not in the distro, use RDO packages.

There won't be a need to install from the RDO repositories. MariaDB
Galera packages are in the main Fedora repositories, and for RHEL
they're in EPEL.
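
In other words, on Fedora something like the following should be all
the element needs to do (a sketch; the package names are my assumption
based on what's in the distro today):

    # illustrative: install MariaDB with Galera support from the distro
    # repos; on RHEL, enable EPEL first
    yum install -y mariadb-galera-server galera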

>
> On DEB systems, use Percona XtraDB Cluster packages.
> If packages are in the distro, use distro packages. If packages are
> not in the distro, use upstream packages.
>
> If anything doesn't match those principles, it is a bug.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-14 Thread James Slagle
On Wed, Jul 9, 2014 at 11:52 AM, Clint Byrum  wrote:
> So, I propose that we add jonpaul-sullivan and lxsli to the TripleO core
> reviewer team.

I'm +1 to adding both as core reviewers; I've found their reviews to
be well reasoned and consistent.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-07-25 Thread James Slagle
On Fri, Jul 25, 2014 at 9:59 AM, John Griffith
 wrote:

> The LIO versus Tgt thing however is a bit troubling.  Is there a reason that
> TripleO decided to do the exact opposite of what the defaults are in the
> rest of OpenStack here?  Also any reason why if there was a valid
> justification for this it didn't seem like it might be worthwhile to work
> with the rest of the OpenStack community and share what they considered to
> be the better solution here?

Not really following what you find troubling. Cinder allows you to
configure it to use Tgt or LIO. Are you objecting to the fact that
TripleO allows people to *choose* to use LIO?

As was explained in the review[1], Tgt is the default for TripleO. If
you want to use LIO, TripleO offers that choice, just like Cinder
does.

[1] https://review.openstack.org/#/c/78463/
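
For reference, on the Cinder side it's a single config option either
way; roughly (illustrative only, and assuming crudini is available --
tgtadm remains the default):

    # illustrative: opt in to the LIO helper instead of the default tgtadm
    crudini --set /etc/cinder/cinder.conf DEFAULT iscsi_helper lioadm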


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Spec process

2014-07-29 Thread James Slagle
Last week at the TripleO midcycle we discussed the spec process that
we've adopted. Overall, I think most folks are liking the specs
themselves. I heard general agreement that we're helping to tease out
issues and potential implementation disagreements earlier in the
process, and that's a good thing.  I don't think I heard any contrary
opinions to that anyway :-).

The point was raised that we have a lot of specs in review, and
relatively few approved specs. All agreed that reviewing a spec is
time-consuming and requires more commitment. We proposed asking core
reviewers to commit to reviewing at least 1 spec a week. jdob
emailed the list about that:

http://lists.openstack.org/pipermail/openstack-dev/2014-July/040926.html

Please reply to that thread if you have an opinion on that point.
Everyone at the midcycle was in general agreement about that
commitment (hence probably why not a lot of people have replied), but
we wanted to be sure to poll those that couldn't attend the midcycle
as well.

The idea of opening up the spec approval process to other TripleO core
reviewers also came up. Personally, I felt this might reduce the
workload on one person a bit and increase bandwidth to get specs
approved. Since the team is new to this process, we talked about
defining what it means when a spec is ready to be approved. I
volunteered to pull together a wiki page on that topic and have done
so here:

https://wiki.openstack.org/wiki/TripleO/SpecReviews

Thoughts on modifications, additions, subtractions, etc., are all welcome.

Finally, the juno-2 milestone has passed. Many (if not all?)
integrated projects have already -2'd specs that have not been
approved, indicating they are not going to make Juno. There are many
valid reasons to do this: focus, stabilization, workload, etc.

Personally, I don't feel like TripleO agreed or had discussion on this
point as a community. I'm actually not sure right off (without digging
through archives) if the spec freeze is an OpenStack-wide process or
one for individual projects. And if it is OpenStack-wide, does it apply
just to projects that are part of the integrated release?

I admit some selfishness here...since I have some outstanding specs.
But I think we need to come to a consensus on whether we are going to
have a spec freeze for TripleO around the time of other projects, and
at the very least, define what those dates are. Additionally, we
haven't defined or talked about whether we'd have a process for
exceptions to the freeze if someone wanted to propose one.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Kicking TripleO up a notch

2013-10-03 Thread James Slagle
d the low level tooling because instead they're focused on the CD environment?

>  - will need community buy-in and support to make it work : two of the
> key things about working Lean are keeping WIP - inventory - low and
> ensuring that bottlenecks are not used for anything other than
> bottleneck tasks. Both of these things impact what can be done at any
> point in time within the project: we may need to say 'no' to proposed
> work to permit driving more momentum as a whole... at least in the
> short term.

Can you give some examples of something that might be said no to?

In my head, I read that as "refactoring or new functionality that is likely to
break stuff that works now".

Some of the high level things that are important to me now are:

- getting fixes committed to any of the repos that correct issues that
   are causing things to not work as intended
- new d-i-b elements for new functionality
- minor changes to existing d-i-b elements, things like making something
  more configurable if needed, fixing installation issues, etc.
- perhaps new heat templates for additional deployment scenarios (as opposed
  to changes to existing heat templates)

Do you see anything like that suffering?

Like you say below, it's open source, so people can still work on what
they want to :).  And in that regard, the things I mentioned above might
only really suffer if there is suddenly a much longer turnaround time on
reviews, upstream feedback, etc.

> Basic principles:
>  - unblock bottlenecks first, then unblock everyone else.
>  - folk are still self directed - it's open source - but clear
> visibility of work needed and it's relevance to the product *right
> now* is crucial info for people to make good choices. (and similar
> Mark McLoughlin was asking about what bugs to work on which is a
> symptom of me/us failing to provide clear visibility)
>  - clear communication about TripleO and plans / strategy and priority
> (Datacentre ops? Continuous deployment story?)

+1, that stuff makes sense.

> Implementing this:
> For TripleO we've broken down the long term vision in a few phases
> that *start* with an end user deliverable and then backfill to add
> sophistication and polish.
>
> We're suggesting that at any point in time the following should be the
> heuristics for TripleO contributors for what to work on:
> 1) Firedrill ‘something we've delivered broke’: Aim to avoid this but
> do if it happens it takes priority.
> 2) Things to make things we've delivered and are maintaining more
> reliable / less likely to break: Things that reduce category 1 work.
> 3) Things to make the things we've delivered better *or* things to
> make something new exist/get delivered.
>
> Our long term steady state should be a small amount of category 2 work
> and a lot of category 3 with no category 1; but to get there we have
> to go through a crucible where it will be all category 1 and category
> 2: we should expect all forward momentum to stop while we get our

I'd classify recent forward momentum as polishing the devtest story and
working on the tooling to do so.  So, maybe that is set aside for a
moment while the CD environment is brought up.

However, I think that having a working devtest is important.
devtest can be quite daunting to a newcomer, but a nice thing about it
is that it gives people not familiar with TripleO, and new
contributors, a place to get started. And I think that's important for
the community.
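
For concreteness, the newcomer entry point is roughly the following (a
sketch, assuming the tripleo-incubator layout; devtest wants the
--trash-my-machine flag to confirm it may reconfigure the host):

    # illustrative: getting started with the devtest story
    git clone https://github.com/openstack/tripleo-incubator
    tripleo-incubator/scripts/devtest.sh --trash-my-machine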

> stuff lined up and live. After that though we'll have a small stable
> *end product* base, and we can expand that out featurewise and depth
> (reliability/performance/reduce firedrills..)wise.
>
> To surface WIP + current planned work, I find Kanban works super well.
> So I am proposing the following structure:
>  - Current work the team is focused on will be represented as Kanban cards
>  - Those cards can be standalone, or link to an etherpad, or a bug, or
> a blueprint as appropriate
>- standalone cards should be those that don't fit as bugs or
> blueprints; we shouldn't replace those other tracking systems
>  - As a team we all commit to picking up work based on the heuristics above
>  - The kanban exposes the category of work directly, making it easy to choose
>  - if there is someone working on a higher category of work than us,
> we should bias to *helping them* rather than continuing on our own way
> or picking up a new lower category card: it's better to unblock the
> system as a whole than push forward something we can't use yet.

I'll say that I really like tracking stuff in Trello.  I think the
reality is that there are going to be some well-defined project goals
(like you're doing here), and other people (or groups of people) within
the community may have slightly different goals.

Not saying that those are necessarily going to conflict.  Just that
there may be other stuff that folks are trying to accomplish.  The more
of that that can be shared in a public Trello for TripleO, the better
for everyone.



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] TripleO core reviewer update - november

2013-10-30 Thread James Slagle
On Wed, Oct 30, 2013 at 5:06 AM, Robert Collins
 wrote:
> Hi, like most OpenStack projects we need to keep the core team up to
> date: folk who are not regularly reviewing will lose context over
> time, and new folk who have been reviewing regularly should be trusted
> with -core responsibilities.
>
> In this months review:
>  - James Slagle for -core
>  - Arata Notsu to be removed from -core
>  - Devananda van der veen to be removed from -core
>
> Existing -core members are eligible to vote - please indicate your
> opinion on each of the three changes above in reply to this email.
> James, please let me know if you're willing to be in tripleo-core.

I'm willing.  I plan to continue actively reviewing and contributing.  Thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Policy on spelling and grammar

2013-11-11 Thread James Slagle
-1 from me as well.

When I first started with OpenStack, I probably would have agreed with
letting small grammar mistakes and typos slide by.

However, I now feel that getting commit messages right is more
important.  Also keep in mind that with small grammar mistakes, the
intent may be obvious to a native English speaker but not to a
non-native English speaker.  And just a few small grammar
mistakes/misspellings/typos can add up until the meaning becomes hard
for a non-native English speaker to figure out.

Also, I can't speak for everyone, but in general I've found most folks
open to grammar corrections if English is not their native language,
because they want to learn and fix the mistakes.


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is WSME really suitable? (Was: [nova] Autogenerating the Nova v3 API specification)

2013-08-06 Thread James Slagle
On Tue, Aug 6, 2013 at 5:35 AM, Mac Innes, Kiall  wrote:
>
> So,
>
>  From experimenting with, and looking at the WSME code - raising a
> status with `pecan.abort(404)` etc doesn't actually work.
>
> WSME sees that, and helpfully swaps it out for a HTTP 500 ;)
>
> The author of WSME even says there is currently no way to return a 404.
> So, ceilometer must be either not using anything but http 400 and http
> 500, or have replaced WSMEs error handling :/
>
> I'll have to have a look a ceilometers API to see if they ran into/fixed
> the issue.


WSME + pecan is being used in Tuskar:
https://github.com/tuskar/tuskar (OpenStack management API)

We encountered the same issue discussed here.  A solution we settled
on for now was to use a custom Renderer class that could handle
different response codes.  You set the renderer in the call to
pecan.make_app.  This was meant to be a temporary solution until
there's better support in WSME.

Here's the commit with all the details:
https://github.com/tuskar/tuskar/commit/16d3fec0e7d28be04252ad6b779ca6460b4918f5


--
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] Derekh for tripleo core

2013-08-28 Thread James Slagle
+1


On Tue, Aug 27, 2013 at 5:25 PM, Robert Collins
wrote:

> http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
> http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
>
> - Derek is reviewing fairly regularly and has got a sense of the
> culture etc now, I think.
>
> So - calling for votes for Derek to become a TripleO core reviewer!
>
> I think we're nearly at the point where we can switch to the 'two
> +2's' model - what do you think?
>
> Also tsk! to those cores who aren't reviewing as regularly :)
>
> Cheers,
> Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- James Slagle
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Scaling of TripleO

2013-09-06 Thread James Slagle
[1] http://fedorapeople.org/~slagle/drawing0.png
[2] http://fedorapeople.org/~slagle/drawing1.png
[3] http://fedorapeople.org/~slagle/drawing2.png

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Scaling of TripleO

2013-09-07 Thread James Slagle
> of services than what is needed on the full Undercloud Node.  Essentially, it
>> is enough services to do baremetal provisioning, Heat orchestration, and
>> Neutron for networking.  Diagram of this idea is at [2].  In the diagram, 
>> there
>> is one Leaf Node per logical rack.
>>
>
> I think this is very close to the near-term evolution I've been thinking
> about for TripleO. We want to get good at deploying a simple architecture
> first, but then we know we don't need to be putting the heat engines,
> nova schedulers, etc, on every scale-out box in the undercloud.

That's good to know :).

>
>> In this model, the Undercloud provisions and deploys Leaf Nodes as needed 
>> when
>> new hardware is added to the environment.  The Leaf Nodes then handle
>> deployment requests from the Undercloud for the Overcloud nodes.
>>
>> As such, there is some scalability built into the architecture in a 
>> distributed
>> fashion.  Adding more scalability and HA would be accomplished in a similar
>> fashion to Idea 0, by adding additional HA Leaf Nodes, etc.
>>
>> Pros/Cons (+/-):
>> + As scale is added with more Leaf Nodes, it's a smaller set of services.
>> - Additional image management of the Leaf Node image
>
> I think if you've accepted image management for 3 images (undercloud,
> overcloud-control, overcloud-compute), adding one more is not such a
> daunting thing. The benefit is that there is less software running that
> may break your services.

+1.  I just wanted to make sure it was listed as something additional to do for
this Idea.

>
>> - Additional rack space wasted for the Leaf Node
>
> This is unavoidable for scale-out IMO. There is certainly a scenario where
> we can convert some of these to overcloud resources after an initial data 
> center
> bring-up, so that also mitigates the impact.
>
>> + Smaller failure domain as the logical rack is only dependent on the Leaf
>>   Node.
>> + The ratio of HA Management Nodes would be smaller because of the offloaded
>>   services.
>
> I'm not sure I follow what an "HA Management Node" is.

I tried to explain it above a bit.  Basically, you can add a node for
scalability, but that doesn't necessarily eliminate a SPOF unless
you're specifically aiming for HA and configure the node that way.

So, in this Idea, I think a user is encouraged (so to speak) to deploy Leaf
Nodes for Logical Racks.  Deploying Leaf Nodes should be cheaper and easier
than deploying something with all Undercloud services.  You would be inherently
adding scale to the architecture as you deploy it.  And hopefully, that would
mean fewer nodes to add further down the road for scalability/HA
reasons alone.

>> Idea 2
>> --
>> Pros/Cons (+/-):
>> + network/security isolation
>> - multiple Undercloud complexity
>
> This is probably the main reason I am skeptical at this idea. We
> shouldn't have to make a whole new cloud/region/etc. just to scale what
> is essentially a homogeneous service. It adds management complexity,
> and complexity is far worse than a small amount of image management
> (which seems to be main difference between 1 and 2).
>
> All great ideas, thanks for sharing!

No problem, appreciate the time it took to read through it and reply :).

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Scaling of TripleO

2013-09-09 Thread James Slagle
regates was another.

> 
> This leads me to suggest a very simple design:
>  - one undercloud per fully-reachable-fabric-of-IPMI control. Done :)
>  - we gather data on performance scaling as node counts scales

What type of hardware access does the team have to do any sort of performance
scaling testing?

I can ask around and see what I can find.

Alternatively, we could probably work on some sort of performance test suite
that tests without a bunch of physical hardware.  E.g., you don't necessarily
need a bunch of distinct nodes to test something like how many iSCSI targets
Nova Compute can reasonably populate at once, etc.
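
For example, something as simple as looping tgtadm calls on one box
would give a rough signal (illustrative only; assumes tgtd is running):

    # illustrative: rough single-node probe of iSCSI target scaling
    for i in $(seq 1 500); do
        tgtadm --lld iscsi --mode target --op new --tid "$i" \
            --targetname "iqn.2013-09.org.tripleo:test-$i"
    done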

>  - use that to parameterise how to grow the undercloud control plane for a 
> cloud
> 
> HTH!

It does, excellent feedback!

--
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Ian Wienand as core reviewer on diskimage-builder

2015-11-04 Thread James Slagle
On Wed, Nov 4, 2015 at 12:25 AM, Gregory Haynes  wrote:
> Hello everyone,
>
> I would like to propose adding Ian Wienand as a core reviewer on the
> diskimage-builder project. Ian has been making a significant number of
> contributions for some time to the project, and has been a great help in
> reviews lately. Thus, I think we could benefit greatly by adding him as
> a core reviewer.
>
> Current cores - Please respond with any approvals/objections by next Friday
> (November 13th).

+1

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-16 Thread James Slagle
On Mon, May 16, 2016 at 10:34 AM, Pradeep Kilambi  wrote:
> Hi Everyone:
>
> I wanted to start a discussion around considering backporting Aodh to
> stable/liberty for upgrades. We have been discussing quite a bit on whats
> the best way for our users to upgrade ceilometer alarms to Aodh when moving
> from liberty to mitaka. A quick refresh on what changed, In Mitaka,
> ceilometer alarms were replaced by Aodh. So only way to get alarms
> functionality is use aodh. Now when the user kicks off upgrades from liberty
> to Mitaka, we want to make sure alarms continue to function as expected
> during the process which could take multiple days. To accomplish this I
> propose the following approach:
>
> * Backport Aodh functionality to stable/liberty. Note, Aodh functionality is
> backwards compatible, so with Aodh running, ceilometer api and client will
> redirect requests to Aodh api. So this should not impact existing users who
> are using ceilometer api or client.
>
> * As part of Aodh deployed via heat stack update, ceilometer alarms services
> will be replaced by openstack-aodh-*. This will be done by the puppet apply
> as part of stack convergence phase.
>
> * Add checks in the Mitaka pre upgrade steps when overcloud install kicks
> off to check and warn the user to update to liberty + aodh to ensure aodh is
> running. This will ensure heat stack update is run and, if alarming is used,
> Aodh is running as expected.
>
> The upgrade scenarios between various releases would work as follows:
>
> Liberty -> Mitaka
>
> * Upgrade starts with ceilometer alarms running
> * A pre-flight check will kick in to make sure Liberty is upgraded to
> liberty + aodh with stack update
> * Run heat stack update to upgrade to aodh
> * Now ceilometer alarms should be removed and Aodh should be running
> * Proceed with mitaka upgrade
> * End result, Aodh continue to run as expected
>
> Liberty + aodh -> Mitaka:
>
> * Upgrade starts with Aodh running
> * A pre-flight check will kick in to make sure Liberty is upgraded to Aodh
> with stack update
> * Confirming Aodh is indeed running, proceed with Mitaka upgrade with Aodh
> running
> * End result, Aodh continue to be run as expected
>
>
> This seems to be a good way to get the upgrades working for aodh. Other less
> effective options I can think of are:
>
> 1. Let the Mitaka upgrade kick off and do "yum update" which replace aodh
> during migration, alarm functionality will be down until puppet converge
> runs and configures Aodh. This means alarms will be down during upgrade
> which is not ideal.
>
> 2. During Mitaka upgrades, replace with Aodh and add a bash script that
> fully configures Aodh and ensures aodh is functioning. This will involve
> significant work and results in duplicating everything puppet does today.

How much duplication would this really be? Why would it have to be in bash?

Could it be:

Liberty -> Mitaka

* Upgrade starts with ceilometer alarms running
* Add a new hook for the first step of Mitaka upgrade that does:
  ** sets up mitaka repos
  ** migrates from ceilometer alarms to aodh, can use puppet
  ** ensures aodh is running
* Proceed with rest of mitaka upgrade

At most, it seems we'd have to surround the puppet apply with some
pacemaker commands to possibly set maintenance mode and migrate
constraints.

The puppet manifest itself would just be the includes and classes for aodh.
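
Roughly, I'm picturing something like this for the migration step (just
a sketch -- the puppet-aodh class list is my assumption, and the
pacemaker maintenance-mode wrapping may only apply to HA overclouds):

    # illustrative: swap ceilometer alarms for aodh with a small puppet apply
    pcs property set maintenance-mode=true
    puppet apply --detailed-exitcodes -e '
      include ::aodh
      include ::aodh::api
      include ::aodh::evaluator
      include ::aodh::notifier
      include ::aodh::listener
    '
    pcs property set maintenance-mode=false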

One complication might be that the aodh packages from Mitaka might
pull in new deps that required updating other OpenStack services,
which we wouldn't yet want to do. That is probably worth confirming
though.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-17 Thread James Slagle
On Tue, May 17, 2016 at 12:04 PM, Pradeep Kilambi  wrote:
> Thanks Steve. I was under the impression we cannot run puppet at this
> stage. Hence my suggestion to run bash or some script here, but if we
> can find a way to easily wire the existing aodh puppet manifests into
> the upgrade process and get aodh up and running then even better, we
> dont have to duplicate what puppet gives us already and reuse that.

We could add any SoftwareDeployment resource(s) to the templates that
trigger either scripts or puppet.

>
>
>>> At most, it seems we'd have to surround the puppet apply with some
>>> pacemaker commands to possibly set maintenance mode and migrate
>>> constraints.
>>>
>>> The puppet manifest itself would just be the includes and classes for aodh.
>>
>> +1
>>
>>> One complication might be that the aodh packages from Mitaka might
>>> pull in new deps that required updating other OpenStack services,
>>> which we wouldn't yet want to do. That is probably worth confirming
>>> though.
>>
>> It seems like we should at least investigate this approach before going
>> ahead with the backport proposed - I'll -2 the backports pending further
>> discussion and investigation into this alternate approach.
>>
>
> Makes sense to me. I understand the hesitation behind backports. I'm
> happy to work with jistr and slagle to see if this is a viable
> alternative. If we can get this working without too much effort, i'm
> all for dumping the backports and going with this.

Using a liberty overcloud-full image, I enabled the mitaka repos and
tried to install aodh:
http://paste.openstack.org/show/497395/

It looks like it will cleanly pull in just the aodh packages, and there
aren't any transitive dependencies that require updating any other
OpenStack services.
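
For reference, the test boiled down to roughly the following
(paraphrasing the paste; the repo setup and aodh package names here are
approximate):

    # illustrative: enable Mitaka repos on a Liberty image, then pull in
    # only the aodh services
    yum install -y centos-release-openstack-mitaka
    yum install -y openstack-aodh-api openstack-aodh-evaluator \
        openstack-aodh-notifier openstack-aodh-listener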

That means that we ought to be able to take a liberty cloud and update
it to use aodh from mitaka. That could be step 1 of the upgrade. The
operator could pause there for as long as they wanted, and then
continue on with the rest of the upgrade of the other services to
Mitaka. It may even be possible to implement them as separate stack
updates.

Does that sound like it could work? Would we have to update some parts
of Ceilometer as well, or do Liberty Ceilometer and Mitaka Aodh work
together nicely?

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][infra][deployment] Adding multinode CI jobs for TripleO in nodepool

2016-05-27 Thread James Slagle
I've been working on various patches to TripleO to make it possible
for the baremetal provisioning part of the workflow to be optional. In
such a scenario, TripleO wouldn't use Nova or Ironic to boot any
baremetal nodes. Instead it would rely on the nodes to be already
installed with an OS and powered on. We then use Heat to drive the
deployment of OpenStack on those nodes...that part of the process is
largely unchanged.

One of the things this would allow TripleO to do is make use of CI
jobs using nodes just from the regular cloud providers in nodepool
instead of having to use our own TripleO cloud
(tripleo-test-cloud-rh1) to run all our jobs.

I'm at a point where I can start working on patches to try and set
this up, but I wanted to provide this context so folks were aware of
the background.

We'd probably start with our simplest configuration of a job with at
least 3 nodes (undercloud/controller/compute), using CentOS
images. It looks like right now all multinode jobs are 2 nodes only
and use Ubuntu. My hope is that I/we can make some progress in
different multinode configurations and collaborate on any setup
scripts or ansible playbooks in a generally useful way. I know there
was interest in different multinode setups from the various deployment
teams at the cross project session in Austin.

If there are any pitfalls or if there are any concerns about TripleO
going in this direction, I thought we could discuss those here. Thanks
for any feedback.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][infra][deployment] Adding multinode CI jobs for TripleO in nodepool

2016-05-27 Thread James Slagle
On Fri, May 27, 2016 at 2:37 PM, Emilien Macchi  wrote:
> On Fri, May 27, 2016 at 2:03 PM, James Slagle  wrote:
>> I've been working on various patches to TripleO to make it possible
>> for the baremetal provisioning part of the workflow to be optional. In
>> such a scenario, TripleO wouldn't use Nova or Ironic to boot any
>> baremetal nodes. Instead it would rely on the nodes to be already
>> installed with an OS and powered on. We then use Heat to drive the
>> deployment of OpenStack on those nodes...that part of the process is
>> largely unchanged.
>>
>> One of the things this would allow TripleO to do is make use of CI
>> jobs using nodes just from the regular cloud providers in nodepool
>> instead of having to use our own TripleO cloud
>> (tripleo-test-cloud-rh1) to run all our jobs.
>>
>> I'm at a point where I can start working on patches to try and set
>> this up, but I wanted to provide this context so folks were aware of
>> the background.
>>
>> We'd probably start with our simplest configuration of a job with at
>> least 3 nodes (undercloud/controller/compute), and using CentOS
>> images. It looks like right now all multinode jobs are 2 nodes only
>> and use Ubuntu. My hope is that I/we can make some progress in
>> different multinode configurations and collaborate on any setup
>> scripts or ansible playbooks in a generally useful way. I know there
>> was interest in different multinode setups from the various deployment
>> teams at the cross project session in Austin.
>>
>> If there are any pitfalls or if there are any concerns about TripleO
>> going in this direction, I thought we could discuss those here. Thanks
>> for any feedback.
>
> It is more a question than a concern:
> are we still going to test baremetal introspection with Ironic
> somewhere in OpenStack?
>
> I like the way it goes but I'm wondering if the things that we're not
> going to test anymore will still be tested somewhere else (maybe in
> Ironic / Nova CI jobs) or maybe it's already the case and then stop me
> here.
>

I should have clarified: we're not moving away from having our own
cloud running the TripleO jobs we have today.

This is about adding new jobs to test a different way of deploying via
TripleO. Since we'd be able to use nodepool nodes directly to do that,
I'm proposing to do it that way.

If it pans out, I'd expect us to have a variety of jobs running with
different permutations so that we can have as much coverage as
possible.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][infra][deployment] Adding multinode CI jobs for TripleO in nodepool

2016-05-31 Thread James Slagle
On Mon, May 30, 2016 at 6:12 PM, Steve Baker  wrote:
> This raises the possibility of an alternative to OVB for trying/developing
> TripleO on a host cloud.
>
> If a vm version of the overcloud-full image is also generated then the host
> cloud can boot these directly. The approach above can then be used to treat
> these nodes as pre-existing nodes to adopt.
>
> I did this for a while configuring the undercloud nova to use the fake virt
> driver, but it sounds like the approach above doesn't interact with nova at
> all.

Correct, the nodes could come from anywhere. They could be prelaunched
instances on an OpenStack cloud, or any cloud for that matter. In fact,
I tested this out on the Rackspace public cloud by launching 3 vanilla
CentOS instances, installing an undercloud on one, and then using the
other 2 for the overcloud.

>
> So I'm +1 on this approach for *some* development environments too. Can you
> provide a list of the changes?

This is the primary patch to tripleo-heat-templates that enables it to work:
https://review.openstack.org/#/c/222772/

And a couple of other patches on the same topic branch:
https://review.openstack.org/#/q/topic:deployed-server
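
With those applied, using pre-provisioned nodes is just an extra
environment file on the deploy, along the lines of (a sketch; the
environment file name comes from the in-review patches and could still
change):

    # illustrative: deploy onto already-provisioned nodes, skipping
    # Nova/Ironic baremetal provisioning
    openstack overcloud deploy --templates \
        -e environments/deployed-server-environment.yaml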

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] TripleO deep dive hour?

2016-06-28 Thread James Slagle
We've got some new contributors around TripleO recently, and I'd like
to offer up a "TripleO deep dive hour".

The idea is to spend 1 hour a week in a high bandwidth environment
(Google Hangouts / Bluejeans / ???) to deep dive on a TripleO related
topic. The topic could be anything TripleO related, such as general
onboarding, CI, networking, new features, etc.

I'm by no means an expert on all those things, but I'd like to
facilitate the conversation and I'm happy to lead the first few
"dives" and share what I know. If it proves to be a popular format,
hopefully I can convince some other folks to lead discussions on
various topics.

I think it'd be appropriate to record these sessions so that what is
discussed is available to all. However, I don't intend these to be
presentations; I'd rather they be more of a Q&A discussion. If I don't
get any ideas for topics though, I may choose to prepare something to
present :).

Our current meeting time of day at 1400 UTC seems to suit a lot of
folks, so how about 1400 UTC on Thursdays? If folks think this is
something that would be valuable and want to do it, we could start
next Thursday, July 7th.


-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

