[openstack-dev] [ironic] Resigning as a core reviewer

2017-04-26 Thread Jay Faulkner
Hi all,


As most of you know, I'm no longer being paid to work on OpenStack Ironic. 
Working with you all has been an amazing part of my career, and I've learned 
more than you'll ever know. I'll still be in #openstack-ironic, willing to 
answer any questions about something I'm an expert in. However, as I won't be 
actively participating in the project, or reviewing code anymore, I'd like to 
request my core reviewer access be removed.


Thanks for everything! I wish nothing but the best for everyone in the 
OpenStack community, who has always treated me with kindness and patience.


Sincerely,

Jay Faulkner
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Jay Faulkner

> On Apr 11, 2017, at 12:54 AM, Nisha Agarwal <agarwalnisha1...@gmail.com> 
> wrote:
> 
> Hi John,
> 
> >With ironic I thought everything is "passed through" by default,
> >because there is no virtualization in the way. (I am possibly
> >incorrectly assuming no BIOS tricks to turn off or re-assign PCI
> >devices dynamically.)
> 
> Yes with ironic everything is passed through by default. 
> 
> >So I am assuming this is purely a scheduling concern. If so, why are
> >the new custom resource classes not good enough? "ironic_blue" could
> >mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
> >and one 1Gb nic, etc.
> >Or is there something else that needs addressing here? Trying to
> >describe what you get with each flavor to end users?
> Yes, this is purely from a scheduling perspective. 
> Currently, ironic works by discovering server attributes and populating them 
> into the node object. These attributes are then used for scheduling of the 
> node by the nova scheduler using the ComputeCapabilities filter. So this is 
> automated on the ironic side: we inspect the node properties/attributes, the 
> user creates a flavor of their choice, and the node which meets the user's 
> needs is scheduled for ironic deploy.
> With the resource class name in place in ironic, we ask the user to do a manual 
> step, i.e. create a resource class name based on the hardware attributes, and 
> this needs to be done on a per-node basis. For this, the user needs to know the 
> server hardware properties in advance before assigning the resource class name 
> to the node(s), and then assign the resource class name manually to each node. 
> Broadly speaking, if we want to support scheduling based on quantity for 
> ironic nodes, there is no way we can do it through the current resource class 
> structure (actually just a tag) in ironic. A user may want to schedule ironic 
> nodes on different resources, and each resource should be a different resource 
> class (IMO). 
> 
> >Are you needing to aggregate similar hardware in a different way to the above
> >resource class approach?
> I guess no, but the above resource class approach takes away the automation on 
> the ironic side, and the whole purpose of inspection is defeated.
> 

I strongly challenge the assertion made here that inspection is only useful in 
scheduling contexts. There are users who simply want to know about their 
hardware, and read the results as posted to swift. Inspection also handles 
discovery of new nodes when given basic information about them.

-
Jay Faulkner
OSIC

> Regards
> Nisha
> 
> 
> On Mon, Apr 10, 2017 at 4:29 PM, John Garbutt <j...@johngarbutt.com> wrote:
> On 10 April 2017 at 11:31,  <sfinu...@redhat.com> wrote:
> > On Mon, 2017-04-10 at 11:50 +0530, Nisha Agarwal wrote:
> >> Hi team,
> >>
> >> Please could you pour in your suggestions on the mail?
> >>
> >> I raised a blueprint in Nova for this: 
> >> https://blueprints.launchpad.net/nova/+spec/pci-passthorugh-for-ironic, 
> >> and two RFEs on the ironic side: https://bugs.launchpad.net/ironic/+bug/1680780 
> >> and https://bugs.launchpad.net/ironic/+bug/1681320, for the discussion topic.
> >
> > If I understand you correctly, you want to be able to filter ironic
> > hosts by available PCI device, correct? Barring any possibility that
> > resource providers could do this for you yet, extending the nova ironic
> > driver to use the PCI passthrough filter sounds like the way to go.
> 
> With ironic I thought everything is "passed through" by default,
> because there is no virtualization in the way. (I am possibly
> incorrectly assuming no BIOS tricks to turn off or re-assign PCI
> devices dynamically.)
> 
> So I am assuming this is purely a scheduling concern. If so, why are
> the new custom resource classes not good enough? "ironic_blue" could
> mean two GPUs and two 10Gb nics, "ironic_yellow" could mean one GPU
> and one 1Gb nic, etc.
> 
> Or is there something else that needs addressing here? Trying to
> describe what you get with each flavor to end users? Are you needing
> to aggregate similar hardware in a different way to the above
> resource class approach?
> 
> Thanks,
> johnthetubaguy
> 
> 
> 
> 
> -- 
> The Secret Of Success is
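
The custom resource class model discussed in this thread can be sketched as a toy 
matcher. This is illustrative only, not nova or ironic code: the node data, flavor 
request shape, and helper names are assumptions, though the CUSTOM_* normalization 
mirrors how nova is described as mapping ironic resource classes to placement names.

```python
# Toy sketch (NOT nova/ironic code): matching a flavor to a bare-metal
# node by a single custom resource class tag, e.g. "ironic_blue" for
# "two GPUs and two 10Gb NICs" as John suggests above.
import re

def to_placement_name(resource_class):
    """Normalize an ironic resource class into a CUSTOM_* resource name."""
    return "CUSTOM_" + re.sub(r"[^A-Z0-9]", "_", resource_class.upper())

def schedule(flavor_resources, nodes):
    """Return the first node whose resource class satisfies the flavor.

    flavor_resources: e.g. {"CUSTOM_IRONIC_YELLOW": 1}. A whole bare-metal
    node is consumed per instance, so only quantity 1 of one class applies.
    """
    wanted = {name for name, amount in flavor_resources.items() if amount > 0}
    for node in nodes:
        if wanted == {to_placement_name(node["resource_class"])}:
            return node["name"]
    return None  # no node exposes the requested class

nodes = [
    {"name": "node-1", "resource_class": "ironic-blue"},    # 2 GPUs, 2x10Gb NICs
    {"name": "node-2", "resource_class": "ironic-yellow"},  # 1 GPU, 1x1Gb NIC
]
print(schedule({"CUSTOM_IRONIC_YELLOW": 1}, nodes))  # node-2
```

This also shows Nisha's concern: the tag itself carries no quantities, so 
scheduling on "how many GPUs" rather than "which class" is not expressible here.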

Re: [openstack-dev] [oslo]Introduction of new driver for oslo.messaging

2017-03-31 Thread Jay Faulkner
I thought this spec/proposal was embargoed until tomorrow?!

-
Jay Faulkner
OSIC

> On Mar 31, 2017, at 7:40 AM, Deja, Dawid <dawid.d...@intel.com> wrote:
> 
> Hi all,
> 
> To work around issues with RabbitMQ scalability, we'd like to introduce a
> new driver in oslo.messaging that has nearly no scaling limits [1].
> We'd like to have as many eyes on this as possible, since we believe
> that this is the technology of the future. Thanks for all reviews.
> 
> Dawid Deja
> 
> [1] https://review.openstack.org/#/c/452219/




Re: [openstack-dev] [ironic] New contributor

2017-03-30 Thread Jay Faulkner
Welcome to the project! We have a large number of new / junior contributors, 
so simple bugs don’t stay outstanding very long. You can search ironic and 
ironic-python-agent in Launchpad for bugs tagged “low-hanging-fruit”. If that 
comes up short, or you want to do something more advanced, I suggest reviewing 
the weekly ironic review priorities on our whiteboard at 
http://bit.ly/ironic-whiteboard. Additionally, I suggest you join 
#openstack-ironic on Freenode and say hello. 

Welcome and good luck,
Jay Faulkner
OSIC

> On Mar 29, 2017, at 8:13 PM, Julian Edwards <bigjo...@gmail.com> wrote:
> 
> Hi all
> 
> I'm looking to start contributing to Ironic, and in fact I did a
> couple of small patches already which are still waiting to be
> landed/reviewed. [1]
> 
> I'm finding it a little hard to find some more reasonable bugs to get
> started with fixing, so if any of you guys can point me at a few I
> would appreciate it, or indeed if someone is willing to do more
> involved mentoring.  (I am in the +10 time zone so this may be
> awkward, sadly)
> 
> Cheers
> J
> 
> PS  Some of you may remember me as the original Ubuntu MAAS lead, so I
> am pretty familiar with bare metal stuff generally.
> 
> [1] https://review.openstack.org/#/c/449454/ and
> https://review.openstack.org/#/c/450492/
> 



Re: [openstack-dev] [ironic] volunteers for cross project liaisons

2017-03-15 Thread Jay Faulkner

> On Mar 15, 2017, at 8:11 AM, Loo, Ruby  wrote:
> 
> Hi,
> 
> The ironic community is looking for volunteers to be cross-project liaisons 
> [1] for these projects:
> - oslo
> - logging working group
> - i18n

The i18n and docs projects are closely related, and I don’t think much 
translation is done for ironic. Unless we have a contributor who uses i18n 
and is more familiar with it, I can take this on.

-Jay
> 
> The expectations are documented in [1] on a per-project basis. The amount of 
> commitment varies depending on the project (and I don't know what that might 
> be).
> 
> [insert here why it would be an awesome experience for you, fame, fortune, 
> ... :D]
> 
> --ruby
> 
> [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons
> 



Re: [openstack-dev] [ironic] New mascot design

2017-03-10 Thread Jay Faulkner

> On Mar 10, 2017, at 8:28 AM, Heidi Joy Tretheway  
> wrote:
> 
> Hi Ironic team, 
> Here’s an update on your project logo. Our illustrator tried to be as true as 
> possible to your original, while ensuring it matched the line weight, color 
> palette and style of the rest. Thanks for your patience as we worked on this! 
> Feel free to direct feedback to me; we really want to get this right for you. 
> 
> 
> 


+1, this is a great evolution of the existing Pixie Boots. 




[openstack-dev] [ironic] Removal of the ipminative / pyghmi driver

2017-03-09 Thread Jay Faulkner
Hi all,

The ipminative driver is currently an anomaly in ironic’s tree: despite the 
driver being initially deprecated in Newton [1], and our desire to drop it 
reiterated on the mailing list in December [2], it was not removed from the 
tree prior to the Ocata release.

At the PTG the ironic team had a short discussion about the ipminative (aka 
pyghmi) driver — the conclusion was that unless third party CI was run against 
the driver, we would be forced to follow through on the deprecation and remove 
it. Testing in upstream CI, against VirtualBMC, was mostly rejected due to both 
the ipminative driver and virtualbmc using the same python ipmi library 
(pyghmi), and therefore not being a valid test case. Additionally, further 
adding urgency to the removal, several active ironic contributors who have 
tested ipminative drivers in real-world environments have reported them as 
unstable.

The promise of a native python driver to talk to ipmi in ironic is great, but 
without proper testing and stability, keeping it in-tree does more harm to 
ironic users than good — in fact, there’s very little indication to a deployer 
using ironic that the driver may not work stably.

Therefore, I’m giving the mailing list a two-week warning: unless volunteers 
come forward willing to run third-party CI against the ipminative driver in the 
next two weeks, I will submit a patch to remove it entirely from the tree. 
The driver could then be moved into ironic-staging-drivers by any interested 
contributors.

-
Jay Faulkner
OSIC

Related-bug: https://bugs.launchpad.net/ironic/+bug/1671532

[1] https://docs.openstack.org/releasenotes/ironic/newton.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-December/108666.html


Re: [openstack-dev] [all] Some information about the Forum at the Summit in Boston

2017-03-09 Thread Jay Faulkner

> On Mar 9, 2017, at 8:23 AM, Ben Swartzlander <b...@swartzlander.org> wrote:
> 
> I might be the only one who has negative feelings about the PTG/Forum split, 
> but I suspect the foundation is suppressing negative feedback from myself and 
> other developers so I'll express my feelings here. If there's anyone else who 
> feels like me please reply, otherwise I'll assume I'm just an outlier.
> 
> The new structure is asking developers to travel 4 times a year (minimum) and 
> makes it impossible to participate in 2 or more vertical projects.
> 

+1

There was a built-in assumption in the original planning that most projects 
had a mid-cycle that people travelled to. For ironic, as an example, we 
switched ours to virtual because the number of people who could get travel 
approval was very low.

-
Jay Faulkner


> I know that most of the people working on Manila have pretty limited travel 
> budgets, and meeting 4 times a year basically guarantees that a good number 
> of people will be remote at any given meeting. From my perspective if I'm 
> going to be meeting with people on the phone I'd rather be on the phone 
> myself and have everyone on equal footing.
> 
> I also normally try to participate in Cinder as well as Manila, and the new 
> PTG structure makes that impossible. I decided to try to be positive and to 
> wait until after the PTG to make up my mind but having attended in Atlanta it 
> was exactly as bad as I expected in terms of my ability to participate in 
> Cinder.
> 
> I will be in Boston to try to develop a firsthand opinion of the new Forum 
> format but as of now I'm pretty unhappy with the proposal. For Manila I'm 
> proposing that the community either meets at PTGs and skips conferences, or 
> meets at conferences and skips PTGs, going forward. I'm not going to ask 
> everyone to travel 4 times a year.
> 
> -Ben Swartzlander
> Manila PTL
> 
> 
> On 03/07/2017 07:35 AM, Thierry Carrez wrote:
>> Hi everyone,
>> 
>> I recently got more information about the space dedicated to the "Forum"
>> at the OpenStack Summit in Boston. We'll have three different types of
>> spaces available.
>> 
>> 1/ "Forum" proper
>> 
>> There will be 3 medium-sized fishbowl rooms for cross-community
>> discussions. Topics for the discussions in that space will be selected
>> and scheduled by a committee formed of TC and UC members, facilitated by
>> Foundation staff members. In case you missed it, the brainstorming for
>> topics started last week, announced by Emilien in that email:
>> 
>> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113115.html
>> 
>> 2/ "On-boarding" rooms
>> 
>> We'll have two rooms set up in classroom style, dedicated to project
>> teams and workgroups who want to on-board new team members. Those can
>> for example be booked by project teams to run an introduction to their
>> codebase to prospective new contributors, in the hope that they will
>> join their team in the future. Those are not meant to do traditional
>> user-facing "project intro" talks -- there is space in the conference
>> for that. They are meant to provide the next logical step in
>> contributing after Upstream University and being involved on the
>> sidelines. It covers the missing link for prospective contributors
>> between attending Summit and coming to the PTG. Kendall Nelson and Mike
>> Perez will soon announce the details for this, including how projects
>> can sign up.
>> 
>> 3/ Free hacking/meetup space
>> 
>> We'll have four or five rooms populated with roundtables for ad-hoc
>> discussions and hacking. We don't have specific plans for these -- we
>> could set up something like the PTG ethercalc for teams to book the
>> space, or keep it open. Maybe half/half.
>> 
>> More details on all this as they come up.
>> Hoping to see you there !
>> 
> 




Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-01 Thread Jay Faulkner

> On Mar 1, 2017, at 11:15 AM, Pavlo Shchelokovskyy 
> <pshchelokovs...@mirantis.com> wrote:
> 
> Greetings ironicers,
> 
> I'd like to discuss the state of the gates in ironic and other related 
> projects for stable/mitaka branch.
> 
> Today while making some test patches to old branches I discovered the 
> following problems:
> 
> python-ironicclient/stable/mitaka
> All unit-test-like jobs are broken due to not handling upper constraints. 
> Because of this, a newer python-openstackclient than is actually supported gets 
> installed, which already lacks some modules python-ironicclient tries to 
> import (these were moved to osc-lib).
> I've proposed a patch that copies the current way of dealing with upper 
> constraints in tox envs [0]; gates are passing.
> 
> ironic/stable/mitaka
> While not actually being gated on, using the virtualbmc+ipmitool drivers is 
> broken. The reason is again related to upper constraints: an old version of 
> pyghmi (from the mitaka upper constraints) is installed alongside the most 
> recent virtualbmc (not in upper constraints), and those versions are 
> incompatible.
> This raises the question of whether we should propose virtualbmc for upper 
> constraints too, to avoid such problems in the future.
> Meanwhile, a quick fix would be to hard-code the supported virtualbmc version 
> in ironic's devstack plugin for the mitaka release.
> Although not strictly supported for the Mitaka release, I'd like that 
> functionality to be working on stable/mitaka gates to test the upcoming 
> removal of the *_ssh drivers.
> 
> I did not test other projects yet.
> 

I can attest jobs are broken for stable/mitaka on ironic-lib as well — our jobs 
build docs unconditionally, and ironic-lib had no docs in Mitaka.

-
Jay Faulkner
OSIC

> With all the above, the question is should we really fix the gates for the 
> mitaka branch now? According to OpenStack release page [1] the Mitaka release 
> will reach end-of-life on April 10, 2017.
> 
> [0] https://review.openstack.org/#/c/439742/
> [1] https://releases.openstack.org/#release-series
> 
> Cheers,
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
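
For context, the usual way later branches handle this, and roughly the pattern a 
fix like [0] copies, is to route pip through the branch's upper-constraints file 
in tox. The fragment below is a sketch, not the exact patch; the constraints URL 
shown is an assumption of the mitaka-era requirements repo layout:

```ini
# Illustrative tox.ini fragment: pin transitive dependencies such as
# python-openstackclient to the branch's upper constraints so unit-test
# envs install versions the stable branch actually supports.
[testenv]
install_command =
    pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/mitaka} {opts} {packages}
```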



Re: [openstack-dev] [ironic] New mascot design

2017-02-21 Thread Jay Faulkner

> On Feb 15, 2017, at 5:25 PM, Heidi Joy Tretheway <heidi...@openstack.org> 
> wrote:
> 
> Hi Ironic team, 
> 
> [TL;DR - we agree to Miles’ proposal for two images (one mascot, one logo) 
> for different contexts. We’re looking for any final feedback on the stylized 
> logo for use on the website, while the PixieBoots mascot remains yours for 
> swag, etc.]
> 
> I’m doing my best to reply to all questions on this thread as Lucas 
> requested: 
> http://lists.openstack.org/pipermail/openstack-dev/2017-February/112212.html. 
> Please feel free to drop a note here if I’ve missed anything. I’m summarizing 
> everything below:
> 
> Design issues: 
> —The bear looked angry in v1
> Answer: We removed the angry expression and replaced it with a neutral 
> expression
> —The metal horns hand gesture is culturally inappropriate (rude in some 
> countries) (from Ruby)
> Answer: We removed this feature and replaced it, based on the team’s input, 
> with the bear holding sticks (as drumsticks)
> —What about a goat-like horned bear? (Joanna)
> Answer: We removed horned references due to the cultural reference to 
> cuckolding as the Ironic team pointed out
> —The bear looks too much like a Russian meme with two hands up
> Answer: We have a face-forward bear, not a side-view bear, and only one hand 
> raised.
> —The bear’s face decoration looked too much like the band Kiss
> Answer: While this was intentional by the designer (for a more “metal” look), 
> we removed this feature and replaced it with basic bear face coloring. Some 
> folks (including Miles, Dmitry, Sam who added +1s) would like this back in. 
> We’re happy to do it, we just need the team to agree on one direction.
> —Don’t like the style (reminds Lucas of church windows)
> Answer: The design style for all of the mascots is set. It was shared in July 
> when we started this project, and unfortunately the feedback window regarding 
> design style has passed, as 95% of projects have now received their logos. 
> —Request to abbreviate the bear so it just shows head/top of torso/hand 
> holding drumsticks (from Dmitry)
> Answer: We can revisit that with the designers, however it doesn’t match the 
> rest of the logo set, which is either face or full body of each mascot. We’re 
> happy to try this, but as we’ve already been four rounds with the team, I’m 
> soliciting ANY final feedback on this version before we finalize it. 
> 
> 

I have a question about the drumsticks. I noticed today at the PTG that the 
CloudKitty mascot has a collar and license; that seems like a minimal man-made 
addition to that logo, similar to what actual drumsticks would be for an 
Ironic mascot. Is it completely out of the question to revisit adding those 
back to the logo? As this debate has shown, it’s difficult to convey the idea 
of a “metal” bear without some kind of musical tool/instrument involved.

Thanks,
Jay Faulkner

> Outstanding questions: 
> —Can we use PixieBoots in the future? 
> Answer: Absolutely. You’re welcome to produce vintage swag like shirts and 
> stickers with your original logo. Any team can use their old logo in this 
> way. Put another way, if you’d like to call PixieBoots your mascot, but refer 
> to the Ironic logo our illustrators have created as merely a logo, that’s 
> fine. And you don’t have to use this logo if you don’t want to. 
> —Can we use (1) A stylized logo, matching the guidelines, for use in 
> “official” settings and anywhere that it will be seen in other projects’ 
> logos; and (2) Our existing PixieBoots mascot, for use in “official” settings 
> (laptop stickers, T-shirts, chatbots, webcomic, etc.)? (suggested by Miles)
> Answer: Great suggestion! Yes. Together with the answer above, that’s our 
> intention—we’d like for you to be able to continue to use your beloved mascot 
> in your own way, and we’d like the Ironic team to select some logo that is 
> consistent with the rest of the community project logos, that we can use on 
> official channels such as the website.
> —What will we see at the PTG?
> Answer: Out of respect for the team, we did not print stickers or signage for 
> the Ironic team with any logo on it until the team reaches an agreement. 
> —What license will the mascot have?
> Answer: It will be CC-BY-ND, which the foundation uses for most of our 
> collateral. That allows you to use it (and we’ve provided ten versions to the 
> PTLs of the projects with finalized mascots so they have a good amount of 
> flexibility in logo use). It prevents, for example, a for-profit company from 
> inserting its commercial logo into an element of the community-use mascots 
> (which was a common request early in the design process). If you would like 
> to make a derivative work, we can definitely fi

Re: [openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-17 Thread Jay Faulkner
+2 to all proposed -- Vasyl and Mario have been great folks to work with, and 
I'm glad they're getting core access.


Thanks for all the work over the years, Devananda, I know I learned quite a few 
things working with you. Hopefully you'll be able to dedicate time to ironic 
again someday. o/


-Jay Faulkner



From: Dmitry Tantsur <dtant...@redhat.com>
Sent: Friday, February 17, 2017 1:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [ironic] End-of-Ocata core team updates

Hi all!

I'd like to propose a few changes based on the recent contributor activity.

I have two candidates that look very good and pass the formal barrier of 3
reviews a day on average [1].

First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats [2] are
high, he's doing a lot of extremely useful work around networking and CI.

Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he has
been doing some quality reviews for critical patches in the Ocata cycle.

Active cores and interested contributors, please respond with your +-1 to these
suggestions.

Unfortunately, there is one removal as well. Devananda, our team leader for
several cycles since the very beginning of the project, has not been active on
the project for some time [4]. I propose to (hopefully temporary) remove him
from the core team. Of course, when (look, I'm not even saying "if"!) he comes
back to active reviewing, I suggest we fast-forward him back. Thanks for
everything Deva, good luck with your current challenges!

Thanks,
Dmitry

[1] http://stackalytics.com/report/contribution/ironic-group/90
[2] http://stackalytics.com/?user_id=vsaienko=marks
[3] http://stackalytics.com/?user_id=mario-villaplana-j=marks
[4] http://stackalytics.com/?user_id=devananda=marks








Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-16 Thread Jay Faulkner

> On Feb 16, 2017, at 12:20 PM, Dan Prince  wrote:
> 
> On Thu, 2017-02-16 at 19:54 +, Jeremy Stanley wrote:
>> On 2017-02-16 14:09:53 -0500 (-0500), Dan Prince wrote:
>> [...]
>>> This isn't about aligning anything. It is about artistic control.
>>> The
>>> foundation wants to have icons their way playing the "community
>>> card"
>>> to make those who had icons they like conform. It is clear you buy
>>> into
>>> this.
>>> 
>>> Each team will have its own mascot anyway so does it really matter
>>> if
>>> there is some deviation in the mix? I think not. We have a mascot
>>> we
>>> like. It even fits the general requirements for OpenStack mascots
>>> so
>>> all we are arguing about here is artistic style really. I say let
>>> the
>>> developers have some leverage in this category... what is the harm
>>> really?
>> 
>> [...]
>> 
>> You're really reading far too much conspiracy into this. Keep in
>> mind that this was coming from the foundation's marketing team, and
>> while they've been very eager to interface with the community on
>> this effort they may have failed to some degree in explaining their
>> reasons (which as we all know leaves a vacuum where conspiracy
>> theories proliferate).
>> 
>> As I understand things there are some pages on the
>> foundation-controlled www.openstack.org site where they want to
>> refer to various projects/teams and having a set of icons
>> representing them was a desire of the designers for that site, to
>> make it more navigable and easier to digest. They place significant
>> importance on consistency and aesthetics, and while that doesn't
>> necessarily match my personal utilitarian nature I can at least
>> understand their position on the matter. Rather than just words or
>> meaningless symbols as icons they thought it would be compelling to
>> base those icons on mascots, but to maintain the aesthetic of the
>> site the specific renderings needed to follow some basic guidelines.
>> They could have picked mascots at random out of the aether to use
>> there, but instead wanted to solicit input from the teams whose work
>> these would represent so that they might have some additional
>> special meaning to the community at large.
>> 
>> As I said earlier in the thread, if you have existing art you like
>> then use that in your documentation, in the wiki, on team tee-shirts
>> you make, et cetera. The goal is not to take those away. This is a
>> simple need for the marketing team and foundation Web site designers
>> to have art they can use for their own purposes which meets their
>> relatively strict design aesthetics... and if that art is also
>> something the community wants to use, then all the better but it's
>> in no way mandatory. The foundation has no direct control over
>> community members' choices here, nor have they attempted to pretend
>> otherwise that I've seen.
> 
> And there is that rub again. There is implied along with this pressure
> to adopt the new logo. If you don't you'll get a blank space as a sort
> of punishment for going your own way. As Monty said directly... they
> want conformance and cohesion over team identity.
> 
> Read the initial replies on this thread. Almost every single person
> besides (Flavio and Monty) preferred to keep the original TripleO
> mascot. Same thing on the Ironic thread as far as I can tell (those
> devs almost all initially preferred the old mascot before they were
> talked out of it.). And then you wore them down. Keep asking the same
> question again and again and I guess over time people stop caring.
> 

FWIW, I think we all still prefer the older mascot, and will use it in our 
normal contexts. I changed my vote on the logo because I think we have more 
important things to bikeshed about than logo designs :).

-Jay

> Its all just silliness really. Why the foundation got involved in this
> mascot business to begin with and didn't just leave it to the
> individual projects.
> 
> And again. Not a great time to be talking about any of this. My sense
> of urgency is largely based on the fact that Emilien sent out an
> official team stance on this. I wasn't part of that... so apologies for
> being late to this conversation.
> 
> Dan 
> 
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-02-15 Thread Jay Faulkner

Comments inline, in bold


From: Heidi Joy Tretheway 
Sent: Wednesday, February 15, 2017 2:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [ironic] New mascot design

Hi Ironic team,

[TL;DR - we agree to Miles’ proposal for two images (one mascot, one logo) for 
different contexts. We’re looking for any final feedback on the stylized logo 
for use on the website, while the PixieBoots mascot remains yours for swag, 
etc.]

I’m doing my best to reply to all questions on this thread as Lucas requested: 
http://lists.openstack.org/pipermail/openstack-dev/2017-February/112212.html.
Please feel free to drop a note here if I’ve missed anything. I’m summarizing 
everything below:

Design issues:
—The bear looked angry in v1
Answer: We removed the angry expression and replaced it with a neutral 
expression
—The metal horns hand gesture is culturally inappropriate (rude in some 
countries) (from Ruby)
Answer: We removed this feature and replaced it, based on the team’s input, 
with the bear holding sticks (as drumsticks)
—What about a goat-like horned bear? (Joanna)
Answer: We removed horned references due to the cultural reference to 
cuckolding as the Ironic team pointed out
—The bear looks too much like a Russian meme with two hands up
Answer: We have a face-forward bear, not a side-view bear, and only one hand 
raised.
—The bear’s face decoration looked too much like the band Kiss
Answer: While this was intentional by the designer (for a more “metal” look), 
we removed this feature and replaced it with basic bear face coloring. Some 
folks (including Miles, Dmitry, Sam who added +1s) would like this back in. 
We’re happy to do it, we just need the team to agree on one direction.

I'm OK either way, but liked the "KISS" style face painting.

—Don’t like the style (reminds Lucas of church windows)
Answer: The design style for all of the mascots is set. It was shared in July 
when we started this project, and unfortunately the feedback window regarding 
design style has passed, as 95% of projects have now received their logos.
—Request to abbreviate the bear so it just shows head/top of torso/hand holding 
drumsticks (from Dmitry)
Answer: We can revisit that with the designers, however it doesn’t match the 
rest of the logo set, which is either face or full body of each mascot. We’re 
happy to try this, but as we’ve already been four rounds with the team, I’m 
soliciting ANY final feedback on this version before we finalize it.

I'm OK with this. I think it'll be the closest we'll get to something everyone 
can agree on. +1 from me.


Outstanding questions:
—Can we use PixieBoots in the future?
Answer: Absolutely. You’re welcome to produce vintage swag like shirts and 
stickers with your original logo. Any team can use their old logo in this way. 
Put another way, if you’d like to call PixieBoots your mascot, but refer to the 
Ironic logo our illustrators have created as merely a logo, that’s fine. And 
you don’t have to use this logo if you don’t want to.
—Can we use (1) A stylized logo, matching the guidelines, for use in “official” 
settings and anywhere that it will be seen in other projects’ logos; and (2) 
Our existing PixieBoots mascot, for use in “official” settings (laptop 
stickers, T-shirts, chatbots, webcomic, etc.)? (suggested by Miles)
Answer: Great suggestion! Yes. Together with the answer above, that’s our 
intention—we’d like for you to be able to continue to use your beloved mascot 
in your own way, and we’d like the Ironic team to select some logo that is 
consistent with the rest of the community project logos, that we can use on 
official channels such as the website.
—What will we see at the PTG?
Answer: Out of respect for the team, we did not print stickers or signage for 
the Ironic team with any logo on it until the team reaches an agreement.
—What license will the mascot have?
Answer: It will be CC-BY-ND, which the foundation uses for most of our 
collateral. That allows you to use it (and we’ve provided ten versions to the 
PTLs of the projects with finalized mascots so they have a good amount of 
flexibility in logo use). It prevents, for example, a for-profit company from 
inserting its commercial logo into an element of the community-use mascots 
(which was a common request early in the design process). If you would like to 
make a derivative work, we can definitely find a way to compromise, just send 
me a note.
—What does the foundation want to achieve with this? (from Lucas)
Answer: We’re trying to communicate, by way of design, that the projects are 
cohesive and connected, while still preserving (via a team-selected mascot) the 
team’s individual identity. We’d also like to help those projects that don’t 
have design resources present themselves on an even footing with the others. 
The majority of projects didn’t have their own 

Re: [openstack-dev] [oslo] pbr and warnerrors status

2017-02-08 Thread Jay Faulkner

> On Feb 8, 2017, at 8:15 AM, Ben Nemec <openst...@nemebean.com> wrote:
> 
> 
> 
> On 02/08/2017 09:41 AM, Doug Hellmann wrote:
>> Excerpts from Ben Nemec's message of 2017-02-08 09:20:31 -0600:
>>> 
>>> On 02/08/2017 01:53 AM, Andreas Jaeger wrote:
>>>> On 2017-02-08 00:56, Ian Cordasco  wrote:
>>>>> 
>>>>> 
>>>>> On Feb 7, 2017 5:47 PM, "Joshua Harlow" <harlo...@fastmail.com
>>>>> <mailto:harlo...@fastmail.com>> wrote:
>>>>> 
>>>>>Likely just never pulled the trigger.
>>>>> 
>>>>>Seems like we should pull it though.
>>>>> 
>>>>> 
>>>>> 
>>>>> Will have to wait until Pike given the library release freeze
>>>> 
>>>> I've pushed a patch to release pbr so that we won't forget about it:
>>>> 
>>>> https://review.openstack.org/430618
>>>> 
>>>> Andreas
>>>> 
>>> 
>>> Great, thanks everybody!
>>> 
>> 
>> It would be useful for someone to look at the pbr change and verify that
>> releasing it as-is won't break the builds for a bunch of projects by
>> turning the flag back on. If it will, we're going to want to provide a
>> way to stage the roll out. If we change the name of the flag, for
>> example, we could release pbr for support for checking warnings, and
>> then projects could enable that when they have time and with the fixes
>> to the docs needed to make it work properly.
> 
> I think the only way to do this would be to run a docs build in every project 
> with warnerrors turned on, using the unreleased pbr.  Given the amount of 
> time it's been broken, there's a good chance some issues have snuck in (they 
> had in diskimage-builder).  So maybe deprecating the old option name and 
> requiring another explicit opt-in from everyone is the safest way to go.
> 
> Thoughts on a new name?  fatalwarnings maybe?
> 

IMO, I’d suggest skipping this and just fixing the broken attribute. Projects 
that are impacted by the change simply need to merge a one-line change to set 
warnerrors=false. FWIW, you can actually run a local docs build, find and 
resolve warnings without the PBR change. It seems overkill to me to change the 
term since doing so will be super confusing to anyone who hasn’t read this 
mailing list thread.

If there’s a significant enough concern, a change could be pushed to set 
warnerrors=false on the projects that are concerned before this release is made.
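
For illustration, the opt-out described above is a one-line setting in a
project’s setup.cfg. This is a sketch based on pbr’s historical convention of
reading the flag from the [pbr] section; check your project’s existing file
for the exact layout:

```ini
# setup.cfg -- illustrative opt-out; pbr has historically read the
# warnerrors flag from the [pbr] section of setup.cfg.
[pbr]
warnerrors = false
```

Flipping the value back to true re-enables fatal doc warnings once a project’s
docs build cleanly.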

-
Jay Faulkner
OSIC

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-02-02 Thread Jay Faulkner
https://en.wikipedia.org/wiki/Sign_of_the_horns came up in IRC, as the sign the 
bear is making. Obviously to me, I read it as the heavy metal gesture. 
Apparently it is offensive in some cultures, so I change my vote to -1, since I 
don’t want to offend folks in other parts of the world :).

-Jay

> On Feb 1, 2017, at 12:38 PM, Jay Faulkner <j...@jvf.cc> wrote:
> 
> Of the options presented, I think the new 3.0 version most brings to mind a 
> rocking bear.  It’s still tough to be OK with a new logo, given that pixie 
> boots is beloved by our team partially because Lucas took the time to make us 
> one — but it seems like not accepting a new logo created by the foundation 
> would lead to Ironic getting less marketing and resources, so I’m not keen to 
> go down that path. With that in mind, I’m +1 to version 3.0 of the bear.
> 
> -Jay
> 
>> On Feb 1, 2017, at 12:05 PM, Heidi Joy Tretheway <heidi...@openstack.org> 
>> wrote:
>> 
>> Hi Ironic team,
>> 
>> I’m sorry our second proposal again missed the mark. It wasn’t the 
>> illustrator’s intention to mimic the Bear Surprise painting that gained 
>> popularity in Russia as a meme. Our illustrator created a face-forward bear 
>> with paws shaped as if it had index and ring “fingers" down, like a hand 
>> gesture popular at a heavy metal concert. It was not meant to mimic the 
>> painting of a side-facing bear with paws and all “fingers" up to surprise. 
>> That said, once it’s seen, it’s tough to expect the community to un-see it, 
>> so we’ll take another approach.
>> 
>> The issue with your old mascot is twofold: it doesn’t fit the illustration 
>> style for the entire suite of 60+ mascots, and it contains a human-made 
>> element (drumsticks). As part of our overall guidelines, human-made objects 
>> and symbols were not allowed, and we applied these standards to all projects.
>> 
>> Your team told us you want a heavy metal bear, so we used the Kiss 
>> band-style makeup and the hand gesture to suggest metal without using an 
>> instrument or symbol. We tried to mimic your original logo’s expression. 
>> After releasing v1, we listened to your team’s comment that the first 
>> version was too angry looking, so you’ll see a range of expressions from 
>> fierce to neutral to happy. 
>> 
>> 
>> 
>> I’d love to find a compromise with your team that will be in keeping with 
>> the style of the project logo suite. I’ll watch your ML for additional 
>> concerns about this proposed v3:   
>> 
>> 
>> Our illustration team’s next step is to parse the community feedback from 
>> the Ironic team (note that there is a substantial amount of conflicting 
>> feedback from 21 members of your team) and determine if we have majority 
>> support for a single direction. 
>> 
>> While new project logos are optional, virtually every project asked to be 
>> represented in our family of logos. Only logos in this new style will be 
>> used on the project navigator and in other promotional ways. 
>> 
>> Feel free to join me for a quick chat tomorrow at 9:30 a.m. Pacific:
>> Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/5038169769
>> Or iPhone one-tap (US Toll): +16465588656,5038169769# or 
>> +14086380968,5038169769#
>> Or Telephone: Dial: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll) 
>> Meeting ID: 503 816 9769
>> International numbers available: 
>> https://zoom.us/zoomconference?m=E5Gcj6WHnrCsWmjQRQr7KFsXkP9nAIaP
>> 
>> 
>> 
>> 
>>  
>> Heidi Joy Tretheway
>> Senior Marketing Manager, OpenStack Foundation
>> 503 816 9769 | Skype: heidi.tretheway
>> 
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-02-01 Thread Jay Faulkner
Of the options presented, I think the new 3.0 version most brings to mind a 
rocking bear.  It’s still tough to be OK with a new logo, given that pixie 
boots is beloved by our team partially because Lucas took the time to make us 
one — but it seems like not accepting a new logo created by the foundation 
would lead to Ironic getting less marketing and resources, so I’m not keen to 
go down that path. With that in mind, I’m +1 to version 3.0 of the bear.

-Jay

> On Feb 1, 2017, at 12:05 PM, Heidi Joy Tretheway  
> wrote:
> 
> Hi Ironic team,
> 
> I’m sorry our second proposal again missed the mark. It wasn’t the 
> illustrator’s intention to mimic the Bear Surprise painting that gained 
> popularity in Russia as a meme. Our illustrator created a face-forward bear 
> with paws shaped as if it had index and ring “fingers" down, like a hand 
> gesture popular at a heavy metal concert. It was not meant to mimic the 
> painting of a side-facing bear with paws and all “fingers" up to surprise. 
> That said, once it’s seen, it’s tough to expect the community to un-see it, 
> so we’ll take another approach.
>  
> The issue with your old mascot is twofold: it doesn’t fit the illustration 
> style for the entire suite of 60+ mascots, and it contains a human-made 
> element (drumsticks). As part of our overall guidelines, human-made objects 
> and symbols were not allowed, and we applied these standards to all projects.
> 
> Your team told us you want a heavy metal bear, so we used the Kiss band-style 
> makeup and the hand gesture to suggest metal without using an instrument or 
> symbol. We tried to mimic your original logo’s expression. After releasing 
> v1, we listened to your team’s comment that the first version was too angry 
> looking, so you’ll see a range of expressions from fierce to neutral to 
> happy. 
> 
> 
> 
> I’d love to find a compromise with your team that will be in keeping with the 
> style of the project logo suite. I’ll watch your ML for additional concerns 
> about this proposed v3:   
> 
>  
> Our illustration team’s next step is to parse the community feedback from the 
> Ironic team (note that there is a substantial amount of conflicting feedback 
> from 21 members of your team) and determine if we have majority support for a 
> single direction. 
> 
> While new project logos are optional, virtually every project asked to be 
> represented in our family of logos. Only logos in this new style will be used 
> on the project navigator and in other promotional ways. 
> 
> Feel free to join me for a quick chat tomorrow at 9:30 a.m. Pacific:
> Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/5038169769
> Or iPhone one-tap (US Toll): +16465588656,5038169769# or 
> +14086380968,5038169769#
> Or Telephone: Dial: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll) 
> Meeting ID: 503 816 9769
> International numbers available: 
> https://zoom.us/zoomconference?m=E5Gcj6WHnrCsWmjQRQr7KFsXkP9nAIaP
>  
> 
> 
> 
>   
> Heidi Joy Tretheway
> Senior Marketing Manager, OpenStack Foundation
> 503 816 9769 | Skype: heidi.tretheway
>   
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-02-01 Thread Jay Faulkner
I concur with most of the other comments on the thread. I have a strong 
preference for our existing mascot, and don’t think the new one is an 
improvement.

Thanks,
Jay

> On Jan 31, 2017, at 12:49 PM, Jim Rollenhagen  wrote:
> 
> Hey ironic-ers,
> 
> The foundation has passed along a new version of our mascot (attached) to us, 
> and would like your feedback on it.
> 
> They're hoping to have all mascot-related things ready in time for the PTG, 
> so please do send your thoughts quickly, if you have them. :)
> 
> // jim
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][ptl] PTL Candidacy for Pike

2017-01-20 Thread Jay Faulkner
Hi all,

I am thrilled to nominate myself as a candidate for ironic PTL for Pike cycle.
Most of you should know me by now — I’m Jay Faulkner, aka “JayF” on freenode
IRC. I have been in the tech industry for 10 years, starting as a systems
administrator and working my way up the ladder. Since 2014, my primary focus
has been working on bare metal clouds, culminating in my position today as a
full-time ironic developer.

Many PTL nominees post an agenda with their nomination email — I won’t do that
here. The project has been run very well, as evidenced by us achieving almost
75% of our stated goals by the end of the last two cycles (newton and ocata).
If elected PTL, I would try to maintain the productive environment we've
built, and work as a facilitator, organizer, and servant, working towards the
goals the community will set for Pike.

My experience with ironic has been varied, starting with my work on Rackspace
OnMetal, the first public cloud offering including ironic, a short stint as
manager of that development team, and now as full-time upstream developer.
These perspectives have provided me with the communication skills and
technical context needed to be PTL. Ironic lives on the border between
software and hardware, which allows me to use my operational experience toward
designing new features. Some tangible examples of this include the introduction
of both cleaning and the initial agent deploy driver.
Furthermore, I learned how to empathize with businesses trying to integrate
with OpenStack during my tenure as a manager.

One of the reasons I wanted to run for PTL is to keep as many ironic
contributors doing what they do best: writing code. In my time on the project,
I've found I can contribute best by taking care of issues that can get in the
way of progress such as fixing gate breakages, working to help and mentor new
contributors, triaging bugs, and reviewing code. Another example of my
initiative on these types of issues is my work on cross-project items, such as
serving as documentation liaison and maintaining our CI jobs. By taking on these
items and more as PTL, I hope to act as a force multiplier for my
ironic colleagues.

Working full time on OpenStack Ironic, and open source in general, has been
one of the most rewarding times of my career. The community around ironic has
always been friendly, and I view my fellow contributors as co-workers and
friends. I cherish the time I’ve had collaborating with you all, and hope to
be given the opportunity to continue to work with everyone as PTL during Pike.
Thank you for your consideration.

Sincerely,
Jay Faulkner

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] new os-api-ref warning, changes may be needed

2017-01-20 Thread Jay Faulkner

> On Jan 20, 2017, at 12:55 PM, Sean Dague <s...@dague.net> wrote:
> 
> On 01/20/2017 03:21 PM, Jay Faulkner wrote:
>> On Jan 20, 2017, at 9:41 AM, Sean Dague <s...@dague.net> wrote:
>>> 
>>> We released a new os-api-ref yesterday which includes a few
>>> enhancements, including the anchor links on the website working as
>>> expected now.
>>> 
>>> One of the things in there is a new warning when a parameter is used,
>>> and is not defined.
>>> 
>>> https://github.com/openstack/keystone/blob/bc8a145de14e455a2a73824e8a84d92ac27aae1c/api-ref/source/v2-ext/ksec2-admin.inc#L31
>>> - as an example
>>> 
>>> Which will generate an issue such as:
>>> 
>>> Warning, treated as error:
>>> /home/sdague/code/openstack/keystone/api-ref/source/api-ref/source/v2-ext/ksec2-admin.inc:112
>>> .rst:: WARNING: No path parameter ``userId`` found in rest_parameter stanza.
>>> 
>>> 
>> 
>> While I understand these are not desirable, is there a better way to 
>> communicate up-front that a potential gate-breaking change is coming down 
>> the pipe? This change has impacted the ironic gate 
>> (https://bugs.launchpad.net/ironic/+bug/1658187), and we’re working now to 
>> resolve it, but a heads-up a few days in advance could’ve prevented a bunch 
>> of patches failing our api-ref jobs.
> 
> This is my bad. When I looked at the change list and saw this new thing,
> it honestly didn't occur to me that there would be much breakage because
> of the way we did the audit on the nova side.
> 
> I made a note of giving 2 days warning on items like this here -
> https://review.openstack.org/#/c/423517/
> 

Perfect, that’s what I was hoping for. It wasn’t very disruptive because of 
your email + it being caught early, but it’s not fun to waste resources, 
especially with Ironic jobs being so heavy. Thanks for the update, I appreciate 
it.

-Jay


>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] new os-api-ref warning, changes may be needed

2017-01-20 Thread Jay Faulkner
On Jan 20, 2017, at 9:41 AM, Sean Dague <s...@dague.net> wrote:
> 
> We released a new os-api-ref yesterday which includes a few
> enhancements, including the anchor links on the website working as
> expected now.
> 
> One of the things in there is a new warning when a parameter is used,
> and is not defined.
> 
> https://github.com/openstack/keystone/blob/bc8a145de14e455a2a73824e8a84d92ac27aae1c/api-ref/source/v2-ext/ksec2-admin.inc#L31
> - as an example
> 
> Which will generate an issue such as:
> 
> Warning, treated as error:
> /home/sdague/code/openstack/keystone/api-ref/source/api-ref/source/v2-ext/ksec2-admin.inc:112
> .rst:: WARNING: No path parameter ``userId`` found in rest_parameter stanza.
> 
> 

While I understand these are not desirable, is there a better way to 
communicate up-front that a potential gate-breaking change is coming down the 
pipe? This change has impacted the ironic gate 
(https://bugs.launchpad.net/ironic/+bug/1658187), and we’re working now to 
resolve it, but a heads-up a few days in advance could’ve prevented a bunch of 
patches failing our api-ref jobs.
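
For anyone hitting the same class of warning, the usual fix is to define the
referenced parameter in the api-ref’s parameters file. A hedged sketch follows;
the key name mirrors the warning above, but the description and type are
illustrative, not keystone’s actual entries:

```yaml
# parameters.yaml -- illustrative entry; os-api-ref resolves each
# "- name: key" line of a rest_parameters stanza against this file.
userId:
  description: |
    The ID of the user.
  in: path
  required: true
  type: string
```

The .inc file then references it as ``- userId: userId`` inside its
``.. rest_parameters:: parameters.yaml`` stanza.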

Thanks,
Jay Faulkner

> Long term, these really all need to be fixed, because this is specifying
> a parameter and giving the user no expectation about what it is and how
> it is used.
> 
> In the short term, if this is too hard for teams to address, remove the
> '-W' from the sphinx_build line in your api-ref section, then work
> through the warnings, and make warnings enforcing when done.
> 
> I've seen fails on keystone. But there may be fails other places as well.
> 
> Keystone fix is here - https://review.openstack.org/#/c/423387 as an
> example of what might be needed to move forward for projects.
> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [infra] Nested KVM + the gate

2017-01-17 Thread Jay Faulkner
Hi all,

Back in late October, Vasyl wrote support for devstack to auto detect, and when 
possible, use kvm to power Ironic gate jobs 
(0036d83b330d98e64d656b156001dd2209ab1903). This has lowered some job time when 
it works, but has caused failures — how many? It’s hard to quantify as the log 
messages that show the error don’t appear to be indexed by Elasticsearch. It’s 
something seen often enough that the issue has become a permanent staple on our 
gate whiteboard, and doesn’t appear to be decreasing in quantity.

I pushed up a patch, https://review.openstack.org/#/c/421581, which keeps the 
auto detection behavior, but defaults devstack to use qemu emulation instead of 
kvm.

I have two questions:
1) Is there any way I’m not aware of to quantify the number of failures 
this is causing? The key log message, "KVM: entry failed, hardware error 0x0”, 
shows up in logs/libvirt/qemu/node-*.txt.gz.
2) Are these failures avoidable or visible in any way?

If we can’t fix these failures, in my opinion, we have to make a change to 
avoid using nested KVM altogether. Lower reliability for our jobs is not worth 
a small decrease in job run time.
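
Until the messages are indexed, a rough local count is possible by grepping
downloaded job logs for the signature above. A sketch, assuming the
logs/libvirt/qemu/node-*.txt.gz layout mentioned earlier (GNU findutils):

```shell
#!/bin/sh
# Count job logs containing the nested-KVM failure signature.
# Usage: ./count-kvm-failures.sh [logs-directory]  -- layout is illustrative.
PATTERN='KVM: entry failed, hardware error 0x0'
find "${1:-.}" -name 'node-*.txt.gz' -print0 \
  | xargs -0 -r zgrep -l "$PATTERN" \
  | wc -l
```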

Thanks,
Jay Faulkner
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [nova] Ironic virt driver resources reporting

2017-01-03 Thread Jay Faulkner
Hey Vdrok, some comments inline.

> On Dec 30, 2016, at 8:40 AM, Vladyslav Drok <vd...@mirantis.com> wrote:
> 
> Hi all!
> 
> There is a long standing problem of resources reporting in ironic virt 
> driver. It's described in a couple of bugs I've found - [0], [1]. Switching 
> to placement API will make things better, but still there are some problems 
> there. For example, there are cases when ironic needs to say "this node is 
> not available", and it reports the vcpus=memory_mb=local_gb as 0 in this 
> case. Placement API does not allow 0s, so in [2] it is proposed to remove 
> inventory records in this case.
> 
> But the whole logic here [3] seems not that obvious to me, so I'd like to 
> discuss when do we need to report 0s to placement API. I'm thinking about the 
> following (copy-pasted from my comment on [2]):
> 
>   • If there is an instance_uuid on the node, no matter what 
> provision/power state it's in, consider the resources as used. In case it's 
> an orphan, an admin will need to take some manual action anyway.

This won’t work, because of https://bugs.launchpad.net/nova/+bug/1503453 — 
basically the Nova resource tracker checks, decides we’re lying about the node 
being used by an instance because Nova’s records don’t show one, and re-adds 
the capacity to the pool.

Generally I agree with Jay Pipes’ comments — we should have available resources 
for nodes that can be scheduled to, used resources for nodes with a nova 
instance, and report no resources whatsoever for nodes in an unschedulable 
state, such as cleaning, enroll, etc.
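
The rule above can be sketched as a small decision function. This is only an
illustration of the policy, not ironic’s or nova’s actual code; the state names
and the (available, used) tuple shape are assumptions:

```python
# Sketch of the reporting policy described above: what inventory an ironic
# node should expose to placement. Illustrative only, not the real driver.
UNSCHEDULABLE_STATES = {"enroll", "cleaning", "clean wait", "deploying"}

def reported_inventory(node):
    """Return (available, used) resource tuples for a node.

    - node.instance_uuid set        -> all resources reported as used
    - maintenance/unschedulable     -> report no inventory at all
    - otherwise                     -> resources fully available
    """
    resources = (node.vcpus, node.memory_mb, node.local_gb)
    if node.instance_uuid:
        return resources, resources       # capacity fully consumed
    if node.maintenance or node.provision_state in UNSCHEDULABLE_STATES:
        return None, None                 # drop inventory records entirely
    return resources, (0, 0, 0)           # free for scheduling
```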

-
Jay Faulkner
OSIC

>   • If there is no instance_uuid and a node is in cleaning/clean wait 
> after tear down, it is a part of normal node lifecycle, report all resources 
> as used. This means we need a way to determine if it's a manual or automated 
> clean.
>   • If there is no instance_uuid, and a node:
>   • has a bad power state or
>   • is in maintenance
>   • or actually in any other case, consider it unavailable, 
> report available resources = used resources = 0. Provision state does not 
> matter in this logic, all cases that we wanted to take into account are 
> described in the first two bullets.
> 
> Any thoughts?
> 
> [0]. https://bugs.launchpad.net/nova/+bug/1402658
> [1]. https://bugs.launchpad.net/nova/+bug/1637449
> [2]. https://review.openstack.org/414214
> [3]. 
> https://github.com/openstack/nova/blob/1506c36b4446f6ba1487a2d68e4b23cb3fca44cb/nova/virt/ironic/driver.py#L262
> 
> Happy holidays to everyone!
> -Vlad
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Rechecking, gate breakages, the whiteboard, and you

2016-12-13 Thread Jay Faulkner
My client munged the link :(.

http://bit.ly/ironic-whiteboard is a correct link.

Thanks,
Jay

> On Dec 13, 2016, at 8:08 AM, Jay Faulkner <j...@jvf.cc> wrote:
> 
> Hi all,
> 
> I’ve noticed on several patches over the last few weeks, during various gate 
> outages, that some folks are blindly rechecking their patches even when the 
> gate is hard broken. Just a reminder that if you see a gate failure, and you 
> aren’t sure it’s related to your change, to please check the ironic 
> whiteboard (http://bit.ly/ironic-whiteboard — as listed in /topic in IRC). The 
> whiteboard will tell you if we expect CI to be broken, and if so, what’s 
> going on.
> 
> Blindly and repeatedly rechecking patches just uses up CI resources that 
> could be used to land patches that aren’t being held up by the broken gate, 
> so please avoid doing that.
> 
> Thanks,
> Jay Faulkner
> OSIC
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Rechecking, gate breakages, the whiteboard, and you

2016-12-13 Thread Jay Faulkner
Hi all,

I’ve noticed on several patches over the last few weeks, during various gate 
outages, that some folks are blindly rechecking their patches even when the 
gate is hard broken. Just a reminder that if you see a gate failure, and you 
aren’t sure it’s related to your change, to please check the ironic whiteboard 
(http://bit.ly/ironic-whiteboard — as listed in /topic in IRC). The whiteboard 
will tell you if we expect CI to be broken, and if so, what’s going on.

Blindly and repeatedly rechecking patches just uses up CI resources that could 
be used to land patches that aren’t being held up by the broken gate, so please 
avoid doing that.

Thanks,
Jay Faulkner
OSIC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] RFC: deprecating "set IPMI credentials" feature in ironic-inspector

2016-12-13 Thread Jay Faulkner

> On Dec 13, 2016, at 4:40 AM, Dmitry Tantsur  wrote:
> 
> Hi folks!
> 
> Since nearly its beginning, ironic-inspector has had a controversial feature: 
> we allow a user to request changing IPMI credentials of the node after 
> introspection. The new credentials are passed back from inspector to the 
> ramdisk, and the ramdisk calls "ipmitool" to set them.
> 
> Now I realize that the feature has quite a few substantial drawbacks:
> 1. It's a special case in ironic-inspector. It's the only thing that runs 
> after introspection, and it requires special state machine states and actions.
> 2. There is no way to signal errors back from the ramdisk. We can only poll 
> the nodes to see if the new credentials match.
> 3. This is the only place where ironic-inspector modifies physical nodes (as 
> opposed to modifying the ironic database). This feels like a violation of our 
> goal.
> 4. It depends on ipmitool actually being able to update credentials from 
> within the node without knowing the current ones. I'm not sure how widely 
> it's supported. I'm pretty sure some hardware does not support it.
> 5. It's not and never will be tested by any CI. It's not possible to test on 
> VMs at all.
> 6. Due to its dangerous nature, this feature is hidden behind a configuration 
> option, and is disabled by default.
> 
> The upside I see is that it may play nicely with node autodiscovery. I'm not 
> sure they work together today, though. We didn't end up using this feature in 
> our products, and I don't recall being approached by people using it.
> 
> I suggest deprecating this feature and removing it in Pike. The rough plan is 
> as follows:
> 
> I. Ocata:
> * Deprecate the configuration option enabling this feature.
> * Create an API version that returns HTTP 400 when this feature is requested.
> * Deprecate the associated arguments in CLI.
> * Issue a deprecating warning in IPA when this feature is used.
> 
> II. Pike:
> * Remove the feature from IPA and ironic-inspector.
> * Remove the feature from CLI.
> 
> Please respond with your comments and/or objects to this thread. I'll soon 
> prepare a patch on which you'll also be able to comment.
> 

I agree with deprecating this version of this feature. I do see the potential 
for credential rotation as a thing Ironic could handle in the future, but it 
would need to be handled in a periodic fashion vs being done once at startup.
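As a hedged illustration of the "periodic fashion" suggested above (not the actual IPA or ironic-inspector implementation — the task, interval, and helper names here are hypothetical), a rotation check could be driven by a simple loop that runs until the service is stopped:

```python
import threading


def run_periodic(task, interval, stop_event):
    """Call task() every `interval` seconds until stop_event is set.

    stop_event.wait() doubles as the sleep: it returns False on timeout
    (run the task again) and True once the event is set (shut down).
    """
    while not stop_event.wait(interval):
        try:
            task()
        except Exception:
            # A real service would log and retry rather than die here.
            pass


# Usage sketch: rotate_credentials would be the hypothetical rotation check.
# stop = threading.Event()
# threading.Thread(target=run_periodic,
#                  args=(rotate_credentials, 3600, stop)).start()
```

In a real deployment this would more likely use oslo.service's periodic task support rather than a raw thread, but the shape of the problem — recurring checks instead of a one-shot action at startup — is the same.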

I’m +2 to what’s proposed.

-Jay

> Dmitry.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-12-02 Thread Jay Faulkner

> On Dec 2, 2016, at 3:44 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:
> 
> On 11/28/2016 04:46 PM, Jay Faulkner wrote:
>> 
>>> On Nov 28, 2016, at 7:36 AM, Yolanda Robla Mota <yrobl...@redhat.com> wrote:
>>> 
>>> Hi, good afternoon
>>> 
>>> I wanted to start an email thread about how to properly setup kernel 
>>> parameters on local boot, for our overcloud images on TripleO.
>>> These parameters may vary depending on the needs of our end users, and even 
>>> can be different ( for different roles ) per deployment. As an example, we 
>>> need it for:
>>> - enable FIPS kernel in terms of security 
>>> (https://bugs.launchpad.net/tripleo/+bug/1640235)
>>> - enable functionality for DPDK/SR-IOV 
>>> (https://review.openstack.org/#/c/331564/)
>>> - enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
>>> - etc..
>>> 
>>> So far, the solutions we got were on several directions:
>>> 
>>> 1. Update the golden overcloud-full image with virt-customize, modifying 
>>> /etc/default/grub settings according to our needs: this is a manual 
>>> process, not really driven by TripleO. End users will want to avoid manual 
>>> steps as much as possible. Also if we announce that OpenStack ships 
>>> features in TripleO like DPDK, SR-IOV... doesn't make sense to tell end 
>>> users that if they want to consume that feature, they need to do manual 
>>> updates on the image. It shall be natively supported, or configurable per 
>>> TripleO environments.
>>> 
>>> 2. Create our own images using diskimage-builder and custom elements: in 
>>> this case, we have the problem that the partners will lose support, as 
>>> building their own images is good for upstream, but not accepted into the 
>>> OSP environment. Also the combination of images needed can be huge, that 
>>> can be a blocker for QA.
>>> 
>>> 3. Add Ironic support for it. Images can be uploaded to glance, and some 
>>> properties can be set on metadata, like a json with kernel parameters. 
>>> Ironic will modify these kernel parameters when deploying the image (in a 
>>> similar way that when it installs bootloader, or generates partitions).
>>> 
>> 
>> This has been proposed before in ironic-specs 
>> (https://review.openstack.org/#/c/331564/) and was rejected, as it would 
>> require Ironic to reach out and modify image contents, which traditionally 
>> has been considered out of scope for Ironic. I would personally recommend 
>> #4, as post-boot automation is the safest way to configure node-specific 
>> options inside an image.
> 
> I'm still a bit divided about our decision back then.. On one hand, this does 
> seem somewhat out of scope. On the other, I quite understand why reboot is 
> suboptimal. I wonder if the ongoing deploy steps work will actually solve it 
> by allowing hardware managers to provide additional deploy steps.
> 

I’m not really of two minds on this at all. Modifying the filesystem directly 
would expose Ironic to a whole new world of complexity, including security 
issues, dealing with multiple incompatible filesystems, and the like. I’m 
obviously OK if anyone wants to use a customization point to do stuff that’d 
typically be outside of Ironic’s scope, but I don’t think this is a use case we 
should encourage.

The realm of configuring a machine beyond laying down the image has to lie in 
configuration management software, or else we open up to a huge scope increase 
and get away from our core mission.
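For concreteness, option #4 in configuration-management terms usually amounts to something like the following sketch. This is a hypothetical helper (the parameter, file path, and grub2 command are examples for a grub2-based image, not anything Ironic ships):

```shell
# Idempotently prepend a kernel parameter to GRUB_CMDLINE_LINUX in the
# given grub defaults file (e.g. /etc/default/grub).
add_kernel_param() {
    param="$1"
    grub_file="$2"
    grep -q "$param" "$grub_file" || \
        sed -i "s/^GRUB_CMDLINE_LINUX=\"/GRUB_CMDLINE_LINUX=\"$param /" "$grub_file"
}

# After editing, the grub config must be regenerated and the node
# rebooted -- which is exactly the reboot cost discussed in this thread:
#   grub2-mkconfig -o /boot/grub2/grub.cfg && reboot
```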

-Jay


> Yolanda, you may want to check the spec 
> https://review.openstack.org/#/c/382091/ as it lays the foundation for the 
> deploy steps idea.
> 
>> 
>> Thanks,
>> Jay Faulkner
>> OSIC
>> 
>> 
>>> 4. Configure it post-deployment: there can be some puppet element that 
>>> updates kernel parameters. But it will need a node reboot to be applied, 
>>> and it's very far from being optimal and acceptable for the end users. 
>>> Reboots are slow, they can be a problem depending on the number of 
>>> nodes/hardware, and also the timing of reboot shall be totally controlled 
>>> (after all puppet has been applied properly).
>>> 
>>> 
>>> In the first three cases, we also hit the problem that TripleO only accepts 
>>> one single overcloud image for all deployments - there is no way to 
>>> instruct TripleO to upload and use several images, depending on the node 
>>> type (although Ironic supports it). Also, we are worried about upgrade 
>>> paths if we do image customizations.

Re: [openstack-dev] [tripleo] [ironic] Need to update kernel parameters on local boot

2016-11-28 Thread Jay Faulkner

> On Nov 28, 2016, at 7:36 AM, Yolanda Robla Mota <yrobl...@redhat.com> wrote:
> 
> Hi, good afternoon
> 
> I wanted to start an email thread about how to properly setup kernel 
> parameters on local boot, for our overcloud images on TripleO.
> These parameters may vary depending on the needs of our end users, and even 
> can be different ( for different roles ) per deployment. As an example, we 
> need it for:
> - enable FIPS kernel in terms of security 
> (https://bugs.launchpad.net/tripleo/+bug/1640235)
> - enable functionality for DPDK/SR-IOV 
> (https://review.openstack.org/#/c/331564/)
> - enable rd.iscsi.firmware=1 flag (this for the ramdisk image)
> - etc..
> 
> So far, the solutions we got were on several directions:
> 
> 1. Update the golden overcloud-full image with virt-customize, modifying 
> /etc/default/grub settings according to our needs: this is a manual process, 
> not really driven by TripleO. End users will want to avoid manual steps as 
> much as possible. Also if we announce that OpenStack ships features in 
> TripleO like DPDK, SR-IOV... doesn't make sense to tell end users that if 
> they want to consume that feature, they need to do manual updates on the 
> image. It shall be natively supported, or configurable per TripleO 
> environments.
> 
> 2. Create our own images using diskimage-builder and custom elements: in this 
> case, we have the problem that the partners will lose support, as building 
> their own images is good for upstream, but not accepted into the OSP 
> environment. Also the combination of images needed can be huge, that can be a 
> blocker for QA.
> 
> 3. Add Ironic support for it. Images can be uploaded to glance, and some 
> properties can be set on metadata, like a json with kernel parameters. Ironic 
> will modify these kernel parameters when deploying the image (in a similar 
> way that when it installs bootloader, or generates partitions).
> 

This has been proposed before in ironic-specs 
(https://review.openstack.org/#/c/331564/) and was rejected, as it would 
require Ironic to reach out and modify image contents, which traditionally has 
been considered out of scope for Ironic. I would personally recommend #4, as 
post-boot automation is the safest way to configure node-specific options 
inside an image.

Thanks,
Jay Faulkner
OSIC


> 4. Configure it post-deployment: there can be some puppet element that 
> updates kernel parameters. But it will need a node reboot to be applied, and 
> it's very far from being optimal and acceptable for the end users. Reboots 
> are slow, they can be a problem depending on the number of nodes/hardware, 
> and also the timing of reboot shall be totally controlled (after all puppet 
> has been applied properly).
> 
> 
> In the first three cases, we also hit the problem that TripleO only accepts 
> one single overcloud image for all deployments - there is no way to instruct 
> TripleO to upload and use several images, depending on the node type 
> (although Ironic supports it). Also, we are worried about upgrade paths if we 
> do image customizations. We need a clear way to move forward on it.
> 
> So, we'd like to discuss the possible options there and the action items to 
> take (raise bugs, create some blueprints...). To summarize, our end goal is 
> the following:
> 
> - need to map overcloud-full images to roles
> - need to be done in an automated way, no manual steps enforced, and in a way 
> that can pass properly quality controls
> - reboots are sub-optimal
> 
> What are your thoughts there?
> 
> Best,
> 
> 
> Yolanda Robla
> yrobl...@redhat.com
> Principal Software Engineer - NFV Partner Engineer
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] New and next-gen libraries (a BCN followup)

2016-11-03 Thread Jay Faulkner

> On Nov 3, 2016, at 11:27 AM, Joshua Harlow <harlo...@fastmail.com> wrote:
> 
> Just as a followup from the summit,
> 
> One of the sessions (the new lib one) had a few proposals:
> 
> https://etherpad.openstack.org/p/ocata-oslo-bring-ideas
> 
> And I wanted to try to get clear owners for each part (there was some 
> followup work for each); so just wanted to start this email to get the 
> thoughts going on what to do for next steps.
> 
> *A hash ring library*
> 
> So this one it feels like we need at least a tiny oslo-spec for and for 
> someone to write down the various implementations, what they share, what they 
> do not share (talking to swift, nova, ironic and others? to figure this out). 
> I think alexis was thinking he might want to work through some of that but 
> I'll leave it for him to chime in on that (or others feel free to also).
> 
> This one doesn't seem very controversial and the majority of the work is 
> probably on doing some analysis of what exists and then picking a library 
> name and coding that up, testing it, and then integrating (pretty standard).
> 

Ironic and Nova both share a hash ring implementation currently 
(ironic-conductor and nova-compute driver for ironic). It would be sensible to 
reuse this implementation, oslo-ify it, and have that code shared. 

I question the value of re-implementing something like this from scratch though.
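For readers unfamiliar with the technique: a consistent hash ring maps both hosts and node IDs onto the same hash space, so each node is assigned to the first host clockwise from its hash, and adding or removing a conductor only remaps a small fraction of nodes. A toy sketch (illustrative only — not the shared ironic/nova implementation, and `replicas` smoothing the distribution is an assumption of this sketch):

```python
import bisect
import hashlib


class HashRing(object):
    """Toy consistent hash ring mapping node IDs to hosts."""

    def __init__(self, hosts, replicas=64):
        # Each host gets `replicas` points on the ring to even out
        # the distribution of nodes across hosts.
        self._ring = {}
        for host in hosts:
            for i in range(replicas):
                self._ring[self._hash('%s-%d' % (host, i))] = host
        self._sorted_keys = sorted(self._ring)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode('utf-8')).hexdigest(), 16)

    def get_host(self, node_uuid):
        # Walk clockwise to the first host point at or after the node's
        # hash, wrapping around the end of the ring.
        idx = bisect.bisect(self._sorted_keys, self._hash(node_uuid))
        return self._ring[self._sorted_keys[idx % len(self._sorted_keys)]]
```

Any oslo-ified library would need to preserve this mapping stability across hosts, since ironic-conductor and the nova-compute ironic driver must agree on which conductor owns which node.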

Thanks,
Jay Faulkner
OSIC

> *Failure capturing/formation/(de)serialization library*
> 
> This one I've just decided to push through, though more comments on the spec 
> are always welcome @ https://review.openstack.org/#/c/229194/ the repo where 
> I started just doing this work (while in the airports traveling back) is @ 
> https://github.com/harlowja/failure
> 
> Ideally once that is in a slightly better state we should be able to start to 
> converge the various (at least 3 similar kinds of implementations) to that 
> one and ideally get less duplicated (or slightly same code) out of the 
> various libraries and projects that have copied/recreated it.
> 
> Anyone desiring to help in that is more than welcome to jump in :)
> 
> *Next-gen oslo.service replacement*
> 
> This one may require a little more of a plan on how to make it work, but the 
> gist is that medhi (and others) has created 
> https://github.com/sileht/cotyledon which is a nice replacement for 
> oslo.service that ceilometer is using (and others?) and the idea was to start 
> to figure out how to move away from (or replace with?) olso.service with that 
> library.
> 
> I'd like to see a spec with some of the details/thoughts on how that could be 
> possible, what changes would still be needed. I think from that session that 
> the following questions were raised:
> 
> - Can multiprocessing (or subprocess?) be used (instead of os.fork)
> - What to do about windows?
> - Is it possible to create a oslo.service compat layer that preserves the 
> oslo.service API but uses cotyledon under the covers to smooth the 
> transition/adoption of other projects to cotyledon
>   - Perhaps in general we should write how an adoption could happen for a 
> consuming project (maybe just writing down how ceilometer made the switch 
> would be a good start, what issues were encountered, how they were 
> resolved...)
> - Something else that people forgot to write down in the etherpad here :-P
> 
> I think that was the majority of thoughts coming out of that session (there 
> were a few others, but those were not especially loud and may have just been 
> me rambling, ha). Anything I forgot feel free to add in :)
> 
> Whose in to make the above happen?!?!
> 
> -Josh
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] When should a project be under Ironic's governance?

2016-10-17 Thread Jay Faulkner

> On Oct 17, 2016, at 1:27 PM, Michael Turek <mjtu...@linux.vnet.ibm.com> wrote:
> 
> Hello ironic!
> 
> At today's IRC meeting, the questions "what should and should not be a 
> project be under Ironic's governance" and "what does it mean to be under 
> Ironic's governance" were raised. Log here:
> 
> http://eavesdrop.openstack.org/meetings/ironic/2016/ironic.2016-10-17-17.00.log.html#l-176
> 
> See http://governance.openstack.org/reference/projects/ironic.html for a list 
> of projects currently under Ironic's governance.
> 
> Is it as simple as "any project that aids in openstack baremetal deployment 
> should be under Ironic's governance"? This is probably too general (nova 
> arguably fits here) but it might be a good starting point.
> 
> Another angle to look at might be that a project belongs under the Ironic 
> governance when both Ironic (the main services) and the candidate subproject 
> would benefit from being under the same governance. A hypothetical example of 
> this is when Ironic and the candidate project need to release together.
> 
> Just some initial thoughts to get the ball rolling. What does everyone else 
> think?
> 

I think there were a lot of people in the meeting who were confused by what 
being under governance means. As I understand it, in the strictest sense, it 
means:
- Project contributors can vote for TC/PTL
- Project has access to cross-project resources
- Access to summit/PTG time (at PTL’s discretion)

However, I get the impression some folks attach additional connotations to 
this, such as the Ironic core team gaining an implied responsibility for the 
code, or inclusion being seen as a "seal of approval" from Ironic. This means 
the primary question to answer is what it means, specifically /in the 
Baremetal project/, to be included in our governance. Is it simply the 
benefits provided at a high level by OpenStack, or does it imply additional 
things? That is the question we have to answer to decide which projects 
should be under Ironic's governance and what exactly that means.

Unless there’s more to it than I understand right now, I’d prefer an open-arms 
approach to projects being in bare metal governance: as long as they’re willing 
to follow the 4 opens, and are working toward the goals of the Baremetal 
project, I’d rather have those projects and their contributors as part of our 
team than not. 

Thanks,
Jay Faulkner


> Thanks,
> Mike Turek
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Jay Faulkner

> On Oct 12, 2016, at 5:01 AM, Dmitry Tantsur  wrote:
> 
> Hi folks!
> 
> I'd like to propose a plan on how to simultaneously extend the coverage of 
> our jobs and reduce their number.
> 
> Currently, we're running one instance per job. This was reasonable when the 
> coreos-based IPA image was the default, but now with tinyipa we can run up to 
> 7 instances (and actually do it in the grenade job). I suggest we use 6 fake 
> bm nodes to make a single CI job cover many scenarios.
> 
> The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool) to 
> be more in sync with how 3rd party CI does it. A special configuration option 
> will be used to enable multi-instance testing to avoid breaking 3rd party CI 
> systems that are not ready for it.
> 
> To ensure coverage, we'll only leave a required number of nodes "available", 
> and deploy all instances in parallel.
> 
> In the end, we'll have these jobs on ironic:
> gate-tempest-ironic-pxe_ipmitool-tinyipa
> gate-tempest-ironic-agent_ipmitool-tinyipa
> 
> Each job will cover the following scenarious:
> * partition images:
> ** with local boot:
> ** 1. msdos partition table and BIOS boot
> ** 2. GPT partition table and BIOS boot
> ** 3. GPT partition table and UEFI boot  <*>
> ** with netboot:
> ** 4. msdos partition table and BIOS boot <**>
> * whole disk images:
> * 5. with msdos partition table embedded and BIOS boot
> * 6. with GPT partition table embedded and UEFI boot  <*>
> 
> <*> - in the future, when we figure our UEFI testing
> <**> - we're moving away from defaulting to netboot, hence only one scenario
> 
> I suggest creating the jobs for Newton and Ocata, and starting with Xenial 
> right away.
> 
> Any comments, ideas and suggestions are welcome.
> 

+1 I'm completely on-board with this. 

Have you considered mixing in multiple drivers in a single test? Given we can 
set drivers per node, is there a reason (other than maybe just size/duration 
of job) that we couldn't test both pxe_* and agent_* deploy methodologies at 
the same time?

Thanks,
Jay

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [ironic] Admin guide in-tree?

2016-10-11 Thread Jay Faulkner
We are eager to improve our documentation, but I think quite a few of us who 
work on Ironic documentation have a strong preference for keeping those 
documents in-tree. This lets us ensure that contributors' documentation 
changes or additions merge at the same time as, or in close proximity to, the 
corresponding code changes, and is much easier for us to work with. I am willing to 
help implement admin guides in-tree sooner, but if the docs team doesn't want 
this yet obviously there's nothing to help with.

As for what to do today: I agree with Ruby on this entirely. I'm adding 
an item to Monday's Ironic meeting agenda to reach consensus on what our 
project will do in the meantime. My recommendation will be option B as listed 
by Ruby; build out a better admin-guide in tree as part of our developer docs, 
and migrate them over, just like we did for the install guide, when the work to 
allow in-tree admin guides is complete.

Thanks,
Jay Faulkner

On Oct 11, 2016, at 9:38 AM, Ruby Loo 
<opensr...@gmail.com> wrote:

From my point of view, the rush is so that we can be more efficient with all 
>of our time/efforts. In ironic, we have a bit of a mess. We now have 
>duplicated (and perhaps out-of-sync) admin-related information in our 
>developer docs [1] as well as in the official admin guide [2] -- the latter 
>content was added but I am unaware of that knowledge/coordination being done 
>with the ironic community :-(

Should we :
A. move our admin-related information from our developer docs to the existing 
admin guide; then move the admin guide to the new in-tree solution later

B. replace what is in the existing admin guide with a pointer to the developer 
docs; then move the admin content to the new in-tree solution later

C. status quo until the new in-tree solution is available

--ruby

[1] http://docs.openstack.org/developer/ironic/
[2] http://docs.openstack.org/admin-guide/baremetal.html


On Mon, Oct 10, 2016 at 7:07 PM, Lana Brindley 
<openst...@lanabrindley.com> wrote:
On 10/10/16 16:25, Andreas Jaeger wrote:
> On 2016-10-10 01:37, Steve Martinelli wrote:
>> On Oct 9, 2016 6:57 PM, "Lana Brindley" 
>> <openst...@lanabrindley.com> wrote:
>>>
>>> Why the rush?
>>
>> I think its more eagerness than rush. Project teams made a lot of head
>> way with the API ref and install guides being in-tree that they want to
>> keep the momentum with the admin guide.
>
> Those teams are more than welcome to contribute today to the
> openstack-manuals repository! Is there anything we can do to help them?
>
> Andreas
>

Yes, Andreas makes a good point. If there's content you want in the guides now, 
we can help you with that.

Lana

--
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] Admin guide in-tree?

2016-10-07 Thread Jay Faulkner
I'm not necessarily just asking for it to be done -- I'm curious what the scope 
of work is because I'm potentially willing to help (and find folks to help) get 
it done sooner. If the docs team does not support the admin guide in-tree until 
Pike, It'll be Q or later before other projects can utilize it. That's over a 
year from today, and I'd rather not have to wait that long to get a better 
admin guide up for Ironic.

Thanks,
Jay Faulkner
OSIC

> On Oct 6, 2016, at 8:04 PM, Lana Brindley <openst...@lanabrindley.com> wrote:
> 
> (Adding the dev list to cc, as I think this conversation deserves a wider 
> audience)
> 
> Thanks for this feedback. I'm really glad that the new Install Guide model is 
> working out well for people!
> 
> Since our new Install Guides have only just been published, at this stage I'm 
> intending to gather some data on how projects and users are using the 
> project-specific Install Guides during the next cycle. I'm also intending to 
> spend some time in the Ocata cycle on improving that index page. It's pretty 
> ugly right now, and I think there's some  serious UX improvements to be done. 
> Since Ocata is a short cycle, I'm also conscious of how much the docs team 
> might realistically be able to achieve.
> 
> All that said, you are certainly not the first to ask if this model can be 
> extended! I think it's something that the docs community would like to see, 
> and it seems as though it has broad support amongst developers and projects 
> as well. So, in short, I think this is a thing that will happen, but probably 
> not in Ocata. I'm tentatively willing to tell you that Pike is a possibility 
> though ;)
> 
> Lana
> 
> On 07/10/16 12:43, Steve Martinelli wrote:
>> FWIW, the keystone team would also be interested in this model
>> 
>> On Thu, Oct 6, 2016 at 11:40 AM, Jay Faulkner <j...@jvf.cc> wrote:
>> 
>>Hi all,
>> 
>>For those of you who don't know me, I'm Jay Faulkner and I work on Ironic 
>> as the Docs liaison as one of my hats.
>> 
>>Ironic launched our install-guide right after newton closed, and 
>> backported it, thanks to the changes to make the install-guide available in 
>> tree. We're a huge fan of this model, and I'm curious if there's any plans 
>> to make this happen for the admin-guide. If not, can someone help me 
>> understand the scope of work, presuming it's something that the docs group 
>> is interested in.
>> 
>>If we can get the technical infrastructure in place to do admin-guide in 
>> tree, I'd expect Ironic to quickly adopt it, like we did for the install 
>> guide.
>> 
>>Thanks,
>>Jay Faulkner
>>OSIC
>> 
>> 
>>___
>>OpenStack-docs mailing list
>>openstack-d...@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs
>> 
>> 
>> 
>> 
>> ___
>> OpenStack-docs mailing list
>> openstack-d...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs
>> 
> 
> --
> Lana Brindley
> Technical Writer
> Rackspace Cloud Builders Australia
> http://lanabrindley.com
> 
> ___
> OpenStack-docs mailing list
> openstack-d...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][OpenStackClient] two openstack commands for the same operation?

2016-08-29 Thread Jay Faulkner

On Aug 29, 2016, at 8:19 AM, Dean Troyer 
<dtro...@gmail.com> wrote:

On Mon, Aug 29, 2016 at 9:41 AM, Loo, Ruby 
<ruby@intel.com> wrote:
I did this because 'passthrough' is more English than 'passthru' and I thought 
that was the 'way to go' in osc. But some folks wanted it to be 'passthru' 
because in ironic, we've been calling them 'passthru' since day 2.

Our default rule is to use proper spellings and not abbreviations[0].  The 
exceptions we have made are due to either a) significant existing practice in 
the industry (outside OpenStack, mostly in the network area so far); and b) 
when the user experience is clearly improved.

To be clear: "thru" is a valid English word in just about every dictionary I've 
checked. In fact, some evidence shows it predates "through" as a word. I agree 
with other folks who have posted on the mailing list that keeping "passthru" is 
going to be more clear to operators of ironic than changing it to "passthrough" 
in this single context.
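If both spellings were ever desired, a client could register one handler under two subcommand names. A hypothetical argparse sketch (OSC actually uses cliff plugins and entry points, so this only illustrates the aliasing idea; the collapsed dash-joined command names are an artifact of the sketch):

```python
import argparse


def call_passthru(args):
    # Stub for the real vendor-passthru invocation.
    return ('passthru', args.node)


def build_parser():
    parser = argparse.ArgumentParser(prog='openstack')
    sub = parser.add_subparsers(dest='command')
    # argparse subcommands are single tokens, so the multi-word OSC
    # command is collapsed with dashes purely for this sketch.
    for name in ('baremetal-node-passthru-call',
                 'baremetal-node-passthrough-call'):
        p = sub.add_parser(name)
        p.add_argument('node')
        p.set_defaults(func=call_passthru)
    return parser
```

The downside Ruby raises in this thread still applies: two names for one operation clutters help output and documentation, which is why OSC prefers a single canonical spelling with aliases reserved for deprecation paths.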

Thanks,
Jay Faulkner
OSIC


You might notice that calling out prior OpenStack usage is absent from that 
list.  One of the tenets of OSC from the start is to look first at user 
experience and identifying a _single_ set of terminology.  An existing practice 
can fall under (b) when it is compelling overall, and is an easier case to make 
when there is no competing OSC usage, or other OSC usage matches.

Unfortunately, I wasn't able to make everyone happy because someone else thinks 
that we shouldn't be providing two different openstack commands that provide 
the same functionality. (They're fine with either one, just not both.)

I agree with not aliasing commands out of the box.  We'll do that for 
deprecations, and are looking at a generalize alias method for other reasons, 
but on initial implementation I would prefer to not do this.

What do the rest of the folks think? Some guidance from the OpenStackClient 
folks would be greatly appreciated.

I would suggest you pick the one that lines up with usage outside OpenStack, in 
the sorts of ways that our users would be familiar with[1].  In this case, a 
grep of help output of even 'passthr' will find the match.

Hopefully this all makes enough sense that we can add it as a guideline to the 
OSC docs.  Feedback welcome.

Thanks
dt

[0] Where 'proper' is usually North American English, for whatever definition 
of that we have. This is totally due to me not thinking far enough ahead 4 
years ago...

[1] Cases like "all other clouds use this term" or "it is the common way to 
refer to this resource in the networking field" have been used in the past.

--
Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] My openstack account is invalid after change it

2016-08-24 Thread Jay Faulkner
Is this the typical way we do OpenStack account support? This email honestly 
sounds a little phishy :).

-Jay

> On Aug 24, 2016, at 8:00 AM, Jimmy Mcarthur  wrote:
> 
> Hi -
> 
> I'd be happy to assist you with your login problem. Please email me directly 
> and we'll get you going.
> 
> Thank you!
> Jimmy McArthur
> 
> I changed my openstack account to my new email address because my old email 
> address is abandoned, now i can not login with my new email address, it 
> prompts my new email address is not verified, every time I click verify my 
> new account,
> It prompts " your request was successfully processed! please verify your 
> INBOX", but I never receive email about it.
> 
> I guess it sends the verification email to my old email address; what should I do 
> because my old email address can not be used?
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Abandoning specs without recent updates

2016-08-05 Thread Jay Faulkner
They're available on review.openstack.org, if you filter by ironic-specs and 
status:open.

https://review.openstack.org/#/q/project:openstack/ironic-specs+status:open

Since six months ago would be 2/5/2016, you're pretty much looking at the specs 
older than that.

To be clear; abandoning a spec can be undone by a proposer simply by pushing a 
button. These are specs that all have negative feedback, would not cleanly 
merge, and need attention that they haven't gotten in the last six months.

Thanks,
Jay Faulkner
OSIC
On Aug 5, 2016, at 8:00 AM, milanisko k 
<vetri...@gmail.com<mailto:vetri...@gmail.com>> wrote:

Hi Jay,

I think it might be useful to share the list of those specs in here.

Cheers,
milan

čt 4. 8. 2016 v 21:41 odesílatel Jay Faulkner <j...@jvf.cc<mailto:j...@jvf.cc>> 
napsal:
Hi all,

I'd like to abandon any ironic-specs reviews that haven't had any updates in 6 
months or more. This works out to about 27 patches. The primary reason for this 
is to get items out of the review queue that are old and stale.

I'll be performing this action next week unless there's objections posted here.

Thanks,
Jay Faulkner
OSIC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Jay Faulkner

On Aug 4, 2016, at 12:43 PM, Fox, Kevin M 
<kevin@pnnl.gov<mailto:kevin@pnnl.gov>> wrote:

The problem is, OpenStack is a very fractured landscape. It takes significant 
amounts of time for an operator to deploy "one more service".

So, I spent a while deploying Trove, got it 90% working, then discovered Trove 
didn't work with RadosGW. RadosGW was a done deal long ago, and couldn't be 
re-evaluated at that point. (Plus you can't have more than one swift endpoint in 
a cloud...). So, for now, I'm supporting a 90% functional Trove.

If I went and installed Ironic tomorrow, would it work with the radosgw I 
already have? I have no idea. The, "it supports swift" implies but doesn't 
answer. If I want to consider deploying it now, I have to block out even more 
time to experiment in order to try. and then do a bunch of manual testing to 
verify.


Ironic does have radosgw support, and it's documented here: 
http://docs.openstack.org/developer/ironic/deploy/radosgw.html -- clearly it's 
not "first class" as we don't validate it in CI like we do with swift, but the 
code exists and I believe we have users out in the wild.

I know this is orthogonal to the discussion, but I wanted someone seeing this 
thread to know it does work :).

Thanks,
Jay Faulkner
OSIC

This kind of thing makes it even harder on operators to deploy new services.

Yes, it could be solved at the Ceph level, where they deploy a complete 
OpenStack with all the advanced services and test everything, but OpenStack is 
already doing that. It is significantly easier for OpenStack to test it instead 
of Ceph.

Thanks,
Kevin





From: Ben Swartzlander [b...@swartzlander.org<mailto:b...@swartzlander.org>]
Sent: Thursday, August 04, 2016 12:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On 08/04/2016 03:02 PM, Fox, Kevin M wrote:
Nope. The incompatibility was for things that never were in radosgw, not things 
that regressed over time. The tmpurl differences and the namespacing things have 
been there since they were first introduced.

At the last summit, I started with the DefCore folks and worked backwards until 
someone said, no we won't ever add tests for compatibility for that because 
radosgw is not an OpenStack project and we only test OpenStack.

Yes, I think thats a terrible thing. I'm just relaying the message I got.

I don't see how this is terrible at all. If someone were to start up a
clone of another OpenStack project (say, Cinder) which aimed for 100%
API compatibility with Cinder, but outside the tent, and then they
somehow failed to achieve true compatibility because of Cinder's
undocumented details, nobody would proclaim that the this was somehow
our (the OpenStack community's) fault.

I think the Radosgw people probably have a legitimate beef with the
Swift team about the lack of an official API spec that they can code to, 
but that's a choice for the Swift community to make. If users of Swift
are satisfied with a the-code-is-the-spec stance then I say good luck to
them.

If the user community cares enough about interoperability between
swift-like things they will demand an API spec and conformance tests and
someone will write those and then radosgw will have something to conform
to. None of this has anything to do with the governance model for Ceph
though.

-Ben Swartzlander



Thanks,
Kevin

From: Ben Swartzlander [b...@swartzlander.org<mailto:b...@swartzlander.org>]
Sent: Thursday, August 04, 2016 10:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On 08/04/2016 11:57 AM, Fox, Kevin M wrote:
Ok. I'll play devils advocate here and speak to the other side of this, because 
you raised an interesting issue...

Ceph is outside of the tent. It provides a (mostly) api compatible 
implementation of the swift api (radosgw), and it is commonly used in OpenStack 
deployments.

Other OpenStack projects don't take it into account because it's not a big tent 
thing, even though it is very common. Because of some rules about only testing 
OpenStack things, radosgw is not tested against.

I call BS on this assertion. We test things that are outside the tent in the 
upstream gate all the time -- the only requirement is that they be 
released. We won't test against unreleased stuff that's outside the big 
tent, and the reason for that should be obvious.

This causes odd breakages at times that could easily be prevented, but for 
procedural things around the Big Tent.

The only way I can see for "odd breakages" to sneak in is on the Ceph 
side: if they aren't testing their changes against OpenStack and they 
introduce a regression, then that's their fault (assuming of course that 
we have good test c

[openstack-dev] [ironic] Abandoning specs without recent updates

2016-08-04 Thread Jay Faulkner
Hi all,

I'd like to abandon any ironic-specs reviews that haven't had any updates in 6 
months or more. This works out to about 27 patches. The primary reason for this 
is to get items out of the review queue that are old and stale.

I'll be performing this action next week unless there's objections posted here.

Thanks,
Jay Faulkner
OSIC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DIB] [Ironic] [TripleO] Moving IPA element out of DIB tree

2016-07-15 Thread Jay Faulkner
One more note I missed in the previous email.

On Jul 15, 2016, at 1:46 PM, Ben Nemec 
<openst...@nemebean.com<mailto:openst...@nemebean.com>> wrote:

I think this probably makes sense, but some more thoughts inline.

On 07/15/2016 03:13 PM, Stephane Miller wrote:
To better support diskimage-builder based IPA images going forward, we'd
like to move the ironic-agent element into the ironic-python-agent
repository. This will involve:

- Improving support for having multiple copies of an element, so that we
may deprecate the diskimage-builder repository copy of the element. See
this change and related: https://review.openstack.org/#/c/334785
- Moving the element into the repository. This change has been proposed
as https://review.openstack.org/#/c/335583/
- Deprecating the diskimage-builder copy of the element (TBD)
- Adding tests to gate IPA changes on DIB builds (TBD)

We could potentially add tripleo-ci to the IPA repo, which would take
care of this.  As an added bonus, it could cover both the introspection
and deployment use cases for IPA.

On the other hand, if a separate Ironic job were added to cover this,
tripleo could stop ever building new IPA images in CI except in the
promote jobs when we bump our version of IPA.  This would delay our
finding problems with IPA element changes, but realistically I'm not
sure how many of those are happening these days anyway.  I'd expect that
most changes are happening in IPA itself, which we don't currently CI.


We already have a well-established pattern for testing multiple ramdisks under 
IPA, and this "workflow" already basically works for DIB; however, it is extremely 
awkward in terms of co-gating changes (such as needing to add dependencies to 
utilize new IPA features). Here's a basic breakdown:

We set IRONIC_BUILD_DEPLOY_RAMDISK to true, which instructs the Ironic devstack 
plugin to build a new ramdisk instead of downloading and using a prebuilt 
ramdisk from tarballs.openstack.org<http://tarballs.openstack.org>. Which 
ramdisk gets built is determined by IRONIC_RAMDISK_TYPE. For DIB, we already 
have all the code for a job to run this way (in fact, it'd be interesting to go 
ahead and add a non-voting version of this job). 
https://github.com/openstack/ironic/blob/master/devstack/lib/ironic#L1185
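For anyone wanting to reproduce this locally, the knobs described above are plain devstack settings. A rough local.conf sketch (the variable names come from the Ironic devstack plugin linked above, but the exact values, particularly the ramdisk type, are illustrative and may differ by branch):

```shell
# local.conf fragment (illustrative): build the IPA deploy ramdisk during
# the job instead of downloading a prebuilt one from tarballs.openstack.org.
[[local|localrc]]
enable_plugin ironic https://git.openstack.org/openstack/ironic

# Ask the Ironic devstack plugin to build a fresh deploy ramdisk.
IRONIC_BUILD_DEPLOY_RAMDISK=True

# Select which ramdisk gets built (value shown here is illustrative).
IRONIC_RAMDISK_TYPE=dib
```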

There's no need for new approaches to how IPA does CI to test this; the desire 
to have the DIB element in-tree is simply to allow parallelism with the other 
build methods (which maintain their build and dependencies in-tree) and to 
prevent a situation where IPA changes are blocked on DIB element changes 
merging into another repo. This is the same approach used for Ironic in 
devstack (plugins-in-tree) and that is being worked on for Ironic in tempest; I 
don't want to repeat the mistakes of CI past, where we were blocked on another 
project's core team to merge changes. (Regardless of how friendly or 
responsive you are :D).

Thanks,
Jay Faulkner
OSIC

- Add upload of DIB-built images to tarballs.openstack.org (TBD)

We would also need to resolve https://review.openstack.org/#/c/334042/

I'm not clear why, but the ironic-agent element was given special
treatment in disk-image-create (which is evil, but what's done is done)
and we'd need to figure out why and a solution that wouldn't require
referencing an out-of-tree element in diskimage-builder.


Many IPA deployers currently use DIB based IPA images using the
ironic-agent element. However, IPA does not officially support DIB - IPA
changes are not tested against DIB, nor are DIB-built images published.

tripleo-ci actually does publish images, but they aren't well publicized
at this point, and it only does so when we promote a repo.


This has the following disadvantages:

- The DIB element is not versioned along with IPA, resulting in
potential version mismatch and breakage
- ironic-agent element changes are not tested with DIB prior to merge

This isn't true today.  tripleo-ci runs against all diskimage-builder
changes and uses an IPA ramdisk.  The version mismatch is a legit
problem with the current setup, although I'm not aware of any actual
breakages that have happened (which doesn't necessarily mean they
haven't :-).


Understandably, tripleo and other projects may have concerns with regard
to this change. I hope to start a discussion here so that those concerns
can be addressed. Further in-depth discussion of this issue can be found
in the relevant launchpad bug:
https://bugs.launchpad.net/ironic-python-agent/+bug/1590935

Thanks,
Stephane


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



_

[openstack-dev] [ironic] [oslo] Stevedore 1.16 breaks IPA gate

2016-07-15 Thread Jay Faulkner
Hi all,

I wanted to shed a little more light on this recently discovered bug: 
https://bugs.launchpad.net/stevedore/+bug/1603542. Jim posted a summary of it 
today in #openstack-oslo and didn't get a response, and it's currently breaking 
IPA's gate. If someone familiar with stevedore could please triage the bug and 
start moving it towards a resolution, I'd greatly appreciate it, as IPA's gate 
is broken and stuck until it is fixed.

It's a big hammer I'd like to avoid using, but if there's no movement on 
this bug by Monday, I'll propose a global-requirements change to blacklist this 
version of stevedore until a solution is determined, since IPA is completely 
blocked from merging changes.

This breakage does not impact the Ironic gate, as Ironic uses prebuilt TinyIPA 
ramdisk images, and no image has been published with the new breaking stevedore 
since these problems were caught in CI.

Thanks,
Jay Faulkner
OSIC

P.S. Kudos to ironic-inspector folks for writing a great unit test to catch 
this behavior in our CI rather than having it break users!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DIB] [Ironic] [TripleO] Moving IPA element out of DIB tree

2016-07-15 Thread Jay Faulkner
e IPA element
into the IPA tree makes a lot of sense from this standpoint.

As for breakages from not co-gating - all of the dib + ironic breakages
I remember were when we used the old ramdisk element which had a lot
more ironic specific logic in the element. Now that IPA is a thing and 
isn't a bunch of bash inside of DIB, the surface area for DIB to break 
Ironic is actually pretty low (which is awesome).


The typical way we gate Ironic+IPA with ramdisks is like this:

1) IPA gates against all supported ramdisk images; today that's our TinyIPA and 
CoreOS-based ramdisks.
2) Ironic uses the ramdisk best suited for CI (TinyIPA, as it requires the 
least resources in the gate) for all of its tests.

This is a good thing; it means we can isolate the larger part of Ironic against 
DIB changes potentially breaking the IPA image. I personally think I'm OK with 
the risk of a DIB change breaking the IPA gate. Some of these "untested 
dependencies" already exist, as seen today by stevedore's latest release 
breaking IPA unit tests (https://bugs.launchpad.net/stevedore/+bug/1603542).

Thanks,
Jay Faulkner
OSIC

Understandably, tripleo and other projects may have concerns with regard
to this change. I hope to start a discussion here so that those concerns
can be addressed. Further in-depth discussion of this issue can be found
in the relevant launchpad bug:
https://bugs.launchpad.net/ironic-python-agent/+bug/1590935

Thanks,
Stephane


Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mascot/logo for your project

2016-07-11 Thread Jay Faulkner

On Jul 11, 2016, at 8:51 AM, Steve Martinelli 
> wrote:

The keystone project was one of the first to have a logo and I'm more than 
happy to give it up for the sake of a consistent message across all OpenStack 
projects.

I think it's fine if ironic and tripleo want to stick with their current 
animals (luckily they are using logos that meet the new criteria proposed by 
the foundation), but I don't think it's unreasonable if the foundation proposes 
a re-design of existing logos that is more consistent with the rest of the new 
ones.

The logos we use today are too different from each other; I don't think anyone 
would disagree with that. If each project goes off and does their own thing it 
comes off as inconsistent.


Just a little strange for a community-driven open source project to be 
replacing community-created mascots with top-down designs. I'm not terribly 
opposed to it, but would have definitely preferred if this had come across as a 
request instead of a mandate. I think it's cool and unique that the Ironic 
mascot was designed/drawn by one of our more prolific contributors (Lucas), and 
we'll lose some of that shine with a less-unique logo.

I think we could all be easily bribed if the foundation provided copious 
amounts of stickers to contributors with the new mascot designs on them. :P

Thanks,
Jay

stevemar

On Mon, Jul 11, 2016 at 11:33 AM, Steven Hardy 
> wrote:
On Mon, Jul 11, 2016 at 08:00:29AM -0700, Heidi Joy Tretheway wrote:
>The Foundation would like to help promote OpenStack projects in the big
>tent with branding and marketing services. The idea is to create a family
>of logos for OpenStack projects that are unique, yet immediately
>identifiable as part of OpenStack. We'll be using these logos to promote
>your project on the OpenStack website, at the Summit and in marketing
>materials.
>We're asking project teams to choose a mascot to represent as their
>logo. Your team can select your own mascot, and then we'll have an
>illustrator create the logo for you (and we also plan to print some
>special collateral for your team in Barcelona).
>If your team already has a logo based on a mascot from nature, you'll
>have first priority to keep that mascot, and the illustrator will restyle
>it to be consistent with the other projects. If you have a logo that
>doesn't have a mascot from nature, we encourage your team to choose a
>mascot.
>Here's an FAQ and examples of what the logos can look like:
>http://www.openstack.org/project-mascots
>We've also recorded a quick video with an overview of the project:
>https://youtu.be/LOdsuNr2T-o
>You can get in touch with your PTL to participate in the logo choice
>discussion. If you have more questions, I'm happy to help. :-)

TripleO has had some discussion already around a project mascot, and we've
settled on the owl logo displayed on tripleo.org and our 
launchpad org:

http://tripleo.org/
https://bugs.launchpad.net/tripleo/

(There is also a hi-res version or SVG around, I just can't find it atm)

This was discussed in the community and accepted here:

http://lists.openstack.org/pipermail/openstack-dev/2016-March/089043.html

Which was in turn based on a previous design discussed here:

http://lists.openstack.org/pipermail/openstack-dev/2015-September/075649.html

So, I think it's likely (unless anyone objects) we'll stick with that
current owl theme for our official mascot.

Overall I like the idea of encouraging official mascots/logos for projects,
quite a few have done so informally and I think it's a fun way to reinforce
project/team identity within the OpenStack community.

Thanks!

Steve Hardy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] UFCG OneView CI comments missing recheck command

2016-06-28 Thread Jay Faulkner
It was on this patch (which you should totally review if you're reading this 
message, I'm really tired of rebasing it ;p): 
https://review.openstack.org/#/c/263842/

It appeared to fail early in the setup process.

-Jay

On Jun 28, 2016, at 4:59 PM, Thiago Paiva 
<thia...@lsd.ufcg.edu.br<mailto:thia...@lsd.ufcg.edu.br>> wrote:

Hi Jay,

Sorry about that. The comment should be "recheck oneview" to test again. I'll 
patch the failure message with instructions, thanks for the warning.

About being broken, we experience some transient failures due to concurrency on 
our physical resources and/or timeouts. We'll be fine tuning this as soon as we 
get Ironic Tempest passing, but could you point me to the specific case you're 
seeing so I can double check the failure?

Thanks.

Regards,

Thiago Paiva Brito
Lead Software Engineer
OneView Drivers for Openstack Ironic

----- Original Message -----
From: "Jay Faulkner" <j...@jvf.cc<mailto:j...@jvf.cc>>
To: 
openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Cc: ufcg-oneview...@lsd.ufcg.edu.br<mailto:ufcg-oneview...@lsd.ufcg.edu.br>
Sent: Tuesday, June 28, 2016 20:53:25
Subject: [openstack-dev] [ironic] UFCG OneView CI comments missing recheck 
command

Hi all,

The new UFCG OneView CI is posting on changes now, and doesn't have any 
information about how to perform rechecks. It also appears to be broken, but 
given it's new that's not surprising.

Can someone get the message updated with a recheck command?

Thanks,
Jay Faulkner
OSIC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] UFCG OneView CI comments missing recheck command

2016-06-28 Thread Jay Faulkner
Hi all,

The new UFCG OneView CI is posting on changes now, and doesn't have any 
information about how to perform rechecks. It also appears to be broken, but 
given it's new that's not surprising.

Can someone get the message updated with a recheck command?

Thanks,
Jay Faulkner
OSIC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][cross-project] Standardized role names and policy

2016-06-27 Thread Jay Faulkner
Is this spec still alive? I'm working on the spec for Ironic integration of 
Keystone policy, and like some of the items in the draft, but obviously they 
aren't binding and I can't really reference them unless the spec merges or at 
least shows progress towards merging.

Thanks,
Jay Faulkner
OSIC

On Jan 31, 2016, at 6:15 PM, Adam Young 
<ayo...@redhat.com<mailto:ayo...@redhat.com>> wrote:

On 01/30/2016 08:24 PM, Henry Nash wrote:

On 30 Jan 2016, at 21:55, Adam Young 
<<mailto:ayo...@redhat.com>ayo...@redhat.com<mailto:ayo...@redhat.com>> wrote:

On 01/30/2016 04:14 PM, Henry Nash wrote:
Hi Adam,

Fully support this kind of approach.

I am still concerned over the scope check, since we do have examples of when 
there is more than one (target) scope check, e.g. an API that might operate on 
an object that may be global, domain or project specific - in which case you 
need to "match up the scope checks with the object in question", for example 
for a given API:

If cloud admin, allow the API
If domain admin and the object is domain or project specific, then allow the API
If project admin and the object is project specific then allow the API

Today we can (and do with keystone) encode this in policy rules. I’m not clear 
how the “scope check in code” will work in this kind of situation.
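For illustration only, the three checks in that list can be sketched in plain Python. Every name below (Token, Target, check_scope) is invented for the example; this is a sketch of the matching logic, not actual Keystone code:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Token:
    """A caricature of a scoped token's relevant fields."""
    roles: Tuple[str, ...]
    domain_id: Optional[str] = None    # set on domain-scoped tokens
    project_id: Optional[str] = None   # set on project-scoped tokens
    is_cloud_admin: bool = False


@dataclass
class Target:
    """The object an API call operates on."""
    domain_id: Optional[str] = None    # owning domain (also set for project objects)
    project_id: Optional[str] = None   # set only for project-specific objects


def check_scope(token: Token, target: Target) -> bool:
    if "admin" not in token.roles:
        return False
    # 1. Cloud admin: allow the API unconditionally.
    if token.is_cloud_admin:
        return True
    # 2. Domain admin: allow if the object (domain- or project-specific)
    #    belongs to the token's domain.
    if token.domain_id is not None:
        return target.domain_id == token.domain_id
    # 3. Project admin: allow only if the object is project specific and
    #    belongs to the token's project.
    if token.project_id is not None:
        return target.project_id == token.project_id
    return False
```

A domain-scoped admin passes for any object whose owning domain matches, whether the object itself is domain- or project-specific, mirroring the second bullet; the open question in the thread is where this logic should live, not what it does.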
I originally favored an approach where a user would need to get a token scoped 
to a resource in order to effect change on that resource, and admin users could 
get tokens scoped to anything, but I know that makes things harder for 
Administrators trying to fix broken deployments. So I backed off on that 
approach.

I think the right answer would be that the role check would set some value to 
indicate it was an admin override.  So long as the check does not need the 
actual object from the database, t can perform whatever logic we like.

The policy check deep in the code can be as strict or permissive as it desires. 
 If there is a need to re-check the role for an admin check there, policy can 
still do so.  A role check that passes at the Middleware level can still be 
blocked at the in-code level.

"If domain admin and the object is domain or project specific, then allow the 
API" is the tricky one, but I don't think we even have a solution for that now. 
Domain1->p1->p2->p3 type hierarchies don't allow operations on p3 with a token 
scoped to Domain1.

So we do actually support things like that, e.g. (from the domain specific role 
additions):

"identity:some_api": role:admin and project_domain_id:%(target.role.domain_id)s 
   (which means I’m project admin and the domain specific role I am going to 
manipulate is specific to my domain)

….and although we don’t have this in our standard policy, you could also write

"identity:some_api": role:admin and domain_id:%(target.project.domain_id)s
(which means I’m domain admin and I can do some operation on any project in my 
domain)
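The `%(target....)s` substitution in rules like these can be demonstrated with a tiny evaluator. This is a deliberately simplified sketch (the real parsing and evaluation live in oslo.policy); it only understands `role:` checks and attribute checks joined by ` and `:

```python
def eval_rule(rule, creds, target):
    """Toy evaluation of rules such as
    'role:admin and domain_id:%(target.project.domain_id)s'."""
    for clause in rule.split(" and "):
        kind, _, match = clause.partition(":")
        if kind == "role":
            # Role check: the named role must be among the token's roles.
            if match not in creds.get("roles", ()):
                return False
        else:
            # Attribute check: substitute target attributes into the match
            # string, then compare with the credential attribute `kind`.
            if creds.get(kind) != (match % target):
                return False
    return True


# The second rule above: domain admin operating on a project in their domain.
creds = {"roles": ["admin"], "domain_id": "d1"}
target = {"target.project.domain_id": "d1"}
print(eval_rule("role:admin and domain_id:%(target.project.domain_id)s",
                creds, target))  # -> True
```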

Yeah, we do some things like this in the Keystone policy file, but not in 
remote services yet, and it would only work for the domain of the project, not for 
any arbitrary project in the chain under Domain1: roles on p1 or p2 would have 
to be inherited in order to effect any change on resources in p3.



I think that in those cases, I would still favor the user getting a token from 
Keystone scoped to p3, and using the inherited-role-assignment approach.



Henry

On 30 Jan 2016, at 17:44, Adam Young 
<ayo...@redhat.com<mailto:ayo...@redhat.com>> wrote:

I'd like to bring people's attention to a Cross Project spec that has the 
potential to really strengthen the security story for OpenStack in a scalable 
way.

"A common policy scenario across all projects" 
<https://review.openstack.org/#/c/245629/> 
https://review.openstack.org/#/c/245629/

The summary version is:

Role name or pattern                    Explanation or example
--------------------------------------  ----------------------------------------
admin                                   Overall cloud admin
service                                 for service users only, not real humans
{service_type}_admin                    identity_admin, compute_admin,
                                        network_admin etc.
{service_type}_{api_resource}_manager   identity_user_manager,
                                        compute_server_manager,
                                        network_subnet_manager
observer                                read only access
{service_type}_observer                 identity_observer, image_observer


Jamie Lennox originally wrote the spec that got the ball rolling, and Dolph 
Matthews just took it to the next level.  It is worth a read.

I think this is the way to go.  There might be details on how to get there, but 
the granularity is about right.
If we go with that approach, we might want to rethink how we enforce 
policy.  Specifically, I think we should split the policy enforcement up into 
two stages:

1.  Role check

Re: [openstack-dev] [ironic] Gate troubleshooting howto

2016-06-27 Thread Jay Faulkner
The date that has the strongest consensus appears to be Wednesday, July 13, at 
1500 UTC. I'll send out more details about how to connect and watch later.

If you were one of the people who will be unable to attend, I'll ensure this is 
recorded, and if we get enough folks interested even with the recording, I can 
do another live session. 

Thanks,
Jay Faulkner
OSIC


> On Jun 23, 2016, at 5:16 PM, Jay Faulkner <j...@jvf.cc> wrote:
> 
> I dropped this Friday as an option based on the results. If you're interested 
> this will still be happening, but in mid-July.
> 
> Thanks,
> Jay Faulkner
> OSIC
> 
> 
>> On Jun 22, 2016, at 12:39 PM, Jay Faulkner <j...@jvf.cc> wrote:
>> 
>> There was a request at the mid-cycle for a presentation on
>> troubleshooting Ironic gate failures. I'd be willing to share some of my
>> knowledge about this to interested folks.
>> 
>> I've created a doodle with a few possible times; note that one option is
>> this Friday, but the others are in mid-July, as I'll be moving during the
>> gap; so I can do it before or after the move.
>> 
>> Please vote here: http://doodle.com/poll/44whfnwkkm4vcgn4
>> 
>> 
>> Thanks,
>> Jay Faulkner
>> OSIC
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introduction

2016-06-27 Thread Jay Faulkner
Hi, and welcome!

https://wiki.openstack.org/wiki/How_To_Contribute will have some information on 
different ways to contribute. If you have a specific project in mind, I'd 
suggest searching their bug tracker for 'low-hanging-fruit'. Either one of 
those bugs, or any issues/bugs you see in developer documentation, are great 
choices for getting started. If you have any trouble, asking in IRC is often a 
quick way to get past a problem, but most of this is documented pretty well, so 
make sure to exhaust the docs first.

Good luck + thanks,
Jay Faulkner
OSIC

On Jun 27, 2016, at 9:10 AM, Nalaka Rajamanthri 
<nalaka1...@gmail.com<mailto:nalaka1...@gmail.com>> wrote:

Hi,
I am new to the open source community. I would like to contribute to the 
openstack. So I would like to help this project by solving bugs.

Best Regards,
Nalaka Rajamanthri,
University of Peradeniya (undergraduate)


[openstack-dev] [ironic] Separate maintenance mode for Ironic-found faults

2016-06-24 Thread Jay Faulkner
Hi all,

At the design summit in Austin, I took an action item after the anomaly 
resolution to file a spec to separate operator-set and ironic-set maintenance. 
Now that CI testing is done and we're making progress on other priorities, I 
took time today to submit the RFE bug and a draft spec.

https://bugs.launchpad.net/ironic/+bug/1596107
https://review.openstack.org/334113

I want to make sure we have a consensus on the path forward before investing 
more time in the spec. I've defined some parts of the problem and a potential 
solution as I see it, but would appreciate input, especially from someone with 
more experience modifying existing API objects.

Thanks in advance,
Jay Faulkner
OSIC


Re: [openstack-dev] [ironic] Gate troubleshooting howto

2016-06-23 Thread Jay Faulkner
I dropped this Friday as an option based on the results. If you're interested, 
this will still be happening, but in mid-July.

Thanks,
Jay Faulkner
OSIC


> On Jun 22, 2016, at 12:39 PM, Jay Faulkner <j...@jvf.cc> wrote:
> 
> There was a request at the mid-cycle for a presentation on
> troubleshooting Ironic gate failures. I'd be willing to share some of my
> knowledge about this to interested folks.
> 
> I've created a doodle with a few possible times; note that one option is
> this Friday, but the others are in mid-July, as I'll be moving during the
> gap in between, so I can do it before or after the move.
> 
> Please vote here: http://doodle.com/poll/44whfnwkkm4vcgn4
> 
> 
> Thanks,
> Jay Faulkner
> OSIC
> 
> 
> 




[openstack-dev] [ironic] Gate troubleshooting howto

2016-06-22 Thread Jay Faulkner
There was a request at the mid-cycle for a presentation on
troubleshooting Ironic gate failures. I'd be willing to share some of my
knowledge about this to interested folks.

I've created a doodle with a few possible times; note that one option is
this Friday, but the others are in mid-July, as I'll be moving during the
gap in between, so I can do it before or after the move.

Please vote here: http://doodle.com/poll/44whfnwkkm4vcgn4


Thanks,
Jay Faulkner
OSIC





Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-22 Thread Jay Faulkner


On 6/22/16 12:01 PM, Sean Dague wrote:
> On 06/22/2016 02:37 PM, Chris Hoge wrote:
>>> On Jun 22, 2016, at 11:24 AM, Sean Dague wrote:
>>>
>>> On 06/22/2016 01:59 PM, Chris Hoge wrote:
> On Jun 20, 2016, at 5:10 AM, Sean Dague wrote:
>
> On 06/14/2016 07:19 PM, Chris Hoge wrote:
>>> On Jun 14, 2016, at 3:59 PM, Edward Leafe wrote:
>>>
>>> On Jun 14, 2016, at 5:50 PM, Matthew Treinish wrote:
>>>
 But, if we add another possible state on the defcore side like
 conditional pass,
 warning, yellow, etc. (the name doesn't matter) which is used to
 indicate that
 things on product X could only pass when strict validation was
 disabled (and
 be clear about where and why) then my concerns would be alleviated.
 I just do
 not want this to end up not being visible to end users trying to
 evaluate
 interoperability of different clouds using the test results.
>>> +1
>>>
>>> Don't fail them, but don't cover up their incompatibility, either.
>>> -- Ed Leafe
>> That’s not my proposal. My requirement is that vendors who want to do
>> this
>> state exactly which APIs are sending back additional data, and that this
>> information be published.
>>
>> There are different levels of incompatibility. A response with
>> additional data
>> that can be safely ignored is different from a changed response that
>> would
>> cause a client to fail.
> It's actually not different. It's really not.
>
> This idea that it's safe to add response data is based on an assumption
> that software versions only move forward. If you have a single deploy of
> software, that's fine.
>
> However as noted, we've got production clouds on Juno <-> Mitaka in the
> wild. Which means if we want to support horizontal transfer between
> clouds, the user experienced timeline might be start on a Mitaka cloud,
> then try to move to Juno. So anything added from Juno -> Mitaka without
> signaling has exactly the same client breaking behavior as removing
> attributes.
>
> Which is why microversions are needed for attribute adds.
 I’d like to note that Nova v2.0 is still a supported API, which
 as far as I understand allows for additional attributes and
 extensions. That Tempest doesn’t allow for disabling strict
 checking when using a v2.0 endpoint is a problem.

 The reporting of v2.0 in the Marketplace (which is what we do
 right now) is also a signal to a user that there may be vendor
 additions to the API.

 DefCore doesn’t disallow the use of a 2.0 endpoint as part
 of the interoperability standard.
>>> This is a point of confusion.
>>>
>>> The API definition did not allow that. The implementation of the API
>>> stack did.
>> And downstream vendors took advantage of that. We may
>> not like it, but it’s a reality in the current ecosystem.
> And we started saying "stop it" 2 years ago. And we've consistently been
> saying stop it all along. And now it's gone.
>
> And yes, for people that did not get ahead of this issue and engage the
> community, it now hurts. But this has been a quite long process.
I don't want to wade fully into this discussion, but I have a question about
this here, as there seems to be somewhat of a double standard. I know that
upstream, we generally "pay the price" for bad API decisions almost
indefinitely, because we don't want to break users. Is it reasonable to
expect a public/vendor cloud, which typically has even more
change-averse users, to change that API out from under users without a
version bump?

-Jay
>>> In Liberty the v2.0 API is optionally provided by a different backend
>>> stack that doesn't support extensions.
>>> In Mitaka it is default v2.0 API on a non extensions backend
>>> In Newton the old backend is deleted.
>>>
>>> From Newton forward there is still a v2.0 API, but all the code hooks
>>> that provided facilities for extensions are gone.
>> It’s really important that the current documentation reflect the
>> code and intent of the dev team. As of writing this e-mail, 
>>
>> "• v2 (SUPPORTED) and v2 extensions (SUPPORTED) (Will
>> be deprecated in the near future.)”[1]
>>
>> Even with this being removed in Newton, DefCore still has
>> to allow for it in every supported version.
> The v2 extensions link there, you will notice, is upstream extensions.
> All of which default on for the new code stack.
>
> Everything documented there still works on the new code stack. The v2 +
> v2 extensions linked there remains supported in Newton.
>
> The wording on this page should be updated, it is in the Nova developer
> docs, intended for people working on Nova upstream. They lag a bit from
> where 

Re: [openstack-dev] [nova] ability to set metadata on instances (but config drive is not updated)

2016-06-21 Thread Jay Faulkner



On 6/17/16 3:04 PM, Joshua Harlow wrote:

Hi folks,

I was noticing that its possible to do something like:

$ nova meta josh-testr3 set "e=f"

Then inside the VM I can do the following to eventually see that this 
changes shows up in the instance metadata exposed at the following:


$ curl -s http://169.254.169.254/openstack/latest/meta_data.json | 
python -mjson.tool


{
...
"hostname": "josh-testr3.cloud.phx3.gdg",
"launch_index": 0,
"meta": {
...
"e": "f",
..
 }
 ...
}
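As an aside, the returned document is plain JSON, so consuming the user-set keys programmatically takes only a few lines. A minimal sketch, with a trimmed copy of the sample above embedded as a string rather than fetched from the magic IP:

```python
import json

# Trimmed copy of the meta_data.json document shown above; a real client
# would GET http://169.254.169.254/openstack/latest/meta_data.json.
SAMPLE = '''
{
    "hostname": "josh-testr3.cloud.phx3.gdg",
    "launch_index": 0,
    "meta": {
        "e": "f"
    }
}
'''

metadata = json.loads(SAMPLE)
# Key/value pairs set with "nova meta <server> set k=v" land under "meta".
print(metadata["meta"]["e"])  # -> f
```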

Now if I am using the configdrive instead of the metadata server at 
that special/magic ip that same metadata never seems to change (I 
assume the configdrive would have to be 'ejected' and then a new 
configdrive created and then that configdrive 'reinserted'); was 
anyone aware of a bug that would solve this (it does appear to be a 
feature difference that could/should? be solved)?


I would be -1 to instituting this change as well, as it would be 
impossible for some hypervisors/drivers (such as Ironic) to implement. 
Additionally, how could you ensure the tenant OS didn't have the 
configdrive mounted or otherwise in use?


Thanks,
Jay

Why this is something useful (from my view) is that we (at godaddy) 
have a cron job that polls that metadata periodically and it generates 
a bunch of polling traffic (especially when each VM does this) and 
that traffic could be removed if such a 'eject' and 'reinsert' happens 
instead (since then the cron job could become a small program that 
listens for devices being inserted/removed and does the needed actions 
then, which is better than polling endlessly for data that hasn't 
changed).


-Josh



Re: [openstack-dev] [Ironic] Grenade non-voting test results

2016-06-17 Thread Jay Faulkner
+1, let's get it voting. Feel free to add me as a reviewer on the project-config 
patch to make the change if you want me to vote officially :).


Thanks,

Jay


From: Villalovos, John L 
Sent: Friday, June 17, 2016 10:49:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Ironic] Grenade non-voting test results

TL;DR: In my opinion the Grenade job is performing reliably.

Using the table at:
http://ci-watch.tintri.com/project?project=ironic=7+days
Note: I was unable to extract the data out of the web page to do a more 
thorough data evaluation.

The Grenade job appears to be performing successfully. On Thursday evening it 
may have appeared that grenade was failing without reason, but the cause was: 
https://bugs.launchpad.net/ironic/+bug/1590139

This bug was fixed in master, but the patch to stable/mitaka had not yet 
landed. And since Grenade runs tests on stable/mitaka it continued to fail. 
This morning the patch to fix stable/mitaka landed and the Grenade job is 
passing again.

Unfortunately https://bugs.launchpad.net/ironic/+bug/1590139 (which started 
around 6-June-2016) would cause random Ironic jobs to fail, as only some jobs 
would get sent to the new Zuul builders. Any job to the new Zuul builders would 
fail.  So some jobs would pass and some fail for the same patch.

I did my best to take all of that into account and in my opinion the grenade 
job is performing reliably. If I can figure out how to extract better 
statistics I will update this email.

Please let me know if you have questions or if I'm wrong :)

Thanks,
John



[openstack-dev] [ironic] IPA DIB Ramdisk needs CI, Documentation

2016-06-16 Thread Jay Faulkner

Hey all,

I recently tried to do some testing with the DIB ramdisk in devstack, 
and found there were several bugs (I filed three yesterday) in the build 
process. Additionally, there is no documentation or guidance in the IPA 
developer docs on how to build or test the images in devstack (although 
the inspector docs have some information). Also, there is no DIB image 
published by IPA (we already publish CoreOS and TinyIPA images today), 
and no CI for DIB images.


This is particularly concerning given some of our third-party CI systems 
are using DIB images that we don't even test for basic functionality 
upstream. This could lead to failures inside their CI jobs that aren't 
related whatsoever to the hardware drivers they are designed to test.


I filed https://bugs.launchpad.net/ironic-python-agent/+bug/1590935 
about getting working CI for DIB, and by extension, official support. 
I'd like to generally request more attention from those who use the DIB 
driver in getting this working reliably in devstack, documented, and 
tested. I'm willing to assist with troubleshooting and will review any 
patches related to this effort if you add me as a reviewer.


Thanks,
Jay Faulkner
OSIC





Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-14 Thread Jay Faulkner
I committed a small change to systemd-nspawn to ensure kernel modules 
could be loaded by IPA hardware managers in an older version of our 
CoreOS-based ramdisk (note that we don't use systemd-nspawn anymore, but 
instead use a chroot inside the CoreOS ramdisk).


Thanks,
Jay Faulkner
OSIC

On 6/13/16 12:11 PM, Doug Hellmann wrote:

I'm trying to pull together some information about contributions
that OpenStack community members have made *upstream* of OpenStack,
via code, docs, bug reports, or anything else to dependencies that
we have.

If you've made a contribution of that sort, I would appreciate a
quick note.  Please reply off-list, there's no need to spam everyone,
and I'll post the summary if folks want to see it.

Thanks,
Doug



[openstack-dev] [ironic] IPA example hardware managers

2016-06-10 Thread Jay Faulkner

Hi all,

I've been promising to do more knowledge sharing on IPA Hardware 
Managers, specifically in the form of a presentation. However, I wanted 
to go a different route that would be more likely to stand the test of 
time and be more self-service.


To that end, I've created a couple of well-commented example hardware 
managers here: 
https://github.com/jayofdoom/ipa-example-hardware-managers. If you've 
been wanting to know about how to write additional IPA Hardware 
Managers, between this and the already-existing inline documentation in 
IPA itself there should be more than enough information to get started. 
PRs and feedback accepted.


As a note, my hope is that we can find a reasonable place for this to 
live in the openstack namespace. I only created this in github so I 
could spend my time this afternoon writing the examples rather than 
getting repositories created.


While working on this, however, I came to a realization -- despite our 
documentation telling folks to subclass *either* 
hardware.HardwareManager or hardware.GenericHardwareManager 
(http://docs.openstack.org/developer/ironic-python-agent/#how-can-i-build-a-custom-hardwaremanager), 
after a lot of thought (and some time spent trying to drum up a use case 
for an example where it's useful), I think we should change this, and 
only encourage subclassing HardwareManager for out of tree hardware 
managers. I don't believe there's a technical way to prevent it, so it 
shouldn't technically be an API break, but I wanted to get a consensus 
before moving forward and making that change.
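For readers who haven't seen one, a minimal out-of-tree manager following the subclass-HardwareManager advice looks roughly like the sketch below. It is self-contained: the real base class lives in ironic_python_agent.hardware, and is replaced here by a stand-in (the support-level constant names are likewise assumptions, not IPA's exact names) so the snippet runs on its own.

```python
# Self-contained sketch of an out-of-tree IPA hardware manager.
# In a real deployment you would subclass
# ironic_python_agent.hardware.HardwareManager; a minimal stand-in base
# class and support levels are defined here so the example runs alone.

class HardwareManager(object):
    """Stand-in for ironic_python_agent.hardware.HardwareManager."""

    def evaluate_hardware_support(self):
        raise NotImplementedError()


# Illustrative support levels, mirroring the idea that more specific
# managers outrank the generic one (names are assumptions, not IPA's).
SUPPORT_NONE = 0
SUPPORT_GENERIC = 1
SUPPORT_SERVICE_PROVIDER = 3


class ExampleDeviceHardwareManager(HardwareManager):
    """Handles cleaning for a hypothetical vendor device."""

    def evaluate_hardware_support(self):
        # A real manager would probe the node (lspci, sysfs, ...) and
        # return SUPPORT_NONE when its device is absent.
        return SUPPORT_SERVICE_PROVIDER

    def get_clean_steps(self, node, ports):
        # Clean steps from every loaded manager are merged and run in
        # priority order by the agent.
        return [{'step': 'upgrade_example_firmware',
                 'priority': 10,
                 'interface': 'deploy',
                 'reboot_requested': False}]

    def upgrade_example_firmware(self, node, ports):
        # Placeholder: a real step would flash firmware here.
        return True


mgr = ExampleDeviceHardwareManager()
print(mgr.get_clean_steps(None, None)[0]['step'])  # -> upgrade_example_firmware
```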


I hope the information is useful!

Thanks,
Jay Faulkner
OSIC




Re: [openstack-dev] [ironic][infra][qa] Ironic grenade work nearly complete

2016-06-09 Thread Jay Faulkner

A quick update:

The devstack-gate patch is currently merging.

There was some discussion about whether or not the Ironic grenade job 
should be in the check pipeline (even as -nv) for grenade, so I split 
that patch into two pieces so the less controversial part (adding the 
grenade-nv job to Ironic's check pipeline) could merge more easily.


https://review.openstack.org/#/c/319336/ - project-config
Make grenade-dsvm-ironic non voting (in the check queue for Ironic only)

https://review.openstack.org/#/c/327985/ - project-config
Make grenade-dsvm-ironic non voting (in the check queue for grenade)

Getting working upgrade testing will be a huge milestone for Ironic. 
Thanks to those who have already helped us make progress and those who 
will help us land these and see it at work.


Thanks in advance,
Jay Faulkner
OSIC

On 6/9/16 8:28 AM, Jim Rollenhagen wrote:

Hi friends,

We're two patches away from having grenade passing in our check queue!
This is a huge step forward for us, many thanks go to the numerous folks
that have worked on or helped somehow with this.

I'd love to push this across the line today as it's less than 10 lines
of changes between the two, and we have a bunch of work nearly done that
we'd like upgrade testing running against before merging.

So we need infra cores' help here.

https://review.openstack.org/#/c/316662/ - devstack-gate
Allow to pass OS_TEST_TIMEOUT for grenade job
1 line addition with an sdague +2.

https://review.openstack.org/#/c/319336/ - project-config
Make grenade-dsvm-ironic non voting (in the check queue)
+7,-1 with an AJaeger +2.

Thanks in advance. :)

// jim



Re: [openstack-dev] [ironic] [infra] [qa] Graphs of how long jobs take?

2016-06-08 Thread Jay Faulkner

Thanks a bunch Mikhail, this was very helpful!

I've started a dashboard to track Ironic tempest job speeds as well as 
IPA '-src' job speeds; it's here: 
http://graphite.openstack.org/dashboard/#ironic-job-duration


Please feel free to improve upon it or add additional useful metrics. 
Given duration of execution is a frequent complaint about our CI, it 
seems like a good thing to graph!
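Since the dashboard is just a front end over Graphite's render API, the same series can also be fetched as JSON for offline analysis. A minimal sketch, assuming the stock Graphite /render endpoint; the exact metric path used here is an assumption based on the timers discussed in this thread:

```python
# Build a Graphite render-API URL for one of the job-duration timers.
# Assumes the stock Graphite /render endpoint; the metric path below is
# an assumption, not verified against the live server.
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2

BASE = 'http://graphite.openstack.org/render'
METRIC = ('stats.timers.nodepool.job.gate-tempest-dsvm-ironic-pxe_ssh'
          '.master.ubuntu-trusty.runtime.mean')

params = urlencode({
    'target': METRIC,
    'from': '-7days',   # last week of samples
    'format': 'json',   # machine-readable series instead of a PNG
})
url = BASE + '?' + params
print(url)
```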



Thanks again,
Jay

On 6/8/16 5:20 PM, Mikhail Medvedev wrote:

Hi Jay,

On Wed, Jun 8, 2016 at 5:56 PM, Jay Faulkner <j...@jvf.cc> wrote:


Hey all,

As you may recall, recently Ironic was changed to use iPXE and
TinyIPA in the jobs, as part of an attempt to get the jobs to use
less ram and perhaps even run more quickly in the short run.
However, when I tried to make a graph at graphite.openstack.org
showing the duration of the jobs,
it doesn't look like that metric was available
(stats.zuul.pipeline.check.job.check-tempest-dsvm-ironic-pxe_ssh.*
appears to only track the job result).


I did find two metrics that seem to be what you are looking for:

stats.timers.nodepool.job.gate-tempest-dsvm-ironic-pxe_ssh.master.ubuntu-trusty.runtime.mean
stats.timers.zuul.pipeline.check.job.gate-tempest-dsvm-ironic-pxe_ssh.*.mean 




Is there a common or documented way or tool to graph duration of
jobs so I can see the real impact of this change?


Thanks a bunch,

Jay Faulkner

OSIC




Mikhail Medvedev
IBM




[openstack-dev] [ironic] [infra] [qa] Graphs of how long jobs take?

2016-06-08 Thread Jay Faulkner

Hey all,

As you may recall, recently Ironic was changed to use iPXE and TinyIPA 
in the jobs, as part of an attempt to get the jobs to use less ram and 
perhaps even run more quickly in the short run. However, when I tried to 
make a graph at graphite.openstack.org showing the duration of the jobs, 
it doesn't look like that metric was available 
(stats.zuul.pipeline.check.job.check-tempest-dsvm-ironic-pxe_ssh.* 
appears to only track the job result).


Is there a common or documented way or tool to graph duration of jobs so 
I can see the real impact of this change?



Thanks a bunch,

Jay Faulkner

OSIC




Re: [openstack-dev] [Ironic] Changes to Ramdisk and iPXE defaults in Devstack and many gate jobs

2016-06-02 Thread Jay Faulkner
These changes have all merged and taken effect. Ironic and IPA gate jobs 
are now operating as mentioned below, with one change; during review it 
was decided to lower the amount of ram per node to 384mb instead of 
512mb of RAM. This will ensure that we don't add additional bloat to 
TinyIPA ramdisks without gate jobs indicating it via failure.



Thanks to everyone's hard work on getting TinyIPA support working. This 
is a big step towards making our CI work faster and use less resources.


Thanks,
Jay Faulkner (JayF)
OSIC

On 5/12/16 8:54 AM, Jay Faulkner wrote:


Hi all,


A change (https://review.openstack.org/#/c/313035/) to Ironic 
devstack is in the gate, changing the default ironic-python-agent 
(IPA) ramdisk from CoreOS to TinyIPA, and changing iPXE to default 
enabled.



As part of the work to improve and speed up gate jobs, we determined 
that using iPXE speeds up deployments and makes them more reliable by 
using http to transfer ramdisks instead of tftp. Additionally, 
the TinyIPA image, in development over the last few months, uses less 
ram and is smaller, allowing faster transfers and more simultaneous 
VMs to run in the gate.



In addition to changing the devstack default, there's also a patch up: 
https://review.openstack.org/#/c/313800/ to change most Ironic jobs to 
use iPXE and TinyIPA. This change will make IPA have voting check jobs 
and tarball publishing jobs for supported ramdisks (CoreOS and 
TinyIPA). Ironic (and any other projects other than IPA) will use the 
publicly published tinyipa image.



In summary:

- Devstack changes (merging now):

  - Defaults to TinyIPA ramdisk

  - Defaults to iPXE enabled

- Gate changes (needs review at: 
https://review.openstack.org/#/c/313800/ ):

  - Ironic-Python-Agent

    - Voting CoreOS + TinyIPA source (ramdisk built on the fly jobs)

  - Ironic

    - Change all jobs (except bash ramdisk pxe_ssh job) to TinyIPA

    - Change all jobs but one to use iPXE

    - Change all gate jobs to use 512 MB of RAM


If there are any questions or concerns, feel free to ask here or in 
#openstack-ironic.



P.S. I welcome users of the DIB ramdisk to help make a job to run 
against IPA. All supported ramdisks should be checked in IPA's gate to 
avoid breakage as IPA is inherently dependent on its environment.




Thanks,

Jay Faulkner (JayF)

OSIC





[openstack-dev] [Ironic] Completed: Move to TinyIPA Ramdisk and iPXE defaults in gate jobs

2016-06-02 Thread Jay Faulkner
These changes have all merged and taken effect. Ironic and IPA gate jobs 
are now operating as mentioned below, with one change; during review it 
was decided to lower the amount of ram per node to 384mb instead of 
512mb of RAM. This will ensure that we don't add additional bloat to 
TinyIPA ramdisks without gate jobs indicating it via failure. If anyone 
sees any strange behavior in the gate as a result, feel free to ping me 
on IRC.



The remaining pending change is https://review.openstack.org/#/c/323994/ 
in order to enable similar changes for ironic-inspector jobs. Your 
review attention on this is appreciated; it's a simple one word change.


Thanks to everyone's hard work on getting TinyIPA support working. This 
is a big step towards making our CI work faster and use less resources.


Thanks,
Jay Faulkner (JayF)
OSIC

On 5/12/16 8:54 AM, Jay Faulkner wrote:


Hi all,


A change (https://review.openstack.org/#/c/313035/) to Ironic 
devstack is in the gate, changing the default ironic-python-agent 
(IPA) ramdisk from CoreOS to TinyIPA, and changing iPXE to default 
enabled.



As part of the work to improve and speed up gate jobs, we determined 
that using iPXE speeds up deployments and makes them more reliable by 
using http to transfer ramdisks instead of tftp. Additionally, 
the TinyIPA image, in development over the last few months, uses less 
ram and is smaller, allowing faster transfers and more simultaneous 
VMs to run in the gate.



In addition to changing the devstack default, there's also a patch up: 
https://review.openstack.org/#/c/313800/ to change most Ironic jobs to 
use iPXE and TinyIPA. This change will make IPA have voting check jobs 
and tarball publishing jobs for supported ramdisks (CoreOS and 
TinyIPA). Ironic (and any other projects other than IPA) will use the 
publicly published tinyipa image.



In summary:

- Devstack changes (merging now):

  - Defaults to TinyIPA ramdisk

  - Defaults to iPXE enabled

- Gate changes (needs review at: 
https://review.openstack.org/#/c/313800/ ):

  - Ironic-Python-Agent

    - Voting CoreOS + TinyIPA source (ramdisk built on the fly jobs)

  - Ironic

    - Change all jobs (except bash ramdisk pxe_ssh job) to TinyIPA

    - Change all jobs but one to use iPXE

    - Change all gate jobs to use 512 MB of RAM


If there are any questions or concerns, feel free to ask here or in 
#openstack-ironic.



P.S. I welcome users of the DIB ramdisk to help make a job to run 
against IPA. All supported ramdisks should be checked in IPA's gate to 
avoid breakage as IPA is inherently dependent on its environment.




Thanks,

Jay Faulkner (JayF)

OSIC





Re: [openstack-dev] [ironic] Tooling for recovering nodes

2016-06-01 Thread Jay Faulkner

Some comments inline.


On 5/31/16 12:26 PM, Devananda van der Veen wrote:

On 05/31/2016 01:35 AM, Dmitry Tantsur wrote:

On 05/31/2016 10:25 AM, Tan, Lin wrote:

Hi,

Recently, I am working on a spec[1] in order to recover nodes which get stuck
in deploying state, so I really expect some feedback from you guys.

Ironic nodes can be stuck in
deploying/deploywait/cleaning/cleanwait/inspecting/deleting if the node is
reserved by a dead conductor (the exclusive lock was not released).
Any further requests will be denied by ironic because it thinks the node
resource is under control of another conductor.

To be more clear, let's narrow the scope and focus on the deploying state
first. Currently, people do have several choices to clear the reserved lock:
1. restart the dead conductor
2. wait up to 2 or 3 minutes and _check_deploying_status() will clear the lock.
3. The operator touches the DB to manually recover these nodes.

Option two looks very promising but there are some weakness:
2.1 It won't work if the dead conductor was renamed or deleted.
2.2 It won't work if the node's specific driver was not enabled on live
conductors.
2.3 It won't work if the node is in maintenance. (only a corner case).

We can and should fix all three cases.

2.1 and 2.2 appear to be a bug in the behavior of _check_deploying_status().

The method claims to do exactly what you suggest in 2.1 and 2.2 -- it gathers a
list of Nodes reserved by *any* offline conductor and tries to release the lock.
However, it will always fail to update them, because objects.Node.release()
raises a NodeLocked exception when called on a Node locked by a different 
conductor.

Here's the relevant code path:

ironic/conductor/manager.py:
1259 def _check_deploying_status(self, context):
...
1269 offline_conductors = self.dbapi.get_offline_conductors()
...
1273 node_iter = self.iter_nodes(
1274 fields=['id', 'reservation'],
1275 filters={'provision_state': states.DEPLOYING,
1276  'maintenance': False,
1277  'reserved_by_any_of': offline_conductors})
...
1281 for node_uuid, driver, node_id, conductor_hostname in node_iter:
1285 try:
1286 objects.Node.release(context, conductor_hostname, node_id)
...
1292 except exception.NodeLocked:
1293 LOG.warning(...)
1297 continue
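In miniature, the guard behaves like the sketch below. This is a simplified stand-in for objects.Node.release(), not the real implementation; it just shows why the sweep can never clear a lock held under another conductor's hostname.

```python
# Simplified stand-in for ironic's Node.release() guard; not the real
# implementation, just the shape of the failure described above.

class NodeLocked(Exception):
    pass


class Node(object):
    def __init__(self, reservation):
        # Hostname of the conductor holding the exclusive lock.
        self.reservation = reservation

    def release(self, caller_hostname):
        # Only the conductor named in the reservation may release it --
        # the same check that trips _check_deploying_status's sweep.
        if self.reservation != caller_hostname:
            raise NodeLocked('node is locked by %s' % self.reservation)
        self.reservation = None


node = Node(reservation='dead-conductor')

# A live conductor sweeping up after offline conductors hits the guard:
try:
    node.release('live-conductor')
except NodeLocked:
    print('lock not cleared')  # the sweep logs a warning and moves on

# Releasing under the original hostname is the only path that works,
# which is why replacing a conductor under a new hostname strands locks.
node.release('dead-conductor')
print(node.reservation)  # -> None
```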


As far as 2.3, I think we should change the query string at the start of this
method so that it includes nodes in maintenance mode. I think it's both safe and
reasonable (and, frankly, what an operator will expect) that a node which is in
maintenance mode, and in DEPLOYING state, whose conductor is offline, should
have that reservation cleared and be set to DEPLOYFAILED state.


This is an excellent idea -- and I'm going to extend it further. If I 
have any nodes in a *ING state, and they are put into maintenance, it 
should force a failure. This is potentially a more API-friendly way of 
cleaning up nodes in bad states -- an operator would need to put the node 
into maintenance, and once it enters the *FAIL state, troubleshoot why it 
failed, remove it from maintenance, and return it to production.


I obviously strongly desire an "override command" as an operator, but I 
really think this could handle a large percentage of the use cases that 
made me desire it in the first place.
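As a sketch of that proposed behavior: putting a node that is in a *ING provision state into maintenance would force the matching *FAIL state. The state names below follow the thread's usage; the mapping and function are purely illustrative, not ironic's actual state-machine code.

```python
# Illustrative-only sketch: maintenance on a node in a *ING provision
# state forces the corresponding *FAIL state. Not ironic's real code.

ING_TO_FAIL = {
    'DEPLOYING': 'DEPLOYFAILED',
    'CLEANING': 'CLEANFAILED',
    'INSPECTING': 'INSPECTFAILED',
}


def set_maintenance(node):
    node['maintenance'] = True
    failed = ING_TO_FAIL.get(node['provision_state'])
    if failed is not None:
        # Mid-operation node: fail it so the stale operation cannot
        # linger behind the maintenance flag.
        node['provision_state'] = failed
    return node


node = {'provision_state': 'DEPLOYING', 'maintenance': False}
set_maintenance(node)
print(node['provision_state'])  # -> DEPLOYFAILED
```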



--devananda


Definitely we should improve option 2, but there could be more issues
I don't know about in a more complicated environment.
So my question is: do we still need a new command to recover these nodes more
easily, without accessing the DB, like this PoC [2]:
   ironic-noderecover --node_uuids=UUID1,UUID2
--config-file=/etc/ironic/ironic.conf

I'm -1 to anything silently removing the lock until I see a clear use case which
is impossible to improve within Ironic itself. Such a utility may and will be
abused.

I'm fine with anything that does not forcibly remove the lock by default.
I agree such a utility could be abused. I don't think that's a good 
argument for not writing it for operators. However, I agree that any 
utility we write that could or would modify a lock should not do so by 
default, and should warn before doing so, but there are cases where 
getting a lock cleared is desirable and necessary.


A good example of this would be an ironic-conductor failing while a node 
is locked, and being brought up with a different hostname. Today, 
there's no way to get that lock off that node again.


Even if you force operators to replace a conductor with one with an 
identical hostname, during the time this replacement was occurring any 
nodes locked would remain locked.


Thanks,
Jay Faulkner

Best Regards,

Tan


[1] https://review.openstack.org/#/c/319812
[2] https://review.openstack.org/#/c/311273/



Re: [openstack-dev] [ironic] Tooling for recovering nodes

2016-06-01 Thread Jay Faulkner

Hey Tan, some comments inline.


On 5/31/16 1:25 AM, Tan, Lin wrote:

Hi,

Recently, I am working on a spec[1] in order to recover nodes which get stuck 
in deploying state, so I really expect some feedback from you guys.

Ironic nodes can be stuck in 
deploying/deploywait/cleaning/cleanwait/inspecting/deleting if the node is 
reserved by a dead conductor (the exclusive lock was not released).
Any further requests will be denied by ironic because it thinks the node 
resource is under control of another conductor.

To be more clear, let's narrow the scope and focus on the deploying state 
first. Currently, people do have several choices to clear the reserved lock:
1. restart the dead conductor
2. wait up to 2 or 3 minutes and _check_deploying_states() will clear the lock.
3. The operator touches the DB to manually recover these nodes.
I actually like option #3 being optionally integrated into a tool to 
clear nodes stuck in *ing state. If specified, it would clear the lock 
on the deploy as it moved it from DEPLOYING -> DEPLOYFAILED. Obviously, 
for cleaning this could be dangerous, and should be documented as so -- 
imagine clearing a lock mid-firmware flash and having a power action 
taken to brick the node.


Given this is tooling intended to handle many cases, I think it's better 
to give the operator the choice to take more dramatic action if they wish.
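Sketched out, such a tool might behave like this (function and field names are illustrative, not ironic's API): by default it refuses to touch a held lock, and only clears one when the operator explicitly opts in:

```python
DEPLOYING, DEPLOYFAILED = "deploying", "deploy failed"


def recover_stuck_nodes(nodes, clear_locks=False):
    """Move nodes stuck in DEPLOYING to DEPLOYFAILED; return recovered uuids."""
    recovered = []
    for node in nodes:
        if node["provision_state"] != DEPLOYING:
            continue
        if node["reservation"] and not clear_locks:
            # Default: refuse to clear a lock. For cleaning this matters
            # even more, since clearing a lock mid-firmware-flash and then
            # taking a power action could brick the node.
            continue
        node["reservation"] = None
        node["provision_state"] = DEPLOYFAILED
        recovered.append(node["uuid"])
    return recovered
```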



Thanks,
Jay Faulkner

Option two looks very promising but there are some weaknesses:
2.1 It won't work if the dead conductor was renamed or deleted.
2.2 It won't work if the node's specific driver was not enabled on live 
conductors.
2.3 It won't work if the node is in maintenance. (only a corner case).

Definitely we should improve option 2, but there could be more issues I
didn't know about in a more complicated environment.
So my question is: do we still need a new command to recover these nodes more
easily, without accessing the DB, like this PoC [2]:
   ironic-noderecover --node_uuids=UUID1,UUID2  
--config-file=/etc/ironic/ironic.conf

Best Regards,

Tan


[1] https://review.openstack.org/#/c/319812
[2] https://review.openstack.org/#/c/311273/




Re: [openstack-dev] [ironic] looking for documentation liaison

2016-06-01 Thread Jay Faulkner

I don't love writing docs, but I've done more than my share of reading them :).
I'm very willing to help out here as docs liaison.

Thanks,
Jay Faulkner

From: Loo, Ruby <ruby@intel.com>
Sent: Tuesday, May 31, 2016 10:23:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [ironic] looking for documentation liaison

Hi,

We're looking for a documentation liaison [1]. If you love ('like' is also
acceptable) documentation, care that ironic has great documentation, and would
love to volunteer, please let us know.

The position would require you to:

- attend the weekly doc team meetings [2] (or biweekly, depending on which 
times work for you), and represent ironic
- attend the weekly ironic meetings[3] and report (via the subteam reports) on 
anything that may impact ironic
- open bugs/whatever to track getting any documentation-related work done. You
aren't expected to do the work yourself, although please do if you'd like!
- know the general status of ironic documentation
- see the expectations mentioned at [1]

Please let me know if you have any questions. Thanks, and may the best
candidate win :)

--ruby

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
[2] https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting
[3] https://wiki.openstack.org/wiki/Meetings/Ironic







[openstack-dev] [ironic] [oslo] Template to follow for policy support?

2016-05-31 Thread Jay Faulkner
Hi all,


During this cycle, on behalf of OSIC, I'll be working on implementing proper 
oslo.policy support for Ironic. The reasons this is needed probably don't need 
to be explained here, so I won't :).


I have two requests for the list regarding this though:


1) Is there a general guideline to follow when designing policy roles? There 
appears to have been some discussion around this already here: 
https://review.openstack.org/#/c/245629/, but it hasn't moved in over a month. 
I want Ironic's implementation of policy to be as 'standard' as possible; but 
I've had trouble finding any kind of standard.


2) A general call for contributors to help make this happen in Ironic. I want, 
in the next week, to finish up the research and start on a spec. Anyone willing 
to help with the design or implementation, let me know here or in IRC so we can 
work together.


Thanks in advance,

Jay Faulkner


P.S. Yes, I am aware of 
http://specs.openstack.org/openstack/oslo-specs/specs/newton/policy-in-code.html
 and will ensure whatever Ironic does follows this specification.


[openstack-dev] [infra] Wiki signups (was: Re: [ironic] third party CI systems - vendor requirement milestones status)

2016-05-26 Thread Jay Faulkner
Infra team + Pablo,


I'm not sure if this is a pattern infra folks want to follow or not (I added 
[infra] to the title), but I know recently they did re-enable wiki signups for 
a short period of time to allow a couple of folks in IRC to get accounts. I''d 
try to ask in #openstack-infra and see if they can do the same for you.


This is a recurring issue though, and it affects new contributors 
disproportionately to those of us who have been in the community longer. Are 
there any plans in place to make it so wiki signups can occur in a more normal 
manner, or should projects move data that needs to be widely modifyable off the 
wiki?

Thanks,
Jay Faulkner


From: Pavlo Shchelokovskyy <pshchelokovs...@mirantis.com>
Sent: Wednesday, May 25, 2016 11:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ironic] third party CI systems - vendor 
requirement milestones status

Hi,

on a side-note concerning editing the Wiki, apparently registration of new 
accounts on OpenStack Wiki is disabled (closed some time ago due to bot spam), 
so if you do not have an account already, you won't be able to edit the page. 
The solution is only to ping people with valid accounts and ask them to make 
the changes for you.

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Wed, May 25, 2016 at 7:56 PM, Kurt Taylor <kurt.r.tay...@gmail.com> wrote:
We are in the final stretch for requiring CI testing for ironic drivers. I have 
organized the CI teams that I know about and their current status into the 
following wiki page:
https://wiki.openstack.org/wiki/Ironic/Drivers#3rd_Party_CI_required_implementation_status

I have already heard from a few folks with edits, but please review this info 
and let me know if you have any changes. You can make needed changes yourself, 
but let me know so I can keep track.

Thanks!
Kurt Taylor (krtaylor)



[openstack-dev] [Ironic] Changes to Ramdisk and iPXE defaults in Devstack and many gate jobs

2016-05-12 Thread Jay Faulkner
Hi all,


A change (https://review.openstack.org/#/c/313035/) to Ironic devstack is in 
the gate, changing the default ironic-python-agent (IPA) ramdisk from CoreOS to 
TinyIPA, and changing iPXE to default enabled.


As part of the work to improve and speed up gate jobs, we determined that using 
iPXE speeds up deployments and makes them more reliable by using HTTP to
transfer ramdisks instead of TFTP. Additionally, the TinyIPA image, in
development over the last few months, uses less RAM and is smaller, allowing
faster transfers and more simultaneous VMs to run in the gate.


In addition to changing the devstack default, there's also a patch up: 
https://review.openstack.org/#/c/313800/ to change most Ironic jobs to use iPXE 
and TinyIPA. This change will make IPA have voting check jobs and tarball 
publishing jobs for supported ramdisks (CoreOS and TinyIPA). Ironic (and any 
other projects other than IPA) will use the publicly published tinyipa image.


In summary:

- Devstack changes (merging now):

  - Defaults to TinyIPA ramdisk

  - Defaults to iPXE enabled

- Gate changes (needs review at: https://review.openstack.org/#/c/313800/ )

  - Ironic-Python-Agent

- Voting CoreOS + TinyIPA source (ramdisk built on the fly jobs)

  - Ironic

- Change all jobs (except bash ramdisk pxe_ssh job) to TinyIPA

- Change all jobs but one to use iPXE

- Change all gate jobs to use 512 MB of RAM


If there are any questions or concerns, feel free to ask here or in 
#openstack-ironic.


P.S. I welcome users of the DIB ramdisk to help make a job to run against IPA. 
All supported ramdisks should be checked in IPA's gate to avoid breakage as IPA 
is inherently dependent on its environment.



Thanks,

Jay Faulkner (JayF)

OSIC


Re: [openstack-dev] [all][stackalytics] Gaming the Stackalytics stats

2016-04-08 Thread Jay Faulkner
I know a lot of folks explicitly avoid a +0 vote with a comment because you 
don't get "credit" for it in statistics. Whether or not that should matter is 
another discussion, but there is a significant disincentive to no-voting right 
now.


-

Jay Faulkner



From: Dolph Mathews <dolph.math...@gmail.com>
Sent: Friday, April 8, 2016 1:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][stackalytics] Gaming the Stackalytics stats



On Friday, April 8, 2016, John Dickinson <m...@not.mn> wrote:


On 8 Apr 2016, at 13:35, Jeremy Stanley wrote:

> On 2016-04-08 19:42:18 +0200 (+0200), Dmitry Tantsur wrote:
>> There are many ways to game a simple +1 counter, such as +1'ing changes
>> that already have at least 1x +2, or which already approved, or which need
>> rechecking...
> [...]
>
> The behavior which baffles me, and also seems to be on the rise
> lately, is random +1 votes on changes whose commit messages and/or
> status clearly indicate they should not merged and do not need to be
> reviewed. I suppose that's another an easy way to avoid the dreaded
> "disagreements" counter?
> --
> Jeremy Stanley


I have been told that some OpenStack onboarding teaches new members of the 
community to do reviews. And they say, effectively, "muddle through as you can. 
You won't understand it all at first, but do your best. When you're done, add a 
+1 and move to the next one"

I advocate for basically this, but instead of a +1, leave a +0 and ask 
questions. The new reviewer will inevitably learn something and the author will 
benefit by explaining their change (teaching is the best way to learn).


I've been working to correct this when I've seen it, but +1 reviews with no 
comments might not be people trying to game. It might simply be people trying 
to get involved that don't know any better yet.

--John





Re: [openstack-dev] [ironic] Nominating Julia Kreger for core reviewer

2016-03-24 Thread Jay Faulkner
+1, I've worked with Julia for a while and she's deserving of core status. 
Congratulations!

-Jay


From: Jim Rollenhagen 
Sent: Thursday, March 24, 2016 12:08 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ironic] Nominating Julia Kreger for core reviewer

Hey all,

I'm nominating Julia Kreger (TheJulia in IRC) for ironic-core. She runs
the Bifrost project, gives super valuable reviews, is beginning to lead
the boot from volume efforts, and is clearly an expert in this space.

All in favor say +1 :)

// jim



Re: [openstack-dev] [all] re-introducing twisted to global-requirements

2016-01-07 Thread Jay Faulkner
It's also worth noting that the mimic team, along with other Rackers who work 
on Twisted, all worked to get Python 3 support for mimic and associated 
dependencies in order to get this into OpenStack. I think it's safe to say this 
is a very friendly upstream and will help resolve any issues we might suss out.


Thanks,

Jay Faulkner


From: Dmitry Tantsur <divius.ins...@gmail.com>
Sent: Thursday, January 7, 2016 11:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] re-introducing twisted to global-requirements

2016-01-07 20:09 GMT+01:00 Jim Rollenhagen <j...@jimrollenhagen.com>:
Hi all,

A change to global-requirements[1] introduces mimic, which is an http
server that can mock various APIs, including nova and ironic, including
control of error codes and timeouts. The ironic team plans to use this
for testing python-ironicclient without standing up a full ironic
environment.

Here's the catch - mimic is built on twisted. I know twisted was
previously removed from OpenStack (or at least people said "pls no", I
don't know the full history). We didn't intend to stealth-introduce
twisted back into g-r, but it was pointed out to me that it may appear
this way, so here I am letting everyone know. lifeless pointed out that
when tests are failing, people may end up digging into mimic or twisted
code, which most people in this community aren't familiar with AFAIK,
which is a valid point though I hope it isn't required often.

Btw, I've spent some amount of time (5 years?) with twisted on my previous 
jobs. While my memory is no longer fresh on it, I can definitely be pinged to 
help with it, if problems appear.


So, the primary question here is: do folks have a problem with adding
twisted here? We're holding off on Ironic changes that depend on this
until this discussion has happened, but aren't reverting the g-r change
until we decide one way or another.

// jim

[1] https://review.openstack.org/#/c/220268/




--
Dmitry Tantsur


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-09 Thread Jay Faulkner
+1


From: Jim Rollenhagen 
Sent: Thursday, October 8, 2015 2:47 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ironic] Nominating two new core reviewers

Hi all,

I've been thinking a lot about Ironic's core reviewer team and how we might
make it better.

I'd like to grow the team more through trust and mentoring. We should be
able to promote someone to core based on a good knowledge of *some* of
the code base, and trust them not to +2 things they don't know about. I'd
also like to build a culture of mentoring non-cores on how to review, in
preparation for adding them to the team. Through these pieces, I'm hoping
we can have a few rounds of core additions this cycle.

With that said...

I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
have been super high quality, and the quantity is ever-increasing. He's
also started helping out with some smaller efforts (full tempest, for
example), and I'd love to see that continue with larger efforts.

I'd also like to nominate John Villalovos (jlvillal). John has been
reviewing a ton of code and making a real effort to learn everything,
and keep track of everything going on in the project.

Ironic cores, please reply with your vote; provided feedback is positive,
I'd like to make this official next week sometime. Thanks!

// jim




Re: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

2015-09-24 Thread Jay Faulkner
Hi Ionut,

I like the idea -- I think there's only going to be one potential hiccup with 
getting this upstream: the use of an additional external database.

My suggestion is to go ahead and post what you have up to Gerrit -- even if 
there's no spec and it's not ready to merge, everyone will be able to see what 
you're working on. If it's important for you to merge this upstream, I'd 
suggest starting on a spec for Ironic 
(https://wiki.openstack.org/wiki/Ironic/Specs_Process). 

Also as always, feel free to drop by #openstack-ironic on Freenode and chat 
about this as well. It sounds like you have a big use case for Ironic and we'd 
love to have you in the IRC community.

Thanks,
Jay Faulkner


From: Ionut Balutoiu <ibalut...@cloudbasesolutions.com>
Sent: Thursday, September 24, 2015 8:38 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

Hello, guys!

I'm starting a new implementation of a dhcp provider,
mainly to be used for standalone Ironic, and I'm planning
to push it upstream. I'm using the isc-dhcp-server service from
Linux. When an Ironic node is started, the ironic-conductor
writes the MAC-IP reservation for that node into the config file and
reloads the dhcp service. I'm using a SQL database as a backend to store
the dhcp reservations (I think it is cleaner, and it should allow us
to have more than one DHCP server). What do you think about my
implementation?
Also, I'm not sure how I can scale this out to provide HA/failover.
Do you guys have any ideas?

Regards,
Ionut Balutoiu
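The reservation step Ionut describes boils down to rendering an isc-dhcp-server host declaration per node. A hedged sketch (how the provider would actually name and place these stanzas in dhcpd.conf is an assumption):

```python
def host_stanza(name, mac, ip):
    """Render an isc-dhcp-server host block reserving `ip` for `mac`."""
    return (
        "host %s {\n"
        "  hardware ethernet %s;\n"
        "  fixed-address %s;\n"
        "}\n" % (name, mac, ip)
    )


# The conductor would append this to the dhcpd config and reload the service.
stanza = host_stanza("node-1", "52:54:00:aa:bb:cc", "10.0.0.5")
```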



Re: [openstack-dev] [Ironic] proposing rameshg87 to ironic-core

2015-03-09 Thread Jay Faulkner
I am not a core reviewer on all the projects in question, but I’m +1 to this
addition. Thanks, Ramakrishnan, for all the good reviews.

-Jay

On Mar 9, 2015, at 3:03 PM, Devananda van der Veen <devananda@gmail.com> wrote:

Hi all,

I'd like to propose adding Ramakrishnan (rameshg87) to ironic-core.

He's been consistently providing good code reviews, and been in the top five 
active reviewers for the last 90 days and top 10 for the last 180 days. Two 
cores have recently approached me to let me know that they, too, find his 
reviews valuable.

Furthermore, Ramakrishnan has made significant code contributions to Ironic 
over the last year. While working primarily on the iLO driver, he has also done 
a lot of refactoring of the core code, touched on several other drivers, and 
maintains the proliantutils library on stackforge. All in all, I feel this 
demonstrates a good and growing knowledge of the codebase and architecture of 
our project, and feel he'd be a valuable member of the core team.

Stats, for those that want them, are below the break.

Best Regards,
Devananda



http://stackalytics.com/?release=all&module=ironic-group&user_id=rameshg87

http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt
http://russellbryant.net/openstack-stats/ironic-reviewers-180.txt


[openstack-dev] [Ironic] Move agent built tools into openstack/coreos-image-builder

2015-02-13 Thread Jay Faulkner
Hi all,

At the Ironic mid-cycle sprint in San Francisco, we talked a bit about ramdisk 
image building and where it should be located. Currently the ramdisk build 
tooling for the ironic-python-agent CoreOS ramdisk lives entirely in the 
ironic-python-agent repo. This isn’t ideal, as we’d like to reuse this code to 
create ramdisks for ironic-discoverd and potentially other things.

To that end; I’ve proposed https://review.openstack.org/#/c/155868/ to add 
openstack/coreos-image-builder. The initial repository is a direct 
filter-branch from the imagebuild/ directory in ironic-python-agent, but I 
expect to quickly refactor it into something more generic (would like to do 
that work in gerrit; hence proposing it now) in order for other projects to be 
able to consume this at their leisure.

Thanks all!

-Jay Faulkner



[openstack-dev] [nova] FFE Request: Proxy neutron configuration to guest instance

2015-02-12 Thread Jay Faulkner
Hi Nova cores,

We’d like to request an FFE for this added nova feature. It gives a real 
interface - a JSON file - to network data inside the instance. This is a patch 
Rackspace carries downstream, and we’ve had lots of interested users, including 
the OpenStack Infra team and upstream cloud-init. We’d love to get this in for 
Kilo so all can benefit from the better interface.

There are a few small patches remaining to implement this functionality:
https://review.openstack.org/#/c/155116/ Updates the testing portion of the 
spec to reflect we can’t tempest test this, and will instead add functional 
tests to Nova for it.

Core Functionality
https://review.openstack.org/#/c/143755/ - Adds IPv6 support to Nova’s network 
unit tests so we can test the functionality in IPv6.
https://review.openstack.org/#/c/102649/ - Builds and prepares the neutron 
network data to expose
https://review.openstack.org/#/c/153097/ - Exposes the Neutron network data 
built in the last patch to Configdrive/Metadata service

VLAN Support
As a note; while we’d like all these patches to be merged, it’s clear the VLAN 
support is a bit more complex than the other patches, and we’d be OK with the 
other patches receiving an FFE without this one (although obviously we’d prefer
to get everything in K).

https://review.openstack.org/#/c/152703/ - Adds VLAN support for Neutron 
network data generation.

Please let me or Josh know if you have any questions.

Thanks,
Jay Faulkner (JayF) & Josh Gachnang (JoshNang)


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Jay Faulkner

On Jan 29, 2015, at 2:52 PM, Kevin Benton <blak...@gmail.com> wrote:

Oh, I understood it a little differently. I took "parsing of error messages
here is not the way we’d like to solve this problem" as meaning that parsing
them in their current ad-hoc, project-specific format is not the way we want to 
solve this (e.g. the way tempest does it). But if we had a structured way like 
the EC2 errors, it would be a much easier problem to solve.

So either way we are still parsing the body, the only difference is that the 
parser no longer has to understand how to parse Neutron errors vs. Nova errors. 
It just needs to parse the standard OpenStack error format that we come up 
with.


This would be especially helpful for things like haproxy or other load 
balancers, as you could then have them put up a static, openstack-formatted 
JSON error page for their own errors and trust the clients could parse them 
properly.
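To illustrate the payoff: today a client has to probe each project's historical envelope, while one agreed payload needs a single code path. A sketch in Python, with the fault/message keys taken from the proposal in this thread rather than any shipped OpenStack format:

```python
def parse_legacy_error(body):
    # Today: probe each project's historical error envelope in turn
    # (mirrors the tempest-lib logic quoted elsewhere in this thread).
    for key in ("cloudServersFault", "computeFault", "error"):
        if key in body:
            return body[key]["message"]
    return body.get("message", "unknown error")


def parse_standard_error(body):
    # With one agreed format, one lookup suffices; a load balancer could
    # even serve this shape as a static error page for its own failures.
    return body["fault"], body["message"]


legacy = {"computeFault": {"message": "boom"}}
standard = {
    "fault": "ComputeFeatureUnsupportedOnInstanceType",
    "message": "This compute feature is not supported on this instance type.",
}
```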

-Jay



On Thu, Jan 29, 2015 at 12:04 PM, John Dickinson <m...@not.mn> wrote:
I think there are two points. First, the original requirement (in the first 
email on this thread) is not what's wanted:

"...looking at the response body and HTTP response code an external system
can’t understand what exactly went wrong. And parsing of error messages here is
not the way we’d like to solve this problem."

So adding a response body to parse doesn't solve the problem. The request as I 
read it is to have a set of well-defined error codes to know what happens.

Second, my response is a little tongue-in-cheek, because I think the IIS 
response codes are a perfect example of extending a common, well-known protocol 
with custom extensions that breaks existing clients. I would hate to see us do 
that.

So if we can't subtly break http, and we can't have error response documents, 
then we're left with custom error codes in the particular response-code class. 
eg 461 SecurityGroupNotFound or 462 InvalidKeyName (from the original examples)


--John






 On Jan 29, 2015, at 11:39 AM, Brant Knudson <b...@acm.org> wrote:



 On Thu, Jan 29, 2015 at 11:41 AM, Sean Dague <s...@dague.net> wrote:
 Correct. This actually came up at the Nova mid cycle in a side
 conversation with Ironic and Neutron folks.

 HTTP error codes are not sufficiently granular to describe what happens
 when a REST service goes wrong, especially if it goes wrong in a way
 that would let the client do something other than blindly try the same
 request, or fail.

 Having a standard json error payload would be really nice.

 {
   "fault": "ComputeFeatureUnsupportedOnInstanceType",
   "message": "This compute feature is not supported on this kind of
 instance type. If you need this feature please use a different instance
 type. See your cloud provider for options."
 }

 That would let us surface more specific errors.

 Today there is a giant hodgepodge - see:

 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L412-L424

 https://github.com/openstack/tempest-lib/blob/master/tempest_lib/common/rest_client.py#L460-L492

 Especially blocks like this:

 if 'cloudServersFault' in resp_body:
 message =
 resp_body['cloudServersFault']['message']
 elif 'computeFault' in resp_body:
 message = resp_body['computeFault']['message']
 elif 'error' in resp_body:
 message = resp_body['error']['message']
 elif 'message' in resp_body:
 message = resp_body['message']

 Standardization here from the API WG would be really great.

 -Sean

 On 01/29/2015 09:11 AM, Roman Podoliaka wrote:
  Hi Anne,
 
  I think Eugeniya refers to a problem, that we can't really distinguish
  between two different  badRequest (400) errors (e.g. wrong security
  group name vs wrong key pair name when starting an instance), unless
  we parse the error description, which might be error prone.
 
  Thanks,
  Roman
 
  On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
  annegen...@justwriteclick.com wrote:
 
 
  On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
  ekudryash...@mirantis.com wrote:
 
  Hi, all
 
 
  Openstack APIs interact with each other and external systems partially by
  passing HTTP errors. The only meaningful difference between types of
  exceptions is the HTTP code, but the current codes are generalized, so an
  external system can’t distinguish what actually happened.
 
 
  As an example, the two failures below differ only in their error message:
 
 
  request:
 
  POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1
 
  Host: 192.168.122.195:8774
 
  X-Auth-Project-Id: demo
 
  Accept-Encoding: gzip, deflate, compress
 
  Content-Length: 189
 
  Accept: 

[openstack-dev] [Ironic] [Agent] Breaking HardwareManager API Change proposed

2014-12-31 Thread Jay Faulkner
Hi all,

I proposed https://review.openstack.org/#/c/143193 to ironic-python-agent, in 
an attempt to make Hardware Manager loading more sane. As it works today, the 
most specific hardware manager is the only one chosen. This means in order to 
use a mix of hardware managers, you have to compose a custom interface. This is 
not the way I originally thought it worked, and not the way Josh and I 
presented it at the summit[1].

This change makes it so we try each method in priority order (from the most 
specific to the least specific hardware manager). If the method exists and 
doesn’t raise NotImplementedError, it is allowed to complete and any errors 
bubble up. If an AttributeError or NotImplementedError is raised, the next most 
generic manager’s method is called, until either all managers have been attempted 
(in which case we fail) or one of them completes without raising the exceptions above.
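
The dispatch behaviour described above can be sketched roughly as follows. The
class and method names here are illustrative only, not the actual
ironic-python-agent interfaces:

```python
class GenericHardwareManager:
    """Least specific manager: handles anything."""
    def erase_device(self, device):
        return 'generic erase of %s' % device


class VendorHardwareManager:
    """More specific manager: only handles devices it recognises."""
    def erase_device(self, device):
        if not device.startswith('vendor-'):
            raise NotImplementedError('not my hardware')
        return 'vendor erase of %s' % device


def dispatch(managers, method, *args):
    """Try each manager in priority order (most specific first).

    AttributeError / NotImplementedError mean "try the next one";
    any other exception bubbles up to the caller.
    """
    for mgr in managers:  # assumed already sorted by priority
        try:
            return getattr(mgr, method)(*args)
        except (AttributeError, NotImplementedError):
            continue
    raise RuntimeError('no hardware manager implements %s' % method)


managers = [VendorHardwareManager(), GenericHardwareManager()]
print(dispatch(managers, 'erase_device', 'sda'))          # falls through to generic
print(dispatch(managers, 'erase_device', 'vendor-nvme'))  # handled by vendor manager
```

The point of the change is visible in the first call: the vendor manager
declines with NotImplementedError, and the generic one handles it -- no custom
composed interface required.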

The downside to this is that it will change behavior for anyone using hardware 
managers downstream. As of today, the only hardware manager that I know of 
external to Ironic is the one we use at Rackspace for OnMetal[2]. I’m sending 
this email to check and see if anyone has objection to this interface changing 
in this way, and generally asking for comment.

Thanks,
Jay Faulkner

1: https://www.youtube.com/watch?v=2Oi2T2pSGDU
2: https://github.com/rackerlabs/onmetal-ironic-hardware-manager
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] thoughts on the midcycle

2014-12-30 Thread Jay Faulkner

On Dec 29, 2014, at 2:45 PM, Devananda van der Veen 
devananda@gmail.com wrote:

That being said, I'd also like to put forth this idea: if we had a second 
gathering (with the same focus on writing code) the following week (let's say, 
Feb 11 - 13) in the SF Bay area -- who would attend? Would we be able to get 
the other half of the core team together and get more work done? Is this a 
good idea?


+1 I’d be willing and able to attend this.


-
Jay Faulkner


Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-10 Thread Jay Faulkner
Oftentimes I find myself in need of going the other direction — which IRC nick 
goes to which person. Does anyone know how to do that with the Foundation 
directory?

Thanks,
Jay

 On Dec 10, 2014, at 2:30 AM, Matthew Gilliard matthew.gilli...@gmail.com 
 wrote:
 
 So, are we agreed that http://www.openstack.org/community/members/ is
 the authoritative place for IRC lookups? In which case, I'll take the
 old content out of https://wiki.openstack.org/wiki/People and leave a
 message directing people where to look.
 
 I don't have the imagination to use anything other than my real name
 on IRC but for people who do, should we try to encourage putting the
 IRC nick in the gerrit name?
 
 On Tue, Dec 9, 2014 at 11:56 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Angus Salkeld's message of 2014-12-09 15:25:59 -0800:
 On Wed, Dec 10, 2014 at 5:11 AM, Stefano Maffulli stef...@openstack.org
 wrote:
 
 On 12/09/2014 06:04 AM, Jeremy Stanley wrote:
 We already have a solution for tracking the contributor-IRC
 mapping--add it to your Foundation Member Profile. For example, mine
 is in there already:
 
http://www.openstack.org/community/members/profile/5479
 
 I recommend updating the openstack.org member profile and add IRC
 nickname there (and while you're there, update your affiliation history).
 
 There is also a search engine on:
 
 http://www.openstack.org/community/members/
 
 
 Except that info doesn't appear nicely in review. Some people put their
 nick in their Full Name in
 gerrit. Hopefully Clint doesn't mind:
 
 https://review.openstack.org/#/q/owner:%22Clint+%27SpamapS%27+Byrum%22+status:open,n,z
 
 
 Indeed, I really didn't like that I'd be reviewing somebody's change,
 and talking to them on IRC, and not know if they knew who I was.
 
 It also has the odd side effect that gerritbot triggers my IRC filters
 when I 'git review'.
 


Re: [openstack-dev] [TripleO] [Ironic] Do we want to remove Nova-bm support?

2014-12-04 Thread Jay Faulkner

 On Dec 4, 2014, at 11:04 AM, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Steve Kowalik's message of 2014-12-03 20:47:19 -0800:
 Hi all,
 
I'm becoming increasingly concerned about all of the code paths
 in tripleo-incubator that check $USE_IRONIC -eq 0 -- that is, use
 nova-baremetal rather than Ironic. We do not check nova-bm support in
 CI, haven't for at least a month, and I'm concerned that parts of it
 may be slowly bit-rotting.
 
I think our documentation is fairly clear that nova-baremetal is
 deprecated and Ironic is the way forward, and I know it flies in the
 face of backwards-compatibility, but do we want to bite the bullet and
 remove nova-bm support?
 
 Has Ironic settled on a migration path/tool from nova-bm? If yes, then
 we should remove nova-bm support and point people at the migration
 documentation.
 

Clint,

I believe this is the migration document: 
https://wiki.openstack.org/wiki/Ironic/NovaBaremetalIronicMigration. As it was 
required for graduation, as far as I’m aware this is all the work that’s going 
to be done by Ironic for nova-bm migration.

FWIW, I’m +1 to removing this support from TripleO as other uses of nova-bm 
have been deprecated across the Juno release, and this, IMO, should follow the 
same pattern.

Thanks,
Jay


 If Ironic decided not to provide one, then we should just remove support
 as well.
 
 If Ironic just isn't done yet, then removing nova-bm in TripleO is
 premature and we should wait for them to finish.
 


Re: [openstack-dev] [Ironic] Proposing new meeting times

2014-11-17 Thread Jay Faulkner


From: Devananda van der Veen [mailto:devananda@gmail.com]
Sent: Monday, November 17, 2014 5:00 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ironic] Proposing new meeting times

Hi all,

As discussed in Paris and at today's IRC meeting [1] we are going to be 
alternating the time of the weekly IRC meetings to accommodate our contributors 
in EMEA better. No time will be perfect for everyone, but as it stands, we 
rarely (if ever) see our Indian, Chinese, and Japanese contributors -- and it's 
quite hard for any of the AU / NZ folks to attend.

I'm proposing two sets of times below. Please respond with a -1 vote to an 
option if that option would cause you to miss ALL meetings, or a +1 vote if 
you can magically attend ALL the meetings. If you can attend, without 
significant disruption, at least one of the time slots in a proposal, please do 
not vote either for or against it. This way we can identify a proposal which 
allows everyone to attend at a minimum 50% of the meetings, and preferentially 
weight towards one that allows more contributors to attend two meetings.

This link shows the local times in some major countries / timezones around the 
world (and you can customize it to add your own).
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20141125&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

For reference, the current meeting time is 1900 UTC.

Option #1: alternate between Monday 1900 UTC & Tuesday 0900 UTC.  I like this 
because 1900 UTC spans all of US and western EU, while 0900 combines EU and 
EMEA. Folks in western EU are in the middle and can attend all meetings.

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=19&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=25&hour=9&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


Option #2: alternate between Monday 1700 UTC & Tuesday 0500 UTC. I like this 
because it shifts the current slot two hours earlier, making it easier for 
eastern EU to attend without excluding the western US, and while 0500 UTC is 
not so late that US west coast contributors can't attend (it's 9PM for us), it 
is harder for western EU folks to attend. There's really no one in the middle 
here, but there is at least a chance for US west coast and EMEA to overlap, 
which we don't have at any other time.

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014&month=11&day=24&hour=17&min=0&sec=0&p1=224&p2=179&p3=78&p4=367&p5=44&p6=33&p7=248&p8=5


+1, I’d be able to attend both these meetings, I believe.

-Jay Faulkner

I'll collate all the responses to this thread during the week, ahead of next 
week's regularly-scheduled meeting.

-Devananda

[1] 
http://eavesdrop.openstack.org/meetings/ironic/2014/ironic.2014-11-17-19.00.log.html


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-16 Thread Jay Faulkner

On Oct 16, 2014, at 10:02 AM, Anita Kuno 
ante...@anteaya.info wrote:

Heh, yeah didn't expect to take over the thread on this, sorry about
that. Perhaps we should form a group, vegetarians in Paris?

I know myself and at least one other Summit attendee with similarly annoying 
dietary restrictions — no gluten — which, from what I understand, is very 
difficult to manage in France.

If there’s anyone else who has similar restrictions would like to team up so we 
can all find places to eat without getting sick, feel free to email me off list 
or ping me on IRC (JayF).

-
Jay Faulkner


Re: [openstack-dev] Suggestions for students final year project

2014-10-07 Thread Jay Faulkner
On Oct 7, 2014, at 10:38 AM, Adam Young ayo...@redhat.com wrote:

 We should come up with a published list of intern and senior projects 
 proposals

I know there is a low-hanging-fruit tag in the bug tracker, and this summer, 
when we had two interns on our team working on OpenStack, we had them both 
“onboard” to the development process by having them find and resolve 
low-hanging-fruit-tagged bugs in IPA and Ironic.

I’d strongly suggest this list start with us being more vigilant about tagging 
bugs as low-hanging-fruit and then having those act as a gateway into the 
community. At Open Source Bridge this year, in a session about getting 
newcomers interested in open source, easy/l-h-f bugs were indicated as a 
strongly preferred way to get someone involved. Even to the level of suggesting 
a more senior developer could make the bug report better, with more breadcrumbs 
for a new person, to make it even easier to get started.

Just a thought :).

Thanks,
Jay Faulkner





Re: [openstack-dev] Suggestions for students final year project

2014-10-07 Thread Jay Faulkner

On Oct 7, 2014, at 4:56 PM, Adam Young ayo...@redhat.com wrote:

 On Oct 7, 2014, at 10:38 AM, Adam Young ayo...@redhat.com wrote:
 
 We should come up with a published list of intern and senior projects 
 proposals
 I know there is a low-hanging-fruit tag in the bug tracker, and this summer 
 when we had two interns on our team working on Openstack we had them both 
 “onboard” to the development process by having them find and resolve 
 low-hanging-fruit tagged bugs in IPA and Ironic.
 There is a difference between low hanging fruit (onboarding) and stand alone 
 presentable senior thesis topics.  Both are important.
 

I guess my thought was that before someone decides to do a long-term project 
for OpenStack, they might want to be onboarded into how we do software 
development, code reviews, practices, community friendliness, etc. BEFORE taking 
on a long project working with us :).

-Jay





Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Jay Faulkner

On Sep 25, 2014, at 9:23 AM, Lucas Alvares Gomes lucasago...@gmail.com wrote:

 Hi,
 
 Today we have hit the problem of having an outdated sample
 configuration file again[1]. The problem of the sample generation is
 that it picks up configuration from other projects/libs
  (keystoneclient in that case) and this breaks the Ironic gate without
 us doing anything.
 
 So, what you guys think about removing the test that compares the
 configuration files and makes it no longer gate[2]?
 
 We already have a tox command to generate the sample configuration
 file[3], so folks that needs it can generate it locally.
 

+1

In a perfect world, one would be generated and put somewhere for easy access 
without a development environment setup. However, I think the impact of having 
this config file break pep8 non-interactively is important enough to do it now 
and worry about generating one for the docs later. :)

-
Jay Faulkner

 Does anyone disagree?
 
 [1] https://review.openstack.org/#/c/124090/
 [2] https://github.com/openstack/ironic/blob/master/tox.ini#L23
 [3] https://github.com/openstack/ironic/blob/master/tox.ini#L32-L34
 


Re: [openstack-dev] Log Rationalization -- Bring it on!

2014-09-17 Thread Jay Faulkner
Comments inline.

 -Original Message-
 From: Monty Taylor [mailto:mord...@inaugust.com]
 Sent: Wednesday, September 17, 2014 7:34 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Log Rationalization -- Bring it on!
 
 On 09/17/2014 04:42 PM, Rochelle.RochelleGrober wrote:
  TL;DR:  I consider the poor state of log consistency a major
  impediment for more widespread adoption of OpenStack and would like to
  volunteer to own this cross-functional process to begin to unify and
  standardize logging messages and attributes for Kilo while dealing
  with the most egregious issues as the community identifies them.
 
 
 I fully support this, and I, for one, welcome our new log-standardization
 overlords.
 

Something that could be interesting is to see whether we can emit metrics every 
time a loggable event happens. There's already a spec+code being drafted for Ironic 
in Kilo (https://review.openstack.org/#/c/100729/ & 
https://review.openstack.org/#/c/103202/) that we're using downstream to emit 
metrics from Ironic.

If we have good organization of logging events and levels, perhaps there's 
a way to make it easy for metrics to be emitted at the same time as well.
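
One way such a hookup could work is a logging.Handler that counts records by
logger and level. This is purely a sketch, not the proposed Ironic
implementation -- the metric "backend" here is just an in-memory counter
standing in for whatever emitter a deployment actually uses:

```python
import logging
from collections import Counter


class MetricsHandler(logging.Handler):
    """Count every log record by logger name and level."""

    def __init__(self):
        super().__init__()
        self.counters = Counter()

    def emit(self, record):
        # A real deployment would forward this to statsd, Ceilometer, etc.
        self.counters['%s.%s' % (record.name, record.levelname)] += 1


log = logging.getLogger('ironic.example')
log.setLevel(logging.INFO)
handler = MetricsHandler()
log.addHandler(handler)

log.info('node powered on')
log.warning('retrying IPMI command')
log.info('node deployed')

print(dict(handler.counters))
```

Because the handler sees the same record objects the log formatter does, any
standardization of log events and levels would carry over to the metric names
for free.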

-
Jay Faulkner

 
 
  Recap from some mail threads:
 
 
 
  From Sean Dague on Kilo cycle goals:
 
  2. Consistency in southbound interfaces (Logging first)
 
 
 
  Logging and notifications are south bound interfaces from OpenStack
  providing information to people, or machines, about what is going on.
 
  There is also a 3rd proposed south bound with osprofiler.
 
 
 
  For Kilo: I think it's reasonable to complete the logging standards
  and implement them. I expect notifications (which haven't quite kicked
  off) are going to take 2 cycles.
 
 
 
  I'd honestly *really* love to see a unification path for all the
  southbound parts, logging, osprofiler, notifications, because there is
  quite a bit of overlap in the instrumentation/annotation inside the
  main code for all of these.
 
 
  And from Doug Hellmann: 1. Sean has done a lot of analysis and started
  a spec on standardizing logging guidelines where he is gathering input
  from developers, deployers, and operators [1].
  Because it is far enough for us to see real progress, it's a good
  place for us to start experimenting with how to drive cross-project
  initiatives involving code and policy changes from outside of a single
  project. We have a couple of potentially related specs in Oslo as part
  of the oslo.log graduation work [2] [3], but I think most of the work
  will be within the applications.
 
  [1] https://review.openstack.org/#/c/91446/ [2]
  https://blueprints.launchpad.net/oslo.log/+spec/app-agnostic-logging-p
  arameters
 
 
 [3] https://blueprints.launchpad.net/oslo.log/+spec/remove-context-
 adapter
 
 
 
  And from James Blair:
 
  1) Improve log correlation and utility
 
 
 
  If we're going to improve the stability of OpenStack, we have to be
  able to understand what's going on when it breaks.  That's both true
  as developers when we're trying to diagnose a failure in an
  integration test, and it's true for operators who are all too often
  diagnosing the same failure in a real deployment.  Consistency in
  logging across projects as well as a cross-project request token would
  go a long way toward this.
 
  While I am not currently managing an OpenStack deployment, writing
  tests or code, or debugging the stack, I have spent many years doing
  just that.  Through QA, Ops and Customer support, I have come to revel
  in good logging and log messages and curse the holes and vagaries in
  many systems.
 
  Defining/refining logs to be useful and usable is a cross-functional
  effort that needs to include:
 
  · Operators
 
  · QA
 
  · End Users
 
  · Community managers
 
  · Tech Pubs
 
  · Translators
 
  · Developers
 
  · TC (which provides the forum and impetus for all the
  projects to cooperate on this)
 
  At the moment, I think this effort may best work under the auspices of
  Oslo (oslo.log), I'd love to hear other proposals.
 
  Here is the beginnings of my proposal of how to attack and subdue the
  painful state of logs:
 
 
  · Post this email to the MLs (dev, ops, enduser) to get
  feedback, garner support and participants in the process (Done;-)
 
  · In parallel:
 
  o   Collect up problems, issues, ideas, solutions on an etherpad
  https://etherpad.openstack.org/p/Log-Rationalization where anyone in
  the communities can post.
 
  o   Categorize  reported Log issues into classes (already identified
  classes):
 
  §  Format Consistency across projects
 
  §  Log level definition and categorization across classes
 
  §  Time syncing entries across tens of logfiles
 
  §  Relevancy/usefulness of information provided within messages
 
  §  Etc (missing a lot here, but I'm sure folks will speak up)
 
  o   Analyze existing log message

Re: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and ready state orchestration

2014-09-15 Thread Jay Faulkner
Steven,

It's important to note that two of the blueprints you reference: 

https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery

are both very unlikely to land in Ironic -- these are configuration and 
discovery pieces that best fit inside an operator-deployed CMDB, rather than 
Ironic trying to extend its scope significantly to include these types of 
functions. I expect the scoping of Ironic with regards to hardware 
discovery/interrogation, as well as configuration of hardware (like I will 
outline below), to be hot topics in Ironic design summit sessions in Paris.

A good way of looking at it is that Ironic is responsible for hardware *at 
provision time*. Registering the nodes in Ironic, as well as hardware 
settings/maintenance/etc while a workload is provisioned is left to the 
operators' CMDB. 

This means what Ironic *can* do is modify the configuration of a node at 
provision time based on information passed down the provisioning pipeline. For 
instance, if you wanted to configure certain firmware pieces at provision time, 
you could do something like this:

Nova flavor sets capability:vm_hypervisor in the flavor that maps to the Ironic 
node. This would map to an Ironic driver that exposes vm_hypervisor as a 
capability, and upon seeing capability:vm_hypervisor has been requested, could 
then configure the firmware/BIOS of the machine to 'hypervisor friendly' 
settings, such as VT bit on and Turbo mode off. You could map multiple 
different combinations of capabilities as different Ironic flavors, and have 
them all represent different configurations of the same pool of nodes. So, you 
end up with two categories of abilities: inherent abilities of the node (such 
as amount of RAM or CPU installed), and configurable abilities (i.e. things 
that can be turned on/off at provision time on demand) -- or perhaps, in the 
future, even things like RAM and CPU will be dynamically provisioned into nodes 
at provision time. 
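
A rough sketch of the capability-to-firmware mapping described above -- the
capability names and BIOS setting keys are invented for illustration, not
actual Ironic driver data:

```python
# Hypothetical mapping from a requested capability to the BIOS settings a
# driver would apply at provision time.
CAPABILITY_BIOS_PROFILES = {
    'vm_hypervisor': {'vt_enabled': True, 'turbo_mode': False},
    'hpc_compute':   {'vt_enabled': False, 'turbo_mode': True},
}


def bios_settings_for(requested_capabilities):
    """Merge the BIOS settings for every capability a flavor requested."""
    settings = {}
    for cap in requested_capabilities:
        settings.update(CAPABILITY_BIOS_PROFILES.get(cap, {}))
    return settings


print(bios_settings_for(['vm_hypervisor']))
```

Several flavors could then point at the same pool of nodes, differing only in
which capability set (and therefore which provision-time configuration) they
request.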

-Jay Faulkner


From: Steven Hardy sha...@redhat.com
Sent: Monday, September 15, 2014 4:44 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and 
ready state orchestration

All,

Starting this thread as a follow-up to a strongly negative reaction by the
Ironic PTL to my patches[1] adding initial Heat-Ironic integration, and
subsequent very detailed justification and discussion of why they may be
useful in this spec[2].

Back in Atlanta, I had some discussions with folks interesting in making
ready state[3] preparation of bare-metal resources possible when
deploying bare-metal nodes via TripleO/Heat/Ironic.

The initial assumption is that there is some discovery step (either
automatic or static generation of a manifest of nodes), that can be input
to either Ironic or Heat.

Following discovery, but before an undercloud deploying OpenStack onto the
nodes, there are a few steps which may be desired, to get the hardware into
a state where it's ready and fully optimized for the subsequent deployment:

- Updating and aligning firmware to meet requirements of qualification or
  site policy
- Optimization of BIOS configuration to match workloads the node is
  expected to run
- Management of machine-local storage, e.g configuring local RAID for
  optimal resilience or performance.

Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
of these steps possible, but there's no easy way to either encapsulate the
(currently mostly vendor specific) data associated with each step, or to
coordinate sequencing of the steps.

What is required is some tool to take a text definition of the required
configuration, turn it into a correctly sequenced series of API calls to
Ironic, expose any data associated with those API calls, and declare
success or failure on completion.  This is what Heat does.
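
The "correctly sequenced series of API calls ... declare success or failure"
idea can be sketched as a plain sequential executor. The step names here are
invented, and each callable stands in for an Ironic API call:

```python
def run_plan(steps):
    """Run (name, callable) steps in order; declare success or failure.

    Stops at the first step that raises, reporting which step failed and
    which steps had already completed.
    """
    completed = []
    for name, call in steps:
        try:
            call()
        except Exception as exc:
            return {'status': 'FAILED', 'failed_step': name,
                    'completed': completed, 'error': str(exc)}
        completed.append(name)
    return {'status': 'COMPLETE', 'completed': completed}


# Each lambda stands in for a real call (e.g. a firmware update request).
plan = [
    ('update_firmware', lambda: None),
    ('set_bios_config', lambda: None),
    ('configure_raid', lambda: None),
]
print(run_plan(plan))
```

Heat adds on top of this the text definition of the plan, dependency ordering
between resources, and exposure of per-step data -- which is the argument for
using it here rather than a bespoke tool.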

So the idea is to create some basic (contrib, disabled by default) Ironic
heat resources, then explore the idea of orchestrating ready-state
configuration via Heat.

Given that Devananda and I have been banging heads over this for some time
now, I'd like to get broader feedback of the idea, my interpretation of
ready state applied to the tripleo undercloud, and any alternative
implementation ideas.

Thanks!

Steve

[1] https://review.openstack.org/#/c/104222/
[2] https://review.openstack.org/#/c/120778/
[3] http://robhirschfeld.com/2014/04/25/ready-state-infrastructure/
[4] https://blueprints.launchpad.net/ironic/+spec/drac-management-driver
[5] https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
[6] https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery


Re: [openstack-dev] [Ironic] Exceptional approval request for Cisco Driver Blueprint

2014-08-07 Thread Jay Faulkner
Hey,

I agree with Dmitry that this spec has a huge scope. If you resubmitted one 
with only the power interface, that could be considered for an exception.

A few specific reasons:

1) Auto-enrollment -- should probably be held off at the moment
 - This is something that was talked about extensively at mid-cycle meetup and 
will be a topic of much debate in Ironic for Kilo. Whatever comes out of that 
debate, if it ends up being considered within scope, would be what your spec 
would want to integrate with.
 - I'd suggest you come in IRC, say hello, and work with us as we go into kilo 
figuring out if auto-enrollment belongs in Ironic and if so, how your hardware 
could integrate with that system.

2) Power driver
 - If you split this out into another spec and resubmitted it, it'd be at least a 
small enough scope to be considered. Just as a note, though: Ironic has very 
specific priorities for Juno, the top of which is getting graduated. This means 
some new features have fallen aside in favor of graduation requirements.

Thanks,
Jay

From: Dmitry Tantsur dtant...@redhat.com
Sent: Thursday, August 07, 2014 4:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic] Exceptional approval request for Cisco 
Driver Blueprint

Hi!

Didn't read the spec thoroughly, but I'm concerned by its huge scope.
It's actually several specs squashed into one (not too detailed). My
vote is splitting it into a chain of specs (at least 3: power driver,
discovery, other configurations) and seek exception separately.
Actually, I'm +1 on making exception for power driver, but -0 on the
others, until I see a separate spec for them.

Dmitry.

On Thu, 2014-08-07 at 09:30 +0530, GopiKrishna Saripuri wrote:
 Hi,


 I've submitted Ironic Cisco driver blueprint post proposal freeze
 date. This driver is critical for Cisco and a few customers to test as
 part of their private cloud expansion. The driver implementation is
 ready along with unit-tests. Will submit the code for review once
 blueprint is accepted.


 The Blueprint review link: https://review.openstack.org/#/c/110217/


 Please let me know If its possible to include this in Juno release.



 Regards
 GopiKrishna S


[openstack-dev] [Ironic] [Infra] Devstack and Testing for ironic-python-agent

2014-08-07 Thread Jay Faulkner
Hi all,


At the recent Ironic mid-cycle meetup, we got the first version of the 
ironic-python-agent (IPA) driver merged. There are a few reviews we need merged 
(and their dependencies) across a few other projects in order to begin testing 
it automatically. We would like to eventually gate IPA and Ironic with tempest 
testing similar to what the pxe driver does today.


For IPA to work in devstack (openstack-dev/devstack repo):

 - https://review.openstack.org/#/c/112095 Adds swift temp URL support to 
Devstack

 - https://review.openstack.org/#/c/108457 Adds IPA support to Devstack



Docs on running IPA in devstack (openstack/ironic repo):

 - https://review.openstack.org/#/c/112136/



For IPA to work in the devstack-gate environment (openstack-infra/config & 
openstack-infra/devstack-gate repos):

 - https://review.openstack.org/#/c/112143 Add IPA support to devstack-gate

 - https://review.openstack.org/#/c/112134 Consolidate and rename Ironic jobs

 - https://review.openstack.org/#/c/112693 Add check job for IPA + tempest


Once these are all merged, we'll have IPA testing via a nonvoting check job, 
using the IPA-CoreOS deploy ramdisk, in both the ironic and ironic-python-agent 
projects. This will be promoted to voting once proven stable.


However, this is only one of many possible IPA deploy ramdisk images. We're 
currently publishing a CoreOS ramdisk, but we also have an effort to create a 
ramdisk with diskimage-builder (https://review.openstack.org/#/c/110487/) , as 
well as plans for an ISO image (for use with things like iLo). As we gain 
additional images, we'd like to run those images through the same suite of 
tests prior to publishing them, so that images which would break IPA's gate 
wouldn't get published. The final state testing matrix should look something 
like this, with check and gate jobs in each project covering the variations 
unique to that project, and one representative test in consuming project's test 
pipelines.


IPA:

 - tempest runs against Ironic+agent_ssh with CoreOS ramdisk

 - tempest runs against Ironic+agent_ssh with DIB ramdisk

 - (other IPA tests)



IPA would then, as a post job, generate and publish the images, as we currently 
do with IPA-CoreOS ( 
http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz ). 
Because IPA would gate on tempest tests against each image, we'd avoid ever 
publishing a bad deploy ramdisk.


Ironic:

 - tempest runs against Ironic+agent_ssh with most suitable ramdisk (due to 
significantly decreased ram requirements, this will likely be an image created 
by DIB once it exists)

 - tempest runs against Ironic+pxe_ssh

 - (what ever else Ironic runs)



Nova and other integrated projects will continue to run a single job, using 
Ironic with its default deploy driver (currently pxe_ssh).





Using this testing matrix, we'll ensure that there is coverage of each 
cross-project dependency, without bloating each project's test matrix 
unnecessarily. If, for instance, a change in Nova passes the Ironic pxe_ssh job 
and lands, but then breaks the agent_ssh job and thus blocks Ironic's gate, 
this would indicate a layering violation between Ironic and its deploy drivers 
(from Nova's perspective, nothing should change between those drivers). 
Similarly, if IPA tests failed against the CoreOS image (due to Ironic OR Nova 
change), but the DIB image passed in both Ironic and Nova tests, then it's 
almost certainly an *IPA* bug.


Thanks so much for your time, and to the OpenStack Ironic community for being 
welcoming to us as we have worked towards this alternate deploy driver; we'll 
keep working to improve it even further as Kilo opens.


--

Jay Faulkner


Re: [openstack-dev] [Ironic] Proposal for slight change in our spec process

2014-08-06 Thread Jay Faulkner
Similarly, I appreciated this idea when we discussed it at the mid-cycle, and I 
appreciate it being made public here.

+1

-Jay Faulkner


From: Lucas Alvares Gomes lucasago...@gmail.com
Sent: Wednesday, August 06, 2014 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic] Proposal for slight change in our spec
process

Already agreed with the idea at the midcycle, but just making it public: +1

On Tue, Aug 5, 2014 at 8:54 PM, Roman Prykhodchenko
rprikhodche...@mirantis.com wrote:
 Hi!

 I think this is a nice idea indeed. Do you plan to use this process starting
 from Juno or as soon as possible?

It will start in Kilo



[openstack-dev] [Infra] Updates for SPF Record needed

2014-04-30 Thread Jay Faulkner
Hi all,

Ever since the gerrit upgrade, emails from rev...@openstack.org have
been going into my Junk folder, so I started looking at the headers and
related information to see if I could find any problems.

One thing I encountered is that the current SPF record:

$ host -t TXT openstack.org
openstack.org descriptive text v=spf1 include:sendgrid.net ~all

fails anything but mail sent via sendgrid. This excludes mail sent from
rev...@openstack.org directly off the gerrit server, and causes SPF to
softfail. Note that this SPF record does *not* impact the mailing lists,
as those are on a separate domain (lists.openstack.org) which has no SPF
record set whatsoever.

AFAICT, there are a limited number of servers that send mail with From:
addresses containing openstack.org, these include: emailsrvr.com (the MX
provider for openstack.org) and review.openstack.org. jeblair mentioned
on IRC that there may also be an 'openstackid-dev' email sending
account, but I was unable to find any email in my personal account from
that server.

There are two possible solutions:

1) Remove or drastically open the SPF record. Removing the record would
cause all email to resolve spf=none (like lists.o.o does currently), but
prevent openstack.org from gaining any protection against malicious
senders via SPF. Drastically opening the SPF record would be changing
the ~all to a +all which would cause all sent email to pass SPF.

2) Make the SPF record accurate: v=spf1 include:emailsrvr.com
include:sendgrid.net a:review.openstack.org ~all. For any additional
services that send mail for openstack.org, an additional
a:my.host.name.openstack.org would be added to the SPF record. Using
a: syntax for the records also ensures that in the case of something
like the recent gerrit migration, the SPF record would remain valid
without any modification.

There's obviously also a hybrid approach, where we add the known senders
of mail but change ~all to +all.

I strongly recommend we pursue option 2 -- this would mean if you know
of any other devices sending mail to @openstack.org, please reply to
this thread with the information so we can draft a valid SPF record.
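
As a side note, SPF qualifiers are easy to misread, so here is a minimal, DNS-free sketch of how the record proposed in option 2 breaks down into qualifier/mechanism pairs (parsing only; nothing below queries DNS):

```python
# Minimal sketch: split an SPF TXT record into (qualifier, mechanism) pairs.
# The record below is the one proposed as option 2 in this thread.
def parse_spf(record):
    terms = record.split()
    if terms[0] != "v=spf1":
        raise ValueError("not an SPF record")
    qualifiers = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}
    parsed = []
    for term in terms[1:]:
        qual = "pass"  # the default qualifier is '+'
        if term[0] in qualifiers:
            qual, term = qualifiers[term[0]], term[1:]
        parsed.append((qual, term))
    return parsed

proposed = ("v=spf1 include:emailsrvr.com include:sendgrid.net "
            "a:review.openstack.org ~all")
for qual, mech in parse_spf(proposed):
    print(qual, mech)
```

Running it shows every listed sender passing while everything else softfails, which is exactly the behaviour option 2 is after.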


Thanks,
Jay Faulkner





Re: [openstack-dev] [infra] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Jay Faulkner
Can you guys publish the ssh host key we should expect from the new
gerrit server?

Thanks,
Jay Faulkner
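
As an illustration of what checking that key involves: a published host key is verified by hashing the base64-encoded key blob from an ssh-keyscan line. A sketch, using a made-up placeholder blob rather than gerrit's real key:

```python
# Sketch: compute fingerprints for a published SSH host key, the way one
# would check an ssh-keyscan line against an announced fingerprint.
# The key blob used below is a made-up placeholder, not gerrit's real key.
import base64
import hashlib

def fingerprints(keyscan_line):
    # an ssh-keyscan line looks like: "review.openstack.org ssh-rsa AAAAB3..."
    _host, _keytype, b64_key = keyscan_line.split()[:3]
    blob = base64.b64decode(b64_key)
    md5 = hashlib.md5(blob).hexdigest()
    md5_fp = ":".join(md5[i:i + 2] for i in range(0, len(md5), 2))
    sha256_fp = base64.b64encode(hashlib.sha256(blob).digest()).decode().rstrip("=")
    return md5_fp, "SHA256:" + sha256_fp

fake_blob = base64.b64encode(b"not-a-real-key").decode()
print(fingerprints("review.openstack.org ssh-rsa " + fake_blob))
```

Comparing either fingerprint against one published over a trusted channel is what protects against a man-in-the-middle during the migration.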

On 4/25/14, 10:08 AM, James E. Blair wrote:
 Hi,

 This is the third and final reminder that next week Gerrit will be
 unavailable for a few hours starting at 1600 UTC on April 28th.

 You may read about the changes that will impact you as a developer
 (please note that the SSH host key change is particularly important) at
 this location:

   https://wiki.openstack.org/wiki/GerritUpgrade

 -Jim







Re: [openstack-dev] [Infra] How to solve the cgit repository browser line number misalignment in Chrome

2014-04-08 Thread Jay Faulkner
This is wonderful, thanks a bunch! It looks great on my box.

-Jay Faulkner


On 4/7/14, 6:23 PM, Zhongyue Luo wrote:
 Hi,

 I know I'm not the only person who had this problem so here's two
 simple steps to get the lines and line numbers aligned.

 1. Install the stylebot extension

 https://chrome.google.com/extensions/detail/oiaejidbmkiecgbjeifoejpgmdaleoha

 2. Click on the download icon to install the custom style for
 git.openstack.org

 http://stylebot.me/styles/5369

 Thanks!

 -- 
 *Intel SSG/STO/DCST/CBE*
 880 Zixing Road, Zizhu Science Park, Minhang District, 200241,
 Shanghai, China
 +862161166500


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Ironic][Agent]

2014-04-08 Thread Jay Faulkner
Comments inline.

On 4/8/14, 11:16 AM, Josh Gachnang wrote:

 I'm more accustomed to using PDUs for this type of thing. I.e., a
 power strip you can ssh into or hit via a web API to toggle power to
 individual ports.
 Machines are configured to power up on power restore, plus PXE boot.
 You have less control than with IPMI -- all you can do is toggle power
 to the outlet -- but it works well, even for some desktop machines I
 have in a lab.
 I don't have a compelling need, but I've often wondered if such a
 driver would be useful. I can imagine it also being useful if people
 want to power up non-compute stuff, though that's probably not a top
 priority right now.


 I believe someone was talking about this yesterday in the meeting. It
 would be very possible to write an IPMI driver (possibly being renamed
 for this reason) to control the power of a node via a PDU. You could
 then plug that into the agent driver as the power driver to create
 something like AgentAndPDUDriver. The current agent driver doesn't do
 anything with IPMI except set boot device. The inability to set boot
 device would be the biggest issue with a PDU driver as far as I can
 see, but that's not insurmountable.

+1

The /agent itself/ being a power driver, as suggested earlier, seems
like it wouldn't work well, though. Honestly, any situation that'd
require running the agent on the tenant should be out of scope. This is
explicitly a ramdisk agent, and it should be optimized to run in a
ramdisk, not on resources assigned to a tenant.
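
A rough sketch of the PDU power driver idea discussed above -- the class and client names here are invented for illustration, not a real Ironic or vendor API:

```python
# Hypothetical sketch of a PDU-backed power interface shaped like a power
# driver; PduPower / FakePduClient and the outlet numbering are invented
# for illustration, not a real Ironic or vendor API.
class FakePduClient:
    """Stands in for an SSH/HTTP client talking to a managed power strip."""
    def __init__(self):
        self.outlets = {}

    def set_outlet(self, outlet, on):
        self.outlets[outlet] = on

    def get_outlet(self, outlet):
        return self.outlets.get(outlet, False)

class PduPower:
    """Power interface: all a PDU can do is toggle power to the outlet."""
    def __init__(self, client):
        self.client = client

    def get_power_state(self, outlet):
        return "power on" if self.client.get_outlet(outlet) else "power off"

    def set_power_state(self, outlet, state):
        self.client.set_outlet(outlet, state == "power on")

    def reboot(self, outlet):
        # no soft reset on a PDU: drop power to the outlet, then restore it
        self.client.set_outlet(outlet, False)
        self.client.set_outlet(outlet, True)

power = PduPower(FakePduClient())
power.set_power_state(7, "power on")
print(power.get_power_state(7))  # -> power on
```

Note the reboot path: with only outlet toggling available, a "reboot" is necessarily a hard power cycle, and setting the boot device has to come from somewhere else, as noted above.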


 How much hardware information do we intend to store in Ironic? (Note
 that I'm genuinely asking this, not challenging your assertion.) It
 seems reasonable, but I think there's a lot of hardware information
 that could be useful (say, lspci output, per-processor flags, etc.),
 but stuffing it all in extra[] seems kind of messy.


 Right now the hardware manager on the agent is pluggable, so what
 we're storing is currently whatever you want! I think in our
 current iteration, it is just the MACs of the NICs. We haven't fully
 fleshed this out yet.


Jim and I are working on patches to the agent to send up more
information, including ram/cpu/block devices and some information from
DMI (like serial numbers) and lldp in order to help populate neutron.

I'm of the opinion that generally more is better, as long as it's
long-lived information (such as RAM/CPUs/etc.) that doesn't change except
in cases of explicit maintenance.
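
As a sketch of what that long-lived inventory might look like, a pluggable hardware manager could report something like the following (the field names and the collect() shape are assumptions for illustration, not the actual ironic-python-agent interface):

```python
# Sketch of the kind of long-lived inventory a pluggable hardware manager
# might report; all fields and the collect() shape are assumptions, and
# the values are stand-ins rather than anything probed from real hardware.
class DemoHardwareManager:
    def list_nic_macs(self):
        return ["52:54:00:12:34:56"]

    def memory_mb(self):
        return 32768

    def cpu_count(self):
        return 8

    def block_devices(self):
        return [{"name": "sda", "size_gb": 480}]

    def collect(self):
        # only slow-changing facts belong here: these suit the node record,
        # unlike load or temperature, which change constantly
        return {
            "macs": self.list_nic_macs(),
            "memory_mb": self.memory_mb(),
            "cpus": self.cpu_count(),
            "disks": self.block_devices(),
        }

print(DemoHardwareManager().collect())
```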


-Jay
 ---
 Josh Gachnang
 Tech Blog: ServerCobra.com, @ServerCobra
 Github.com/PCsForEducation


 On Tue, Apr 8, 2014 at 10:46 AM, Matt Wagner matt.wag...@redhat.com wrote:

 On 08/04/14 14:04 +0400, Vladimir Kozhukalov wrote:
 snip

 0) There are a plenty of old hardware which does not have
 IPMI/ILO at all.
 How Ironic is supposed to power them off and on? Ssh? But
 Ironic is not
 supposed to interact with host OS.


 I'm more accustomed to using PDUs for this type of thing. I.e., a
 power strip you can ssh into or hit via a web API to toggle power to
 individual ports.

 Machines are configured to power up on power restore, plus PXE boot.
 You have less control than with IPMI -- all you can do is toggle power
 to the outlet -- but it works well, even for some desktop machines I
 have in a lab.

 I don't have a compelling need, but I've often wondered if such a
 driver would be useful. I can imagine it also being useful if people
 want to power up non-compute stuff, though that's probably not a top
 priority right now.


 1) We agreed that Ironic is that place where we can store
 hardware info
 ('extra' field in node model). But many modern hardware
 configurations
 support hot pluggable hard drives, CPUs, and even memory. How
 Ironic will
 know that hardware configuration is changed? Does it need to
 know about
 hardware changes at all? Is it supposed that some monitoring
 agent (NOT
 ironic agent) will be used for that? But if we already have
 discovering
 extension in Ironic agent, then it sounds rational to use this
 extension
 for monitoring as well. Right?


 How much hardware information do we intend to store in Ironic? (Note
 that I'm genuinely asking this, not challenging your assertion.) It
 seems reasonable, but I think there's a lot of hardware information
 that could be useful (say, lspci output, per-processor flags, etc.),
 but stuffing it all in extra[] seems kind of messy.

 I don't have an overall answer for this question; I'm curious myself.

 -- Matt



Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent

2014-04-04 Thread Jay Faulkner

+1

    The agent is a tool Ironic is using to take the place of a
    hypervisor to discover and prepare nodes to receive workloads. For
    hardware, this includes more work -- such as firmware flashing, BIOS
    configuration, and disk imaging -- all of which must be done in an
    OOB manner. (This is also why deploy drivers that interact directly
    with the hardware where supported - such as SeaMicro or the
    proposed HP iLo driver - are good alternative approaches.)


-Jay Faulkner

On 4/4/2014 7:10 AM, Ling Gao wrote:

Hello Vladimir,
 I would prefer an agent-less node, meaning the agent is only used
under the ramdisk OS to collect hardware info, do firmware updates, and
install nodes, etc. In this sense, the agent running as root is fine.
Once the node is installed, the agent should be out of the picture. I
have been working with HPC customers, and in that environment they prefer
as small a memory footprint as possible. Even as an ordinary tenant, I
would not feel secure having agents running on my node. As for firmware
updates on the fly, I do not know how many customers would trust us to
do them while their critical application is running. Even if they do and
are ready for it, Ironic can send an agent to the node through scp/wget
as admin/root, quickly do the update, and then kill the agent on the
node. Just my 2 cents.


Ling Gao




From: Vladimir Kozhukalov vkozhuka...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org,

Date: 04/04/2014 08:24 AM
Subject: [openstack-dev] [Ironic][Agent]




Hello, everyone,

I'd like to involve more people in expressing their opinions about the
way we are going to run Ironic-python-agent -- specifically, whether we
should run it with root privileges or not.


From the very beginning the agent is supposed to run under the ramdisk
OS, where it is intended to do disk partitioning, RAID configuration,
firmware updates and other preparation for the OS being installed. It
looks like we will always run the agent with root privileges there.
Right? There is no reason to limit the agent's permissions.


On the other hand, it is easy to imagine a situation where you want to
run the agent on every node of your cluster after installing the OS. It
could be useful for keeping hardware info consistent (for example, many
hardware configurations allow one to add hard drives at run time). It
could also be useful for on-the-fly firmware updates, or for on-the-fly
manipulation of LVM groups/volumes, and so on.


Frankly, I am not even sure that we need to run the agent with root
privileges even in the ramdisk OS because, for example, there are
system default limits (such as the number of connections, number of
open files, etc.) which differ between root and an ordinary user and
can potentially influence agent behaviour. Besides, it is possible that
vulnerabilities found in the future could be used to compromise the
agent and damage the hardware configuration.


Consequently, it is better to run the agent as an ordinary user, even
under the ramdisk OS, and use rootwrap when the agent needs to run
commands with root privileges. I know that rootwrap has some
performance issues
(http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html)
but it is still quite suitable for the Ironic agent use case.
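
The rootwrap idea boils down to an allowlist sitting between the unprivileged agent and the privileged commands it may run. A rough sketch (real rootwrap adds sudo, config-file filters and escalation machinery, none of which are shown here; the command names are stand-ins):

```python
# Sketch of the rootwrap idea: the unprivileged agent may only run commands
# that appear in an allowlist of filters. Real rootwrap reads the filters
# from config files and prefixes sudo; this shows only the filtering step.
import subprocess

ALLOWED = {
    "parted": ["parted"],  # e.g. disk partitioning (never invoked below)
    "echo": ["echo"],      # harmless stand-in for a privileged tool
}

def run_privileged(name, *args):
    if name not in ALLOWED:
        raise PermissionError("command %r not in rootwrap filters" % name)
    cmd = ALLOWED[name] + list(args)  # real rootwrap would prefix sudo here
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(run_privileged("echo", "partitioning done").strip())
```

Any command not covered by a filter is refused outright, which is the containment property being argued for.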


It would be great to hear as many opinions as possible on this case.



Vladimir Kozhukalov





Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-26 Thread Jay Faulkner
Comments inline.

On 3/26/14, 10:28 AM, Eoghan Glynn wrote:

 On 3/25/2014 1:50 PM, Matt Wagner wrote:
 This would argue to me that the easiest thing for Ceilometer might be
 to query us for IPMI stats, if the credential store is pluggable.
 Fetch these bare metal statistics doesn't seem too off-course for
 Ironic to me. The alternative is that Ceilometer and Ironic would both
 have to be configured for the same pluggable credential store.
 There is already a blueprint with a proposed patch here for Ironic to do
 the querying:
 https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.
 Yes, so I guess there are two fundamentally different approaches that
 could be taken here:

 1. ironic controls the cadence of IPMI polling, emitting notifications
at whatever frequency it decides, carrying whatever level of
detail/formatting it deems appropriate, which are then consumed by
ceilometer which massages these provided data into usable samples

 2. ceilometer acquires the IPMI credentials either via ironic or
directly from keystone/barbican, before calling out over IPMI at
whatever cadence it wants and transforming these raw data into
usable samples

 IIUC approach #1 is envisaged by the ironic BP[1].

 The advantage of approach #2 OTOH is that ceilometer is in the driving
 seat as far as cadence is concerned, and the model is far more
 consistent with how we currently acquire data from the hypervisor layer
 and SNMP daemons.
Approach #1 permits there to be possible other systems monitoring this
information. Many organizations already have significant hardware
monitoring systems setup, and would not like to replace them with
Ceilometer in order to monitor BMCs registered with Ironic.

I think, especially for Ironic, being able to play nicely with things
outside of OpenStack is essential, as most users aren't going to replace
their entire datacenter management toolset with OpenStack... at least
not yet :).
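
Approach #1, with those external monitoring systems in mind, can be sketched as a poller that fans sensor notifications out to any subscriber; the sensor reader and payload shape below are invented for illustration:

```python
# Sketch of approach #1: the service owning the IPMI credentials polls at
# its own cadence and emits notifications that any consumer (Ceilometer,
# or an existing datacenter monitoring system) can subscribe to. The
# read_sensors callable and the payload shape are invented here.
class SensorPoller:
    def __init__(self, read_sensors):
        self.read_sensors = read_sensors
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def poll_once(self, node_id):
        payload = {"node": node_id, "sensors": self.read_sensors(node_id)}
        for cb in self.subscribers:  # fan out to every registered consumer
            cb(payload)

received = []
poller = SensorPoller(lambda node: {"temp_c": 41, "fan_rpm": 5200})
poller.subscribe(received.append)   # e.g. Ceilometer
poller.subscribe(lambda p: None)    # e.g. an existing monitoring tool
poller.poll_once("node-1")
print(received[0])
```

The point of the fan-out is that Ceilometer is just one subscriber among several, rather than the component holding the credentials and driving the cadence.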

Thanks,
Jay
 Cheers,
 Eoghan


 [1]  https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer
  
 I think, for terms of credential storage (and, for that matter, metrics
 gathering as I noted in that blueprint), it's very useful to have things
 pluggable. Ironic, in particular, has many different use cases: bare
 metal private cloud, bare metal public cloud, and triple-o. I could
 easily see all three being different enough to call for different forms
 of credential storage.






Re: [openstack-dev] [Ironic][Keystone] Move drivers credentials to Keystone

2014-03-25 Thread Jay Faulkner

On 3/25/2014 1:50 PM, Matt Wagner wrote:

This would argue to me that the easiest thing for Ceilometer might be
to query us for IPMI stats, if the credential store is pluggable.
Fetch these bare metal statistics doesn't seem too off-course for
Ironic to me. The alternative is that Ceilometer and Ironic would both
have to be configured for the same pluggable credential store. 


There is already a blueprint with a proposed patch here for Ironic to do 
the querying: 
https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.


I think, for terms of credential storage (and, for that matter, metrics 
gathering as I noted in that blueprint), it's very useful to have things 
pluggable. Ironic, in particular, has many different use cases: bare 
metal private cloud, bare metal public cloud, and triple-o. I could 
easily see all three being different enough to call for different forms 
of credential storage.
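
A pluggable credential store along these lines might look roughly like the following; the interface and backend names are hypothetical:

```python
# Sketch of a pluggable credential store: one interface, a different
# backend per deployment. CredentialStore and both backends are invented
# names for illustration, not a real Ironic API.
import abc

class CredentialStore(abc.ABC):
    @abc.abstractmethod
    def get_ipmi_credentials(self, node_id):
        """Return (username, password) for the node's BMC."""

class InDatabaseStore(CredentialStore):
    """Simplest case: credentials kept alongside the node record."""
    def __init__(self, table):
        self.table = table

    def get_ipmi_credentials(self, node_id):
        return self.table[node_id]

class ExternalServiceStore(CredentialStore):
    """Stand-in for fetching from an external secret service instead."""
    def __init__(self, client):
        self.client = client

    def get_ipmi_credentials(self, node_id):
        return self.client.fetch("ipmi/" + node_id)

store = InDatabaseStore({"node-1": ("admin", "secret")})
print(store.get_ipmi_credentials("node-1"))
```

Each deployment scenario (private cloud, public cloud, triple-o) would pick its backend, while consumers only ever see the interface.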


-Jay Faulkner



Re: [openstack-dev] [Ironic] A ramdisk agent

2014-03-21 Thread Jay Faulkner
On 3/21/14, 10:18 AM, Vladimir Kozhukalov wrote:
 And here is scheme
 https://drive.google.com/a/mirantis.com/file/d/0B-Olcp4mLLbvRks0eEhvMXNPM3M/edit?usp=sharing


Vladimir, can you recreate this drawing in a format that doesn't require
an additional browser plugin? Thanks.

-Jay







Re: [openstack-dev] [Ironic] A ramdisk agent

2014-03-07 Thread Jay Faulkner

Vladimir,

I just put up https://review.openstack.org/#/c/79088/ for review to get
the teeth-agent imported into OpenStack. I'm not sure whether we want this
merged immediately or whether we want the outstanding non-OpenStack
dependencies settled before then, but hopefully this can help get things
started.


--
Jay Faulkner

On 3/7/14, 12:53 PM, Vladimir Kozhukalov wrote:

Russell,

Great to hear you are going to move towards Pecan+WSME. Yesterday I
had a look at the Teeth projects, and in the next few days I am going
to start contributing. First of all, I think we need to settle all the
questions about the pluggable architecture. I've created a wiki page
about the Ironic python agent:
https://wiki.openstack.org/wiki/Ironic-python-agent.


And a question about contributing: have you managed to send a pull
request to openstack-infra to move this project into
github.com/stackforge? Or are we supposed to settle everything
(werkzeug -> Pecan/WSME, architectural questions) before we move this
agent to stackforge?






Vladimir Kozhukalov


On Fri, Mar 7, 2014 at 8:53 PM, Russell Haering
russellhaer...@gmail.com wrote:


Vladmir,

Hey, I'm on the team working on this agent, let me offer a little
history. We were working on a system of our own for managing bare
metal gear which we were calling Teeth. The project was mostly
composed of:

1. teeth-agent: an on-host provisioning agent
2. teeth-overlord: a centralized automation mechanism

Plus a few other libraries (including teeth-rest, which contains
some common code we factored out of the agent/overlord).

A few weeks back we decided to shift our focus to using Ironic. At
this point we have effectively abandoned teeth-overlord, and are
instead focusing on upstream Ironic development, continued agent
development and building an Ironic driver capable of talking to
our agent.

Over the last few days we've been removing non-OS-approved
dependencies from our agent: I think teeth-rest (and werkzeug,
which it depends on) will be the last to go when we replace it
with Pecan+WSME sometime in the next few days.

Thanks,
Russell


On Fri, Mar 7, 2014 at 8:26 AM, Vladimir Kozhukalov
 vkozhuka...@mirantis.com wrote:

 As far as I understand, there are four projects connected with
 this topic. The other two projects, which were not mentioned by
 Devananda, are:
https://github.com/rackerlabs/teeth-rest
https://github.com/rackerlabs/teeth-overlord

Vladimir Kozhukalov


On Fri, Mar 7, 2014 at 4:41 AM, Devananda van der Veen
 devananda@gmail.com wrote:

All,

The Ironic team has been discussing the need for a deploy
agent since well before the last summit -- we even laid
out a few blueprints along those lines. That work was
 deferred, and we have been using the same deploy ramdisk
that nova-baremetal used, and we will continue to use that
ramdisk for the PXE driver in the Icehouse release.

That being the case, at the sprint this week, a team from
Rackspace shared work they have been doing to create a
more featureful hardware agent and an Ironic driver which
utilizes that agent. Early drafts of that work can be
found here:

https://github.com/rackerlabs/teeth-agent
https://github.com/rackerlabs/ironic-teeth-driver

I've updated the original blueprint and assigned it to
Josh. For reference:

https://blueprints.launchpad.net/ironic/+spec/utility-ramdisk

I believe this agent falls within the scope of the
baremetal provisioning program, and welcome their
contributions and collaboration on this. To that effect, I
have suggested that the code be moved to a new OpenStack
project named openstack/ironic-python-agent. This would
follow an independent release cycle, and reuse some
 components of tripleo (os-*-config). To keep the
 collaborative momentum up, I would like this work to be
done now (after all, it's not part of the Ironic repo or
release). The new driver which will interface with that
agent will need to stay on github -- or in a gerrit
feature branch -- until Juno opens, at which point it
should be proposed to Ironic.

The agent architecture we discussed is roughly:
- a pluggable JSON transport layer by which the Ironic
driver will pass information to the ramdisk. Their initial
implementation is a REST API.
- a collection of hardware-specific utilities (python
modules, bash scripts, what