Re: [Openstack-operators] [Openstack-sigs] [all] Naming the T release of OpenStack

2018-10-18 Thread David Medberry
And any talks I give in Denver (Forum, Ops, Main) will include "sl". It's
handy in a variety of ways.

On Thu, Oct 18, 2018 at 9:39 AM David Medberry 
wrote:

> I'm fine with Train but I'm also fine with just adding it to the list and
> voting on it. It will win.
>
> Also, for those not familiar with the debian/ubuntu command "sl", now is
> the time to become so.
>
> apt install sl
> sl -Flea #ftw
>
> On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds 
> wrote:
>
>> Hello all,
>> As per [1] the nomination period for names for the T release has
>> now closed (actually 3 days ago, sorry).  The nominated names and any
>> qualifying remarks can be seen at [2].
>>
>> Proposed Names
>>  * Tarryall
>>  * Teakettle
>>  * Teller
>>  * Telluride
>>  * Thomas
>>  * Thornton
>>  * Tiger
>>  * Tincup
>>  * Timnath
>>  * Timber
>>  * Tiny Town
>>  * Torreys
>>  * Trail
>>  * Trinidad
>>  * Treasure
>>  * Troublesome
>>  * Trussville
>>  * Turret
>>  * Tyrone
>>
>> Proposed Names that do not meet the criteria
>>  * Train
>>
>> However I'd like to suggest we skip the CIVS poll and select 'Train' as
>> the release name by TC resolution [3].  My thinking for this is:
>>
>>  * It's fun and celebrates a humorous moment in our community
>>  * As a developer I've heard the T release called Train for quite
>>    some time, and it was used often at the PTG [4].
>>  * As the *next* PTG is also in Colorado we can still choose a
>>    geographically based name for U [5]
>>  * If Train causes a problem for trademark reasons then we can always
>>    run the poll
>>
>> I'll leave [3] marked -W for a week so discussion can happen before the
>> TC can consider / vote on it.
>>
>> Yours Tony.
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html
>> [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals
>> [3]
>> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53
>> [4] https://twitter.com/vkmc/status/1040321043959754752
>> [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z
>> ___
>> openstack-sigs mailing list
>> openstack-s...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Openstack-sigs] [all] Naming the T release of OpenStack

2018-10-18 Thread David Medberry
I'm fine with Train but I'm also fine with just adding it to the list and
voting on it. It will win.

Also, for those not familiar with the debian/ubuntu command "sl", now is
the time to become so.

apt install sl
sl -Flea #ftw

On Thu, Oct 18, 2018 at 12:35 AM Tony Breeds 
wrote:

> Hello all,
> As per [1] the nomination period for names for the T release has
> now closed (actually 3 days ago, sorry).  The nominated names and any
> qualifying remarks can be seen at [2].
>
> Proposed Names
>  * Tarryall
>  * Teakettle
>  * Teller
>  * Telluride
>  * Thomas
>  * Thornton
>  * Tiger
>  * Tincup
>  * Timnath
>  * Timber
>  * Tiny Town
>  * Torreys
>  * Trail
>  * Trinidad
>  * Treasure
>  * Troublesome
>  * Trussville
>  * Turret
>  * Tyrone
>
> Proposed Names that do not meet the criteria
>  * Train
>
> However I'd like to suggest we skip the CIVS poll and select 'Train' as
> the release name by TC resolution [3].  My thinking for this is:
>
>  * It's fun and celebrates a humorous moment in our community
>  * As a developer I've heard the T release called Train for quite
>    some time, and it was used often at the PTG [4].
>  * As the *next* PTG is also in Colorado we can still choose a
>    geographically based name for U [5]
>  * If Train causes a problem for trademark reasons then we can always
>    run the poll
>
> I'll leave [3] marked -W for a week so discussion can happen before the
> TC can consider / vote on it.
>
> Yours Tony.
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html
> [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals
> [3]
> https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53
> [4] https://twitter.com/vkmc/status/1040321043959754752
> [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
>


[Openstack-operators] Fwd: Feedback on Ops Meetup Planning meeting today after the fact

2018-07-03 Thread David Medberry
I missed the ops meetup but have the scrollback of what was discussed.

I'd definitely like to see upgrades (FFU, etc.) and LTS on the Ops agenda,
socialized so that the distros and other concerned parties come join us.
I'd put them at the start of day two. [Looks like the actual plan may be to
have us join the SIGs that have formed around those ideas, so that would be
fine too as long as we're not double-booked.]

I will be socializing an OpenStack Meetup the first night (as I think PTGs
generally do a bigger event on Tuesday night, so those arriving for
Wednesday can attend). But I'm open to any social events. I'm not sure what
venue I will be able to arrange for that (the normal one is up in Superior,
near Boulder). Stay tuned.

Also, to reiterate what others said: not a big venue. The only "commute"
between sessions that's more than 90 seconds is 3rd to 1st or vice versa,
and it's difficult to defeat the elevator system for that. So figure 3-4
minutes for that one.

-dave


Re: [Openstack-operators] Proposing no Ops Meetups team meeting this week

2018-05-29 Thread David Medberry
Good plan. I'm just getting on email now and hadn't even considered IRC
yet. :^)

On Tue, May 29, 2018 at 5:53 AM, Erik McCormick 
wrote:

>
>
> On Tue, May 29, 2018, 7:15 AM Chris Morgan  wrote:
>
>> Some of us will be only just returning to work today after being away all
>> week last week for the (successful) OpenStack Summit, therefore I propose
>> we skip having a meeting today but regroup next week?
>>
>
> +1
>
>
>> Chris
>>
>> --
>> Chris Morgan 
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> -Erik
>
>>
>>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


[Openstack-operators] Fwd: Follow Up: Private Enterprise Cloud Issues

2018-05-23 Thread David Medberry
There was a great turnout at the Private Enterprise Cloud Issues session
here in Vancouver. I'll propose a follow-on discussion for Denver PTG as
well as trying to sift the data a bit and pre-populate. Look for that
sifted data soon.

For folks unable to participate locally, the etherpad is here:

https://etherpad.openstack.org/p/YVR-private-enterprise-cloud-issues

(and I've cached a copy offline in case it gets reset/etc.)

-- 
-dave


Re: [Openstack-operators] Ops Session Proposals for Vancouver Forum

2018-04-10 Thread David Medberry
Dropped in 2¢ worth.


Re: [Openstack-operators] Asking for ask.openstack.org

2018-04-04 Thread David Medberry
Hi Jimmy,

I tend to jump on things but only those that go to my Inbox and don't
otherwise get filtered. I'll see if I'm sub'd to ask.o.o and if so, I'll
put that into my Inbox instead of it going into one of my myriad google
filtered folders. OTOH, if it is truly just a web site I need to manually
monitor, I'll be candid and say it won't happen (as it won't.)

Not sure if this idea helps or hinders others but maybe it will elicit some
other personal workflow discussions or improvements.

-dave medberry

On Wed, Apr 4, 2018 at 4:15 PM, Jimmy McArthur  wrote:

> Hi everyone!
>
> We have a very robust and vibrant community at ask.openstack.org.  There
> are literally dozens of posts a day. However, many of them don't receive
> knowledgeable answers.  I'm really worried about this becoming a vacuum
> where potential community members get frustrated and don't realize how to
> get more involved with the community.
>
> I'm looking for thoughts/ideas/feelings about this tool as well as
> potential admin volunteers to help us manage the constant influx of
> technical and not-so-technical questions around OpenStack.
>
> For those of you already contributing there, Thank You!  For those that
> are interested in becoming a moderator (instant AUC status!) or have some
> additional ideas around fostering this community, please respond.
>
> Looking forward to your thoughts
>
> Thanks!
> Jimmy
> irc: jamesmcarthur
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


Re: [Openstack-operators] Ops Meetups team - minutes of team meeting 3/20/2018

2018-03-20 Thread David Medberry
On Tue, Mar 20, 2018 at 9:16 AM, Chris Morgan  wrote:

>
> On that note, please stand by for some exciting news and discussion about
> the future of Ops Meetups and OpenStack PTG, as there seems to be
> increasing support for combining the two events into one. I expect an email
> thread about this any minute now, here on openstack-operators!
>
> Chris
>
>
So glad to see the foundation in favor of rounding up the community! Happy
to have it happen (and even likely to attend). Thanks Chris, thanks
Foundation!

-d


Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback

2018-03-20 Thread David Medberry
On Tue, Mar 20, 2018 at 10:03 AM, Thierry Carrez 
wrote:

>
> Personally, I'm not a big fan of separate branding (or "co-location").
> If the "PTG" name is seen as too developer-centric, I'd rather change
> the event name (and clearly make it a work event for anyone contributing
> to OpenStack, whatever the shape of their group). Otherwise we just
> perpetuate the artificial separation by calling it an ops event
> co-located with a dev event. It's really a single "contributor" event.
>
> --
> Thierry Carrez (ttx)
>

Amen. What Thierry says. I wasn't in Dublin, but I really got the feel from
twitter, blogs, and emails that it was more than just the PTG going on.
Let's acknowledge that with a rename and have the Ops join in not as
"wannabes" but as Community members in full.

Thanks to all for suggesting/offering to do this. MAKE IT SO.

-dave


Re: [Openstack-operators] nova 17.0.1 released (queens)

2018-03-07 Thread David Medberry
Thanks for the headsup Matt.

On Wed, Mar 7, 2018 at 4:57 PM, Matt Riedemann  wrote:

> I just wanted to give a heads up to anyone thinking about upgrading to
> queens that nova has released a 17.0.1 patch release [1].
>
> There are some pretty important fixes in there that came up after the
> queens GA so if you haven't upgraded yet, I recommend going straight to
> that one instead of 17.0.0.
>
> [1] https://review.openstack.org/#/c/550620/
>
> --
>
> Thanks,
>
> Matt
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


Re: [Openstack-operators] diskimage-builder: prepare ubuntu 17.x images

2018-02-08 Thread David Medberry
Subscribe to this bug and click the "This affects me." link near the top.

https://bugs.launchpad.net/cloud-images/+bug/1585233

On Thu, Feb 8, 2018 at 1:53 PM, Volodymyr Litovka  wrote:

> Hi colleagues,
>
> does anybody here know how to prepare Ubuntu Artful (17.10) image using
> diskimage-builder?
>
> diskimage-builder uses the following naming style for downloads -
> $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz
>
> and while "-root" names are there for trusty/amd64 and xenial/amd64
> distros, these archives for artful (and bionic) are absent on
> cloud-images.ubuntu.com. There are just different kinds of images, not
> source tree as in -root archives.
>
> I will appreciate any ideas or knowledge how to customize 17.10-based
> image using diskimage-builder or in diskimage-builder-like fashion.
>
> Thanks!
>
> --
> Volodymyr Litovka
>   "Vision without Execution is Hallucination." -- Thomas Edison
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
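The naming convention quoted above is mechanical, so a quick sketch can show which URL diskimage-builder would try to fetch for a given release. The base URL and the /current/ path segment here are assumptions pieced together from the thread, not taken from diskimage-builder's source:

```python
# Sketch of the cloud-image naming convention discussed above:
#   $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz
# BASE and the /current/ path segment are assumptions for illustration.

BASE = "https://cloud-images.ubuntu.com"

def root_tarball_url(release: str, arch: str = "amd64") -> str:
    """Build the -root tarball URL for a given Ubuntu release."""
    name = f"{release}-server-cloudimg-{arch}-root.tar.gz"
    return f"{BASE}/{release}/current/{name}"

# Per the thread: trusty and xenial publish -root tarballs,
# while artful and bionic do not.
for release in ("trusty", "xenial", "artful"):
    print(root_tarball_url(release))
```

Fetching the artful URL built this way would 404, which is exactly the failure the original poster hit.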


[Openstack-operators] Ops Mid Cycle in Tokyo Mar 7-8 2018

2018-01-16 Thread David Medberry
Hi all,

Broad distribution to make sure folks are aware of the upcoming Ops Meetup
in Tokyo.

You can help "steer" this meetup by participating in the planning meetings
or more practically by editing this page (respectfully):
https://etherpad.openstack.org/p/TYO-ops-meetup-2018

Sign up for the meetup is here: https://goo.gl/HBJkPy

We'll see you there!

-dave


Re: [Openstack-operators] thierry's longer dev cycle proposal

2017-12-13 Thread David Medberry
On Wed, Dec 13, 2017 at 2:35 PM, Thierry Carrez <thie...@openstack.org>
wrote:

> David Medberry wrote:
> While it may have desirable side-effects on the ops side (something I'm
> not convinced of), the main reason for it is imho to align our rhythm
> with our current development pace / developer capabilities. I felt like
> we were self-imposing too many deadlines, events and coordination
> processes for limited gain.
>

I think this is probably the right "feeling" and hopefully the change in
release cycles is an appropriate response.

Thanks for the background Thierry.

>
> --
> Thierry Carrez (ttx)
>
>


Re: [Openstack-operators] thierry's longer dev cycle proposal

2017-12-13 Thread David Medberry
Just saw some of your comments (great, thanks), and I'll weigh in if I come
up with cogent input.

I'd really like to see Bloomberg, RAX, Cirrus7, Huawei, and other ops
folks respond.

I suspect this is already a fait accompli but there are many details (as
you mentioned already in one posting about mid-cycles) to work out.

On Wed, Dec 13, 2017 at 11:53 AM, Sean McGinnis <sean.mcgin...@gmx.com>
wrote:

> Would be great to get ops-side input. I didn't want to cross-post because
> I'm
> sure this is going to be a big thread and go on for a while. But I would
> encourage anyone with input to jump in on that thread. We could also
> discuss it
> separately here and I can try to answer questions or feed that input back
> in to
> the -dev side.
>
> Sean
>
> On Wed, Dec 13, 2017 at 11:48:01AM -0700, David Medberry wrote:
> > Hi all,
> >
> > Please read Thierry's email to the openstack-dev list this morning and
> > follow the thread (getting long already just two hours in.)
> >
> > This references some ideas and concerns that have come from the Ops
> > community, but this is specifically a -dev thread (but I suspect a lot of
> > ramifications for ops as well.)
> >
> > http://lists.openstack.org/pipermail/openstack-dev/2017-December/125473.html
> >
> > title is:
> > Switching to longer development cycles
> > (and if you are sub'd to openstack-dev you will find it in there.)
> >
> >
> > The thread of emails is at least 10+ deep already with folks weighing in
> on
> > all sides of the aisle.
> >
> > -dave
>
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


[Openstack-operators] thierry's longer dev cycle proposal

2017-12-13 Thread David Medberry
Hi all,

Please read Thierry's email to the openstack-dev list this morning and
follow the thread (getting long already just two hours in.)

This references some ideas and concerns that have come from the Ops
community, but this is specifically a -dev thread (but I suspect a lot of
ramifications for ops as well.)

http://lists.openstack.org/pipermail/openstack-dev/2017-December/125473.html

title is:
Switching to longer development cycles
(and if you are sub'd to openstack-dev you will find it in there.)


The thread of emails is at least 10+ deep already with folks weighing in on
all sides of the aisle.

-dave


[Openstack-operators] Fwd: [Openstack-sigs] [First Contact] New SIG: Let's get this party started!

2017-11-30 Thread David Medberry
Kendall indicates she'd like to see Operators involved as much as devs. I
kind of agree--we're going to keep growing OpenStack with operations so
there will always be newbies to First Contact.

-dave
-- Forwarded message --
From: Kendall Nelson 
Date: Thu, Nov 30, 2017 at 3:49 PM
Subject: [Openstack-sigs] [First Contact] New SIG: Let's get this party
started!
To: openstack-s...@lists.openstack.org


In support of Zhipeng's first email [1] and a chat he and I had the other
day [2], I went ahead and started to formalize this SIG.

I created an entry in the SIG table[3]. I also created a specific wiki page
for the First Contact SIG[4].

At this stage it would be great to get a list going of people interested in
being points of contact for newcomers and what timezones those people work
in so it's easier to figure out who they should talk to.

I think it would also be beneficial to start collecting links and
resources - contributor portal, Mentoring wiki, Upstream Institute,
Outreachy(?), etc. - to link to on the First Contact wiki page.

As we discussed in the forum session around this SIG [5], it might also be
good to get representatives and a chair more from the ops side of things so
that we have more than one type of involvement - not just code/doc
contribution - covered.

As far as a mission statement sort of thing, I just threw something
together and am 100% interested in feedback. Pasted here from the SIG Wiki
and First Contact Wiki:

To provide a place and group of people for new contributors to come to for
information and advice. New contributors are the future of OpenStack and
the surrounding community. It's important to make sure they feel welcome and
give them the tools to succeed.

-Kendall Nelson (diablo_rojo)

[1] http://lists.openstack.org/pipermail/openstack-sigs/2017-October/000123.html
[2] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-11-28.log.html#t2017-11-28T00:05:21
[3] https://wiki.openstack.org/wiki/OpenStack_SIGs
[4] https://wiki.openstack.org/wiki/First_Contact_SIG
[5] https://etherpad.openstack.org/p/SYD-first-contact-SIG





Re: [Openstack-operators] Ops Meetups team minutes + main topics

2017-11-21 Thread David Medberry
Jon,

I think the Foundation staff were very wary of extending the PTG or running
dual sites simultaneously, since that saves nothing logistically. Yes, it
would conceivably save travel for folks who need to go to two separate
events (as would the other colo options on the table), but it saves nothing
logistically over the two separate events we have now. A six- or seven-day
sprint/thing/PTG would also mean encroaching on one or both weekends (above
and beyond travel dates), and that may really limit venue choices, as
private parties (weddings, etc.) tend to book those locales on weekends.

On Tue, Nov 21, 2017 at 10:06 AM, Jonathan Proulx  wrote:

> :On Tue, Nov 21, 2017 at 9:15 AM, Chris Morgan 
> wrote:
>
> :> The big topic of debate, however, was whether subsequent meetups should
> be
> :> co-located with OpenStack PTG. This is a question for the wider
> OpenStack
> :> operators community.
>
> For people who attend both I thnik this would be a big win, if they
> were in the same location (city anyway) but held in series.  One
> (longer) trip but no scheduling conflict.
>
> The downside I see is that it makes scheduling constraints pretty tight,
> either for having two sponsor locations available at a coordinated time
> and place or for making a much bigger ask of a single location.
>
> Those are my thoughts; not sure if they amount to an opinion.
>
> -Jon
>
>
>


Re: [Openstack-operators] Ops Meetups team minutes + main topics

2017-11-21 Thread David Medberry
I'm actually pushing this out to a broader list and modifying the title as
well. We need to try and get all operators viewing this (even if they are
also devs, deployers, or SIG members).

Feel free to reply to me about spamming lists, but I think we need lots of
eyes on this.



On Tue, Nov 21, 2017 at 9:15 AM, Chris Morgan  wrote:

> We had a busy meeting today. Here are the minutes
>
> Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-11-21-14.00.html
> Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-11-21-14.00.txt
> Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-11-21-14.00.log.html
>
> The next Operators focused meeting is to be in Tokyo next March. Planning
> is going well (more on this later).
>
> The big topic of debate, however, was whether subsequent meetups should be
> co-located with OpenStack PTG. This is a question for the wider OpenStack
> operators community.
>
> The three broad options as I see it are
>
> 1. continue completely separate events
> 2. 2 events under one roof (PTG and ops meet ups at same venue, same days,
> but distinct)
> 3. redefine ops meetups (and to some extent PTG) so that days 1 and 2 are
> community inter-working between devs and operators.
>
> I would like to request a full and open discussion on this because the
> meetups team alone can't decide something like this, obviously, and yet
> we'll need to know what the community as a whole would like to do if future
> events are to succeed. Some of the issues around this are captured here
> (thanks to Melvin Hillman for starting this doc) :
>
> https://docs.google.com/document/d/1BgICOa-Mct9pKwjUEuYp_BSD1V_GdRLL_IM-BU0iPUw
>
> Thanks!
>
> Chris
>
> --
> Chris Morgan 
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


Re: [Openstack-operators] Sydney Takeaways - UC

2017-11-13 Thread David Medberry
I took an action to work on this bit:

   1. "stackalytics" for user community

Closely tied to #5. Assists with non-developers being able to show their
impact in the community and justify travel amongst other things.

   - We should discuss how to make this happen and prioritize
   - Started a document - https://docs.google.com/document/d/1P4b8A9ybBaEYCu7xwVFt37jcZYzs9taxfNprwKlbdh0


and will work with Jimmy McArthur. Though maybe Melvin also took an item in
a separate meeting...

On Mon, Nov 13, 2017 at 10:59 AM, Melvin Hillsman 
wrote:

> Hey everyone,
>
> Wanted to start a thread for UC specific takeaways from the Summit. Here
> are the things that stood out and would love additions:
>
> UC Top 5 - https://etherpad.openstack.org/p/SYD-forum-uc - we have
> discussed previously having goals as a UC and in Sydney we decided to put
> together a top 5 list; mirroring what the TC has done. We should be able to
> provide this to the Board, Staff, and TC as needed and continuously update
> the community on progress. We have not decided on any of these of course
> but here is what is currently listed.
>
>1. LTS
>   - Conversation is going well via ML
>   - Our discussion was to allow the folks who want to work on LTS the
>   chance to do it as they see fit; no exact way but discussion actively 
> going
>   on, get involved
>2. Operator midcycle integration into larger event/feedback ecosystem
>   - We have as a community made significant changes to ensure user
>   feedback getting in front of those working on the code earlier than 
> before
>   and we have to be sure that we tie into that process as it is designed.
>   - Mexico City was an edge case of the past few midcycles that have
>   been community-led but gave great insight into worse case scenario; what
>   have we learned and what can we do better?
>   - How specifically can the Foundation Staff help - again the
>   biggest need is to ensure tying into bigger ecosystem
>3. More operator proposed forum sessions
>   - Very much tied to #2.
>   4. Vision casting exercise
>   - TC and documentation team has gone through this exercise; Doug
>   Hellmann agreed to be available either F2F or via video conference to do
>   this with UC
>   5. Company view of non-developer community participation
>   - It is easy for companies to hire/organize developers as FTEs for
>   community work and impact is readily available via quantitative output 
> from
>   the community; commits, blueprints, etc.
>   - It is equally important for companies to allow non-developers -
>   project managers, product managers, system administrators, developers,
>   devops engineers, etc - some percentage if not 100% to community work. 
> It
>   should not all be personal/volunteer time in order to increase the 
> velocity
>   of user community growth and impact.
>   6. "stackalytics" for user community
>   - Closely tied to #5. Assists with non-developers being able to
>   show their impact in the community and justify travel amongst other 
> things.
>   - We should discuss how to make this happen and prioritize
>   - Started a document - https://docs.google.com/document/d/1P4b8A9ybBaEYCu7xwVFt37jcZYzs9taxfNprwKlbdh0
> 
>7. Joint TC/UC meetings
>   - Ensuring there is more parity between TC and UC.
>   - Carrying forward from Boston Forum
>  - Feel free to crash each others meetings
>
> User Survey
>
>- One important detail I took away from the User Survey session was a
>request to move the survey to being held once a year in terms of analysis
>and compilation.
>- Survey is always available to take. Suggestion was to prod community
>every quarter for example to take the survey while only compiling yearly
>report.
>
> OpenLab
>
>- Initiated by OpenStack Foundation, Huawei, and Intel, currently
>involved companies include Deutsche Telekom and VEXXHOST. OpenLab currently
>is focusing on SDKs moving to define, stabilize, and support official
>OpenStack SDKs. Gophercloud is currently using OpenLab and work is being
>done to add the OpenStack provider for Terraform into the system.
>- Another phase of OpenLab is to tightly integrate with or learn
>heavily from OPNFV's XCI. Currently discussing this with the lead engineer
>of XCI and could possibly lead to re-branding XCI as OpenLab increasing the
>scope, collaboration, and integration of OpenLab components and Open Source
>communities.
>- As the UC, OpenLab is important to us, as it targets the user
>community of OpenStack and other user communities of
>tools/components/applications that work with 

[Openstack-operators] Ops Meetup Meeting Tomorrow? 14:00 UTC

2017-11-13 Thread David Medberry
I presume we're still on track for a meeting tomorrow at 14:00 UTC (which
will shift it an hour earlier if your timezone had previously recognized
DST in the northern hemisphere).
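The one-hour shift is just UTC staying fixed while local clocks change. A small sketch (using America/Denver purely as an example zone) shows a 14:00 UTC meeting landing an hour earlier on local clocks once DST ends:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+; system tzdata must be available

def local_hour_of_utc_meeting(date_str: str, utc_hour: int, zone: str) -> int:
    """Local wall-clock hour of a meeting pinned to a fixed UTC hour."""
    meeting = datetime.fromisoformat(date_str).replace(
        hour=utc_hour, tzinfo=timezone.utc
    )
    return meeting.astimezone(ZoneInfo(zone)).hour

# US DST ended on 2017-11-05, so the same 14:00 UTC meeting moves
# from 08:00 to 07:00 on Denver clocks.
print(local_hour_of_utc_meeting("2017-10-31", 14, "America/Denver"))  # 8
print(local_hour_of_utc_meeting("2017-11-14", 14, "America/Denver"))  # 7
```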

See you there: #openstack-operators on Freenode.

-dave


[Openstack-operators] Fast Forward Upgrades

2017-11-05 Thread David Medberry
Some good discussion at the Summit (right now) about Fast Forward Upgrades.

Including some concerns that nova-compute INTENTIONALLY fails if the
version skew is greater than N-1.

More details are appearing in the etherpad.

This convo is VERY well attended by MANY openstack devs. Very heavy dev
room.

https://etherpad.openstack.org/p/SYD-forum-fast-forward-upgrades

and this is in the Operator session in the Dev Forum.
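The N-1 behavior above can be illustrated with a toy gate. This is not nova's actual service-version check; the release list and function are invented for illustration of the shape of the constraint:

```python
# Toy illustration of an N-1 upgrade gate; not nova's real code.
RELEASES = ["newton", "ocata", "pike", "queens"]  # oldest to newest

def check_upgrade_gap(compute_release: str, control_release: str) -> None:
    """Refuse to run when compute lags the control plane by more than
    one release, mirroring the N-1 behavior discussed in the session."""
    gap = RELEASES.index(control_release) - RELEASES.index(compute_release)
    if gap > 1:
        raise RuntimeError(
            f"compute on {compute_release} is {gap} releases behind "
            f"{control_release}; only N-1 is supported"
        )

check_upgrade_gap("pike", "queens")        # N-1: allowed
try:
    check_upgrade_gap("newton", "queens")  # N-3: refused
except RuntimeError as exc:
    print(exc)
```

A fast-forward upgrade has to step the control plane through each release so this gate never sees a gap larger than one, which is where the concern above comes from.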


[Openstack-operators] Technical Committee (TC) learnings, updates

2017-10-27 Thread David Medberry
Hi,

I've volunteered to try and track what the TC is doing for the next few
months and report back to Ops. So, all of this is pretty much done in the
open (on IRC) so don't expect anything here you can't get elsewhere[1].

Of note, they are moving away from "meetings" as meetings are too
regional-centric (and possibly too English centric) and going to "office
hours" where you can expect one or more TC members to be on-line/watching:

== Office hours ==

To be more inclusive of all timezones and more mindful of people for
which English is not the primary language, the Technical Committee
dropped its dependency on weekly meetings. So that you can still get
hold of TC members on IRC, we instituted a series of office hours on
#openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

For the coming week, I expect the main topic of discussion to be Summit
preparations!

and little activity expected (as stated) between now and the end of the
Sydney OpenStack summit.

The TC does meet amongst themselves (before and trailing the summit) and
its members will be very busy in many more meetings/talks throughout the week.

Holler if you need more info.

-dave

[1] emails go out to openstack-...@lists.openstack.org with [TC] in the
subject line (which I think is Technical Committee, not Thierry Carrez but
I could be wrong.)

[2] They have an IRC channel that gets logged:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/
irc://chat.freenode.net/#openstack-tc

The topic of this channel is full of useful nuggets:
OpenStack Technical Committee office hours: Tuesdays at 09:00 UTC,
Wednesdays at 01:00 UTC, and Thursdays at 15:00 UTC |
https://governance.openstack.org/tc/ | channel logs
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/


[3] Governance:  https://governance.openstack.org/tc
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ceph Session at the Forum

2017-09-29 Thread David Medberry
Looks like Blair Bethwaite already has one in there called CEPH in
OpenStack as a BOF.

http://forumtopics.openstack.org/cfp/details/46


On Fri, Sep 29, 2017 at 10:44 AM, Edgar Magana 
wrote:

> Hello,
>
> I know Patrick Donnelly (@pjdbatrick) was very interested in this session.
> Unfortunately, I do not have his email. Should I go ahead and add the
> session on his behalf?
>
> Edgar
>
> On 9/28/17, 10:32 AM, "Erik McCormick"  wrote:
>
> Hey Ops folks,
>
> A Ceph session was put on the discussion Etherpad for the forum, and I
> know a lot of folks have expressed interest in doing one, especially
> since there's no Ceph Day going on this time around.
>
> I need a volunteer to run the session and set up an agenda. If you're
> willing and able to do it, you can either submit the session yourself
> at http://forumtopics.openstack.org/ or let me
> know and I'll be happy
> to add it.
>
> Cheers,
> Erik
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ops meetups team meeting 9/12

2017-09-12 Thread David Medberry
Yep, here at PTG also, and breakfast got in the way of the meeting. Sorry.

On Tue, Sep 12, 2017 at 8:29 AM, Chris Morgan  wrote:

> Brief meeting today since PTG is underway. Minutes below.
>
> Best bit of news is that Melvin Hillman reported from PTG that the
> Foundation is "on board" to start handling logistics for future operators
> meetups, but with the community still owning content. More news on this
> when those at PTG can get a moment to report. There are also some
> interesting etherpads underway at PTG regarding skip-level upgrades of
> OpenStack and retention of docs for older OpenStack releases (links in
> minutes below).
>
> minutes:
>
> 10:24 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-09-12-14.04.html
> 10:24 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-09-12-14.04.txt
> 10:24 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-09-12-14.04.log.html
>
> Regards
>
> Chris
>
> --
> Chris Morgan 
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Case studies on Openstack HA architecture

2017-08-26 Thread David Medberry
I'm not aware of any studies per se, but we have long run rabbitmq,
MySQL, and all the API endpoints on the same three nodes.



On Aug 25, 2017 6:12 PM, "Imtiaz Chowdhury" 
wrote:

> Hi Openstack operators,
>
>
>
> Most Openstack HA deployment use 3 node database cluster, 3 node rabbitMQ
> cluster and 3 Controllers. I am wondering whether there are any studies
> done that show the pros and cons of co-locating database and messaging
> service with the Openstack control services.  In other words, I am very
> interested in learning about advantages and disadvantages, in terms of ease
> of deployment, upgrade and overall API performance, of having 3 all-in-one
> Openstack controller over a more distributed deployment model.
>
>
>
> References to any work done in this area will be highly appreciated.
>
>
>
> Thanks,
> Imtiaz
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [ops meetups team] - resuming regular meetings

2017-08-22 Thread David Medberry
Thanks Chris, et al.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [dnsmasq] setting ntp-server

2017-08-18 Thread David Medberry
Cool beans. Thanks for asking/answering your own question!
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Reminder: Tuesday after Memorial Day (US) Ops Meetup Planning is one hour earlier

2017-05-26 Thread David Medberry
Just a reminder that the Tuesday after Memorial Day, bright and early (in
the US) is the Ops Meetup Planning session in #openstack-operators on
Freenode IRC at 14:00 UTC, i.e. 10 am Eastern US, 9 am Central US, 8 am
Mountain, and 7 am Pacific. Find your timezone time here:

https://www.timeanddate.com/worldclock/fixedtime.html?msg=OpenStack+Operators+Meetup+Planning=20170530T14=1440=1

and we will stick with the 14:00 UTC time until further notice.
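For anyone double-checking their local time against a UTC-pinned meeting (these conversions shift when US daylight saving changes), a short script reproduces the times above. This is just a sketch assuming Python 3.9+ with the `zoneinfo` module and an installed tz database:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# The weekly planning meeting is pinned to 14:00 UTC.
meeting = datetime(2017, 5, 30, 14, 0, tzinfo=ZoneInfo("UTC"))

for name, tz in [("Eastern", "America/New_York"),
                 ("Central", "America/Chicago"),
                 ("Mountain", "America/Denver"),
                 ("Pacific", "America/Los_Angeles")]:
    local = meeting.astimezone(ZoneInfo(tz))
    print(f"{name}: {local:%H:%M}")  # Eastern: 10:00 ... Pacific: 07:00
```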
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UTC 14:00 henceforth for Ops Meet Up planning

2017-05-23 Thread David Medberry
Ah, excellent question sean.

This is an IRC meeting that occurs weekly currently at 14:00 (starting next
week) on Tuesdays in Freenode at #openstack-operators
(the current one is just wrapping up)


On Tue, May 23, 2017 at 9:34 AM, Sean McGinnis <sean.mcgin...@gmx.com>
wrote:

> On Tue, May 23, 2017 at 09:16:13AM -0600, David Medberry wrote:
> > I have picked "Tuesday, May 30, 2017 8:00 AM (Time zone: Mountain Time)"
> as
> > final option(s) for the Doodle poll "Ops Meetup Preferred Time."
>
> Hey David,
>
> Sorry, I'm sure this was stated elsewhere, but where is this meeting held?
>
> Thanks!
> Sean
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova][ironic][scheduler][placement] IMPORTANT: Getting rid of the automated reschedule functionality

2017-05-22 Thread David Medberry
I have to agree with James.

My affinity and anti-affinity rules have nothing to do with NFV. Anti-affinity
is almost always a failure-domain solution. I'm not sure we have users
actually choosing affinity (though it would likely be for network speed
issues and/or some sort of badly architected need, or perceived need, for
coupling.)
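To illustrate the failure-domain reading of anti-affinity, here is a toy placement function. This is purely illustrative — not Nova's scheduler, its filters, or its data model — just the shape of the constraint: never put two members of a group in the same rack.

```python
# Toy illustration (not Nova's scheduler): anti-affinity treated as a
# failure-domain rule, spreading a server group's instances across racks.
def pick_host(candidate_hosts, group_hosts, rack_of):
    """Return the first host whose rack holds no instance of this group."""
    used_racks = {rack_of[h] for h in group_hosts}
    for host in candidate_hosts:
        if rack_of[host] not in used_racks:
            return host
    return None  # no valid host: set instance to ERROR, or retry/reschedule

rack_of = {"h1": "rackA", "h2": "rackA", "h3": "rackB"}
print(pick_host(["h2", "h3"], ["h1"], rack_of))  # h3 (different rack)
print(pick_host(["h2"], ["h1"], rack_of))        # None (rackA already used)
```

The last line is the case the thread is arguing about: when no host satisfies the constraint, does Nova error out, or retry?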

On Mon, May 22, 2017 at 12:45 PM, James Penick  wrote:

>
>
> On Mon, May 22, 2017 at 10:54 AM, Jay Pipes  wrote:
>
>> Hi Ops,
>>
>> Hi!
>
>
>>
>> For class b) causes, we should be able to solve this issue when the
>> placement service understands affinity/anti-affinity (maybe Queens/Rocky).
>> Until then, we propose that instead of raising a Reschedule when an
>> affinity constraint was last-minute violated due to a racing scheduler
>> decision, that we simply set the instance to an ERROR state.
>>
>> Personally, I have only ever seen anti-affinity/affinity use cases in
>> relation to NFV deployments, and in every NFV deployment of OpenStack there
>> is a VNFM or MANO solution that is responsible for the orchestration of
>> instances belonging to various service function chains. I think it is
>> reasonable to expect the MANO system to be responsible for attempting a
>> re-launch of an instance that was set to ERROR due to a last-minute
>> affinity violation.
>>
>
>
>> **Operators, do you agree with the above?**
>>
>
> I do not. My affinity and anti-affinity use cases reflect the need to
> build large applications across failure domains in a datacenter.
>
> Anti-affinity: Most anti-affinity use cases relate to the ability to
> guarantee that instances are scheduled across failure domains, others
> relate to security compliance.
>
> Affinity: Hadoop/Big data deployments have affinity use cases, where nodes
> processing data need to be in the same rack as the nodes which house the
> data. This is a common setup for large hadoop deployers.
>
>
>> I recognize that large Ironic users expressed their concerns about
>> IPMI/BMC communication being unreliable and not wanting to have users
>> manually retry a baremetal instance launch. But, on this particular point,
>> I'm of the opinion that Nova just do one thing and do it well. Nova isn't
>> an orchestrator, nor is it intending to be a "just continually try to get
>> me to this eventual state" system like Kubernetes.
>>
>
> Kubernetes is a larger orchestration platform that provides autoscale. I
> don't expect Nova to provide autoscale, but
>
> I agree that Nova should do one thing and do it really well, and in my
> mind that thing is reliable provisioning of compute resources. Kubernetes
> does autoscale among other things. I'm not asking for Nova to provide
> Autoscale, I -AM- asking OpenStack's compute platform to provision a
> discrete compute resource reliably. This means overcoming common and simple
> error cases. As a deployer of OpenStack I'm trying to build a cloud that
> wraps the chaos of infrastructure, and present a reliable facade. When my
> users issue a boot request, I want to see it fulfilled. I don't expect it
> to be a 100% guarantee across any possible failure, but I expect (and my
> users demand) that my "Infrastructure as a service" API make reasonable
> accommodation to overcome common failures.
>
>
>
>> If we removed Reschedule for class c) failures entirely, large Ironic
>> deployers would have to train users to manually retry a failed launch or
>> would need to write a simple retry mechanism into whatever client/UI that
>> they expose to their users.
>>
>> **Ironic operators, would the above decision force you to abandon Nova as
>> the multi-tenant BMaaS facility?**
>>
>>
>  I just glanced at one of my production clusters and found there are
> around 7K users defined, many of whom use OpenStack on a daily basis. When
> they issue a boot call, they expect that request to be honored. From their
> perspective, if they call AWS, they get what they ask for. If you remove
> reschedules you're not just breaking the expectation of a single deployer,
> but for my thousands of engineers who, every day, rely on OpenStack to
> manage their stack.
>
> I don't have a "i'll take my football and go home" mentality. But if you
> remove the ability for the compute provisioning API to present a reliable
> facade over infrastructure, I have to go write something else, or patch it
> back in. Now it's even harder for me to get and stay current with OpenStack.
>
> During the summit the agreement was, if I recall, that reschedules would
> happen within a cell, and not between the parent and cell. That was
> completely acceptable to me.
>
> -James
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Re: [Openstack-operators] Policy Updates

2017-02-23 Thread David Medberry
and the 'nova-policy' command was introduced at the same time. I finally
found the right release notes:

ref: https://docs.openstack.org/releasenotes/nova/newton.html

The nova-policy command line is implemented as a tool to try out the
under-development policy discovery feature. The user can input credential
information and instance info, and the tool will return a list of APIs the
user is allowed to invoke. There isn't any contract for the tool's interface
while the feature is still under development.

and

The API policy defaults are now defined in code like configuration options.
Because of this, the sample policy.json file that is shipped with Nova is
empty and should only be necessary if you want to override the API policy
from the defaults in the code. To generate the policy file you can run:

oslopolicy-sample-generator --config-file=etc/nova/nova-policy-generator.conf
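The model described above — defaults defined in code, with policy.json holding only the operator's overrides — can be sketched as a toy merge. This is an illustration only, not oslo.policy's actual implementation, and the rule names are examples:

```python
import json

# Toy illustration of the Newton+ model: API policy defaults live in
# code, and policy.json carries only the operator's overrides.
DEFAULTS = {
    "os_compute_api:servers:index": "rule:admin_or_owner",
    "os_compute_api:servers:delete": "rule:admin_or_owner",
}

def effective_policy(defaults, policy_json_text):
    """Overlay a (possibly empty) policy.json on the in-code defaults."""
    overrides = json.loads(policy_json_text) if policy_json_text.strip() else {}
    return {**defaults, **overrides}

# An empty/blank policy.json simply means every default applies.
assert effective_policy(DEFAULTS, "") == DEFAULTS

# A one-rule override tightens only that rule, leaving the rest alone.
merged = effective_policy(
    DEFAULTS, '{"os_compute_api:servers:delete": "rule:admin_api"}')
print(merged["os_compute_api:servers:delete"])  # rule:admin_api
```

This is why the shipped policy.json can be empty without "opening everything up": absence of an override means the in-code default applies, not "allow all".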


On Thu, Feb 23, 2017 at 3:17 PM, David Medberry <openst...@medberry.net>
wrote:

> Yep what Logan said. I'm pretty sure Sean Dague talked about this at the
> last Operators' mid-cycle. The "blank" policy.json just means you get the
> default policies. You set a value to override the defaults.
>
> I don't see it in the Ocata relnotes but git indicates this is where it
> happened:
>
> https://github.com/openstack/nova/blob/stable/mitaka/etc/nova/policy.json
> https://github.com/openstack/nova/blob/stable/newton/etc/nova/policy.json
>
> again, no change in behavior...
>
> On Thu, Feb 23, 2017 at 3:06 PM, Logan V. <lo...@protiumit.com> wrote:
>
>> I think this actually started in Newton. Yes it ships blank, however
>> there is still a default policy implemented as before with similar
>> defaults separating the admin and user roles. The default policy is
>> implemented in the nova code base
>> (https://github.com/openstack/nova/tree/stable/newton/nova/policies)
>> and overrides can be provided using policy.json (which also accepts
>> yaml despite what the file extension would lead you to believe). The
>> difference now is that the default policy is not enumerated in a
>> policy.json file by default. You can obtain the default policy by
>> running
>> oslopolicy-sample-generator --namespace nova
>>
>> There are also several other oslopolicy-* tools like
>> oslopolicy-list-redundant - can be used to list policies defined in
>> the policy.json which are redundant to the default policy
>> oslopolicy-checker - test access against a specific policy item
>> oslopolicy-policy-generator - dump a consolidated view of the policy
>> (ie defaults combined with overrides) for use with ie. horizon's
>> policy things. One thing I found with exporting this dump from nova
>> and using it in horizon is that you must define a policy called
>> "default" (usually set to "rule:admin_or_owner") because it is not
>> included in the dump and it seemed to cause some odd behavior in
>> horizon like the instances tab not showing up under the admin panel.
>>
>>
>> On Thu, Feb 23, 2017 at 1:52 PM, Edgar Magana <edgar.mag...@workday.com>
>> wrote:
>> > Am I understanding correctly that in Ocata release, the policy.json
>> file for
>> > NOVA is blank?
>> >
>> > What does that mean for us (operators)? Everything will be open for
>> > everybody or the other way around?
>> >
>> >
>> >
>> > In any case, that sounds like an awful approach because now if we
>> upgrade
>> > we will need to be sure that we have a proper json file while in the
>> past we
>> > at least were starting from the default one.
>> >
>> >
>> >
>> > Edgar
>> >
>> >
>> >
>> > From: David Medberry <openst...@medberry.net>
>> > Date: Thursday, February 23, 2017 at 10:45 AM
>> > To: "openstack-operators@lists.openstack.org"
>> > <openstack-operators@lists.openstack.org>
>> > Subject: [Openstack-operators] Policy Updates
>> >
>> >
>> >
>> > Nova no longer ships with a fleshed-out skeleton of all policy.json. It
>> > ships blank.
>> >
>> >
>> >
>> > Discussion in here on how to help operators select specific settings to
>> > include in their policy.json via documentation.
>> >
>> >
>> >
>> > You (as an op) may want to review and comment on this. This model is
>> being
>> > proposed for all openstack projects (or at least MORE openstack
>> projects.)
>> >
>> >
>> >
>> > https://review.openstack.org/#/c/433010
>> >
>> >
>> > ___
>> > OpenStack-operators mailing list
>> > OpenStack-operators@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> >
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Policy Updates

2017-02-23 Thread David Medberry
Yep what Logan said. I'm pretty sure Sean Dague talked about this at the
last Operators' mid-cycle. The "blank" policy.json just means you get the
default policies. You set a value to override the defaults.

I don't see it in the Ocata relnotes but git indicates this is where it
happened:

https://github.com/openstack/nova/blob/stable/mitaka/etc/nova/policy.json
https://github.com/openstack/nova/blob/stable/newton/etc/nova/policy.json

again, no change in behavior...

On Thu, Feb 23, 2017 at 3:06 PM, Logan V. <lo...@protiumit.com> wrote:

> I think this actually started in Newton. Yes it ships blank, however
> there is still a default policy implemented as before with similar
> defaults separating the admin and user roles. The default policy is
> implemented in the nova code base
> (https://github.com/openstack/nova/tree/stable/newton/nova/policies)
> and overrides can be provided using policy.json (which also accepts
> yaml despite what the file extension would lead you to believe). The
> difference now is that the default policy is not enumerated in a
> policy.json file by default. You can obtain the default policy by
> running
> oslopolicy-sample-generator --namespace nova
>
> There are also several other oslopolicy-* tools like
> oslopolicy-list-redundant - can be used to list policies defined in
> the policy.json which are redundant to the default policy
> oslopolicy-checker - test access against a specific policy item
> oslopolicy-policy-generator - dump a consolidated view of the policy
> (ie defaults combined with overrides) for use with ie. horizon's
> policy things. One thing I found with exporting this dump from nova
> and using it in horizon is that you must define a policy called
> "default" (usually set to "rule:admin_or_owner") because it is not
> included in the dump and it seemed to cause some odd behavior in
> horizon like the instances tab not showing up under the admin panel.
>
>
> On Thu, Feb 23, 2017 at 1:52 PM, Edgar Magana <edgar.mag...@workday.com>
> wrote:
> > Am I understanding correctly that in Ocata release, the policy.json file
> for
> > NOVA is blank?
> >
> > What does that mean for us (operators)? Everything will be open for
> > everybody or the other way around?
> >
> >
> >
> > In any case, that sounds like an awful approach because now if we
> upgrade
> > we will need to be sure that we have a proper json file while in the
> past we
> > at least were starting from the default one.
> >
> >
> >
> > Edgar
> >
> >
> >
> > From: David Medberry <openst...@medberry.net>
> > Date: Thursday, February 23, 2017 at 10:45 AM
> > To: "openstack-operators@lists.openstack.org"
> > <openstack-operators@lists.openstack.org>
> > Subject: [Openstack-operators] Policy Updates
> >
> >
> >
> > Nova no longer ships with a fleshed-out skeleton of all policy.json. It
> > ships blank.
> >
> >
> >
> > Discussion in here on how to help operators select specific settings to
> > include in their policy.json via documentation.
> >
> >
> >
> > You (as an op) may want to review and comment on this. This model is
> being
> > proposed for all openstack projects (or at least MORE openstack
> projects.)
> >
> >
> >
> > https://review.openstack.org/#/c/433010
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Policy Updates

2017-02-23 Thread David Medberry
Nova no longer ships with a fleshed-out skeleton of all policy.json. It
ships blank.

Discussion in here on how to help operators select specific settings to
include in their policy.json via documentation.

You (as an op) may want to review and comment on this. This model is being
proposed for all openstack projects (or at least MORE openstack projects.)

https://review.openstack.org/#/c/433010
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] Metadata service over virtio-vsock

2017-02-21 Thread David Medberry
Doesn't the virtio solution assume/require a libvirt-based, or more exactly
a QEMU/KVM-based, hypervisor?

What about the N-1 other hypervisors?

I think the idea of a "hot remove, hot add" of the configdrive has some
merit (but remember it is not always ISO-9660 but could be VFAT as well to
aid in some migrations.)

On the plus side, we do actually run the md service.
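Dan's analogy below — vsock as "UNIX domain sockets between the host and guest" — can be sketched with AF_UNIX standing in for AF_VSOCK. This is an illustration of the pattern only; real virtio-vsock needs a virtio-vsock device plus kernel support, and would use socket.AF_VSOCK with (CID, port) addressing rather than a filesystem path:

```python
import os
import socket
import tempfile
import threading

# AF_UNIX stands in for AF_VSOCK here: same stream-socket API, and no
# IP stack involved at all, which is the property the proposal relies on.
path = os.path.join(tempfile.mkdtemp(), "md.sock")
ready = threading.Event()

def serve():
    # "Host" side: answer one toy metadata request, then exit.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(path)
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            if conn.recv(64) == b"GET /meta":
                conn.sendall(b'{"role": "database"}')

t = threading.Thread(target=serve)
t.start()
ready.wait()

# "Guest" side: fetch the dynamic metadata (e.g. a device role tag)
# over the socket, with no 169.254.169.254 and no IPv4 dependency.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
    cli.connect(path)
    cli.sendall(b"GET /meta")
    reply = cli.recv(64)
t.join()
print(reply.decode())  # {"role": "database"}
```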

On Mon, Feb 20, 2017 at 1:22 PM, Artom Lifshitz  wrote:

> We've been having a discussion [1] in openstack-dev about how to best
> expose dynamic metadata that changes over a server's lifetime to the
> server. The specific use case is device role tagging with hotplugged
> devices, where a network interface or volume is attached with a role
> tag, and the guest would like to know what that role tag is right
> away.
>
> The metadata API currently fulfills this function, but my
> understanding is that it's not hugely popular amongst operators and is
> therefore not universally deployed.
>
> Dan Berrange came up with an idea [2] to add virtio-vsock support to
> Nova. To quote his explanation, " think of this as UNIX domain sockets
> between the host and guest. [...] It'd likely address at least some
> people's security concerns wrt metadata service. It would also fix the
> ability to use the metadata service in IPv6-only environments, as we
> would not be using IP at all."
>
> So to those operators who are not deploying the metadata service -
> what are your reasons for doing so, and would those concerns be
> addressed by Dan's idea?
>
> Cheers!
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-
> February/112490.html
> [2] http://lists.openstack.org/pipermail/openstack-dev/2017-
> February/112602.html
>
> --
> Artom Lifshitz
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Please give your opinion about "openstack server migrate" command.

2017-02-17 Thread David Medberry
Replying more to the "thread" and stream of thought than a specific message.

1) Yes, it is confusing. Rikimaru's description is more or less what I
believe.
2) Because it is confusing, I continue to use NovaClient commands instead
of OpenstackClient

I don't know what drove the creation of the OpenStack Client server
commands the way that they are; it might be a good deep dive of Launchpad
and git to find out. I.e., I can't "guess" what drove the design, as it
seems wrong and overly opaque and complex.

On Fri, Feb 17, 2017 at 3:38 AM, Rikimaru Honjo <
honjo.rikim...@po.ntts.co.jp> wrote:

> Hi Marcus,
>
>
> On 2017/02/17 15:05, Marcus Furlong wrote:
>
>> On 17 February 2017 at 16:47, Rikimaru Honjo
>>  wrote:
>>
>>> Hi all,
>>>
>>> I found and reported an unkind behavior of "openstack server migrate"
>>> command
>>> when I maintained my environment.[1]
>>> But, I'm wondering which solution is better.
>>> Do you have opinions about following my solutions by operating point of
>>> view?
>>> I will commit a patch according to your opinions if those are gotten.
>>>
>>> [1]https://bugs.launchpad.net/python-openstackclient/+bug/1662755
>>> ---
>>> [Actual]
>>> If user run "openstack server migrate --block-migration ",
>>> openstack client call Cold migration API.
>>> "--block migration" option will be ignored if user don't specify
>>> "--live".
>>>
>>> But, IMO, this is unkind.
>>> This causes unexpected operations for operators.
>>>
>>
>> +1 This has confused/annoyed me before.
>>
>>
>>> P.S.
>>> "--shared-migration" option has same issue.
>>>
>>
>> For the shared migration case, there is also this bug:
>>
>>https://bugs.launchpad.net/nova/+bug/1459782
>>
>> which, if fixed/implemented would negate the need for
>> --shared-migration? And would fix also "nova resize" on shared
>> storage.
>>
> In my understanding, that report says about libvirt driver's behavior.
> In the other hand, my report says about the logic of openstack client.
>
> Current "openstack server migrate" command has following logic:
>
> * openstack server migrate
>+-User don't specify "--live"
>| + Call cold-migrate API.
> |   Ignore "--block-migration" and "--shared-migration" options if user
> specify those.
>|
>+-User specify "--live"
>| + Call live-migration API.
>|
>+-User specify "--live --block-migration"
>| + Call block-live-migration API.
>|
>+-User specify "--live --shared-migration"
>  + Call live-migration API.[1]
>
> [1]
> "--shared-migration" means live-migration(not block-live-migrate) in
> "server migrate" command.
> In other words, "server migrate --live" and "server migrate --live
> --shared-migration"
> are same operation.
> I'm wondering why "--shared-migration" is existed...
>
>
> Cheers,
>> Marcus.
>>
>>
> --
> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
> NTT Software Corporation
> Cloud & Security Business Division, First Business Unit (CS1BU)
> Rikimaru Honjo
> TEL: 045-212-7539
> E-mail: honjo.rikim...@po.ntts.co.jp
> 220-0012
>   4-4-5 Minatomirai, Nishi-ku, Yokohama
>   Yokohama i-Mark Place, 13F
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] FYI: live_migration_progress_timeout will default to 0 and be deprecated in Ocata

2017-02-09 Thread David Medberry
On Thu, Feb 9, 2017 at 9:29 AM, Matt Riedemann  wrote:

>
Thanks for the heads up, Matt; ops appreciate it.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Any serious stability and performance issues on RBD as ephemeral storage ?

2016-12-08 Thread David Medberry
We've been using it and recommending it for years. It solves many many
problems with a running cloud and there have been very few issues. Pay
close attention when upgrading versions of CEPH and do things in the right
order and you will be fine!
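For context, RBD-backed ephemeral storage is switched on in nova.conf on the compute nodes roughly as below. This is a sketch only — the pool name, Ceph user, and secret UUID are placeholders for your own deployment's values:

```ini
# nova.conf on each compute node (values here are illustrative)
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```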

On Wed, Dec 7, 2016 at 7:51 AM, Vahric Muhtaryan 
wrote:

> Hello All,
>
> I would like to use ephemeral disks with ceph instead of on nova compute
> node. I saw that there is an option to configure it but find many different
> bugs and reports for its not working , not stable , no success at the
> instance creation time.
> Anybody In this list use ceph as an ephemeral storage without any problem ?
> Could you pls share your experiences pls ?
>
> Regards
> VM
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Audit Logging - Interested? What's missing?

2016-11-16 Thread David Medberry
rather, here:
https://openstackmountainwest2016.sched.org/event/8AkE/osdef-devops-driven-approach-to-securing-a-cloud-infrastructure-using-bigdata

On Wed, Nov 16, 2016 at 5:07 PM, David Medberry <openst...@medberry.net>
wrote:

> more info here:
> http://www.openstackdaysmw.com/schedule/
>
> On Wed, Nov 16, 2016 at 5:06 PM, David Medberry <openst...@medberry.net>
> wrote:
>
>> We've added ELK to our cloud (but of course it largely relies on the
>> existing logging.) There will be a talk about this next month at OpenStack
>> Days Mountain West in SLC. I can provide a link to the slides after that
>> occurs.
>>
>> Our use of ELK is around added security, so ties in nicely with this use
>> case.
>>
>> On Wed, Nov 16, 2016 at 3:29 PM, Tom Fifield <t...@openstack.org> wrote:
>>
>>> Hi Ops,
>>>
>>> Was chatting with Department of Defense in Australia the other day, and
>>> one of their pain points is Audit Logging. Some bits of OpenStack just
>>> don't leave enough information for proper audit. So, thought it might be a
>>> good idea to gather people who are interested to brainstorm how to get it
>>> to a good level for all :)
>>>
>>> Does your cloud need good audit logging? What do you wish was there at
>>> the moment, but isn't?
>>>
>>>
>>> Regards,
>>>
>>>
>>> Tom
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Audit Logging - Interested? What's missing?

2016-11-16 Thread David Medberry
more info here:
http://www.openstackdaysmw.com/schedule/

On Wed, Nov 16, 2016 at 5:06 PM, David Medberry <openst...@medberry.net>
wrote:

> We've added ELK to our cloud (but of course it largely relies on the
> existing logging.) There will be a talk about this next month at OpenStack
> Days Mountain West in SLC. I can provide a link to the slides after that
> occurs.
>
> Our use of ELK is around added security, so ties in nicely with this use
> case.
>
> On Wed, Nov 16, 2016 at 3:29 PM, Tom Fifield <t...@openstack.org> wrote:
>
>> Hi Ops,
>>
>> Was chatting with Department of Defense in Australia the other day, and
>> one of their pain points is Audit Logging. Some bits of OpenStack just
>> don't leave enough information for proper audit. So, thought it might be a
>> good idea to gather people who are interested to brainstorm how to get it
>> to a good level for all :)
>>
>> Does your cloud need good audit logging? What do you wish was there at
>> the moment, but isn't?
>>
>>
>> Regards,
>>
>>
>> Tom
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Lightning Talks

2016-10-18 Thread David Medberry
Dear Readers,

We have TWO back-to-back (double sessions) of Lightning Talks for
Operators. And by "for Operators" I mean largely that Operators will be the
audience.

If you have an OpenStack problem, thingamajig, technique, simplification,
gadget, whatzit that readily lends itself to a Lightning talk. please email
me and put it into the etherpad here:

https://etherpad.openstack.org/p/BCN-ops-lightning-talks

There are two sessions... but I'd prefer to fill the first one and cancel
the second one. But if your schedule dictates that you are only available
for the second, we'll hold both.

(And in spite of my natural levity, it can be a serious talk, a serious
problem, or something completely frivolous but there might be tomatoes in
the audience so watch it.)

-dave

David Medberry
OpenStack Guy and your friendly moderator.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Kilo to Liberty upgrade

2016-10-03 Thread David Medberry
Definitely read (and re-read) the release notes here:

https://wiki.openstack.org/wiki/ReleaseNotes/Liberty

paying close attention to the Upgrade Notes, API changes, etc.

You might also search (google or otherwise) this distribution list for
history on this topic as many of us did this q

On Mon, Oct 3, 2016 at 12:45 PM, Eric Yang  wrote:

> Hi all,
>
>
>
> Can anybody please share your experience with upgrading a Kilo environment
> to Liberty? More specifically I have a Kilo environment deployed and
> managed under Fuel 7.0, and I am looking for a path to upgrade it to
> Liberty. I would certainly like to:
>
> -  Keep it under management of the Fuel master;
>
> -  Preserve the previous configuration as much as possible;
>
> -  Find a way to migrate the instances with minimum to no
> down-time.
>
> I am open to giving up on some of the above criteria if I can upgrade with
> the least disruption to the instance workloads.
>
>
>
> I am also using Ceph underneath Cinder/Swift/Glance, and run Juniper
> OpenContrail as a Neutron plugin.
>
>
>
> Thanks in advance for any idea/suggestions,
>
> *Eric Yang*
>
> *Senior Solutions Architect*
>
>
>
>
> *Technica Corporation*
>
> 22970 Indian Creek Drive, Suite 500, Dulles, VA 20166
>
> *Direct:* 703.662.2068
>
> *Cell:* 703.608.0845
>
>
>
> technicacorp.com
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Kilo to Liberty upgrade

2016-10-03 Thread David Medberry
Definitely read (and re-read) the release notes here:

https://wiki.openstack.org/wiki/ReleaseNotes/Liberty

paying close attention to the Upgrade Notes, API changes, etc.

You might also search (google or otherwise) this distribution list for
history on this topic as many of us did this quite some time ago.



(Sorry hit "send" inadvertently before the note was done.)

On Mon, Oct 3, 2016 at 12:45 PM, Eric Yang  wrote:

> Hi all,
>
>
>
> Can anybody please share your experience with upgrading a Kilo environment
> to Liberty? More specifically I have a Kilo environment deployed and
> managed under Fuel 7.0, and I am looking for a path to upgrade it to
> Liberty. I would certainly like to:
>
> -  Keep it under management of the Fuel master;
>
> -  Preserve the previous configuration as much as possible;
>
> -  Find a way to migrate the instances with minimum to no
> down-time.
>
> I am open to giving up on some of the above criteria if I can upgrade with
> the least disruption to the instance workloads.
>
>
>
> I am also using Ceph underneath Cinder/Swift/Glance, and run Juniper
> OpenContrail as a Neutron plugin.
>
>
>
> Thanks in advance for any idea/suggestions,
>
> *Eric Yang*
>
> *Senior Solutions Architect*
>
>
>
>
> *Technica Corporation*
>
> 22970 Indian Creek Drive, Suite 500, Dulles, VA 20166
>
> *Direct:* 703.662.2068
>
> *Cell:* 703.608.0845
>
>
>
> technicacorp.com
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ElasticSearch on OpenStack

2016-09-02 Thread David Medberry
omitted one more line:

give java heap 30GB, and leave the rest of the memory to the OS filesystem
cache so that Lucene can make best use of it.

On Fri, Sep 2, 2016 at 2:15 PM, David Medberry <openst...@medberry.net>
wrote:

> From Nathan (on TWC Cloud team):
>
> Nathan: The page at https://www.elastic.co/guide/en/elasticsearch/guide/
> current/heap-sizing.html gives you good advice on a maximum size for the
> elasticsearch VM's memory.
>
> Nathan: suggest you pick a flavor with 64GB RAM or less, then base other
> sizing things off of that (i.e. choose a flavor with 64GB of RAM and as
> many CPUs as possible for that RAM allocation, then base disk size on
> testing of your use case)
>
> On Fri, Sep 2, 2016 at 6:46 AM, David Medberry <openst...@medberry.net>
> wrote:
>
>> Hey Tim,
>> We've just started this effort. I'll see if the guy running the service
>> can comment today.
>>
>> On Fri, Sep 2, 2016 at 6:36 AM, Tim Bell <tim.b...@cern.ch> wrote:
>>
>>>
>>>
>>> Has anyone had experience running ElasticSearch on top of OpenStack VMs ?
>>>
>>>
>>>
>>> Are there any tuning recommendations ?
>>>
>>>
>>>
>>> Thanks
>>>
>>> Tim
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ElasticSearch on OpenStack

2016-09-02 Thread David Medberry
>From Nathan (on TWC Cloud team):

Nathan: The page at
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
gives you good advice on a maximum size for the elasticsearch VM's memory.

Nathan: suggest you pick a flavor with 64GB RAM or less, then base other
sizing things off of that (i.e. choose a flavor with 64GB of RAM and as
many CPUs as possible for that RAM allocation, then base disk size on
testing of your use case)
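The rule of thumb above can be sketched in a few lines. This is a hedged sizing helper, not anything from the linked guide itself: the 31 GB ceiling is an assumption based on the JVM compressed-oops cutoff (the guide advises staying somewhat below 32 GB), and the half-of-RAM split leaves the remainder for the OS filesystem cache.

```python
# Hedged sketch: heap = min(half the flavor's RAM, ~31 GB), so Lucene can
# use the rest of memory via the OS filesystem cache.
def recommended_heap_gb(flavor_ram_gb: float) -> float:
    """Return a suggested Elasticsearch JVM heap size for a given flavor."""
    return min(flavor_ram_gb / 2, 31.0)

print(recommended_heap_gb(64))   # 64 GB flavor -> 31.0 (capped, not 32)
print(recommended_heap_gb(16))   # 16 GB flavor -> 8.0
```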

On Fri, Sep 2, 2016 at 6:46 AM, David Medberry <openst...@medberry.net>
wrote:

> Hey Tim,
> We've just started this effort. I'll see if the guy running the service
> can comment today.
>
> On Fri, Sep 2, 2016 at 6:36 AM, Tim Bell <tim.b...@cern.ch> wrote:
>
>>
>>
>> Has anyone had experience running ElasticSearch on top of OpenStack VMs ?
>>
>>
>>
>> Are there any tuning recommendations ?
>>
>>
>>
>> Thanks
>>
>> Tim
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Update on Nova scheduler poor performance with Ironic

2016-08-30 Thread David Medberry
Great writeup @Mathieu and thanks @sean and @jrolls!

-d

On Mon, Aug 29, 2016 at 3:34 PM, Mathieu Gagné  wrote:

> Hi,
>
> For those that attended the OpenStack Ops meetup, you probably heard
> me complaining about a serious performance issue we had with Nova
> scheduler (Kilo) with Ironic.
>
> Thanks to Sean Dague and Matt Riedemann, we found the root cause.
>
> It was caused by this block of code [1] which is hitting the database
> for each node loaded by the scheduler. This block of code is called if
> no instance info is found in the scheduler cache.
>
> I found that this instance info is only populated if the
> scheduler_tracks_instance_changes config [2] is enabled which it is by
> default. But being a good operator (wink wink), I followed the Ironic
> install guide which recommends disabling it [3], unknowingly getting
> myself into deep troubles.
>
> There isn't much information about the purpose of this config in the
> kilo branch. Fortunately, you can find more info in the master branch
> [4], thanks to the config documentation effort. This instance info
> cache is used by filters which rely on instance location to perform
> affinity/anti-affinity placement or anything that cares about the
> instances running on the destination node.
>
> Enabling this option will make it so Nova scheduler loads instance
> info asynchronously at start up. Depending on the number of
> hypervisors and instances, it can take several minutes. (we are
> talking about 10-15 minutes with 600+ Ironic nodes, or ~1s per node in
> our case)
>
> So Jim Roll jumped into the discussion on IRC and found a bug [5] he
> opened and fixed in Liberty. It makes it so Nova scheduler never
> populates the instance info cache if Ironic host manager is loaded.
> For those running Nova with Ironic, you will agree that there is no
> known use case where affinity/anti-affinity is used. (please reply if
> you know of one)
>
> To summarize, the poor performance of Nova scheduler will only show if
> you are running the Kilo version of Nova and you disable
> scheduler_tracks_instance_changes which might be the case if you are
> running Ironic too.
>
> For those curious about our Nova scheduler + Ironic setup, we have
> done the following to get nova scheduler to ludicrous speed:
>
> 1) Use CachingScheduler
>
> There was a great talk at the OpenStack Summit about why you would
> want to use it. [6]
>
> By default, the Nova scheduler will load ALL nodes (hypervisors) from
> database to memory before each scheduling. If you have A LOT of
> hypervisors, this process can take a while. This means scheduling
> won't happen until this step is completed. It could also mean that
> scheduling will always fail if you don't tweak service_down_time (see
> 3 below) if you have lot of hypervisors.
>
> This driver will make it so nodes (hypervisors) are loaded in memory
> every ~60 seconds. Since information is now pre-cached, the scheduling
> process can happen right away, it is super fast.
>
> There is a lot of side-effects to using it though. For example:
> - you can only run ONE nova-scheduler process since cache state won't
> be shared between processes and you don't want instances to be
> scheduled twice to the same node/hypervisor.
> - It can take ~1m before new capacity is recognized by the scheduler.
> (new or freed nodes) The cache is refreshed every 60 seconds with a
> periodic task. (this can be changed with scheduler_driver_task_period)
>
> In the context of Ironic, it is a compromise we are willing to accept.
> We are not adding Ironic nodes that often and nodes aren't
> created/deleting as often as virtual machines.
>
> 2) Run a single nova-compute service
>
> I strongly suggest you DO NOT run multiple nova-compute services. If
> you do, you will have duplicated hypervisors loaded by the scheduler
> and you could end up with conflicting scheduling. You will also have
> twice as much hypervisors to load in the scheduler.
>
> Note: I heard about multiple compute host support in Nova for Ironic
> with use of an hash ring but I don't have much details about it. So
> this recommendation might not apply to you if you are using a recent
> version of Nova.
>
> 3) Increase service_down_time
>
> If you have a lot of nodes, you might have to increase this value
> which is set to 60 seconds by default. This value is used by the
> ComputeFilter filter to exclude nodes it hasn't heard from. If it
> takes more than 60 seconds to list the nodes, you can guess what will
> happen: the scheduler will reject all of them since node
> info is already outdated when it finally hits the filtering steps. I
> strongly suggest you tweak this setting, regardless of the use of
> CachingScheduler.
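The settings discussed above, gathered into an illustrative nova.conf fragment (Kilo-era option names; the values are assumptions to tune per deployment, not recommendations from this thread):

```ini
[DEFAULT]
# 1) Pre-cache hypervisors instead of loading them all on every scheduling pass
scheduler_driver = nova.scheduler.caching_scheduler.CachingScheduler
# How often the cached host list is refreshed (seconds)
scheduler_driver_task_period = 60
# 3) Give ComputeFilter more slack when listing many nodes is slow (seconds)
service_down_time = 300
# Per the Ironic install guide -- but see the Kilo caveat described above
scheduler_tracks_instance_changes = False
```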
>
> 4) Tweak scheduler to only load empty nodes/hypervisors
>
> So this is a hack [7] we did before finding out about the bug [5] we
> described and identified earlier. When investigating our performance
> issue, we enabled debug logging and saw that 

Re: [Openstack-operators] Shelving

2016-08-18 Thread David Medberry
Shelve completely dissociates from a node.

On Thu, Aug 18, 2016 at 4:21 PM, Kris G. Lindgren <klindg...@godaddy.com>
wrote:

> Does shelving an instance also free up the instances reservation against
> that node?  If it doesn’t I assume that’s why it still counts against their
> quota?  IE Nova is still trying to keep a slot open for them on that
> server, so when you unshelve does it go back to the same node or does it go
> to a new node?
>
>
>
> ___
>
> Kris Lindgren
>
> Senior Linux Systems Engineer
>
> GoDaddy
>
>
>
> *From: *David Medberry <openst...@medberry.net>
> *Date: *Thursday, August 18, 2016 at 3:49 PM
> *To: *"Jonathan D. Proulx" <j...@csail.mit.edu>
> *Cc: *"openstack-operators@lists.openstack.org" <
> openstack-operators@lists.openstack.org>
> *Subject: *Re: [Openstack-operators] Shelving
>
>
>
>
>
> On Thu, Aug 18, 2016 at 3:12 PM, Jonathan D. Proulx <j...@csail.mit.edu>
> wrote:
>
>
> True they do consume IPs.
>
> In my configuration they do not consume any hypervisor disk.  I
> *think* this is true of all configurations once the 'shelved' systems
> are 'offloaded'.
>
>
> i concur, that's the intent of shelving as I understand it, to free up the
> hypervisor by moving off of the hypervisor entirely. So in our case with
> libvirt, there are no "shut off" instance names listed with "virsh list
> --all". They truly are "shelved" with just glance storage (once they are
> "shelve-offload"ed).
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Shelving

2016-08-18 Thread David Medberry
On Thu, Aug 18, 2016 at 3:12 PM, Jonathan D. Proulx 
wrote:

>
> True they do consume IPs.
>
> In my configuration they do not consume any hypervisor disk.  I
> *think* this is true of all configurations once the 'shelved' systems
> are 'offloaded'.


i concur, that's the intent of shelving as I understand it, to free up the
hypervisor by moving off of the hypervisor entirely. So in our case with
libvirt, there are no "shut off" instance names listed with "virsh list
--all". They truly are "shelved" with just glance storage (once they are
"shelve-offload"ed).
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Openstack] Map Nova flavor to glance image

2016-08-18 Thread David Medberry
Hi Jaison,

We do this in Horizon with a custom (locally changed) filter. I'll check to
see if it is upstreamed (it's certainly shareable, so I'll point you at a
git commit if it isn't all the way upstreamed once I get to the office.)

I know of no way of doing this directly in Nova at the api level. There is
a way to make the Murano and Trove instances "somewhat" hidden, by making
them private to certain projects that have that capability enabled thereby
not confusing non Murano, non Trove users but as soon as you add another
project to them, they will become visible everywhere within that project.

(As you might imagine this is a pretty common issue that we've directly
addressed with Trove but have seen no real solutions for yet.)
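One hedged sketch of the project-scoped hiding described above, using the openstack CLI. Image and project names are illustrative, and DRY_RUN=echo makes this a dry run that only prints the commands.

```shell
# Hide a service image from general users, then share it with one project.
DRY_RUN=echo
# Make the service image private so ordinary users no longer see it:
$DRY_RUN openstack image set --private trove-guest-image
# Share it only with the project that needs it (note the caveat above:
# it then becomes visible everywhere within that project):
$DRY_RUN openstack image add project trove-guest-image trove-service
```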

On Thu, Aug 18, 2016 at 5:45 AM, David Medberry <openst...@medberry.net>
wrote:

> Adding the Ops list.
> -- Forwarded message --
> From: Jaison Peter <urotr...@gmail.com>
> Date: Wed, Aug 17, 2016 at 10:29 PM
> Subject: [Openstack] Map Nova flavor to glance image
> To: OpenStack General <openst...@lists.openstack.org>
>
>
> Hi all,
>
> Is there any way to map a flavor to some specific glance images? For
> example if a user chooses flavor 'general_medium' then it shouldn't display
> any images used for trove or murano so that we can avoid the confusion.
> Right now, all images are displaying while choosing a flavor during
> instance launching. I think metadata is involved in this, but not sure how
> to do it.
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Fwd: [Openstack] Map Nova flavor to glance image

2016-08-18 Thread David Medberry
Adding the Ops list.
-- Forwarded message --
From: Jaison Peter 
Date: Wed, Aug 17, 2016 at 10:29 PM
Subject: [Openstack] Map Nova flavor to glance image
To: OpenStack General 


Hi all,

Is there any way to map a flavor to some specific glance images? For
example if a user chooses flavor 'general_medium' then it shouldn't display
any images used for trove or murano so that we can avoid the confusion.
Right now, all images are displaying while choosing a flavor during
instance launching. I think metadata is involved in this, but not sure how
to do it.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Operators Meetup to plan Mid-Cycle

2016-07-19 Thread David Medberry
For the last two weeks there appears to have been no operators meetup to
continue planning the mid-cycle. Is it completely planned now?

(I've been in IRC at 1400 on Tuesdays at #openstack-operators to crickets.)
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Next Ops Midcycle NYC August 25-26

2016-06-22 Thread David Medberry
Would be sweet if that offer could be extended at least a week while we go
through the corp travel process. OTOH, $99 is almost cheap enough to buy
and not care. I'll be doing that, I guess.

On Wed, Jun 22, 2016 at 9:44 AM, Matt Jarvis 
wrote:

> Hi Mark
>
> Given we've not got the Eventbrite for the Ops Meetup live yet, is there
> any chance you could extend the early bird pricing or give operators who
> may be travelling for the Ops Meetup a discount code ? I suspect there may
> be quite a lot of interest for those travelling some distance.
>
> Matt
>
> On 22 June 2016 at 14:58, Mark Voelker  wrote:
>
>> Hi Ops,
>>
>> FYI for those that may not be aware, that’s also the week of OpenStack
>> East.  OpenStack East runs August 23-24 also in New York City (about ~15-20
>> minutes away from Civic Hall by MTA at the Playstation Theater).  If you’re
>> coming to town for the Ops Midcycle, you may want to make a week of it.
>> Earlybird pricing for OpenStack East is still available but prices increase
>> tomorrow:
>>
>> http://www.openstackeast.com/
>>
>> At Your Service,
>>
>> Mark T. Voelker
>> (wearer of many hats, one of which is OpenStack East steering committee
>> member)
>>
>>
>>
>> > On Jun 21, 2016, at 11:36 AM, Jonathan D. Proulx 
>> wrote:
>> >
>> > Hi All,
>> >
>> > The Ops Meetups Team has selected[1] New York City as the location of
>> the
>> > next mid-cycle meetup on August 25 and 26 2016 at Civic Hall[2]
>> >
>> > Many thanks to Bloomberg for sponsoring the location.  And thanks to
>> > BestBuy as well for their offer of the Seattle location.  The choice
>> > was very close and hopefully their offer will stand for our next North
>> > American meet-up.
>> >
>> > There's quite a bit of work to do to make this all happen in the
>> > next couple of months so it's still a great time to join the Ops
>> > Meetups Team[3] and help out.
>> >
>> > -Jon
>> >
>> > --
>> >
>> > [1]
>> http://eavesdrop.openstack.org/meetings/ops_meetups_team/2016/ops_meetups_team.2016-06-21-14.02.html
>> > [2]
>> https://urldefense.proofpoint.com/v2/url?u=http-3A__civichall.org_about-2Dcivic-2Dhall_=CwICAg=Sqcl0Ez6M0X8aeM67LKIiDJAXVeAw-YihVMNtXt-uEs=Q8IhPU-EIzbG5YDx5LYO7zEJpGZykn7RwFg-UTPWvDc=oAH-SSZ6EFikpmcyUpbf3984kyPBIuLJKkQadC6CKUw=36Kl-0b4WBC06xaYXo0V5AM2lHzvjhL48bBV48cz2is=
>> > [3] https://wiki.openstack.org/wiki/Ops_Meetups_Team
>> >
>> >
>> > ___
>> > OpenStack-operators mailing list
>> > OpenStack-operators@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> DataCentred Limited registered in England and Wales no. 05611763
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] DNS searchdomains for your instances?

2016-06-20 Thread David Medberry
and of course that was the WRONG picture

https://www.dropbox.com/s/wxe9e9nu5cqgx9m/Screenshot%202016-06-20%2013.17.02.png?dl=0

On Mon, Jun 20, 2016 at 1:18 PM, David Medberry <openst...@medberry.net>
wrote:

> Each tenant in our neutron network has their own subnet and each subnet
> sets its own dns rules (see pic).
>
> https://www.dropbox.com/s/dgfgkdqijmrfweo/2016-06-19%2005.15.29.jpg?dl=0
>
> On Mon, Jun 20, 2016 at 1:07 PM, Kris G. Lindgren <klindg...@godaddy.com>
> wrote:
>
>> Hello all,
>>
>> Wondering how you guys are handling the dns searchdomains for your
>> instances in your internal cloud.  Currently we are updating the network
>> metadata template, on each compute node,  to include the dns-search-domains
>> options.  We (Josh Harlow) are working on implementing the new network template
>> that nova created (in liberty) and are trying to get this added.  Currently
>> nova/neutron doesn't support any option to specify this
>> metadata/information anywhere.  I see some work on the neutron side to
>> allow setting of extra dhcp-opts network wide (currently only allowed on a
>> port) [1] [2].  But once that gets merged, then changes need to be made on
>> the nova side to pull that extra data into the network.json.
>>
>> So that begs the question how are other people handling this?
>>
>> [1] (spec) - https://review.openstack.org/#/c/247027/
>> [2] (code) - https://review.openstack.org/#/c/248931/
>>
>> ___
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] DNS searchdomains for your instances?

2016-06-20 Thread David Medberry
Each tenant in our neutron network has their own subnet and each subnet
sets its own dns rules (see pic).

https://www.dropbox.com/s/dgfgkdqijmrfweo/2016-06-19%2005.15.29.jpg?dl=0

On Mon, Jun 20, 2016 at 1:07 PM, Kris G. Lindgren 
wrote:

> Hello all,
>
> Wondering how you guys are handling the dns searchdomains for your
> instances in your internal cloud.  Currently we are updating the network
> metadata template, on each compute node,  to include the dns-search-domains
> options.  We (Josh Harlow) are working on implementing the new network template
> that nova created (in liberty) and are trying to get this added.  Currently
> nova/neutron doesn't support any option to specify this
> metadata/information anywhere.  I see some work on the neutron side to
> allow setting of extra dhcp-opts network wide (currently only allowed on a
> port) [1] [2].  But once that gets merged, then changes need to be made on
> the nova side to pull that extra data into the network.json.
>
> So that begs the question how are other people handling this?
>
> [1] (spec) - https://review.openstack.org/#/c/247027/
> [2] (code) - https://review.openstack.org/#/c/248931/
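Until the network-wide support above lands, a hedged per-port sketch of the workaround. The port ID, option name, and domain are illustrative (the DHCP domain-search option is number 119, and exact value encoding may vary by dnsmasq version), and DRY_RUN=echo keeps this a dry run.

```shell
# Set a DNS search domain on a single neutron port via extra DHCP options.
DRY_RUN=echo
$DRY_RUN neutron port-update PORT_UUID \
    --extra-dhcp-opt opt_name=domain-search,opt_value=example.com
```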
>
> ___
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] NOVA build timeout: Time to fail a vm from build state

2016-06-15 Thread David Medberry
Ack. I'm picking my worst case with a 2T volume create and then doubling.

On Wed, Jun 15, 2016 at 11:25 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com
> wrote:

> On 6/15/2016 12:09 PM, David Medberry wrote:
>
>> So, there is a nova.conf setting:
>>
>> instance_build_timeout (default to 0, never timeout)
>>
>> Does anyone have a "good" value they use for this? In my mind it falls
>> very much into the specific-to-your-cloud-implementation bucket but just
>> wondered what folks were using for this setting (if any).
>>
>> 10 minutes would be way longer than I'd want to wait for a build to fail,
>> but that's probably what we will set this to.
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
> Hmm, yeah definitely deployment specific. Also, if you're doing boot from
> volume where nova is creating the volume, that's going to add additional
> time depending on the size of the volume, whether the image is cached, etc.
> And there are separate timeouts in the compute manager for waiting for the
> volume to be available for attaching to the server.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] NOVA build timeout: Time to fail a vm from build state

2016-06-15 Thread David Medberry
So, there is a nova.conf setting:

instance_build_timeout (default to 0, never timeout)

Does anyone have a "good" value they use for this? In my mind it falls very
much into the specific-to-your-cloud-implementation bucket but just
wondered what folks were using for this setting (if any).

10 minutes would be way longer than I'd want to wait for a build to fail,
but that's probably what we will set this to.
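As an illustrative nova.conf fragment (the value is an assumption to tune per cloud, not a recommendation from this thread):

```ini
[DEFAULT]
# Fail instances stuck in BUILD after 10 minutes instead of never (0):
instance_build_timeout = 600
```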
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Issue when trying to snapshot an instance

2016-06-03 Thread David Medberry
On Fri, Jun 3, 2016 at 6:22 AM, Grant Morley 
wrote:

>
> Turns out it is an issue with the keystone tokens that are timing out when
> the snapshot is taking place.


Yes, this is exactly the problem we have seen--token timeout. I didn't see
the size of this image but we've had that issue on any larger than 40G with
a two hour token time. In some (our) environments, you will also see issues
when the external url is using https via a load balancer. Switching to the
internal os-endpoint-type made a sizable improvement/speedup.
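For reference, the token-lifetime knob involved, as an illustrative keystone.conf fragment (the value is an assumption; size it to comfortably exceed your longest snapshot upload):

```ini
[token]
# Token lifetime in seconds; the default two hours (7200) can be shorter
# than a large snapshot upload.
expiration = 14400
```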
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops Meetup event sizes

2016-06-01 Thread David Medberry
On Wed, Jun 1, 2016 at 2:12 AM, Matt Jarvis 
wrote:

>
> The general consensus in the discussions we've had, and from the Austin
> summit sessions and the Manchester feedback session, is that between
> 150-200 attendees should be the maximum size.
>

Two comments/points:

1) The last day of the Austin summit, ops had their own room. It was very
poorly organized and had about 40ish seats and another 10-20 people on the
floor. Even with just 40ish before it got SRO'd, it was difficult (again,
primarily because of the layout) to keep to one conversation and have
everyone interested participate. That said, I think the layouts we've had
for the Mid-Cycles I've been to are more amenable to discussion, with a kind
of podium-and-audience layout.

2) I think with that podium/audience layout 200 is doable; 300 is probably
max. We'll still need quite a few secondary rooms of at least 20-person size to
make progress in breakouts.

So, I'm 300ish max.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] About Live Migration and Bests practises.

2016-05-27 Thread David Medberry
In general with L-M on a libvirt/kvm based env, I would cc: daniel barrange
(barra...@redhat.com) and I've done that since I didn't see him
specifically.

We have taken to GENERALLY SPEAKING doing a single VM at a time. If we have
high confidence, we'll go 5 at a time (but never more than that) and we use
ansible with the "-f 5" flag to handle this for us.

In later versions (Liberty, Mitaka) I believe that OpenStack Nova inherently
handles this better.

Due to the issues you have seen (and that we also saw in I, J, and K
releases) we do NOT use "nova host-evacuate-live" but I haven't tried it
since our Liberty upgrade. With the appropriate --max-servers 5 flag, it
may work just fine now. I'll report back after I give that a whirl in our
test environments.

As far as the "STATE" goes, there are many states you can end up in when
one fails (or more technically when it doesn't complete.) We end up with
ghost VMs and misplaced VMs. Ghost VMs are when there are really TWO VMs
out there (on both the source and destination node) which can easily lead
to disk corruption if both ever go active. Misplaced VMs occur when reality
(ps aux | grep [q]emu shows it on a node) and the nova database disagree
where the node is located.

Cleaning up in either case usually involves doing a virsh destroy of the
ghost(s) and then a nova reboot --hard.
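A rough sketch of that ghost/misplaced distinction, with hypothetical inputs standing in for the nova DB view and the per-node "ps aux | grep [q]emu" view:

```python
# Classify a VM's post-migration state from two views of the world:
# db_host       -- where the nova database says the instance lives
# running_hosts -- nodes where a qemu process for the instance was found
def classify(db_host, running_hosts):
    if len(running_hosts) > 1:
        return "ghost"       # live on source AND destination: corruption risk
    if running_hosts and db_host not in running_hosts:
        return "misplaced"   # reality and the nova DB disagree
    return "ok"

assert classify("compute1", {"compute1", "compute2"}) == "ghost"
assert classify("compute1", {"compute2"}) == "misplaced"
assert classify("compute1", {"compute1"}) == "ok"
```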

* Note: We also use the term "invisible VMs" when nova list --all-tenants
doesn't show the VM, but that is usually just the paging/marker logic
stopping when it gets to 1000 non-deleted VMs.

ONE MORE THING:
If you are using ephemeral rbd volumes and you migrate from Kilo to Liberty
AND HAVE CONFIGDRIVE FORCED ON, you will likely need a patched version of
Nova or need to manually create rbd-based configdrives. Everything will
work fine until you stop and then start an instance. It will run fine with
the /var/lib/nova/instances/$UUID_disk.config until such time as it is
stopped; when it gets started again, it assumes
rbd://instances/$UUID_disk.config exists and will typically fail to start
in that case.
Ref: https://bugs.launchpad.net/nova/mitaka/+bug/1582684


On Fri, May 27, 2016 at 12:19 PM, David Bayle  wrote:

> Greetings,
>
> First, thanks a lot for all the information provided regarding OpenStack
> and thanks for your huge work on this topic (live migration).
>
> We are operating an OpenStack setup running Kilo + Ceph +
> (ceph_patch_for_snapshot).
>
> We are still playing with live migration on Kilo and we had some questions
> about it:
>
> - first, when we ask for a live migration from compute1 to compute2, does
> it take the exact same amount of RAM from compute1 and reserve it on
> compute2? Or is there any little overhead?
> - and for the second question, does the NOSTATE state in OpenStack
> reveal that the migration state of KVM has been lost (the KVM power state,
> for example), or does it reveal that there was an issue copying the RAM of
> the instance from one compute to the other?
>
> We faced some issues while trying host-live-evacuate, or even if we
> live migrate more than 4 to 5 instances at the same time: most of the time
> the live migration breaks and VMs get NOSTATE for power state in OpenStack,
> which is very disturbing because the only way to solve this (the only way
> that we know) is to restart the instance.
> (We could also edit the MySQL database as proposed on the community IRC
> channel.)
> Live migrating instances one by one gives no issue.
> More than this can result in live migration failure and NOSTATE in
> the OpenStack power status.
>
> Is there anything that we are doing wrong? We've seen host-live-evacuate
> work once or twice, but when a compute has around 15 VMs the behavior is
> totally different.
> (It doesn't seem we are maxing out any resources, though the network could
> be, as we are using a 1Gb/s management network.)
> Here is an example of the issue faced with a host-live-evacuate; we get
> this on the source compute node:
>
> 2016-05-26 16:49:54.080 3963 WARNING nova.virt.libvirt.driver [-]
> [instance: 98b793a1-61fb-45c6-95b7-6c2bca10d6de] Error monitoring
> migration: internal error: received hangup / error event on socket
> 2016-05-26 16:49:54.080 3963 TRACE nova.virt.libvirt.driver [instance:
> 98b793a1-61fb-45c6-95b7-6c2bca10d6de] Traceback (most recent call last): 
> 2016-05-26
> 16:49:54.080 3963 TRACE nova.virt.libvirt.driver [instance:
> 98b793a1-61fb-45c6-95b7-6c2bca10d6de] File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5689,
> in _live_migration 2016-05-26 16:49:54.080 3963 TRACE
> nova.virt.libvirt.driver [instance: 98b793a1-61fb-45c6-95b7-6c2bca10d6de]
> dom, finish_event) 2016-05-26 16:49:54.080 3963 TRACE
> nova.virt.libvirt.driver [instance: 98b793a1-61fb-45c6-95b7-6c2bca10d6de]
> File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line
> 5521, in _live_migration_monitor 2016-05-26 16:49:54.080 3963 TRACE
> 

Re: [Openstack-operators] Problems (simple ones) at scale... Invisible VMs.

2016-05-18 Thread David Medberry
I don't think --marker does at all what I want. The --limit -1 does do
multiple successive queries (with a marker) automagically, returning a
single list as CLI output to nova list. That really IS what I want (and
what some of our automation is written around).

Thanks!

On Wed, May 18, 2016 at 5:26 PM, James Downs <e...@egon.cc> wrote:

> On Wed, May 18, 2016 at 04:37:42PM -0600, David Medberry wrote:
>
> > It seems to bypass it... or I'm running into a LOWER limit
> (undocumented).
> > So help on --limit -1 says it is still limited by osapi_max_limit
>
> You're looking for:
>
> --marker  The last server UUID of the previous page;
> displays list of servers after "marker".
>
> This is much faster than increasing the size of results, at least in
> sufficiently
> large environments.
>
> Cheers,
> -j
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Problems (simple ones) at scale... Invisible VMs.

2016-05-18 Thread David Medberry
It seems to bypass it... or I'm running into a LOWER limit (undocumented).
So help on --limit -1 says it is still limited by osapi_max_limit.

I'll check my config for that value (likely the default) when I get home.

On Wed, May 18, 2016 at 4:18 PM, Kris G. Lindgren <klindg...@godaddy.com>
wrote:

> Nova has a config setting for the maximum number of results to be returned
> by a single call.  You can bump that up so that you can do a nova list
> --all-tenants and still see everything. However, if I am reading the below
> correctly, then I didn't realize that --limit -1 apparently bypasses
> that config option?
>
> ___
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> From: David Medberry <openst...@medberry.net>
> Date: Wednesday, May 18, 2016 at 4:13 PM
> To: "openstack-operators@lists.openstack.org" <
> openstack-operators@lists.openstack.org>
> Subject: [Openstack-operators] Problems (simple ones) at scale...
> Invisible VMs.
>
> So, we just ran into an "at-scale" issue that shouldn't have been an issue.
>
> Many of the OpenStack CLI tools accept a limit parameter (to limit how
> much data you get back from a single query). However, much less well
> documented is that there is an inherent limit that you will run into at
> 1000 VMs (not counting deleted ones). Many operators have already exceeded
> that limit and likely run into this. With nova cli and openstack client,
> you can simply pass in a limit of -1 to get around this (and though it will
> still make paged queries, you won't have "invisible" VMs, which is what
> I've begun to call the ones that don't make it into the first/default
> page).
>
> I can't really call this a bug for Nova (but it is definitely a bug for
> Cinder, which doesn't have a functional "get me all of them" command and
> is also limited to 1000 for a single call, but you can never get the rest,
> at least in our Liberty environment).
>
> box:~# nova list  |tail -n +4 |head -n -1 |wc
>1000   16326  416000
> box:~# nova list --limit -1  |tail -n +4 |head -n -1 |wc
>1060   17274  440960
>
> (so I recently went over the limit of 1000)
>
> YMMV.
>
> Good luck.
>
> -d
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Problems (simple ones) at scale... Invisible VMs.

2016-05-18 Thread David Medberry
So, we just ran into an "at-scale" issue that shouldn't have been an issue.

Many of the OpenStack CLI tools accept a limit parameter (to limit how much
data you get back from a single query). However, much less well documented
is that there is an inherent limit that you will run into at 1000 VMs
(not counting deleted ones). Many operators have already exceeded that
limit and likely run into this. With nova cli and openstack client, you can
simply pass in a limit of -1 to get around this (and though it will still
make paged queries, you won't have "invisible" VMs, which is what I've
begun to call the ones that don't make it into the first/default page).

I can't really call this a bug for Nova (but it is definitely a bug for
Cinder, which doesn't have a functional "get me all of them" command and is
also limited to 1000 for a single call, but you can never get the rest, at
least in our Liberty environment).

box:~# nova list  |tail -n +4 |head -n -1 |wc
   1000   16326  416000
box:~# nova list --limit -1  |tail -n +4 |head -n -1 |wc
   1060   17274  440960

(so I recently went over the limit of 1000)
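What --limit -1 does under the hood can be sketched as marker-based paging. The in-memory list below simulates a 1060-VM deployment, with osapi_max_limit assumed at its default of 1000:

```python
# Simulate nova's server-side page cap and the client's marker loop.
OSAPI_MAX_LIMIT = 1000
VMS = ["vm-%04d" % i for i in range(1060)]  # 1060 non-deleted VMs

def list_page(marker=None, limit=OSAPI_MAX_LIMIT):
    # Stand-in for one GET /servers call, capped at osapi_max_limit.
    start = VMS.index(marker) + 1 if marker else 0
    return VMS[start:start + limit]

def list_all():
    # What "nova list --limit -1" effectively does: keep re-querying with
    # the last UUID as the marker until a short page comes back.
    out, marker = [], None
    while True:
        page = list_page(marker)
        out.extend(page)
        if len(page) < OSAPI_MAX_LIMIT:
            return out
        marker = page[-1]

assert len(list_page()) == 1000   # the default page: 60 VMs "invisible"
assert len(list_all()) == 1060    # the full listing
```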

YMMV.

Good luck.

-d
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Nova] Significance of Error Vs Failed status

2016-05-11 Thread David Medberry
So I'm a big ol' -0- don't care on this. We've never used that list before
(but will now). Seems like it would be useful though to have it the same
for l-m and cold migration.

On Wed, May 11, 2016 at 9:27 AM, Andrew Laski <and...@lascii.com> wrote:

>
>
>
> On Wed, May 11, 2016, at 11:10 AM, David Medberry wrote:
>
> Kekane,
>
> Hi,
>
> This setting, how does it display in the "nova show $UUID" or in the
> "openstack server show $UUID"? Ie, I don't want a VM showing ERROR state if
> the VM itself is not in error. A failed migration doesn't leave the VM down
> (well, not always) but error generally implies it is down. If this is more
> of an internal status, then +1. I'll look at the code shortly but wanted to
> get a reply off first.
>
>
> To clarify, this is only about the state of a migration not an instance.
> If as an admin you list or show your migrations this would affect how that
> is displayed. Nothing about the instance, or how it's displayed, will
> change.
>
>
>
>
> ALSO: It would have been very very helpful to see "live-migration" in the
> subject line.
>
>
> -d
>
> On Wed, May 11, 2016 at 12:55 AM, Kekane, Abhishek <
> abhishek.kek...@nttdata.com> wrote:
>
> Hi Operators,
>
>
>
> Could you please provide your opinion on below mail. I need to discuss
> this in coming nova meeting (12 May, 2016).
>
>
>
> Thank you,
>
>
>
> Abhishek Kekane
>
>
>
> *From:* Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
> *Sent:* Monday, May 09, 2016 7:22 AM
> *To:* openstack-operators@lists.openstack.org
> *Subject:* [Openstack-operators] [Nova] Significance of Error Vs Failed
> status
>
>
>
> Hi All,
>
> In the Liberty release, we upstreamed [1] a security fix to clean up
> orphaned instance files from compute nodes for the resize operation. To
> fix this security issue, a new periodic task
> '_cleanup_incomplete_migrations' was introduced that runs on each compute
> node and queries for deleted instances with migration status “error”. If
> there are any such instances, then it simply cleans up instance files on
> that particular compute node.
>
> A similar issue is reported in LP bug [2] for the live migration operation,
> and we would like to use the same periodic task to fix this problem. But in
> the case of live migration, the migration status is set to “failed” instead
> of “error” if migration fails for any reason. This change was introduced in
> patch [3] when migration object support was added for live migration. Due
> to this inconsistency, the periodic task will not pick up instances to
> clean up orphaned instance files. To fix this problem, we simply want to
> set the migration status to “error” in patch [4], the same as done for the
> resize operation, to bring consistency to the code.
>
> We have discussed this issue in the nova meeting [5] and decided that, to
> the client, migration status 'error' vs. 'failed' should be considered the
> same thing: it's a failure. From the operators' point of view, is there any
> significance to setting migration status to 'error' or 'failed'? If yes,
> what is it, and what impact will it have if the migration status is changed
> from 'failed' to 'error'? Please provide your opinions on the same.
>
>
>
> [1] https://review.openstack.org/#/c/219299
>
> [2] : https://bugs.launchpad.net/nova/+bug/1470420
>
> [3] https://review.openstack.org/#/c/183331
>
> [4] https://review.openstack.org/#/c/215483
>
> [5]
> http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2016-05-05.log.html#t2016-05-05T14:40:51
>
> Thank You,
>
> Abhishek
>
>
>
> __
> Disclaimer: This email and any attachments are sent in strictest confidence
> for the sole use of the addressee and may contain legally privileged,
> confidential, and proprietary data. If you are not the intended recipient,
> please advise the sender by replying promptly to this email and then delete
> and destroy this email and any attachments without any further use, copying
> or forwarding.
>
>
>
>

Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-03 Thread David Medberry
The only reason I can think of is that they are doing nested VMs and don't
have the right nesting flag enabled on their base hypervisor.

On Tue, May 3, 2016 at 9:01 AM, Daniel P. Berrange 
wrote:

> Hello Operators,
>
> One of the things that constantly puzzles me when reading the user
> survey results wrt hypervisor is the high number of respondants
> claiming to be using QEMU (as distinct from KVM).
>
> As a reminder, in Nova saying virt_type=qemu causes Nova to use
> plain QEMU with pure CPU emulation, which is many, many times slower
> than native CPU performance, while virt_type=kvm causes Nova to
> use QEMU with KVM hardware CPU acceleration, which is close to native
> performance.
>
> IOW, virt_type=qemu is not something you'd ever really want to use
> unless you had no other options due to the terrible performance it
> would show. The only reasons to use QEMU are if you need non-native
> architecture support (ie running arm/ppc on x86_64 host), or if you
> can't do KVM due to hardware restrictions (ie ancient hardware, or
> running compute hosts inside virtual machines)
>
> Despite this, in the 2016 survey 10% claimed to be using QEMU in
> production & 3% in PoC and dev, in 2014 it was even higher at 15%
> in prod & 12% in PoC and 28% in dev.
>
> Personally my gut feeling says that QEMU usage ought to be in very
> low single figures, so I'm curious as to the apparent anomaly.
>
> I can think of a few reasons
>
>  1. Respondants are confused as to the difference between QEMU
> and KVM, so are saying QEMU, despite fact they are using KVM.
>
>  2. Respondants are confused as to the difference between QEMU
> and KVM, so have mistakenly configured their nova hosts to
> use QEMU instead of KVM and suffering poor performance without
> realizing their mistake.
>
>  3. There are more people than I expect who are running their
> cloud compute hosts inside virtual machines, and thus are
> unable to use KVM.
>
>  4. There are more people than I expect who are providing cloud
> hosting for non-native architectures. eg ability to run an
> arm7/ppc guest image on an x86_64 host and so genuinely must
> use QEMU
>
> If items 1 / 2 are the cause, then by implication the user survey
> is likely under-reporting the (already huge) scale of the KVM usage.
>
> I can see 3. being a likely explanation for high usage of QEMU in a
> dev or PoC scenario, but it feels unlikely for a production deployment.
>
> While 4 is technically possible, Nova doesn't really do a very good
> job at mixed guest arch hosting - I'm pretty sure there are broken
> pieces waiting to bite people who try it.
>
> Does anyone have any thoughts on this topic ?
>
> Indeed, is there anyone here who genuinely use virt_type=qemu in a
> production deployment of OpenStack who might have other reasons that
> I've missed ?
>
> Regards,
> Daniel
> --
> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org -o- http://virt-manager.org :|
> |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] ETHERPAD for today's Ops Session

2016-04-29 Thread David Medberry
Subject: ETHERPAD for today's Ops Session
https://etherpad.openstack.org/p/AUS-ops-informal-meetup

Room is kind of full, but we'll let anyone in; there's lots of interior
floor space.

Add your name at the bottom of the pad
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Anyone else use vendordata_driver in nova.conf?

2016-04-18 Thread David Medberry
Hi Ned, Jay,

We use it also and I have to agree, it's onerous to require users to add
that functionality back in. Where was this discussed?

On Mon, Apr 18, 2016 at 8:13 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) <
erh...@bloomberg.net> wrote:

> Requiring users to remember to pass specific userdata through to their
> instance at every launch in order to replace functionality that currently
> works invisibly to them would be a step backwards. It's an alternative,
> yes, but it's an alternative that adds burden to our users and is not one
> we would pursue.
>
> What is the rationale for desiring to remove this functionality?
>
> From: jaypi...@gmail.com
> Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in
> nova.conf?
>
> On 04/18/2016 09:24 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> > I noticed while reading through Mitaka release notes that
> > vendordata_driver has been deprecated in Mitaka
> > (https://review.openstack.org/#/c/288107/) and is slated for removal at
> > some point. This came as somewhat of a surprise to me - I searched
> > openstack-dev for vendordata-related subject lines going back to January
> > and found no discussion on the matter (IRC logs, while available on
> > eavesdrop, are not trivially searchable without a little scripting to
> > fetch them first, so I didn't check there yet).
> >
> > We at Bloomberg make heavy use of this particular feature to inject
> > dynamically generated JSON into the metadata service of instances; the
> > content of the JSON differs depending on the instance making the request
> > to the metadata service. The functionality that adds the contents of a
> > static JSON file, while remaining around, is not suitable for our use
> case.
> >
> > Please let me know if you use vendordata_driver so that I/we can present
> > an organized case for why this option or equivalent functionality needs
> > to remain around. The alternative is that we end up patching the
> > vendordata driver directly in Nova when we move to Mitaka, which I'd
> > like to avoid; as a matter of principle I would rather see more
> > classloader overrides, not fewer.
>
> Wouldn't an alternative be to use something like Chef, Puppet, Ansible,
> Saltstack, etc and their associated config variable storage services
> like Hiera or something similar to publish custom metadata? That way,
> all you need to pass to your instance (via userdata) is a URI or
> connection string and some auth details for your config storage service
> and the instance can grab whatever you need.
>
> Thoughts?
> -jay
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Anyone else use vendordata_driver in nova.conf?

2016-04-18 Thread David Medberry
Ah, read the bug; we are using the default, not a custom driver. So
NEVERMIND.

On Mon, Apr 18, 2016 at 8:16 AM, David Medberry <openst...@medberry.net>
wrote:

> Hi Ned, Jay,
>
> We use it also and I have to agree, it's onerous to require users to add
> that functionality back in. Where was this discussed?
>
> On Mon, Apr 18, 2016 at 8:13 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) <
> erh...@bloomberg.net> wrote:
>
>> Requiring users to remember to pass specific userdata through to their
>> instance at every launch in order to replace functionality that currently
>> works invisibly to them would be a step backwards. It's an alternative,
>> yes, but it's an alternative that adds burden to our users and is not one
>> we would pursue.
>>
>> What is the rationale for desiring to remove this functionality?
>>
>> From: jaypi...@gmail.com
>> Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in
>> nova.conf?
>>
>> On 04/18/2016 09:24 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
>> > I noticed while reading through Mitaka release notes that
>> > vendordata_driver has been deprecated in Mitaka
>> > (https://review.openstack.org/#/c/288107/) and is slated for removal at
>> > some point. This came as somewhat of a surprise to me - I searched
>> > openstack-dev for vendordata-related subject lines going back to January
>> > and found no discussion on the matter (IRC logs, while available on
>> > eavesdrop, are not trivially searchable without a little scripting to
>> > fetch them first, so I didn't check there yet).
>> >
>> > We at Bloomberg make heavy use of this particular feature to inject
>> > dynamically generated JSON into the metadata service of instances; the
>> > content of the JSON differs depending on the instance making the request
>> > to the metadata service. The functionality that adds the contents of a
>> > static JSON file, while remaining around, is not suitable for our use
>> case.
>> >
>> > Please let me know if you use vendordata_driver so that I/we can present
>> > an organized case for why this option or equivalent functionality needs
>> > to remain around. The alternative is that we end up patching the
>> > vendordata driver directly in Nova when we move to Mitaka, which I'd
>> > like to avoid; as a matter of principle I would rather see more
>> > classloader overrides, not fewer.
>>
>> Wouldn't an alternative be to use something like Chef, Puppet, Ansible,
>> Saltstack, etc and their associated config variable storage services
>> like Hiera or something similar to publish custom metadata? That way,
>> all you need to pass to your instance (via userdata) is a URI or
>> connection string and some auth details for your config storage service
>> and the instance can grab whatever you need.
>>
>> Thoughts?
>> -jay
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] BoF OVN call for ideas/discussion points

2016-04-13 Thread David Medberry
Hi,

There is a Birds of a Feather session on OVN at Austin.
If you've got experience or questions or issues with OVN, you can register
them here:

https://etherpad.openstack.org/p/AUS-BoF-OVN

and participate in the session:
Wednesday, April 27, 1:50pm-2:30pm

https://www.openstack.org/summit/austin-2016/summit-schedule/events/6871?goback=1

Thanks!
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Intel 10 GbE / bonding issue with hash policy layer3+4

2016-03-23 Thread David Medberry
I'd definitely post this somewhere that Ubuntu kernel folks would read it.
File a bug against that kernel. We're still on 3.13.0- stream so haven't
seen this ourselves.

On Wed, Mar 23, 2016 at 3:53 AM, Sascha Vogt  wrote:

> Hi all,
>
> I thought it might be of interest / get feedback from the operators
> community about an oddity we experienced with Intel 10 GbE NICs and LACP
> bonding.
>
> We have Ubuntu 14.04.4 as OS and Intel 10 GbE NICs with the ixgbe Kernel
> module. We use VLANS for ceph-client, ceph-data, openstack-data,
> openstack-client networks all on a single LACP bonding of two 10 GbE ports.
>
> As bonding hash policy we chose layer3+4 so we can use the full 20 Gb
> even if only two servers communicate with each other. Typically we check
> that by using iperf to a single server with -P 4 and see if we exceed
> the 10 Gb limit (just a few times to check).
>
> Due to Ubuntu's default of installing the latest kernel, our new host had
> kernel 4.2.0 instead of the kernel 3.16 the other machines had, and we
> noticed that iperf only used 10 Gb.
>
> > # cat /proc/net/bonding/bond0
> > Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
> >
> > Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> > Transmit Hash Policy: layer3+4 (1)
>
> This was shown on both - Kernel 3.16 and 4.2.0
>
> After downgrading to Kernel 3.16 we got the iperf results we expected.
>
> Does anyone have a similar setup? Anyone noticed the same things? To us
> this looks like a bug in the Kernel (ixgbe module?), or are we
> misunderstanding the hash policy layer3+4?
>
> Any feedback is welcome :) I have not yet posted this to the Kernel ML
> or Ubuntus ML yet, so if no one here is having a similar setup I'll move
> over there. I just thought OpenStack ops might be the place were it is
> most likely that someone has a similar setup :)
>
> Greetings
> -Sascha-
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
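For reference, a simplified form of the layer3+4 transmit hash (as described in the kernel bonding documentation) shows why several flows between a single pair of hosts can spread across both slaves of a bond; the IPs and ports below are made up for illustration:

```python
def xmit_hash_l34(src_ip, dst_ip, src_port, dst_port, n_slaves=2):
    # Simplified layer3+4 policy from the bonding docs:
    # ((sport XOR dport) XOR ((sip XOR dip) AND 0xffff)) mod n_slaves
    ip_term = (src_ip ^ dst_ip) & 0xFFFF
    return ((src_port ^ dst_port) ^ ip_term) % n_slaves

# One host pair (10.0.0.1 -> 10.0.0.2), four iperf streams on different
# source ports to one destination port: the flows land on both slaves
# of a two-port bond, which is what makes >10 Gb aggregate possible.
slaves = {xmit_hash_l34(0x0A000001, 0x0A000002, 40000 + i, 5001)
          for i in range(4)}
```

If a kernel change altered this distribution, all flows between one host pair could collapse onto a single slave, which would match the 10 Gb ceiling observed on 4.2.0.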
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] RFEs: communication channel and process

2016-03-21 Thread David Medberry
On Mon, Mar 21, 2016 at 5:03 PM, Matt Riedemann 
wrote:

>
> I'd also like to get a Nova/Ops cross-project liaison in Newton. In my
> mind, that person would attend the ops meeting(s), monitor the list, be in
> the #openstack-operators channel and be a point person for bringing issues
> from the operators community back to nova - via nova meetings, -dev ML,
> whatever. That person could help raise the flag on important items though.


I think we have enough interested and qualified candidates in Ops who are
nova-centric that we could easily support this. (That said, travel to
Bristol and Manchester was difficult to arrange. Commitment up front
should indicate support for necessary travel, though.)
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] nova-conductor scale out

2016-03-15 Thread David Medberry
How many compute nodes do you have (that is triggering your controller node
limitations)?

We run nova-conductor on multiple control nodes. Each control node runs "N"
conductors where N is basically the HyperThreaded CPU count.
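A minimal nova.conf sketch of that layout on each control node (the worker count here is purely illustrative, not a recommendation):

```ini
[conductor]
# Roughly one conductor worker per hyperthreaded CPU on the control node;
# repeat this on every control node to scale conductor horizontally.
workers = 16
```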

On Tue, Mar 15, 2016 at 8:44 AM, Gustavo Randich 
wrote:

> Hi,
>
> Simple question: can I deploy nova-conductor across several servers?
> (Icehouse)
>
> Because we are reaching a limit in our controller node
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-community] Recognising Ops contributions

2016-03-02 Thread David Medberry
On Wed, Mar 2, 2016 at 3:37 PM, Edgar Magana 
wrote:

> We want to make this a reality by gathering a list of criteria that we as
> a community feel that shows someone  has demonstrated technical
> contributions, using their skills as Ops. Our current ideas are as follows:
>
>- Moderating a session at an Ops meetup
>- Filing a detailed bug, tagged 'ops', that gets fixed
>- Filling out the user survey (including a deployment)
>- Making contributions to ops-tags and/or OSOps repositories
>- Being an active moderator on Ask OpenStack
>- Actively participating in a user commitee working group
>- Contributing a post to Superuser magazine
>- Giving a presentation or track chairing for the Operations track at
>the conference
>- Hosting OpenStack Meetups
>
> Here's what we would like to happen:
>
>1. We discuss and converge on these initial criteria and make a list
>of eligible members
>2. If we can pull this off in time, we've arranged to get some kind of
>mention of the status on your conference badge in Austin
>3. Assess how it goes for Austin and the six month period that
>follows, then iterate to success, including offering ATC-similar
>registration codes at the Barcelona summit
>
> We are really looking forward to receiving your feedback.
>
> Kind Regards,
> The OpenStack User Committee.
>

Off the top of my head, the one thing I see missing from this is more of
influencing upstream (i.e., participating in a service/project mid-cycle).
However, the people I know personally who have done this also meet many,
if not all, of the other criteria. And of course, I'm unsure of how to
state that properly:
 * Participated in a project sprint (or something like that.)
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] OpenStack Contributor Awards

2016-03-01 Thread David Medberry
On Tue, Mar 1, 2016 at 8:30 AM, Tom Fifield  wrote:

> Excellent, excellent.
>
> What's the best place to buy Raspberry Pis these days?
>

Raspberry Pi 3 is available to buy today from our partners element14 and
RS Components, and other resellers.
ref:
https://www.raspberrypi.org/blog/raspberry-pi-3-on-sale/

Pi 3 intro'd yesterday or day before.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] I have an installation question and possible bug

2016-01-25 Thread David Medberry
Also, is it possible your token timed out during the upload (thereby
truncating it)? Validate the byte size of the final uploaded (large) glance
image.

On Mon, Jan 25, 2016 at 10:59 AM, Abel Lopez  wrote:

> First question, what is your glance store?
> Also, it sounds like you created the large image from a running instance,
> is that correct? If so, was the instance suspended when you initiated the
> image-create?
>
>
> On Monday, January 25, 2016, Christopher Hull 
> wrote:
>
>> Hello all;
>>
>> I'm an experienced developer and I work at Cisco.  Chances are I've
>> covered the basics here, but just in case, check me.
>> I've followed the Kilo install instructions to the letter so far as I can
>> tell.   I have not installed Swift, but I think everything else, and my
>> installation almost works.   I'm having a little trouble with Glance.
>>
>> It seems that when I attempt to create a large image (that may or may not
>> be the issue), the checksum that Glance records in its DB is incorrect.
>> Cirros image runs just fine.  CentOS cloud works.  But when I offload and
>> create an image from a big CentOS install (say 100gb), nova says the
>> checksum is wrong when I try to boot it.
>>
>> Install was on a fresh CentOS7 on new system I built, i7 32GB 7TB.
>> Plenty of speed and space.   And this system is dedicated to Openstack.
>>
>>
>> http://docs.openstack.org/kilo/install-guide/install/yum/content/index.html
>>
>>
>>
>> Here's a little test I ran.
>>
>> ===
>> Attempt to deploy image
>>
>>
>> nova boot --flavor m1.medium --image v4c-centos-volume1-img1 --nic
>> net-id=61a08e7c-8d4b-42c3-b963-eddcf98113a2 \
>>--security-group default --key-name demo-key
>> v4c-centos-volume1-instance1
>>
>>
>> 2016-01-02 16:37:27.764 4490 ERROR nova.compute.manager
>> [req-87feb5bf-0e29-432b-8f6c-aeac1fba4753 196b1dc42db94eb7bf210c2281b68e67
>> 3690e3975f6546d793b530dffa8f1a8d - - -] [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677] Instance failed to spawn
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677] Traceback (most recent call last):
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677]   File
>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2461, in
>> _build_resources
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677] yield resources
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677]   File
>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2333, in
>> _build_and_run_instance
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677]
>> block_device_info=block_device_info)
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2378,
>> in spawn
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677] admin_pass=admin_password)
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2776,
>> in _create_image
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677] instance, size,
>> fallback_from_host)
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5894,
>> in _try_fetch_image_cache
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677] size=size)
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
>> 231, in cache
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677] *args, **kwargs)
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
>> 480, in create_image
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677] prepare_template(target=base,
>> max_size=size, *args, **kwargs)
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 9e7d930d-4ee7-4556-905c-d4d54406c677]   File
>> "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445,
>> in inner
>> 2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
>> 

Re: [Openstack-operators] I have an installation question and possible bug

2016-01-25 Thread David Medberry
On Mon, Jan 25, 2016 at 4:15 PM, Christopher Hull 
wrote:

> I have not installed Swift.   Is that an issue?
>

No, not an issue.


Re: [Openstack-operators] Operational Director?

2015-11-24 Thread David Medberry
Tim, thanks for pointing out the User Committee. You are the second (or
third) one to mention that to me in this context, and it is another great way
to influence the direction OpenStack takes (and goes hand in hand with this
operator's list, the midcycle, and operator's summit meetings.)

-d


[Openstack-operators] Operational Director?

2015-11-23 Thread David Medberry
Hi all,

Isn't it time to get an operational director--or rather more operators as
directors? I've nominated a few (unnamed here) and you may be interested in
nominating others. Look for the email with the subject line:
Notice of meeting to elect Individual Directors


Re: [Openstack-operators] OPs Midcycle location discussion.

2015-11-17 Thread David Medberry
Following up to what Matt said, even for the service (nova, cinder, etc)
mid-cycles I've been in, typically only 1 or 2 folks participate remotely
and they make sure to have someone pay attention/alert them when their
topics are coming up. I don't think remote/virtual scales beyond 1-2 remote
participants in general (when there is critical mass in person.)

-d


Re: [Openstack-operators] OPs Midcycle location discussion.

2015-11-17 Thread David Medberry
On Tue, Nov 17, 2015 at 11:45 AM, Erik McCormick  wrote:

> We're deciding not to innovate a solution to allow people to
> participate in a group that is attempting to provide innovative ideas.
> How ironic. I actually don't think it would require much innovation.
> The Ceph guys run their entire design summit remotely, and I'm certain
> that it's way beyond one or two people.
>

Yes, going "whole hog" into a virtual session actually works reasonably
well. UDS have been like this for a few years (but there is dramatically
less participation than when it was f2f.) It works LESS well (IME) when
there are a large group local and a minority remote but works reasonably
well when that remote minority is 1-2 folks.


Re: [Openstack-operators] [eu] European Operators Meetup February 15th/16th 2016

2015-11-12 Thread David Medberry
Thanks to Canonical, Codethink, Midokura, DataCentred and the EU operators
that got this going! Nicely done.

On Wed, Nov 11, 2015 at 2:11 AM, Matt Jarvis 
wrote:

> Canonical, Codethink, Midokura and DataCentred


Re: [Openstack-operators] OpenStack Tuning Guide

2015-11-04 Thread David Medberry
Great Kevin. Thanks for honchoing and sharing.

On Wed, Nov 4, 2015 at 3:56 PM, Kevin Bringard (kevinbri) <
kevin...@cisco.com> wrote:

> Hey all!
>
> Something that jumped out at me in Tokyo was how much it seemed that
> "basic" tuning stuff wasn't common knowledge. This was especially prevalent
> in the couple of rabbit talks I went to. So, in order to pool our
> resources, I started an Etherpad titled the "OpenStack Tuning Guide" (
> https://etherpad.openstack.org/p/OpenStack_Tuning_Guide). Eventually I
> expect this should go into the documentation project, and much of it may
> already exist in the operators manual (or elsewhere), but I thought that
> getting us all together to drop in our hints, tweaks, and best practices
> for tuning our systems to run OpenStack well, in real production, would be
> time well spent.
>
> It's a work in progress at the moment, and we've only just started, but
> please feel free to check it out. Feedback and community involvement is
> super welcome, so please don't hesitate to modify it as you see fit.
>
> Finally, I hate diverging resources, so if something like this already
> exists please speak up so we can focus our efforts on making sure that's up
> to date and well publicized.
>
> Thanks everyone!
>
> -- Kevin
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


Re: [Openstack-operators] Informal Ops Meetup?

2015-10-29 Thread David Medberry
They all just moved up to the Prince Takanawa library. The "Prince" room
was getting too loud.

On Fri, Oct 30, 2015 at 10:38 AM, Balaji Narayanan (பாலாஜி நாராயணன்) <
li...@balajin.net> wrote:

> Hello Erik /Edgar,
>
> We are in the Prince room in building #44, closer to the projector screen.
>
> -balaji
>
> On 30 October 2015 at 10:08, Erik McCormick 
> wrote:
> > Which table are you all at?
> >
> > On Oct 29, 2015 11:53 PM, "Belmiro Moreira"
> >  wrote:
> >>
> >> +1
> >>
> >> Belmiro
> >>
> >>
> >>
> >>
> >> On Thursday, 29 October 2015, Kris G. Lindgren 
> >> wrote:
> >>>
> >>> We seem to have enough interest… so meeting time will be at 10am in the
> >>> Prince room (if we get an actual room I will send an update).
> >>>
> >>> Does anyone have any ideas about what they want to talk about?  I am
> >>> pretty much open to anything.  I started:
> >>> https://etherpad.openstack.org/p/TYO-informal-ops-meetup  for
> tracking of
> >>> some ideas/time/meeting place info.
> >>>
> >>> ___
> >>> Kris Lindgren
> >>> Senior Linux Systems Engineer
> >>> GoDaddy
> >>>
> >>> From: Sam Morrison 
> >>> Date: Thursday, October 29, 2015 at 6:14 PM
> >>> To: "openstack-operators@lists.openstack.org"
> >>> 
> >>> Subject: Re: [Openstack-operators] Informal Ops Meetup?
> >>>
> >>> I’ll be there, talked to Tom too and he said there may be a room we can
> >>> use else there is plenty of space around the dev lounge to use.
> >>>
> >>> See you tomorrow.
> >>>
> >>> Sam
> >>>
> >>>
> >>> On 29 Oct 2015, at 6:02 PM, Xav Paice  wrote:
> >>>
> >>> Suits me :)
> >>>
> >>> On 29 October 2015 at 16:39, Kris G. Lindgren 
> >>> wrote:
> 
>  Hello all,
> 
>  I am not sure if you guys have looked at the schedule for Friday… but
>  its all working groups.  I was talking with a few other operators and
> the
>  idea came up around doing an informal ops meetup tomorrow.  So I
> wanted to
>  float this idea by the mailing list and see if anyone was interested
> in
>  trying to do an informal ops meet up tomorrow.
> 
>  ___
>  Kris Lindgren
>  Senior Linux Systems Engineer
>  GoDaddy
> 
>  ___
>  OpenStack-operators mailing list
>  OpenStack-operators@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> >>>
> >>> ___
> >>> OpenStack-operators mailing list
> >>> OpenStack-operators@lists.openstack.org
> >>>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>>
> >>>
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> OpenStack-operators@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
>
>
> --
> http://balajin.net/blog
> http://flic.kr/balajijegan
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


Re: [Openstack-operators] Rate limit an max_count

2015-09-09 Thread David Medberry
Your users should also have reasonable quotas set. If they can boot
thousands of instances, you may have a quota issue to address. (No problem
with the blueprint or need to set an overall limit though--just that you
should be able to address this without waiting for that to land.)
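
As a rough sketch, the per-tenant defaults live in nova.conf (option names as in the Juno/Kilo era; the values below are illustrative, not recommendations), and can be overridden per tenant with `nova quota-update --instances <n> <tenant-id>`:

```ini
[DEFAULT]
# Illustrative default per-tenant quotas; any one of these caps the
# damage a single max_count-heavy boot request can do.
quota_instances = 10
quota_cores = 20
quota_ram = 51200
```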

On Mon, Sep 7, 2015 at 8:48 AM, Jean-Daniel Bonnetot <
jean-daniel.bonne...@ovh.net> wrote:

> Hi,
>
> I try to limit the max instances a user can create per minute.
>
> With rate limit, we can limit the number of call/min on POST */servers for
> example.
> But the user can still use the max_count parameter in his API call to
> boot dozens of thousands of instances and make the scheduler go crazy.
>
> I’m pretty sure that there is a possibility to limit the max_count and so
> define a max instances/min.
> Do you know of a way to do it?
>
> --
> Jean-Daniel
> @pilgrimstack
>
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


Re: [Openstack-operators] Juno and Kilo Interoperability

2015-08-26 Thread David Medberry
Hi Eren,

I'm pretty sure NECTaR is doing diff versions at different sites in a
widely distributed way.

https://www.openstack.org/user-stories/nectar/

I've cc'd Sam as well. He's your man.

On Wed, Aug 26, 2015 at 5:24 AM, Eren Türkay er...@skyatlas.com wrote:

 Hello operators,

 I am wondering if anyone is using different versions of Openstack in
 different
 sites.

 We have our first site which is Juno, and we are now having another site
 where
 we are planning to deploy Kilo. Does anyone have experience with different
 versions of installation? Particularly, our Horizon and other clients will
 be
 Juno, but they will talk to secondary site which is Kilo. Inferring from
 the
 release notes, Kilo API looks backward compatible with Juno, so I'm a
 little
 optimistic about it but still I'm not sure.

 Any help is appreciated,
 Eren

 --
 Eren Türkay, System Administrator
 https://skyatlas.com/ | +90 850 885 0357

 Yildiz Teknik Universitesi Davutpasa Kampusu
 Teknopark Bolgesi, D2 Blok No:107
 Esenler, Istanbul Pk.34220


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] Juno and Kilo Interoperability

2015-08-26 Thread David Medberry
and we generally run a much newer Horizon and Keystone than the rest of the
services. We had Horizon and Keystone on POST KILO (master after Kilo
released) while still running Juno on all other services. Worked fine.

On Wed, Aug 26, 2015 at 11:09 AM, David Medberry openst...@medberry.net
wrote:

 Hi Eren,

 I'm pretty sure NECTaR is doing diff versions at different sites in a
 widely distributed way.

 https://www.openstack.org/user-stories/nectar/

 I've cc'd Sam as well. He's your man.

 On Wed, Aug 26, 2015 at 5:24 AM, Eren Türkay er...@skyatlas.com wrote:

 Hello operators,

 I am wondering if anyone is using different versions of Openstack in
 different
 sites.

 We have our first site which is Juno, and we are now having another site
 where
 we are planning to deploy Kilo. Does anyone have experience with different
 versions of installation? Particularly, our Horizon and other clients
 will be
 Juno, but they will talk to secondary site which is Kilo. Inferring from
 the
 release notes, Kilo API looks backward compatible with Juno, so I'm a
 little
 optimistic about it but still I'm not sure.

 Any help is appreciated,
 Eren

 --
 Eren Türkay, System Administrator
 https://skyatlas.com/ | +90 850 885 0357

 Yildiz Teknik Universitesi Davutpasa Kampusu
 Teknopark Bolgesi, D2 Blok No:107
 Esenler, Istanbul Pk.34220


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





Re: [Openstack-operators] [magnum] Trying to use Magnum with Kilo

2015-08-18 Thread David Medberry
Distributions are each responsible for what they package or don't
package (though if they package the service they SHOULD, IMHO, package a
functioning client). There is an overall project (or two) to get to a
single common client.



On Tue, Aug 18, 2015 at 2:09 PM, Mike Smith mism...@overstock.com wrote:

 Thanks.  I’ll check it out.

 Can anyone out there tell me how projects like python-magnumclient and the
 openstack-magnum software itself get picked up by the RDO folks?  I’d like
 to see those be picked up in their distro but I’m not sure where that work
 takes place.  Do project developers typically package up their projects and
 make them available to the RDO maintainers or do RedHat folks pick up
 sources from the projects, do the packaging, and make those packages
 available?

 We can start building our own packages for this of course, but as
 operators prefer not to because of all the dependency overhead.   Unless
 it’s something we can do to help get the packages into the RDO repos (i.e.
 become a package maintainer as a way of contributing)

 Mike Smith
 Principal Engineer / Cloud Team Lead
 Overstock.com



 On Aug 18, 2015, at 2:47 PM, David Medberry openst...@medberry.net
 wrote:

 http://git.openstack.org/cgit/openstack/python-magnumclient

 On Tue, Aug 18, 2015 at 12:21 PM, Mike Smith mism...@overstock.com
 wrote:

 I’m trying to use Magnum on our Openstack Kilo cloud which runs CentOS 7
 and RDO.   Since the Magnum RPMs are present in RDO, I’m using RPMs built
 by one of the Magnum developers (available at
 https://copr-be.cloud.fedoraproject.org/results/sdake/openstack-magnum/)

 Once I got rid of a conflicting UID that it tries to use for the magnum
 user, I’m able to start up the services.   However, following along with
 the Magnum documentation that exists (
 http://docs.openstack.org/developer/magnum/), the next step is to use
 the “magnum” command to define things like bay models and bays.

 However, the “magnum” command doesn’t seem to exist.  I’m not sure if
 it’s supposed to exist as a symlink to something else?

 Is anyone else out there using Magnum with RDO Kilo?  I’d love to chat
 with someone else that has worked through these issues.

 Thanks,
 Mike

 Mike Smith
 Principal Engineer / Cloud Team Lead
 Overstock.com http://overstock.com/




 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




 --

 CONFIDENTIALITY NOTICE: This message is intended only for the use and
 review of the individual or entity to which it is addressed and may contain
 information that is privileged and confidential. If the reader of this
 message is not the intended recipient, or the employee or agent responsible
 for delivering the message solely to the intended recipient, you are hereby
 notified that any dissemination, distribution or copying of this
 communication is strictly prohibited. If you have received this
 communication in error, please notify sender immediately by telephone or
 return email. Thank you.



Re: [Openstack-operators] [magnum] Trying to use Magnum with Kilo

2015-08-18 Thread David Medberry
http://git.openstack.org/cgit/openstack/python-magnumclient

On Tue, Aug 18, 2015 at 12:21 PM, Mike Smith mism...@overstock.com wrote:

 I’m trying to use Magnum on our Openstack Kilo cloud which runs CentOS 7
 and RDO.   Since the Magnum RPMs are present in RDO, I’m using RPMs built
 by one of the Magnum developers (available at
 https://copr-be.cloud.fedoraproject.org/results/sdake/openstack-magnum/)

 Once I got rid of a conflicting UID that it tries to use for the magnum
 user, I’m able to start up the services.   However, following along with
 the Magnum documentation that exists (
 http://docs.openstack.org/developer/magnum/), the next step is to use the
 “magnum” command to define things like bay models and bays.

 However, the “magnum” command doesn’t seem to exist.  I’m not sure if it’s
 supposed to exist as a symlink to something else?

 Is anyone else out there using Magnum with RDO Kilo?  I’d love to chat
 with someone else that has worked through these issues.

 Thanks,
 Mike

 Mike Smith
 Principal Engineer / Cloud Team Lead
 Overstock.com




 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] [all] OpenStack voting by the numbers

2015-07-29 Thread David Medberry
Nice writeup maish! very nice.

On Wed, Jul 29, 2015 at 10:27 AM, Maish Saidel-Keesing mais...@maishsk.com
wrote:

 Some of my thoughts on the Voting process.


 http://technodrone.blogspot.com/2015/07/openstack-summit-voting-by-numbers.html

 Guess which category has the most number of submissions??
 ;)

 --
 Best Regards,
 Maish Saidel-Keesing

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



Re: [Openstack-operators] adding postinstall scripts in cloud init

2015-07-23 Thread David Medberry
Yep, vendor data is also covered in the cloud-init docs:

http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#vendor-data
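
As a sketch of the nova side (option names as in Juno/Kilo-era nova; the JSON path is illustrative), pointing the metadata service at a static vendor-data file looks roughly like:

```ini
[DEFAULT]
# Serve the contents of this JSON file to every instance as vendor data,
# which cloud-init picks up via the vendor-data datasource.
vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData
vendordata_jsonfile_path = /etc/nova/vendor_data.json
```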

On Thu, Jul 23, 2015 at 10:48 AM, Fox, Kevin M kevin@pnnl.gov wrote:

  Vendor data can do that. See the json metadata plugin to the nova
 metadata server and the vendor data section of cloud init.

 Thanks,
 Kevin

 --
 *From:* Kris G. Lindgren
 *Sent:* Thursday, July 23, 2015 8:38:33 AM
 *To:* Openstack Guru; openstack-operators@lists.openstack.org
 *Subject:* Re: [Openstack-operators] adding postinstall scripts in cloud
 init

   Do you mean outside of the standard supplying user_data when the VM
 boots?  Or do you mean that you (as the cloud provider) want every vm to
 always do x,y,z and to leave user_data open to your end users?
  

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.


   From: Openstack Guru openstackg...@gmail.com
 Date: Thursday, July 23, 2015 at 8:57 AM
 To: openstack-operators@lists.openstack.org 
 openstack-operators@lists.openstack.org
 Subject: [Openstack-operators] adding postinstall scripts in cloud init

   Hello every one,

  I would like to add some commands in cloud-init to execute while
  building an instance. Please advise on the best way to accomplish this.

  Thanks in advance.



 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] adding postinstall scripts in cloud init

2015-07-23 Thread David Medberry
Kris's answer is correct--use user_data unless you have a reason not to.
You could munge the user_data if you wanted to do something additional.
You could also munge the cloud-image you are using to do something custom
and/or fork cloud-init.

ref:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/End_User_Guide/user-data.html
http://cloudinit.readthedocs.org/en/latest/topics/datasources.html
http://cloudinit.readthedocs.org/en/latest/topics/examples.html

Be the Guru, use the source!
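
A minimal user_data sketch (file name and package list are illustrative; cloud-init runs the cloud-config once, at first boot):

```shell
# Write a small cloud-config; `packages` installs at first boot and
# `runcmd` entries execute once after that.
cat > /tmp/post-install.cfg <<'EOF'
#cloud-config
packages:
  - htop
runcmd:
  - echo "post-install done" >> /var/log/post-install.log
EOF
head -1 /tmp/post-install.cfg   # prints "#cloud-config"

# Then pass it at boot time (against a real cloud):
#   nova boot --flavor m1.small --image <image> --user-data /tmp/post-install.cfg myvm
```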

On Thu, Jul 23, 2015 at 9:57 AM, Openstack Guru openstackg...@gmail.com
wrote:

 Hello every one,

 I would like to add some commands in cloud-init to execute while building
 an instance. Please advise on the best way to accomplish this.

 Thanks in advance.



 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] Nova cells v2 and operational impacts

2015-07-21 Thread David Medberry
Also, if there is feedback, getting it in today or tomorrow would be most
effective.

Michael, this plan works for me/us. TWC. -d
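
For context, the extra connection string Michael describes would presumably look something like this in nova.conf (group name and URL are a sketch based on the proposal, not final upgrade documentation; credentials illustrative):

```ini
[api_database]
# New, additional MySQL database required for the cells v2 data.
connection = mysql://nova:NOVA_DBPASS@controller/nova_api
```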

On Tue, Jul 21, 2015 at 9:45 AM, Michael Still mi...@stillhq.com wrote:

 Heya,

 the nova developer mid-cycle meetup is happening this week. We've been
 talking through the operational impacts of cells v2, and thought it
 would be a good idea to mention them here and get your thoughts.

 First off, what is cells v2? The plan is that _every_ nova deployment
 will be running a new version of cells. The default will be a
 deployment of a single cell, which will have the impact that existing
 single cell deployments will end up having another mysql database that
 is required by cells. However, you wont be required to bring up any
 additional nova services at this point [1], as cells v2 lives inside
 the nova-api service.

 The advantage of this approach is that cells stops being a weird
 special case run by big deployments. We're forced to implement
 everything in cells, instead of the bits that a couple of bigger
 players cared enough about, and we're also forced to test it better.
 It also means that smaller deployments can grow into big deployments
 much more easily. Finally, it also simplifies the nova code, which
 will reduce our tech debt.

 This is a large block of work, so cells v2 wont be fully complete in
 Liberty. Cells v1 deployments will effective run both cells v2 and
 cells v1 for this release, with the cells v2 code thinking that there
 is a single very large cell. We'll continue the transition for cells
 v1 deployments to pure cells v2 in the M release.

 So what's the actual question? We're introducing an additional mysql
 database that every nova deployment will need to possess in Liberty.
 We talked through having this data be in the existing database, but
 that wasn't a plan that made us comfortable for various reasons. This
 means that operators would need to do two db_syncs instead of one
 during upgrades. We worry that this will be annoying to single cell
 deployments.

 We therefore propose the following:

  - all operators when they hit Liberty will need to add a new
 connection string to their nova.conf which configures this new mysql
 database, there will be a release note to remind you to do this.
  - we will add a flag which indicates if a db_sync should imply a sync
 of the cells database as well. The default for this flag will be true.

 This means that you can still do these syncs separately if you want,
 but we're not forcing you to remember to do it if you just want it to
 always happen at the same time.

 Does this sound acceptable? Or are we over thinking this? We'd
 appreciate your thoughts.

 Cheers,
 Michael

 1: there is some talk about having a separate pool of conductors to
 handle the cells database, but this wont be implemented in Liberty.

 --
 Rackspace Australia

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



Re: [Openstack-operators] Paid OpenStack / Ceph consulting

2015-07-19 Thread David Medberry
Nathan,

From the description, RDO Juno and Ceph, it sounds like you should just
hire Red Hat consultants as that is their stack.

Alternatively, you could just ask your questions here and/or on Ask
OpenStack and maybe skip the whole consultancy step.
On Jul 19, 2015 11:34 AM, Nathan Stratton nat...@robotics.net wrote:

 I have a small 24 server RDO OpenStack Juno / Ceph Firefly cluster on
 Centos 7 that is for the most part working well. I have run into a few
 problems tho that I have not been able to resolve myself so I am looking
 for a consultant to help. Most of the consulting groups I have run into
 ONLY support a specific distribution (aka theirs). That is great, but I
 would like to stick with Open Source projects I am currently using.

 Does anyone know of consulting resources that will work with Open Source
 components already installed rather then forcing a specific commercial
 distribution?

 
 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] Puzzling issue: Unacceptable CPU info: CPU doesn't have compatibility

2015-07-17 Thread David Medberry
HI Daniel,

Yep found that all out.

Now I'm struggling through the NUMA mismatch. NUMA as there are two cpus.
The old CPU was a 10 core 20 thread thus 40 cpus, {0-9,20-29} and then
{10-19,30-39} on the other cell. The new CPU is a 12 core 24 thread.
Apparently even in Kilo, this results in a mismatch if I'm running a 2-VCPU
guest and trying to migrate from new to old. I suspect I have to disable
NUMA somehow (filter, etc.) but it is entirely non-obvious. And of course
I'm doing this again in OpenStack nova (not direct libvirt) so I'm going to
do a bit more research and then file a new bug. This also may be fixed in
Kilo but I'm not finding it (and it may be fixed in Liberty already and
just need a backport.)

My apologies for not following up to the list once I found the Kilo
solution to the original problem.
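
For reference, the lowest-common-denominator pinning discussed in this thread goes in nova.conf on every compute node (pick the model your oldest CPU supports; SandyBridge is just the example used here):

```ini
[libvirt]
# Expose a fixed guest CPU model instead of the host's, so guests can
# live-migrate between Ivy Bridge and Haswell hosts.
cpu_mode = custom
cpu_model = SandyBridge
```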

On Fri, Jul 17, 2015 at 6:10 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Fri, Jul 17, 2015 at 01:07:56PM +0100, Daniel P. Berrange wrote:
  On Thu, Jul 09, 2015 at 12:00:15PM -0600, David Medberry wrote:
   Hi,
  
   When trying to live-migrate between two distinct CPUs, I kind of expect
   there to be issues. Which is why openstack supports the
 cpu_mode=custom,
   cpu_model=MODELNAME flags for libvirt.
  
   When I set those to some Lowest Common Denominator (and restart
   everything), I still get the issue. I've set both systems to
 SandyBridge
   and tested as well as Conroe. The actual CPUs are Ivy Bridge and
 Haswell
   (newer than SandyBridge and supersets thereof.)
  
   The Older-Newer migration works fine (even without setting a
 cpu_model)
   but the newer to older never works.
  
   Specfics:
   OpenStack Juno.2
   LibVirt: 1.2.2
  
   Older: model name : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (Ivy
 Bridge)
   Newer: model name : Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz (Haswell)
  
   Daniel, Operators: Any ideas?
 
  In versions of Nova prior to Liberty, nova did an incorrect CPU model
  comparison. It checks the source *host* CPU model against the dest
  host CPU model, instead of checking the *guest* CPU model against the
  dest host CPU model.
 
  This is fixed in Liberty, provided you have the cpu_mode=custom and
  cpu_model=MODELNAME parameters set. Unfortunately the fix will only
  work for guests that are launched under Liberty codebase as it needed
  a database addition. So if you have existing running guests from Juno
  those need restarting after upgrade.
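
The difference Daniel describes can be shown with a toy comparison. The
generation ordering and the function names here are invented purely for
illustration; real compatibility checking is done by libvirt, not by a
list like this:

```python
# Hypothetical generation ordering, oldest first (illustration only).
GENERATIONS = ["Conroe", "Westmere", "SandyBridge", "IvyBridge", "Haswell"]

def compatible(model, host_model):
    """A host can run a CPU model no newer than its own (toy rule)."""
    return GENERATIONS.index(model) <= GENERATIONS.index(host_model)

def check_pre_kilo(src_host, dest_host, guest_model):
    # Buggy: compares the source *host* model against the dest host,
    # ignoring the guest's configured model entirely.
    return compatible(src_host, dest_host)

def check_fixed(src_host, dest_host, guest_model):
    # Fixed: the *guest* model is what must fit on the dest host.
    return compatible(guest_model, dest_host)

# Haswell -> IvyBridge with a SandyBridge guest: the old check refuses
# the migration, the fixed one allows it.
print(check_pre_kilo("Haswell", "IvyBridge", "SandyBridge"))  # False
print(check_fixed("Haswell", "IvyBridge", "SandyBridge"))     # True
```

This is why newer-to-older migration failed even with cpu_mode=custom set:
the guest's lowest-common-denominator model was never consulted.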

 Sigh,  s/Liberty/Kilo/ in everything I wrote here

 Regards,
 Daniel
 --
 |: http://berrange.com      -o- http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org       -o- http://virt-manager.org                 :|
 |: http://autobuild.org     -o- http://search.cpan.org/~danberr/        :|
 |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc          :|

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Puzzling issue: Unacceptable CPU info: CPU doesn't have compatibility

2015-07-17 Thread David Medberry
Hi Aubrey,

I'm actually wondering if this is a new regression bug INTRODUCED in Kilo
(as part of the NUMA work). I'll be testing that a bit too by altering my
Juno architecture a bit (monkeying with kernel MAXCPUS to see if I can get
into a similar situation in Juno but with identical hardware.)

The best info I have found so far is Daniel's howto (in the openstack docs)
for creating a test scenario for numa:

http://docs.openstack.org/developer/nova/devref/testing/libvirt-numa.html
(and related pages)

On Fri, Jul 17, 2015 at 7:10 AM, Aubrey Wells awe...@digiumcloud.com
wrote:

 I ran into the different core count thing a while back too and it's not
 fixed in Kilo (that's where I discovered it). I posted to the mailing list
 and didn't get any feedback on it, but as I was just looking in the
 archives to send you the link to the hack I found to fix it, I noticed that
 it silently failed to post to the mailing list. I'll add the text of my
 email below, maybe someone will have some ideas. Original message follows.

 ===

 Greetings,
 Trying to decide if this is a bug or just a config option that I can't
 find. The setup I'm currently testing in my lab with is two compute nodes
 running Kilo, one has 40 cores (2x 10c with HT) and one has 16 cores (2x 4c
 + HT). I don't have any CPU pinning enabled in my nova config, which seems
 to have the effect of setting in libvirt.xml a vcpu cpuset element like (if
 created on the 40c node):

 <vcpu cpuset="1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39">1</vcpu>

 And then if I migrate that instance to the 16c node, it will bomb out with
 an exception:

 Live Migration failure: Invalid value
 '0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38' for 'cpuset.cpus':
 Invalid argument

 Which makes sense, since that node doesn't have any vcpus after 15 (0-15).

 I can fix the symptom by commenting out a line in
 nova/virt/libvirt/config.py (circa line 1831) so it always has an empty
 cpuset and thus doesn't write that line to libvirt.xml:
 # vcpu.set("cpuset", hardware.format_cpu_spec(self.cpuset))

 And the instance will happily migrate to the host with less CPUs, but this
 loses some of the benefit of openstack trying to evenly spread out the
 core usage on the host, at least that's what I think the purpose of that
 is.

 I'd rather fix it the right way if there's a config option I don't see or
 file a bug if it's a bug.

 What I think should be happening is that when nova creates the libvirt
 definition on the destination compute node, it writes out the correct cpuset
 per the specs of the hardware it's going onto.
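
That recomputation could look roughly like this: rebuild the cpuset against
the destination's CPU count instead of carrying the source's string over
verbatim. A minimal sketch, assuming that simply dropping out-of-range CPU
ids is acceptable; the function names are illustrative, not nova's API:

```python
def format_cpu_spec(cpus):
    """Render a collection of CPU ids as a libvirt cpuset string."""
    return ",".join(str(c) for c in sorted(cpus))

def rebuild_cpuset(source_spec, dest_ncpus):
    """Keep only the CPUs that actually exist on the destination host."""
    cpus = {int(c) for c in source_spec.split(",")}
    return format_cpu_spec(c for c in cpus if c < dest_ncpus)

# The failing spec from the 40-core node, rebuilt for the 16-core node:
src = "0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38"
print(rebuild_cpuset(src, 16))  # "0,2,4,6,8,10,12,14"
```

A real fix would presumably also rebalance across the destination's cores
rather than just truncating, but even this avoids the cpuset.cpus error.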

 If it matters, in my nova-compute.conf file, I also have cpu mode and
 model defined to allow me to migrate between the two different
 architectures to begin with (the 40c is Sandybridge and the 16c is Westmere
 so I set it to the lowest common denominator of Westmere):

 cpu_mode=custom
 cpu_model=Westmere

 Any help is appreciated.



 On Fri, Jul 17, 2015 at 8:58 AM, David Medberry openst...@medberry.net
 wrote:

 HI Daniel,

 Yep found that all out.

 Now I'm struggling through the NUMA mismatch. NUMA as there are two cpus.
 The old CPU was a 10 core 20 thread thus 40 cpus, {0-9,20-29} and then
 {10-19,30-39} on the other cell. The new CPU is a 12 core 24 thread.
 Apparently even in kilo, this results in a mismatch if I'm running a 2 VCPU
 guest and trying to migrate from new to old. I suspect I have to disable
 NUMA somehow (filter, etc) but it is entirely non-obvious. And of course
 I'm doing this again in OpenStack nova (not direct libvirt) so I'm going to
 do a bit more research and then file a new bug. This also may be fixed in
 Kilo but I'm not finding it (and it may be fixed in Liberty already and
 just need a backport.)

 My apologies for not following up to the list once I found the Kilo
 solution to the original problem.

 On Fri, Jul 17, 2015 at 6:10 AM, Daniel P. Berrange berra...@redhat.com
 wrote:

 On Fri, Jul 17, 2015 at 01:07:56PM +0100, Daniel P. Berrange wrote:
  On Thu, Jul 09, 2015 at 12:00:15PM -0600, David Medberry wrote:
   Hi,
  
   When trying to live-migrate between two distinct CPUs, I kind of
 expect
   there to be issues. Which is why openstack supports the
 cpu_mode=custom,
   cpu_model=MODELNAME flags for libvirt.
  
   When I set those to some Lowest Common Denominator (and restart
 everything), I still get the issue. I've set both systems to
 SandyBridge
   and tested as well as Conroe. The actual CPUs are Ivy Bridge and
 Haswell
   (newer than SandyBridge and supersets thereof.)
  
   The Older-Newer migration works fine (even without setting a
 cpu_model)
   but the newer to older never works.
  
   Specfics:
   OpenStack Juno.2
   LibVirt: 1.2.2
  
   Older: model name : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (Ivy
 Bridge)
   Newer: model name : Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
 (Haswell)
  
   Daniel, Operators: Any ideas?
 
  In versions of Nova prior to Liberty, nova did an incorrect CPU model

[Openstack-operators] Puzzling issue: Unacceptable CPU info: CPU doesn't have compatibility

2015-07-09 Thread David Medberry
Hi,

When trying to live-migrate between two distinct CPUs, I kind of expect
there to be issues. Which is why openstack supports the cpu_mode=custom,
cpu_model=MODELNAME flags for libvirt.

When I set those to some Lowest Common Denominator (and restart
everything), I still get the issue. I've set both systems to SandyBridge
and tested as well as Conroe. The actual CPUs are Ivy Bridge and Haswell
(newer than SandyBridge and supersets thereof.)

The Older-Newer migration works fine (even without setting a cpu_model)
but the newer to older never works.

Specifics:
OpenStack Juno.2
LibVirt: 1.2.2

Older: model name : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (Ivy Bridge)
Newer: model name : Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz (Haswell)

And to clarify: the nova-compute error is the subject line:

2015-07-09 17:55:02.485 8651 ERROR oslo.messaging._drivers.common
[req-48a16da3-41e0-43ee-99c8-43d178273101 ] ['Traceback (most recent call
last):\n', '  File
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
134, in _dispatch_and_reply\nincoming.message))\n', '  File
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt,
args)\n', '  File
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt,
**new_args)\n', '  File
/usr/lib/python2.7/dist-packages/nova/exception.py, line 88, in wrapped\n
   payload)\n', '  File
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line
82, in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', '
 File /usr/lib/python2.7/dist-packages/nova/exception.py, line 71, in
wrapped\nreturn f(self, context, *args, **kw)\n', '  File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 331, in
decorated_function\nkwargs[\'instance\'], e, sys.exc_info())\n', '
 File /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py,
line 82, in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n',
'  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line
319, in decorated_function\nreturn function(self, context, *args,
**kwargs)\n', '  File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 4860, in
check_can_live_migrate_destination\nblock_migration,
disk_over_commit)\n', '  File
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4999,
in check_can_live_migrate_destination\n
 self._compare_cpu(source_cpu_info)\n', '  File
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 5177,
in _compare_cpu\nraise exception.InvalidCPUInfo(reason=m % {\'ret\':
ret, \'u\': u})\n', InvalidCPUInfo: Unacceptable CPU info: CPU doesn't
have compatibility.\n\n0\n\nRefer to
http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult\n;]
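
The bare `0` embedded in that error is a libvirt `virCPUCompareResult`
value. The enum values from libvirt's public API are small and worth
keeping at hand when reading these tracebacks:

```python
# virCPUCompareResult values from libvirt's public API.
VIR_CPU_COMPARE_ERROR = -1        # the comparison itself failed
VIR_CPU_COMPARE_INCOMPATIBLE = 0  # guest CPU incompatible with host CPU
VIR_CPU_COMPARE_IDENTICAL = 1     # CPUs are identical
VIR_CPU_COMPARE_SUPERSET = 2      # host CPU is a superset of the guest CPU

RESULT_NAMES = {
    VIR_CPU_COMPARE_ERROR: "ERROR",
    VIR_CPU_COMPARE_INCOMPATIBLE: "INCOMPATIBLE",
    VIR_CPU_COMPARE_IDENTICAL: "IDENTICAL",
    VIR_CPU_COMPARE_SUPERSET: "SUPERSET",
}

print(RESULT_NAMES[0])  # INCOMPATIBLE -- what the traceback reported
```

So the destination's libvirt judged the incoming (host-derived, per the bug
discussed above) CPU definition flatly incompatible, and nova surfaced that
as "No valid host was found."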

The nova client reports:
dmbp:~ dmedberry$ nova live-migration 7181cea1-ebbf-4f05-9316-80eef0216648
ERROR (BadRequest): No valid host was found.  (HTTP 400) (Request-ID:
req-48a16da3-41e0-43ee-99c8-43d178273101)

Daniel, Operators? Ring any bells? Any ideas?

-d


Re: [Openstack-operators] Puzzling issue: Unacceptable CPU info: CPU doesn't have compatibility

2015-07-09 Thread David Medberry
Ah, this has been fixed in Kilo. Sorry for the spurious noise.

Ref:
https://bugs.launchpad.net/nova/+bug/1082414
https://bugs.launchpad.net/nova/+bug/1433933
and
https://git.openstack.org/cgit/openstack/nova/commit/?id=79a0755597f4983367eb0caf4669ffb881b4f720
https://git.openstack.org/cgit/openstack/nova/commit/?id=5653bd291665bcdecad46ed6654a04c49e4b1dda

On Thu, Jul 9, 2015 at 1:23 PM, David Medberry openst...@medberry.net
wrote:

 Hi,

 When trying to live-migrate between two distinct CPUs, I kind of expect
 there to be issues. Which is why openstack supports the cpu_mode=custom,
 cpu_model=MODELNAME flags for libvirt.

 When I set those to some Lowest Common Denominator (and restart
 everything), I still get the issue. I've set both systems to SandyBridge
 and tested as well as Conroe. The actual CPUs are Ivy Bridge and Haswell
 (newer than SandyBridge and supersets thereof.)

 The Older-Newer migration works fine (even without setting a cpu_model)
 but the newer to older never works.

 Specifics:
 OpenStack Juno.2
 LibVirt: 1.2.2

 Older: model name : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (Ivy Bridge)
 Newer: model name : Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz (Haswell)

 And to clarify: the nova-compute error is the subject line:

 2015-07-09 17:55:02.485 8651 ERROR oslo.messaging._drivers.common
 [req-48a16da3-41e0-43ee-99c8-43d178273101 ] ['Traceback (most recent call
 last):\n', '  File
 /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
 134, in _dispatch_and_reply\nincoming.message))\n', '  File
 /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
 177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt,
 args)\n', '  File
 /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
 123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt,
 **new_args)\n', '  File
 /usr/lib/python2.7/dist-packages/nova/exception.py, line 88, in wrapped\n
payload)\n', '  File
 /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line
 82, in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', '
  File /usr/lib/python2.7/dist-packages/nova/exception.py, line 71, in
 wrapped\nreturn f(self, context, *args, **kw)\n', '  File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 331, in
 decorated_function\nkwargs[\'instance\'], e, sys.exc_info())\n', '
  File /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n',
 '  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line
 319, in decorated_function\nreturn function(self, context, *args,
 **kwargs)\n', '  File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 4860, in
 check_can_live_migrate_destination\nblock_migration,
 disk_over_commit)\n', '  File
 /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4999,
 in check_can_live_migrate_destination\n
  self._compare_cpu(source_cpu_info)\n', '  File
 /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 5177,
 in _compare_cpu\nraise exception.InvalidCPUInfo(reason=m % {\'ret\':
 ret, \'u\': u})\n', InvalidCPUInfo: Unacceptable CPU info: CPU doesn't
 have compatibility.\n\n0\n\nRefer to
 http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult\n;]

 The nova client reports:
 dmbp:~ dmedberry$ nova live-migration 7181cea1-ebbf-4f05-9316-80eef0216648
 ERROR (BadRequest): No valid host was found.  (HTTP 400) (Request-ID:
 req-48a16da3-41e0-43ee-99c8-43d178273101)

 Daniel, Operators? Ring any bells? Any ideas?

 -d



Re: [Openstack-operators] Scaling the Ops Meetup

2015-07-06 Thread David Medberry
On Mon, Jul 6, 2015 at 11:28 AM, Jonathan Proulx j...@jonproulx.com wrote:

 More tracks makes it harder for small to medium size sites to cover.
 Not saying we shouldn't expand parallelism but we should be cautious.

 My site is a private university cloud with order of 100 hypervisors,
 we're more or less happy to send 2 people to summits and one to mid
 cycles, at least that's what I've gotten them to pay for in the past.
 Obviously we don't come close to covering summits.  The dual track
 (for one attendee) in PHL was OK and conflicts weren't too bad.

 The obvious alternative if we need more sessions would be to go longer
 and honestly I'm not keen on that either and would probably prefer
 wider over longer.


+1 on wider vs. longer. If we do go longer, let's limit it to a half-day
expansion (so folks can fly in or out that half day.)
Of course if it is in Timbuktu, that 1/2 day won't buy much in terms of
maximizing commute time.

https://en.wikipedia.org/wiki/Timbuktu


Re: [Openstack-operators] Scaling the Ops Meetup

2015-07-06 Thread David Medberry
On Thu, Jul 2, 2015 at 9:23 AM, Tom Fifield t...@openstack.org wrote:

 Venue selection process.

 At the moment, there's a few of us who work hard in the shadows to make
 the best choice we can from a range of generous offers :)


Maybe you could host in Taiwan, Tom, or Tim could host in Geneva/CERN?


Re: [Openstack-operators] Scaling the Ops Meetup

2015-07-06 Thread David Medberry
On Mon, Jul 6, 2015 at 12:14 PM, Anita Kuno ante...@anteaya.info wrote:

 Right now developers are asking for details so they can decide/plan on
 attending the next event.

 Are you close to deciding a location and/or perhaps some dates?


Yep, this is becoming a big issue. Several others are just going to stomp
all over August as they schedule their meetups.


Re: [Openstack-operators] Scaling the Ops Meetup

2015-07-02 Thread David Medberry
On Thu, Jul 2, 2015 at 9:23 AM, Tom Fifield t...@openstack.org wrote:

 Venue selection process.

 At the moment, there's a few of us who work hard in the shadows to make
 the best choice we can from a range of generous offers :)

 Many thanks. I know this is a bit of a PITA.


 In our brave new world, I think this should be a bit more open, what do
 you think?


Don't care if it is more open. I wish it would be more timely. If making it
more open makes the decision and locale announcement more timely, I'm all for open.



 What kind of structure do we need to make the best decision?


The perfect is the enemy of the good (or something like that, malapropically
paraphrased.)
We like to say, JFDI.
Name a spot, name a limit, make a reservation tool (or use an existing one
like Eventbrite), and consider having a pocket overflow amount that you /
someone judiciously administers.

-d


  1   2   >